COLID Indexing Crawler Service

The Indexing Crawler Service is part of the Corporate Linked Data Catalog - short: COLID - application. An introduction to the application can be found here, and a description of all its functions here.

The complete guide can be found at the following link.

The Indexing Crawler Service (ICS) is responsible for extracting data from an RDF storage system, transforming and enriching the data, and finally sending it via a message queue to the DMP Webservice for indexing.
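The service itself is implemented in .NET; the following is only a minimal Python sketch of that extract-transform-publish flow to make the data path concrete. The SPARQL endpoint URL, the queue name, and the document shape are illustrative assumptions, not values from the COLID codebase.

```python
import json

import pika                                        # RabbitMQ client
from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed endpoint and queue name -- purely illustrative, not taken from COLID.
SPARQL_ENDPOINT = "https://example.org/colid/sparql"
INDEXING_QUEUE = "dmp-indexing"


def extract_resources():
    """Extract raw triples for the catalog resources from the RDF store."""
    sparql = SPARQLWrapper(SPARQL_ENDPOINT)
    sparql.setQuery("""
        SELECT ?resource ?predicate ?value
        WHERE { ?resource ?predicate ?value }
        LIMIT 1000
    """)
    sparql.setReturnFormat(JSON)
    return sparql.query().convert()["results"]["bindings"]


def transform(bindings):
    """Group the bindings into one flat document per resource (enrichment is simplified away here)."""
    documents = {}
    for b in bindings:
        doc = documents.setdefault(b["resource"]["value"], {"id": b["resource"]["value"]})
        doc[b["predicate"]["value"]] = b["value"]["value"]
    return list(documents.values())


def publish(documents):
    """Send the transformed documents to the message queue consumed by the DMP Webservice."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=INDEXING_QUEUE, durable=True)
    for doc in documents:
        channel.basic_publish(exchange="", routing_key=INDEXING_QUEUE, body=json.dumps(doc))
    connection.close()


if __name__ == "__main__":
    publish(transform(extract_resources()))
```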

Getting Started

Demo

Want to see the COLID application in action and play around with the API? The quickest way to get started is to check out our setup repository.

Building

A complete guide can be found at the following link. It provides a short walkthrough for building the application.

Running

A complete guide can be found at the following link. It provides a short walkthrough for running the application.

How to Contribute

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.
