A central location for data collected from several different systems, offered up through an API
The aggregator is the core of our data hub — a large internal project to consolidate data across many disparate systems at the Art Institute of Chicago into a single, unified source. This offers our products a rich set of data that can be accessed in one way, in one location. For more information about our data hub, please peruse the following paper:
If you're looking for our public APIs, you are in the right place! The aggregator contains all of our public APIs, which power our public-facing applications, such as our website and mobile app. As part of our Open Access efforts, we are making them available to the general public.
For example, here's an endpoint that lists all of our published artworks:
https://api.artic.edu/api/v1/artworks
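You can try it from the command line with curl:

```
# Fetch the first page of published artworks
curl "https://api.artic.edu/api/v1/artworks"
```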
The API also supports much richer queries. For example, you could ask for the identifiers, titles, and last modified dates of all artworks that have been updated in our collections system in the past seven days, sorted in reverse chronological order.
Our API is a wrapper around Elasticsearch's Query DSL. Depending on your needs, these queries can get quite complex.
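As a rough sketch, here is one way the seven-day query described above might look with curl. The `/artworks/search` path, the `query`, `fields`, and `sort` parameters, and the `source_updated_at` field are assumptions based on the Elasticsearch underpinnings, so consult the documentation links below for the exact syntax:

```
# A sketch only: endpoint path, parameters, and field names are assumptions
curl -X POST "https://api.artic.edu/api/v1/artworks/search" \
  -H "Content-Type: application/json" \
  -d '{
        "fields": ["id", "title", "source_updated_at"],
        "sort": [{"source_updated_at": "desc"}],
        "query": {
          "range": {
            "source_updated_at": {"gte": "now-7d"}
          }
        }
      }'
```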
Here are some resources to get you started:
- Art Institute of Chicago — API Documentation (fields and endpoints)
- Elasticsearch 6.0 — Query DSL (query syntax)
- Art Institute of Chicago — Open Access — Public API (example projects)
We are currently working on improving our documentation. In the meantime, feel free to open an issue here or reach out to engineering@artic.edu with any questions. We would love to hear about any projects you pursue with our API.
- All data is available via a JSON-based RESTful API
- Most data is searchable via an Elasticsearch wrapper
- Complex data types can be "included" in requests (see the sketch after this list)
- Large lists are paginated
- All endpoints are covered by unit tests
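As a sketch of the inclusion and pagination features, a request might look something like the following. The `page`, `limit`, and `include` parameter names, and the `artists` relationship, are assumptions here, so check the API documentation for what each endpoint actually supports:

```
# Request the second page of artworks, 25 per page, with a related
# resource embedded. "artists" is a hypothetical include name.
curl "https://api.artic.edu/api/v1/artworks?page=2&limit=25&include=artists"
```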
The aggregator interfaces with several internal APIs to collect its data. All data is imported and served locally, so at runtime the API has no dependencies on other systems. `artisan` commands have been set up to import data from various sources, either en masse or incrementally. One of the greatest benefits of an aggregator like this one is the ability to provide relationships between resources across systems. Our `/artworks` endpoint is a great example: each artwork can be related to a number of different things, like mobile tours, digital publications, and historic static sites.
The project has been built in Laravel and has the following requirements:
- Laravel 5.8
- PHP 7.1
- MySQL 5.7
- Composer
- Elasticsearch 6.0
For development, we recommend that you use Laravel Homestead. It includes everything you need to run this project. Note that you will need to enable the optional Elasticsearch feature in your Homestead.yaml.
To get started with this project, use the following commands:
```
# Clone the repo to your computer
git clone https://github.com/art-institute-of-chicago/data-aggregator.git

# Enter the folder that was created by the clone
cd data-aggregator

# Install PHP dependencies
composer install
```
First, you'll need to create a `.env` file and update it to reflect your environment. We've provided an example file to get you started:
```
# Copy the example file
cp .env.example .env

# Generate a new key for your Laravel project
php artisan key:generate
```
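For a typical local setup, the main values to adjust are the database credentials. These are Laravel's standard connection keys, shown here with placeholder values (Homestead's defaults for the user and password):

```
# Standard Laravel database settings in .env; adjust to your environment
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=data_aggregator
DB_USERNAME=homestead
DB_PASSWORD=secret
```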
Then, to create the database tables and seed them with fake data, run:
```
php artisan migrate --seed
```
This will create all the tables and relationships, and fill the tables with fake data generated via the Faker PHP library.
We've created a series of `artisan` tasks to import data from source systems. You can see all the available imports like so:
```
php artisan list import
```
To import all data from all systems, run:
```
php artisan import:all
```
To preview or build the documentation, use the following npm scripts:

```
# Serve the documentation locally during development
npm run docs-dev

# Build the documentation for deployment
npm run docs-build
```
We encourage your contributions. Please fork this repository and make your changes in a separate branch. To better understand how we organize our code, please review our version control guidelines.
```
# Clone the repo to your computer
git clone git@github.com:your-github-account/data-aggregator.git

# Enter the folder that was created by the clone
cd data-aggregator

# Install PHP dependencies
composer install

# Start a feature branch
git checkout -b feature/good-short-description

# ... make some changes, commit your code

# Push your branch to GitHub
git push origin feature/good-short-description
```
Then on GitHub, create a Pull Request to merge your changes into our `develop` branch.
Our internal team uses `php-cs-fixer` to ensure our code meets various PHP Standards Recommendations. You're welcome to integrate `php-cs-fixer` into your workflow as you work on this project, but it is not required to make a contribution.
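If you do want to run it, one possible setup is a global Composer install:

```
# Install php-cs-fixer globally via Composer (one option among several)
composer global require friendsofphp/php-cs-fixer

# Apply fixes, picking up the project's configuration if one is present
php-cs-fixer fix
```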
This project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
We welcome bug reports and questions under GitHub's Issues. For other concerns, you can reach our engineering team at engineering@artic.edu.
This project is licensed under the GNU Affero General Public License Version 3.