
Zarrcade


Zarrcade is a web application for easily browsing, searching, and visualizing collections of OME-NGFF (i.e. OME-Zarr) images. It implements the following features:

  • Automatic discovery of OME-Zarr images on any storage backend supported by fsspec including file system, AWS S3, Azure Blob, Google Cloud Storage, Dropbox, etc.
  • MIP (maximum intensity projection) and thumbnail generation
  • Web-based MIP gallery with convenient viewing links to NGFF-compliant viewers
  • Searchable/filterable metadata and annotations
  • Neuroglancer state generation for multichannel images
  • Built-in file proxy for non-public storage backends
  • Integration with external file proxies (e.g. x2s3)


Getting Started

1. Install miniforge

Install miniforge if you don't already have it.
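One way to do this (per the miniforge README) is to download and run the installer for your platform:

curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash "Miniforge3-$(uname)-$(uname -m).sh"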

2. Initialize the conda environment

conda env create -f environment.yml
conda activate zarrcade

3. Create OME-Zarr images

If necessary, convert your image(s) to OME-Zarr format, e.g. using bioformats2raw:

bioformats2raw -w 128 -h 128 -z 64 --compression zlib /path/to/input /path/to/zarr

If you have many images to convert, we recommend using the nf-omezarr Nextflow pipeline to run bioformats2raw efficiently on a collection of images. The pipeline also lets you scale the conversion across your available compute resources (cluster, cloud, etc.).
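Nextflow can run the pipeline directly from GitHub. The flags below are illustrative placeholders (the actual parameter names depend on the pipeline version), so check the nf-omezarr README for the current options:

nextflow run JaneliaSciComp/nf-omezarr --input /path/to/images --outdir /path/to/zarr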

4. Import images and metadata into Zarrcade

You can import images into Zarrcade using the provided command line script:

bin/import.py -d /root/data/dir -c mycollection

This will automatically create a local SQLite database containing a Zarrcade collection named "mycollection" and populate it with information about the images found in the specified directory.

By default, this will also create MIPs and thumbnails for each image in a folder named .zarrcade within the root data directory. You can change this location by setting the --aux-path parameter. You can disable the creation of MIPs and thumbnails by setting the --no-aux flag. The brightness of the MIPs can be adjusted using the --p-lower and --p-upper parameters.
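For example, the following writes the MIPs and thumbnails to a separate folder and adjusts the brightness percentiles (the path and percentile values here are only illustrative; pick bounds that suit your data):

bin/import.py -d /root/data/dir -c mycollection --aux-path /path/to/aux --p-lower 0 --p-upper 90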

To add extra metadata about the images, you can provide a CSV file with the -i flag:

bin/import.py -d /root/data/dir -c collection_name -i input.csv

The CSV file's first column must be a relative path to the OME-Zarr image within the root data directory. The remaining columns can be any annotations that will be searched and displayed within the gallery, e.g.:

Path,Line,Marker
relative/path/to/ome1.zarr,JKF6363,Blu
relative/path/to/ome2.zarr,JDH3562,Blu

To try an example, use the following command:

bin/import.py -d s3://janelia-data-examples/fly-efish -c flyefish -i docs/flyefish-example.csv

5. Run the Zarrcade web application

Start the development server to serve the collection(s) you imported:

uvicorn zarrcade.serve:app --host 0.0.0.0 --reload

Your images and annotations will be browsable at http://0.0.0.0:8000. Read the documentation below for more details on how to configure the web UI and deploy the service in production.
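uvicorn listens on port 8000 by default; use uvicorn's standard --port option to serve on a different port, e.g.:

uvicorn zarrcade.serve:app --host 0.0.0.0 --port 9000 --reload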

Documentation

  • Overview - learn about the data model and overall architecture
  • Configuration - configure the Zarrcade service using settings.yaml or environment variables
  • Deployment - instructions for deploying the service with Docker and in production mode
  • Development Notes - technical notes for developers working on Zarrcade itself

Known Limitations

  • Zarrcade has so far only been tested with OME-Zarr images generated by the bioformats2raw tool.
  • The OmeZarrAgent does not currently support the full OME-Zarr specification and may fail with certain types of images. If you encounter an error with your data, please open an issue on the GitHub repository.

Attributions