
HiIP 🔬

long pipeline name


This is the home of the pipeline, HiIP. Its long-term goals: to accurately ...insert goal, to infer ...insert goal, and to boldly ...insert goal like no pipeline before!

Overview

Welcome to HiIP! Before getting started, we highly recommend reading through HiIP's documentation.

The ./HiIP pipeline is composed of several interrelated subcommands used to set up and run the pipeline across different systems. Each of the available subcommands performs a different function:

  • HiIP run: Run the HiIP pipeline with your input files (see the example below this list).
  • HiIP unlock: Unlocks a previous run's output directory.
  • HiIP install: Download reference files locally.
  • HiIP cache: Cache remote resources locally, coming soon!
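
As a quick illustration, a typical run invocation might look like the sketch below. The flag names shown here are assumptions for illustration only; run ./HiIP run -h to see the actual interface.

# Hypothetical invocation -- flag names are assumptions, check ./HiIP run -h
./HiIP run \
    --input /data/$USER/fastq/*.R?.fastq.gz \
    --output /data/$USER/HiIP_output \
    --mode slurm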

HiIP is a comprehensive ...insert long description. It relies on technologies like Singularity [1] to maintain the highest level of reproducibility. The pipeline consists of a series of data processing and quality-control steps orchestrated by Snakemake [2], a flexible and scalable workflow management system, to submit jobs to a cluster.
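
To make the container layer concrete, the snippet below is a minimal, hypothetical Snakemake rule showing how a single step can pin a versioned image from DockerHub. The rule name, image, and file paths are illustrative and are not taken from HiIP's own Snakefile.

# Illustrative Snakemake rule -- rule name, paths, and image are hypothetical
rule fastqc:
    input:
        "input/{sample}.fastq.gz"
    output:
        "output/{sample}_fastqc.html"
    container:
        "docker://biocontainers/fastqc:v0.11.9_cv8"
    shell:
        "fastqc {input} --outdir output/"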

The pipeline is compatible with data generated from Illumina short-read sequencing technologies. As input, it accepts a set of FastQ files and can be run locally on a compute instance or on-premises on a cluster. A user can define the method or mode of execution. The pipeline can submit jobs to a cluster using a job scheduler like SLURM (support for more schedulers is coming soon!). This hybrid approach makes the pipeline accessible to users with or without access to a cluster.
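
Under the hood, a Snakemake-based pipeline typically switches between local and cluster execution through Snakemake's own options. The commands below sketch that generic pattern for context; HiIP's run subcommand wraps this step for you, so the exact options it exposes may differ.

# Generic Snakemake execution patterns (for context only; HiIP wraps this for you)
# Local execution on a single compute instance
snakemake --use-singularity --cores 8
# Cluster execution, submitting each job through SLURM's sbatch
snakemake --use-singularity --jobs 100 \
    --cluster "sbatch --cpus-per-task={threads} --mem={resources.mem_mb}"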

Before getting started, we highly recommend reading through the usage section of each available subcommand.

For more information about issues or troubleshooting a problem, please check out our FAQ prior to opening an issue on GitHub.

Dependencies

Requires: singularity>=3.5 snakemake>=6.0

At the moment, the pipeline uses a mixture of environment modules and Docker images; however, this will be changing soon! In the near future, the pipeline will only use Docker images. That said, snakemake and singularity must be installed on the target system. Snakemake orchestrates the execution of each step in the pipeline. To guarantee the highest level of reproducibility, each step of the pipeline will rely on versioned images from DockerHub. Snakemake uses singularity to pull these images onto the local filesystem prior to job execution, so snakemake and singularity will be the only two dependencies in the future.
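
Before running anything, you can quickly confirm that both dependencies are on your $PATH and meet the minimum versions listed above:

# Check that the two required dependencies are installed and report their versions
# (expect singularity >= 3.5 and snakemake >= 6.0)
singularity --version
snakemake --version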

Installation

Please clone this repository to your local filesystem using the following command:

# Clone Repository from Github
git clone https://github.com/OpenOmics/HiIP.git
# Change your working directory
cd HiIP/
# Add dependencies to $PATH
# Biowulf users should run
module load snakemake singularity
# Get usage information
./HiIP -h

Contribute

This site is a living document, created for and by members like you. HiIP is maintained by the members of OpenOmics and is improved by continuous feedback! We encourage you to contribute new content and make improvements to existing content via pull requests to our GitHub repository.

Cite

If you use this software, please cite it as below:

BibTeX: Citation coming soon!
APA: Citation coming soon!

References

1. Kurtzer GM, Sochat V, Bauer MW (2017). Singularity: Scientific containers for mobility of compute. PLoS ONE 12(5): e0177459.
2. Köster J, Rahmann S (2018). Snakemake - a scalable bioinformatics workflow engine. Bioinformatics 34(20): 3600.
