
Commit

Merge branch 'main' into feature/damian/benchmark_llm
dbogunowicz authored Aug 7, 2023
2 parents 28c4b41 + d7f037c commit 360db72
Showing 18 changed files with 613 additions and 203 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/test-check.yaml
@@ -32,7 +32,7 @@ jobs:
- name: "Clean sparsezoo directory"
run: rm -r sparsezoo/
- name: ⚙️ Install dependencies
-run: pip3 install .[dev,server,image_classification,transformers] opencv-python
+run: pip3 install .[dev,server,image_classification,transformers,clip] opencv-python
- name: Run base tests
run: make test
cli-smoke-tests:
2 changes: 1 addition & 1 deletion examples/aws-sagemaker/Dockerfile
@@ -19,7 +19,7 @@ ENV PATH="$VIRTUAL_ENV/bin:$PATH"

RUN python3 -m venv $VIRTUAL_ENV && \
pip3 install --no-cache-dir --upgrade pip && \
pip3 install --no-cache-dir "deepsparse-nightly[server]" # TODO: switch to deepsparse[server] >= 0.12
pip3 install deepsparse[transformers,server]>=1.5.2

# create 'serve' command for sagemaker entrypoint
RUN mkdir /opt/server/
1 change: 0 additions & 1 deletion examples/aws-sagemaker/README.md
@@ -55,7 +55,6 @@ After the endpoint has been staged (~1 minute), you can start making requests by …
```python
from qa_client import Endpoint


qa = Endpoint("us-east-1", "question-answering-example-endpoint")
answer = qa.predict(question="who is batman?", context="Mark is batman.")

```
7 changes: 2 additions & 5 deletions examples/google-cloud-run/Dockerfile
@@ -9,11 +9,8 @@ COPY ${config_path} /root/server-config.yaml
ENV VIRTUAL_ENV=/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

-COPY topo-four-core.json ./
-ENV NM_ARCH_FILE=./topo-four-core.json

RUN python3 -m venv $VIRTUAL_ENV && \
pip3 install --no-cache-dir --upgrade pip && \
pip3 install --no-cache-dir "deepsparse-nightly[server]"
pip3 install --no-cache-dir deepsparse[transformers,server]>=1.5.2

-ENTRYPOINT deepsparse.server config /root/server-config.yaml --port 8080
+ENTRYPOINT deepsparse.server --config-file /root/server-config.yaml --port 8080
7 changes: 7 additions & 0 deletions examples/google-cloud-run/README.md
@@ -34,6 +34,7 @@ The listed steps can be easily completed using `Python` and `Bash`. The following …
gcloud beta billing accounts list
```

Before starting, make sure your local machine is authenticated with Google Cloud and that Docker is configured to use those credentials:

```bash
gcloud auth login
gcloud auth configure-docker
```

## Installation
```bash
git clone https://github.com/neuralmagic/deepsparse.git
```
182 changes: 0 additions & 182 deletions examples/google-cloud-run/topo-four-core.json

This file was deleted.

2 changes: 1 addition & 1 deletion examples/sparsestream/requirements.txt
@@ -1,3 +1,3 @@
tweepy==4.8.0
-deepsparse>=1.1.0
+deepsparse[transformers]>=1.5.2
rich>=12.2.0
3 changes: 2 additions & 1 deletion setup.py
@@ -162,7 +162,7 @@ def _parse_requirements_file(file_path):
"haystack_reqs.txt",
)
_haystack_integration_deps = _parse_requirements_file(_haystack_requirements_file_path)

+_clip_deps = ["open_clip_torch==2.20.0", "scipy==1.10.1"]

_torch_deps = ["torch>=1.7.0,<=2.0"]

@@ -280,6 +280,7 @@ def _setup_extras() -> Dict:
"yolov8": _yolov8_integration_deps,
"transformers": _transformers_integration_deps,
"torch": _torch_deps,
"clip": _clip_deps,
}


75 changes: 75 additions & 0 deletions src/deepsparse/clip/README.md
@@ -0,0 +1,75 @@
# CLIP Inference Pipelines

DeepSparse allows inference on [CLIP](https://github.com/mlfoundations/open_clip) models.

The CLIP integration currently supports the following task:
- **Zero-shot Image Classification** - Classifying images given possible classes

## Getting Started

Before you start your adventure with the DeepSparse Engine, make sure that your machine meets our [hardware requirements](https://docs.neuralmagic.com/deepsparse/source/hardware.html).

### Installation
```pip install deepsparse[clip]```

### Model Format
By default, deploying CLIP models with the DeepSparse Engine requires the model in ONNX format. This grants the engine the flexibility to serve any model in a framework-agnostic environment. For examples of pulling down CLIP models and exporting them to ONNX, see the [sparseml documentation](https://github.com/neuralmagic/sparseml/tree/main/integrations/clip). The zero-shot image classification workflow requires two ONNX models: a visual model for CLIP's visual branch and a text model for CLIP's text branch. Both of these models should be produced through the sparseml integration linked above.
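
For a rough idea of what that export involves, here is a minimal sketch that exports the two branches directly with `open_clip` and `torch.onnx.export`. The model name, pretrained tag, dummy input shapes, and output paths are illustrative assumptions; the sparseml integration linked above remains the supported export route.

```python
# Illustrative sketch only -- the supported export path is the sparseml CLIP
# integration linked above. Model name, shapes, and paths are assumptions.
import os

import open_clip
import torch


class VisualBranch(torch.nn.Module):
    """Wraps the image encoder so it can be exported on its own."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, pixel_values):
        return self.model.encode_image(pixel_values)


class TextBranch(torch.nn.Module):
    """Wraps the text encoder so it can be exported on its own."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids):
        return self.model.encode_text(input_ids)


model, _, _ = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()

os.makedirs("zeroshot_research/visual", exist_ok=True)
os.makedirs("zeroshot_research/text", exist_ok=True)

# CLIP defaults: 224x224 RGB images and 77-token text sequences
dummy_image = torch.randn(1, 3, 224, 224)
dummy_tokens = torch.randint(0, 49408, (1, 77), dtype=torch.long)

torch.onnx.export(
    VisualBranch(model), dummy_image, "zeroshot_research/visual/model.onnx"
)
torch.onnx.export(
    TextBranch(model), dummy_tokens, "zeroshot_research/text/model.onnx"
)
```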

### Deployment Examples
The following example uses pipelines to run the CLIP models for inference. As input, the pipeline ingests a list of images and a list of possible classes. A class is returned for each of the provided images.

If you don't have images ready, pull down the sample images using the following commands:

```bash
wget -O basilica.jpg https://raw.githubusercontent.com/neuralmagic/deepsparse/main/src/deepsparse/yolo/sample_images/basilica.jpg

wget -O buddy.jpeg https://raw.githubusercontent.com/neuralmagic/deepsparse/main/tests/deepsparse/pipelines/sample_images/buddy.jpeg
```

This will pull down two images: one of a happy dog and one of St. Peter's Basilica.

#### Zero-shot Prediction

Let's run an example to classify the images. We'll provide the images as a list of file names along with a list of possible classes, as well as paths to the exported ONNX models.

```python
import numpy as np

from deepsparse import BasePipeline
from deepsparse.clip import (
CLIPTextInput,
CLIPVisualInput,
CLIPZeroShotInput
)

possible_classes = ["ice cream", "an elephant", "a dog", "a building", "a church"]
images = ["basilica.jpg", "buddy.jpeg"]

model_path_text = "zeroshot_research/text/model.onnx"
model_path_visual = "zeroshot_research/visual/model.onnx"

kwargs = {
"visual_model_path": model_path_visual,
"text_model_path": model_path_text,
}
pipeline = BasePipeline.create(task="clip_zeroshot", **kwargs)

pipeline_input = CLIPZeroShotInput(
image=CLIPVisualInput(images=images),
text=CLIPTextInput(text=possible_classes),
)

output = pipeline(pipeline_input).text_scores
for i in range(len(output)):
prediction = possible_classes[np.argmax(output[i])]
print(f"Image {images[i]} is a picture of {prediction}")
```

Running the code above, we get the following output:

```
DeepSparse, Copyright 2021-present / Neuralmagic, Inc. version: 1.6.0.20230727 COMMUNITY | (3cb4a3e5) (optimized) (system=avx2, binary=avx2)
Image basilica.jpg is a picture of a church
Image buddy.jpeg is a picture of a dog
```
31 changes: 31 additions & 0 deletions src/deepsparse/clip/__init__.py
@@ -0,0 +1,31 @@
# Copyright (c) 2021 - present / Neuralmagic, Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# flake8: noqa

from deepsparse.clip.text_pipeline import (
CLIPTextInput,
CLIPTextOutput,
CLIPTextPipeline,
)
from deepsparse.clip.visual_pipeline import (
CLIPVisualInput,
CLIPVisualOutput,
CLIPVisualPipeline,
)
from deepsparse.clip.zeroshot_pipeline import (
CLIPZeroShotInput,
CLIPZeroShotOutput,
CLIPZeroShotPipeline,
)
19 changes: 19 additions & 0 deletions src/deepsparse/clip/constants.py
@@ -0,0 +1,19 @@
# Copyright (c) 2021 - present / Neuralmagic, Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


__all__ = ["CLIP_RGB_MEANS", "CLIP_RGB_STDS"]

CLIP_RGB_MEANS = [0.48145466, 0.4578275, 0.40821073]
CLIP_RGB_STDS = [0.26862954, 0.26130258, 0.27577711]
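
These match the standard CLIP preprocessing statistics. As an illustration of how such per-channel constants are typically applied to an image before it reaches the visual model, here is a minimal sketch; the uint8 HWC input and float32 CHW output layout are assumptions, not taken from this commit.

```python
# Illustrative normalization sketch using the constants above; the uint8 HWC
# input and float32 CHW output layout are assumptions, not from this commit.
import numpy as np

CLIP_RGB_MEANS = [0.48145466, 0.4578275, 0.40821073]
CLIP_RGB_STDS = [0.26862954, 0.26130258, 0.27577711]


def normalize_image(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Scale to [0, 1], apply per-channel mean/std, and return CHW float32."""
    image = image_hwc_uint8.astype(np.float32) / 255.0
    image = (image - np.array(CLIP_RGB_MEANS, dtype=np.float32)) / np.array(
        CLIP_RGB_STDS, dtype=np.float32
    )
    return image.transpose(2, 0, 1)


# Example with a random 224x224 RGB image
normalized = normalize_image(np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8))
print(normalized.shape)  # (3, 224, 224)
```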