
Pipeline refactor to use BasePipeline #1081

Merged: 16 commits from pipeline_refactor into main on Jul 18, 2023
Conversation

dsikka (Contributor) commented on Jun 20, 2023:

Summary

  • This PR refactors pipeline.py by introducing a new BasePipeline. A high-level summary of the refactor is shown in the diagram below (a simplified sketch of the resulting class split follows it); the testing summary is described further down. Please let me know if anything additional or more aggressive should be tested.
    (Diagram: high-level overview of the BasePipeline refactor.)
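
For readers viewing this without the diagram, the split can be pictured roughly as in the sketch below. Only the names that also appear in the testing code in this PR (register, create, alias) are taken from the real API; everything else is a simplified, hypothetical illustration of a task registry plus factory, not the actual implementation in pipeline.py.

# Hypothetical sketch only, not the deepsparse implementation:
# BasePipeline owns task registration, aliasing, and schema plumbing (no engine);
# Pipeline extends it with model/engine-specific behavior.
class SketchBasePipeline:
    _registry = {}  # maps task name -> registered pipeline class

    def __init__(self, alias=None, **kwargs):
        self.alias = alias

    @classmethod
    def register(cls, task):
        # decorator that records a pipeline class under a task name
        def decorator(pipeline_class):
            cls._registry[task] = pipeline_class
            return pipeline_class
        return decorator

    @classmethod
    def create(cls, task, **kwargs):
        # factory: look up the task and construct the registered class
        return cls._registry[task](**kwargs)

class SketchPipeline(SketchBasePipeline):
    def __init__(self, model_path, batch_size=1, **kwargs):
        super().__init__(**kwargs)
        # engine- and model-specific setup would live here, not in the base class
        self.model_path = model_path
        self.batch_size = batch_size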

Tests

  • New test test_base_pipeline added to test_pipeline.py
  • Ran the tests under tests/deepsparse/pipelines locally: test_pipeline.py, test_bucketing.py, test_custom_pipeline.py, test_computer_vision_pipelines.py, test_transformers.py

Local testing used the following code:

New BasePipeline

from deepsparse import Pipeline, BasePipeline

# Register a new task that subclasses BasePipeline directly (no engine needed)
@BasePipeline.register(
    task="clip"
)
class CLIPZeroShot(BasePipeline):
    def __init__(self, something_clip_specific, **kwargs):
        self.something_clip_specific = something_clip_specific
        super().__init__(**kwargs)

    def __call__(self, *args, **kwargs):
        print("Doing something")

    def input_schema(self):
        print("doing something with the input schema")

    def output_schema(self):
        print("doing something with the output schema")

# Create the registered pipeline through BasePipeline.create and confirm
# that the alias and task-specific kwargs are propagated
kwargs = {"alias": "clip_alias", "something_clip_specific": "something_clip_spec"}
zero_shot_pipeline = BasePipeline.create(
    task="clip",
    **kwargs
)
print(zero_shot_pipeline.alias, zero_shot_pipeline.something_clip_specific)

Existing Pipelines


from deepsparse import Pipeline, BasePipeline

## YOLO
model_stub = "zoo:cv/detection/yolov5-l/pytorch/ultralytics/coco/pruned-aggressive_98"
images = ["basilica.jpg"]

yolo_pipeline = Pipeline.create(
    task="yolo",
    model_path=model_stub,
)
pipeline_outputs = yolo_pipeline(images=images, iou_thres=0.6, conf_thres=0.001)

## YOLOv8

model_path = "yolov8n.onnx"  # or "yolov8n_quant.onnx"
images = ["basilica.jpg"]
yolo_pipeline = Pipeline.create(
    task="yolov8",
    model_path=model_path,
)
pipeline_outputs = yolo_pipeline(images=images)

## YOLACT
model_stub = "zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned82_quant-none"
images = ["thailand.jpg"]

yolact_pipeline = Pipeline.create(
    task="yolact",
    model_path=model_stub,
    class_names="coco",
)

predictions = yolact_pipeline(images=images, confidence_threshold=0.2, nms_threshold=0.5)
# predictions has attributes `boxes`, `classes`, `masks` and `scores`
print(predictions.classes[0])


## Transformers
## QA (question answering)
qa_pipeline = Pipeline.create(task="question-answering")
inference = qa_pipeline(question="What's my name?", context="My name is Dipika")
print(inference)

## SA (sentiment analysis)
sa_pipeline = Pipeline.create(task="sentiment-analysis")
inference = sa_pipeline("I love pizza")
print(inference)

## TC (text classification)
tc_pipeline = Pipeline.create(
    task="text-classification",
    model_path="zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/qqp/pruned80_quant-none-vnni",
)
inference = tc_pipeline(
    sequences=[
        [
            "Which is the best gaming laptop under 40k?",
            "Which is the best gaming laptop under 40,000 rs?",
        ]
    ]
)
print(inference)

## Token Classification 
tc_pipeline = Pipeline.create(task="token-classification")
inference = tc_pipeline("Drive from California to Texas!")
print(inference)


## OpenPifPaf - not yet tested due to installation errors
## pipeline = Pipeline.create(task="open_pif_paf")

## Image Classification
cv_pipeline = Pipeline.create(
    task="image_classification",
    model_path="zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95-none",
)

input_image = ["thailand.jpg", "basilica.jpg"]
print(cv_pipeline(images=input_image))

Server Testing:

Sample Config Used:

num_cores: 2
num_workers: 2
batch_size: 2
endpoints:
  - task: question_answering
    route: /unpruned/predict
    model: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned80_quant-none-vnni
    name: question_answering_pipeline_1
  - task: sentiment_analysis
    route: /sa/predict
    name: sa_pipeline
    model: zoo:nlp/sentiment_analysis/distilbert-none/pytorch/huggingface/sst2/pruned90-none
  • Served the two models above; responses were correct and both endpoints worked as expected (a sample client request against these routes is sketched below).
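
As a rough illustration of how those two endpoints can be exercised (this is not part of the PR's test code): with the server running against the config above, requests along the lines below can be sent. The port 5543 is assumed to be the server default, and the request field names mirror the question-answering and sentiment-analysis pipeline inputs used in the local testing above, so treat this as a sketch under those assumptions.

import requests

BASE_URL = "http://localhost:5543"  # assumed default deepsparse.server port

# Question-answering endpoint defined in the config above
qa_response = requests.post(
    f"{BASE_URL}/unpruned/predict",
    json={"question": "What's my name?", "context": "My name is Dipika"},
)
print(qa_response.json())

# Sentiment-analysis endpoint defined in the config above
sa_response = requests.post(
    f"{BASE_URL}/sa/predict",
    json={"sequences": "I love pizza"},
)
print(sa_response.json())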

dsikka force-pushed the pipeline_refactor branch 2 times, most recently from 24947a1 to fe60a16, on June 21, 2023 at 18:05
dsikka marked this pull request as ready for review on June 21, 2023 at 18:30
bfineran (Contributor) left a comment:
Very clean! This looks good to me overall. For the server tests, I would like to see if we can try setting more task-specific kwargs (e.g., sequence_length) to make sure that args are properly propagated; for the engine-specific kwargs (e.g., batch_size, num_cores), I wonder if a base pipeline alone (non-engine) will choke, since these are top-level server args.

Similarly, it might be a bit tricky overall, but we want to make sure we safely migrate any generic usage of pipeline attributes (i.e., if Pipeline.X is called in our repo for an arbitrary pipeline, we want to either add a check or make X generic). I believe most of this should be confined to the server, though.

(Six review threads on src/deepsparse/pipeline.py, all outdated and resolved.)
dbogunowicz (Contributor) previously approved these changes on Jun 27, 2023 and left a comment:
Looking good; note the bucketing tests are failing in GHA.

KSGulin (Contributor) previously approved these changes on Jun 27, 2023 and left a comment:
Looks good, pending the failing tests.

dsikka (Contributor, Author) commented on Jun 28, 2023:


@bfineran

For point 1 (task-specific kwargs):

Updated to test the server with more pipeline-specific arguments and verified that the results were correct and the pipelines were initialized as expected (a rough equivalent of this kwargs propagation, written as a direct Pipeline.create call, is sketched after the config):

num_cores: 2
num_workers: 2
endpoints:
  - task: question_answering
    route: /unpruned/predict
    model: zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned80_quant-none-vnni
    name: question_answering_pipeline_1
    kwargs: {"max_question_length": 50, "max_answer_length": 10}
  - task: sentiment_analysis
    route: /sa/predict
    name: sa_pipeline
    model: zoo:nlp/sentiment_analysis/distilbert-none/pytorch/huggingface/sst2/pruned90-none
    kwargs: {"return_all_scores": True}

For point 2 (generic usage of pipeline attributes):
For Pipeline.X functions, I only see create, register, and from_config in the codebase, which all seem to be working as expected. Any others I should be looking at/testing?

bfineran (Contributor) replied:

@dsikka for the Pipeline class directly this seems about right. I'm surprised the server doesn't access properties of instantiated pipelines directly, such as the input schema. I do think we should be good overall, though.

dsikka requested a review from bfineran on June 30, 2023 at 15:16
bfineran (Contributor) previously approved these changes on Jul 3, 2023 and commented:

Approved. The test failures look unrelated. Waiting to land this after the final major 1.6 features land.

dsikka (Contributor, Author) commented on Jul 13, 2023:

Rebased off of main since the last approvals/PR review. This rebase picks up the engine initialization changes, as well as the joining/splitting of engine inputs.

Retested the pipeline tests and server examples (see PR description) and everything works as expected.

(Review threads on src/deepsparse/server/cli.py, src/deepsparse/pipeline.py, and tests/deepsparse/pipelines/test_dynamic_import.py, all outdated and resolved.)
bfineran previously approved these changes on Jul 17, 2023
dsikka merged commit dc788db into main on Jul 18, 2023
7 checks passed
dsikka deleted the pipeline_refactor branch on July 18, 2023 at 11:16