ray-project/ray provides a simple, universal API for building distributed applications, and also offers scalable and programmable serving with Ray Serve. I am curious how BentoML compares to it, but could not find the answer at https://docs.bentoml.org/en/latest/faq.html. Thanks!
Replies: 1 comment
Hi @tangyong, Ray Serve is probably the more direct comparison to BentoML; it is just one component within the larger Ray project. Here are the three main differences between Ray Serve and BentoML:
Ray Serve only works within a Ray cluster, whereas BentoML allows deploying to many different platforms, including Kubernetes, OpenShift, AWS SageMaker, AWS Lambda, Azure ML, GCP, and Heroku, as well as running batch inference jobs on Apache Spark, Apache Airflow, etc.
Ray Serve only supports serving web HTTP traffic, whereas BentoML supports online API serving via REST/HTTP and gRPC (on the roadmap), plus offline batch serving, programmatic access to your model (Python API, PyPI), and deploying the model as a distributed batch or streaming job on Spark (a short sketch of the programmatic Python API follows after this list).
BentoML…
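To make the "programmatic access" point concrete, here is a minimal sketch using the BentoML 0.13-era Python API that was current around the time of this thread. The `IrisClassifier` service, the toy scikit-learn model, and the variable names are illustrative assumptions, not something from the original reply:

```python
import bentoml
from bentoml.adapters import DataframeInput
from bentoml.frameworks.sklearn import SklearnModelArtifact


# Illustrative BentoService definition (any supported framework artifact works similarly).
@bentoml.env(infer_pip_packages=True)
@bentoml.artifacts([SklearnModelArtifact("model")])
class IrisClassifier(bentoml.BentoService):
    @bentoml.api(input=DataframeInput(), batch=True)
    def predict(self, df):
        # Delegate to the packed scikit-learn model.
        return self.artifacts.model.predict(df)


if __name__ == "__main__":
    from sklearn import datasets, svm

    # Train a toy model and pack it into the service bundle.
    iris = datasets.load_iris()
    clf = svm.SVC(gamma="scale").fit(iris.data, iris.target)

    svc = IrisClassifier()
    svc.pack("model", clf)
    saved_path = svc.save()  # writes a versioned, self-contained bundle locally

    # Programmatic access: load the saved bundle back into Python and call it
    # directly, without going through an HTTP server at all.
    loaded_svc = bentoml.load(saved_path)
    print(loaded_svc.predict(iris.data[:5]))
```

The same saved bundle can then be served over REST with the BentoML CLI (e.g. `bentoml serve IrisClassifier:latest`) or pushed to the deployment targets listed above.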