This quickstart demonstrates how to build a text summarization application with a Transformer model from the Hugging Face Model Hub.
Python 3.8+ and pip installed. See the Python downloads page to learn more.
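You can confirm that the interpreter on your PATH meets the version requirement with a quick check:

```python
# Verify the running interpreter satisfies the Python 3.8+ requirement.
import sys

if sys.version_info < (3, 8):
    raise SystemExit(f"Python 3.8+ is required, found {sys.version.split()[0]}")
print("Python version OK:", sys.version.split()[0])
```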
Perform the following steps to run this project and deploy it to BentoCloud.
- Clone the repository:

    git clone https://github.com/bentoml/quickstart.git
    cd quickstart
- Install the required dependencies:

    pip install -r requirements.txt
- Serve your model as an HTTP server. This starts a local server at http://localhost:3000, making your model accessible as a web service.

    bentoml serve .
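Once the server is running, you can call it over HTTP. Below is a minimal client sketch using only the Python standard library; the /summarize endpoint path and the {"text": ...} payload shape are assumptions based on a typical summarization Service, so adjust them to match your service.py:

```python
import json
import urllib.request


def build_request(
    text: str, url: str = "http://localhost:3000/summarize"
) -> urllib.request.Request:
    # The server expects a JSON body; the "text" field name is an assumption.
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )


def summarize(text: str) -> str:
    # POST the text to the local server and return the raw response body.
    with urllib.request.urlopen(build_request(text)) as resp:
        return resp.read().decode("utf-8")
```

With the server from the previous step running, calling summarize() with a long passage of text returns the generated summary.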
- Once your Service is ready, deploy it to BentoCloud. Make sure you have logged in to BentoCloud, then run the following command:

    bentoml deploy .
Note: Alternatively, you can manually build a Bento, containerize it with Docker, and deploy it in any Docker-compatible environment.
For more information, see Quickstart in the BentoML documentation.