FastAPI Model Serving Example

This repository provides an example of a simple FastAPI wrapper around a Scikit-learn model that is containerized and served with Docker.

Getting Started

Below is a quick overview of the repository's contents:

  • app/: Directory for our FastAPI application
  • model/: Contains the training script and pickle file for our trained machine learning model
  • Dockerfile: File we will use to build a Docker container image to deploy our FastAPI application
  • requirements.txt: Python packages required to run the FastAPI app

To start using the application in this repository, first make sure you have the following prerequisites installed:

  • Python>=3.7
  • Docker

Environment Setup

This section provides instructions for setting up your environment and installing dependencies you will need to execute the code in this repository.

Start by cloning this project into your directory and changing the directory:

git clone https://github.com/modzy/fastapi-app-tech-talk.git
cd fastapi-app-tech-talk

Next, in your Python environment (must be v3.7 or greater), create and activate a virtual environment with your preferred virtual environment tool (conda, pyenv, venv, etc.). These instructions use Python's built-in venv module.

python -m venv venv

Activate the environment.

For Linux or macOS:

source venv/bin/activate

For Windows:

.\venv\Scripts\activate

Finally, use pip to install the Python packages required to run the API:

pip install -r requirements.txt
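For reference, a requirements file for an app like this typically pins at least the web framework, the ASGI server, and the modeling library. The exact packages and versions in this repository's requirements.txt may differ; the following is only an illustration:

```
fastapi
uvicorn
scikit-learn
```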

You are all set! Continue following along to test out the API and containerize it with Docker.

Run FastAPI app

Within the app/ directory, you will find a main.py file we use to define our FastAPI application. This simple API is an inference wrapper around the Scikit-learn model to predict Iris species.

To run the app, simply run this command in your terminal:

uvicorn app.main:app --reload

This will spin up your FastAPI application on http://127.0.0.1:8000. Navigate to this URL, where you should see a message that looks like the following:

{"detail":"Not Found"}

This is expected, since the app does not define a root endpoint, which in this case is not needed.

Navigate to http://127.0.0.1:8000/docs to see the automatically generated Swagger API docs, which allow you to interact directly with the application.
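You can also call the API from code instead of the Swagger UI, using nothing but the standard library. The /predict endpoint name and JSON field names below are assumptions about the app, not taken from the repository:

```python
import json
import urllib.request


def build_predict_request(features: dict,
                          url: str = "http://127.0.0.1:8000/predict") -> urllib.request.Request:
    """Build a POST request carrying the feature payload as JSON."""
    body = json.dumps(features).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# With the server running, send the request like this:
# response = urllib.request.urlopen(build_predict_request(
#     {"sepal_length": 5.1, "sepal_width": 3.5,
#      "petal_length": 1.4, "petal_width": 0.2}))
# print(response.read())
```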

Containerize FastAPI app

Now that we have tested our FastAPI app, it is time to package it in a container, which allows us to deploy the API in a scalable manner and provide access to our application.

To do so, simply build the Docker container image:

docker build -t fastapi-ml-model .
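A Dockerfile for this kind of app might look roughly like the sketch below; the repository's actual Dockerfile may differ. Note that the run step in the next section maps host port 8000 to container port 80, so this sketch serves on port 80:

```dockerfile
# Hypothetical sketch of a Dockerfile for this app -- not the repository's exact file
FROM python:3.9-slim

WORKDIR /code

# Install dependencies first so Docker can cache this layer across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and the trained model
COPY app/ app/
COPY model/ model/

# Serve on port 80 inside the container (docker run maps host 8000 -> container 80)
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
```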

Next, spin up your container, mapping host port 8000 to the container's port 80 so you can again interact with the running app through the interactive Swagger UI.

docker run --rm -it -p 8000:80 fastapi-ml-model

Just as before when you ran the uvicorn command, you should see your application spin up inside the Docker container. Again, navigate to http://127.0.0.1:8000/docs and see how you can make interactive API calls to a running container!
