---
layout: default
title: Lab 3. LLMOps for SLM with Azure AI Studio
nav_order: 6
has_children: true
---
This end-to-end (E2E) example is for users who have just adopted Azure OpenAI and want to build an LLM evaluation pipeline with Prompt flow for quality assurance from scratch. It introduces the end-to-end processes of experimentation, model quality evaluation, deployment, and performance monitoring with Prompt flow and other tools after fine-tuning LLMs.
In this lab, you will learn how to set up, test, deploy, evaluate, and monitor the models you fine-tuned in the previous labs for your current use cases. By leveraging Azure AI Studio and Prompt flow, you will establish an LLMOps pipeline for deploying and utilizing custom AI models. This E2E example is divided into scenarios based on your current situation:
- Scenario 1: Set up Azure AI Studio for LLMOps
- Scenario 2: Basic LLMOps for your first gen AI app with Prompt flow
- Scenario 3: Evaluate your models using Prompt flow to keep optimizing
- Scenario 4: Content safety with Azure AI Studio before production
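At its core, the quality-evaluation step in Scenario 3 scores each generated answer in a test dataset against a reference answer. As a minimal, self-contained sketch of that idea (the metric, function name, and sample rows below are illustrative, not taken from the labs; in practice Prompt flow evaluation flows provide richer built-in metrics), a token-overlap F1 scorer might look like:

```python
# Illustrative stand-in for an evaluation-flow metric: token-level F1
# between a model's answer and a reference answer. All names and sample
# data here are hypothetical, for explanation only.
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, a common quality metric for generated answers."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    # Count tokens shared between prediction and reference (with multiplicity).
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Score a small batch of (prediction, reference) pairs, the way an
# evaluation flow scores each row of a test dataset.
rows = [
    ("Azure AI Studio hosts the flow", "Azure AI Studio hosts the flow"),
    ("the model was fine-tuned", "a small language model was fine-tuned"),
]
scores = [token_f1(p, r) for p, r in rows]
print(scores)  # → [1.0, 0.6]
```

Per-row scores like these are then aggregated across the dataset to compare model or prompt variants before deployment.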
TBD
LLMOps Prompt flow template github