diff --git a/bootcamp/tutorials/integration/build_RAG_with_milvus_and_ollama.ipynb b/bootcamp/tutorials/integration/build_RAG_with_milvus_and_ollama.ipynb
new file mode 100644
index 000000000..9374b76c0
--- /dev/null
+++ b/bootcamp/tutorials/integration/build_RAG_with_milvus_and_ollama.ipynb
@@ -0,0 +1,648 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Build RAG with Milvus and Ollama\n",
+ "\n",
+ "[Ollama](https://ollama.com/) is an open-source platform that simplifies running and customizing large language models (LLMs) locally. It provides a user-friendly, cloud-free experience, enabling effortless model downloads, installation, and interaction without requiring advanced technical skills. With a growing library of pre-trained LLMs—from general-purpose to domain-specific—Ollama makes it easy to manage and customize models for various applications. It ensures data privacy and flexibility, empowering users to fine-tune, optimize, and deploy AI-driven solutions entirely on their machines.\n",
+ "\n",
+ "In this guide, we’ll show you how to leverage Ollama and Milvus to build a RAG (Retrieval-Augmented Generation) pipeline efficiently and securely."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "## Preparation\n",
+ "### Dependencies and Environment"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "vscode": {
+ "languageId": "shellscript"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "! pip install pymilvus ollama"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "> If you are using Google Colab, to enable dependencies just installed, you may need to **restart the runtime** (click on the \"Runtime\" menu at the top of the screen, and select \"Restart session\" from the dropdown menu)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Prepare the data\n",
+ "\n",
+ "We use the FAQ pages from the [Milvus Documentation 2.4.x](https://github.com/milvus-io/milvus-docs/releases/download/v2.4.6-preview/milvus_docs_2.4.x_en.zip) as the private knowledge in our RAG, which is a good data source for a simple RAG pipeline.\n",
+ "\n",
+ "Download the zip file and extract documents to the folder `milvus_docs`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "vscode": {
+ "languageId": "shellscript"
+ },
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:21.129074Z",
+ "start_time": "2024-11-27T02:47:19.934551Z"
+ }
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "--2024-11-26 21:47:19-- https://github.com/milvus-io/milvus-docs/releases/download/v2.4.6-preview/milvus_docs_2.4.x_en.zip\r\n",
+ "Resolving github.com (github.com)... 140.82.112.4\r\n",
+ "Connecting to github.com (github.com)|140.82.112.4|:443... connected.\r\n",
+ "HTTP request sent, awaiting response... 302 Found\r\n",
+ "Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/267273319/c52902a0-e13c-4ca7-92e0-086751098a05?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=releaseassetproduction%2F20241127%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241127T024720Z&X-Amz-Expires=300&X-Amz-Signature=7808b77cbdaa7e122196bcd75a73f29f2540333a350c4830bbdf5f286e876304&X-Amz-SignedHeaders=host&response-content-disposition=attachment%3B%20filename%3Dmilvus_docs_2.4.x_en.zip&response-content-type=application%2Foctet-stream [following]\r\n",
+ "--2024-11-26 21:47:20-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/267273319/c52902a0-e13c-4ca7-92e0-086751098a05?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=releaseassetproduction%2F20241127%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241127T024720Z&X-Amz-Expires=300&X-Amz-Signature=7808b77cbdaa7e122196bcd75a73f29f2540333a350c4830bbdf5f286e876304&X-Amz-SignedHeaders=host&response-content-disposition=attachment%3B%20filename%3Dmilvus_docs_2.4.x_en.zip&response-content-type=application%2Foctet-stream\r\n",
+ "Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.109.133, 185.199.111.133, 185.199.108.133, ...\r\n",
+ "Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.109.133|:443... connected.\r\n",
+ "HTTP request sent, awaiting response... 200 OK\r\n",
+ "Length: 613094 (599K) [application/octet-stream]\r\n",
+ "Saving to: ‘milvus_docs_2.4.x_en.zip’\r\n",
+ "\r\n",
+ "milvus_docs_2.4.x_e 100%[===================>] 598.72K 1.20MB/s in 0.5s \r\n",
+ "\r\n",
+ "2024-11-26 21:47:20 (1.20 MB/s) - ‘milvus_docs_2.4.x_en.zip’ saved [613094/613094]\r\n",
+ "\r\n"
+ ]
+ }
+ ],
+ "source": [
+ "! wget https://github.com/milvus-io/milvus-docs/releases/download/v2.4.6-preview/milvus_docs_2.4.x_en.zip\n",
+ "! unzip -q milvus_docs_2.4.x_en.zip -d milvus_docs"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We load all markdown files from the folder `milvus_docs/en/faq`. For each document, we just simply use \"# \" to separate the content in the file, which can roughly separate the content of each main part of the markdown file."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:25.104740Z",
+ "start_time": "2024-11-27T02:47:25.101395Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "from glob import glob\n",
+ "\n",
+ "text_lines = []\n",
+ "\n",
+ "for file_path in glob(\"milvus_docs/en/faq/*.md\", recursive=True):\n",
+ " with open(file_path, \"r\") as file:\n",
+ " file_text = file.read()\n",
+ "\n",
+ " text_lines += file_text.split(\"# \")"
+ ]
+ },
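+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a quick, optional sanity check (our addition, not required for the pipeline), we can print the number of chunks and preview one of them:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Optional sanity check: inspect the chunking result.\n",
+    "print(f\"Number of chunks: {len(text_lines)}\")\n",
+    "print(text_lines[1][:200])  # preview the first 200 characters of one chunk"
+   ]
+  },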
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Prepare the LLM and Embedding Model\n",
+ "\n",
+ "Ollama supports multiple models for both LLM-based tasks and embedding generation, making it easy to develop retrieval-augmented generation (RAG) applications. For this setup:\n",
+ "\n",
+ "- We will use **Llama 3.2 (3B)** as our LLM for text generation tasks.\n",
+ "- For embedding generation, we will use **mxbai-embed-large**, a 334M parameter model optimized for semantic similarity.\n",
+ "\n",
+ "Before starting, ensure both models are pulled locally:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+      "pulling manifest \r\n",
+ "pulling 819c2adf5ce6... 100% ▕████████████████▏ 669 MB \r\n",
+ "pulling c71d239df917... 100% ▕████████████████▏ 11 KB \r\n",
+ "pulling b837481ff855... 100% ▕████████████████▏ 16 B \r\n",
+ "pulling 38badd946f91... 100% ▕████████████████▏ 408 B \r\n",
+ "verifying sha256 digest \r\n",
+ "writing manifest \r\n",
+ "success \u001b[?25h\r\n"
+ ]
+ }
+ ],
+ "source": [
+ "! ollama pull mxbai-embed-large"
+ ],
+ "metadata": {
+ "collapsed": false,
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:27.543374Z",
+ "start_time": "2024-11-27T02:47:26.773196Z"
+ }
+ },
+ "execution_count": 3
+ },
+ {
+ "cell_type": "code",
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+      "pulling manifest \r\n",
+ "pulling dde5aa3fc5ff... 100% ▕████████████████▏ 2.0 GB \r\n",
+ "pulling 966de95ca8a6... 100% ▕████████████████▏ 1.4 KB \r\n",
+ "pulling fcc5a6bec9da... 100% ▕████████████████▏ 7.7 KB \r\n",
+ "pulling a70ff7e570d9... 100% ▕████████████████▏ 6.0 KB \r\n",
+ "pulling 56bb8bd477a5... 100% ▕████████████████▏ 96 B \r\n",
+ "pulling 34bb5ab01051... 100% ▕████████████████▏ 561 B \r\n",
+ "verifying sha256 digest \r\n",
+ "writing manifest \r\n",
+ "success \u001b[?25h\r\n"
+ ]
+ }
+ ],
+ "source": [
+ "! ollama pull llama3.2"
+ ],
+ "metadata": {
+ "collapsed": false,
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:28.821964Z",
+ "start_time": "2024-11-27T02:47:27.994522Z"
+ }
+ },
+ "execution_count": 4
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "With these models ready, we can proceed to implement LLM-driven generation and embedding-based retrieval workflows.\n"
+ ],
+ "metadata": {
+ "collapsed": false
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:37.869891Z",
+ "start_time": "2024-11-27T02:47:37.637416Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "import ollama\n",
+ "\n",
+ "\n",
+ "def emb_text(text):\n",
+ " response = ollama.embeddings(model=\"mxbai-embed-large\", prompt=text)\n",
+ " return response[\"embedding\"]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Generate a test embedding and print its dimension and first few elements."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:40.957739Z",
+ "start_time": "2024-11-27T02:47:40.093056Z"
+ }
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "1024\n",
+ "[0.23276396095752716, 0.4257211685180664, 0.19724100828170776, 0.46120673418045044, -0.46039995551109314, -0.1413791924715042, -0.18261606991291046, -0.07602324336767197, 0.39991313219070435, 0.8337644338607788]\n"
+ ]
+ }
+ ],
+ "source": [
+ "test_embedding = emb_text(\"This is a test\")\n",
+ "embedding_dim = len(test_embedding)\n",
+ "print(embedding_dim)\n",
+ "print(test_embedding[:10])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Load data into Milvus"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Create the Collection"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:43.669801Z",
+ "start_time": "2024-11-27T02:47:42.118638Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "from pymilvus import MilvusClient\n",
+ "\n",
+ "milvus_client = MilvusClient(uri=\"./milvus_demo.db\")\n",
+ "\n",
+ "collection_name = \"my_rag_collection\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
+ "source": [
+ "> As for the argument of `MilvusClient`:\n",
+ "> - Setting the `uri` as a local file, e.g.`./milvus.db`, is the most convenient method, as it automatically utilizes [Milvus Lite](https://milvus.io/docs/milvus_lite.md) to store all data in this file.\n",
+ "> - If you have large scale of data, you can set up a more performant Milvus server on [docker or kubernetes](https://milvus.io/docs/quickstart.md). In this setup, please use the server uri, e.g.`http://localhost:19530`, as your `uri`.\n",
+ "> - If you want to use [Zilliz Cloud](https://zilliz.com/cloud), the fully managed cloud service for Milvus, adjust the `uri` and `token`, which correspond to the [Public Endpoint and Api key](https://docs.zilliz.com/docs/on-zilliz-cloud-console#free-cluster-details) in Zilliz Cloud."
+ ]
+ },
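+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For reference, here is a commented sketch of the alternative connection setups described above. The endpoint and token values are placeholders for your own deployment:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Sketch only: these lines are commented out so they don't override the\n",
+    "# Milvus Lite client used in this tutorial. Replace the placeholder values\n",
+    "# with your own endpoint and credentials.\n",
+    "\n",
+    "# Self-hosted Milvus server on Docker or Kubernetes:\n",
+    "# milvus_client = MilvusClient(uri=\"http://localhost:19530\")\n",
+    "\n",
+    "# Zilliz Cloud (use your cluster's Public Endpoint and API key):\n",
+    "# milvus_client = MilvusClient(\n",
+    "#     uri=\"https://<your-public-endpoint>\",\n",
+    "#     token=\"<your-api-key>\",\n",
+    "# )"
+   ]
+  },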
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Check if the collection already exists and drop it if it does."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:45.796899Z",
+ "start_time": "2024-11-27T02:47:45.787086Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "if milvus_client.has_collection(collection_name):\n",
+ " milvus_client.drop_collection(collection_name)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Create a new collection with specified parameters. \n",
+ "\n",
+ "If we don't specify any field information, Milvus will automatically create a default `id` field for primary key, and a `vector` field to store the vector data. A reserved JSON field is used to store non-schema-defined fields and their values."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:47.144411Z",
+ "start_time": "2024-11-27T02:47:46.620312Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "milvus_client.create_collection(\n",
+ " collection_name=collection_name,\n",
+ " dimension=embedding_dim,\n",
+ " metric_type=\"IP\", # Inner product distance\n",
+ " consistency_level=\"Strong\", # Strong consistency level\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Insert data\n",
+ "Iterate through the text lines, create embeddings, and then insert the data into Milvus.\n",
+ "\n",
+ "Here is a new field `text`, which is a non-defined field in the collection schema. It will be automatically added to the reserved JSON dynamic field, which can be treated as a normal field at a high level."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:51.481223Z",
+ "start_time": "2024-11-27T02:47:48.221138Z"
+ }
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Creating embeddings: 100%|██████████| 72/72 [00:03<00:00, 22.56it/s]\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": "{'insert_count': 72, 'ids': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71], 'cost': 0}"
+ },
+ "execution_count": 10,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from tqdm import tqdm\n",
+ "\n",
+ "data = []\n",
+ "\n",
+ "for i, line in enumerate(tqdm(text_lines, desc=\"Creating embeddings\")):\n",
+ " data.append({\"id\": i, \"vector\": emb_text(line), \"text\": line})\n",
+ "\n",
+ "milvus_client.insert(collection_name=collection_name, data=data)"
+ ]
+ },
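+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a quick check (our addition), we can confirm that the dynamic `text` field behaves like a regular field by querying it with a scalar filter:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Optional check: the dynamic `text` field can be filtered on and returned\n",
+    "# just like a schema-defined field.\n",
+    "res = milvus_client.query(\n",
+    "    collection_name=collection_name,\n",
+    "    filter=\"id < 3\",  # scalar filter on the primary key\n",
+    "    output_fields=[\"text\"],\n",
+    ")\n",
+    "for row in res:\n",
+    "    print(row[\"id\"], row[\"text\"][:80])"
+   ]
+  },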
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Build RAG\n",
+ "\n",
+ "### Retrieve data for a query\n",
+ "\n",
+ "Let's specify a frequent question about Milvus."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:51.983084Z",
+ "start_time": "2024-11-27T02:47:51.977698Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "question = \"How is data stored in milvus?\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Search for the question in the collection and retrieve the semantic top-3 matches."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:53.074097Z",
+ "start_time": "2024-11-27T02:47:52.987898Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "search_res = milvus_client.search(\n",
+ " collection_name=collection_name,\n",
+ " data=[\n",
+ " emb_text(question)\n",
+ " ], # Use the `emb_text` function to convert the question to an embedding vector\n",
+ " limit=3, # Return top 3 results\n",
+ " search_params={\"metric_type\": \"IP\", \"params\": {}}, # Inner product distance\n",
+ " output_fields=[\"text\"], # Return the text field\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's take a look at the search results of the query\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:54.530671Z",
+ "start_time": "2024-11-27T02:47:54.525077Z"
+ }
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[\n",
+ " [\n",
+ " \" Where does Milvus store data?\\n\\nMilvus deals with two types of data, inserted data and metadata. \\n\\nInserted data, including vector data, scalar data, and collection-specific schema, are stored in persistent storage as incremental log. Milvus supports multiple object storage backends, including [MinIO](https://min.io/), [AWS S3](https://aws.amazon.com/s3/?nc1=h_ls), [Google Cloud Storage](https://cloud.google.com/storage?hl=en#object-storage-for-companies-of-all-sizes) (GCS), [Azure Blob Storage](https://azure.microsoft.com/en-us/products/storage/blobs), [Alibaba Cloud OSS](https://www.alibabacloud.com/product/object-storage-service), and [Tencent Cloud Object Storage](https://www.tencentcloud.com/products/cos) (COS).\\n\\nMetadata are generated within Milvus. Each Milvus module has its own metadata that are stored in etcd.\\n\\n###\",\n",
+ " 231.9398193359375\n",
+ " ],\n",
+ " [\n",
+ " \"How does Milvus flush data?\\n\\nMilvus returns success when inserted data are loaded to the message queue. However, the data are not yet flushed to the disk. Then Milvus' data node writes the data in the message queue to persistent storage as incremental logs. If `flush()` is called, the data node is forced to write all data in the message queue to persistent storage immediately.\\n\\n###\",\n",
+ " 226.48316955566406\n",
+ " ],\n",
+ " [\n",
+ " \"What is the maximum dataset size Milvus can handle?\\n\\n \\nTheoretically, the maximum dataset size Milvus can handle is determined by the hardware it is run on, specifically system memory and storage:\\n\\n- Milvus loads all specified collections and partitions into memory before running queries. Therefore, memory size determines the maximum amount of data Milvus can query.\\n- When new entities and and collection-related schema (currently only MinIO is supported for data persistence) are added to Milvus, system storage determines the maximum allowable size of inserted data.\\n\\n###\",\n",
+ " 210.60745239257812\n",
+ " ]\n",
+ "]\n"
+ ]
+ }
+ ],
+ "source": [
+ "import json\n",
+ "\n",
+ "retrieved_lines_with_distances = [\n",
+ " (res[\"entity\"][\"text\"], res[\"distance\"]) for res in search_res[0]\n",
+ "]\n",
+ "print(json.dumps(retrieved_lines_with_distances, indent=4))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Use LLM to get a RAG response\n",
+ "\n",
+ "Convert the retrieved documents into a string format."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:56.619344Z",
+ "start_time": "2024-11-27T02:47:56.614058Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "context = \"\\n\".join(\n",
+ " [line_with_distance[0] for line_with_distance in retrieved_lines_with_distances]\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Define system and user prompts for the Lanage Model. This prompt is assembled with the retrieved documents from Milvus."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:47:57.596480Z",
+ "start_time": "2024-11-27T02:47:57.592721Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "SYSTEM_PROMPT = \"\"\"\n",
+ "Human: You are an AI assistant. You are able to find answers to the questions from the contextual passage snippets provided.\n",
+ "\"\"\"\n",
+ "USER_PROMPT = f\"\"\"\n",
+ "Use the following pieces of information enclosed in tags to provide an answer to the question enclosed in tags.\n",
+ "\n",
+ "{context}\n",
+ "\n",
+ "\n",
+ "{question}\n",
+ "\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Use the `llama3.2` model provided by Ollama to generate a response based on the prompts.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ },
+ "ExecuteTime": {
+ "end_time": "2024-11-27T02:48:03.947222Z",
+ "start_time": "2024-11-27T02:48:00.029787Z"
+ }
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "According to the provided context, data in Milvus is stored in two types:\n",
+ "\n",
+ "1. **Inserted data**: Storing data in persistent storage as incremental log. It supports multiple object storage backends such as MinIO, AWS S3, Google Cloud Storage (GCS), Azure Blob Storage, Alibaba Cloud OSS, and Tencent Cloud Object Storage.\n",
+ "\n",
+ "2. **Metadata**: Generated within Milvus and stored in etcd.\n"
+ ]
+ }
+ ],
+ "source": [
+ "from ollama import chat\n",
+ "from ollama import ChatResponse\n",
+ "\n",
+ "response: ChatResponse = chat(\n",
+ " model=\"llama3.2\",\n",
+ " messages=[\n",
+ " {\"role\": \"system\", \"content\": SYSTEM_PROMPT},\n",
+ " {\"role\": \"user\", \"content\": USER_PROMPT},\n",
+ " ],\n",
+ ")\n",
+ "print(response[\"message\"][\"content\"])"
+ ]
+ },
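+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To wrap up, here is a minimal sketch that bundles the retrieval and generation steps above into a single helper function. This is our own consolidation of the tutorial's code; `ask` is a hypothetical helper name, not a library API:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# A minimal end-to-end helper bundling the retrieval and generation steps\n",
+    "# above. `ask` is our own name for this sketch, not a library API.\n",
+    "def ask(question: str, top_k: int = 3) -> str:\n",
+    "    # Retrieve the top-k most similar chunks from Milvus.\n",
+    "    hits = milvus_client.search(\n",
+    "        collection_name=collection_name,\n",
+    "        data=[emb_text(question)],\n",
+    "        limit=top_k,\n",
+    "        search_params={\"metric_type\": \"IP\", \"params\": {}},\n",
+    "        output_fields=[\"text\"],\n",
+    "    )\n",
+    "    context = \"\\n\".join(hit[\"entity\"][\"text\"] for hit in hits[0])\n",
+    "    # Assemble the same tagged prompt used above.\n",
+    "    user_prompt = (\n",
+    "        \"Use the following pieces of information enclosed in <context> tags \"\n",
+    "        \"to provide an answer to the question enclosed in <question> tags.\\n\"\n",
+    "        f\"<context>\\n{context}\\n</context>\\n\"\n",
+    "        f\"<question>\\n{question}\\n</question>\"\n",
+    "    )\n",
+    "    response = chat(\n",
+    "        model=\"llama3.2\",\n",
+    "        messages=[\n",
+    "            {\"role\": \"system\", \"content\": SYSTEM_PROMPT},\n",
+    "            {\"role\": \"user\", \"content\": user_prompt},\n",
+    "        ],\n",
+    "    )\n",
+    "    return response[\"message\"][\"content\"]\n",
+    "\n",
+    "\n",
+    "print(ask(\"What is the maximum dataset size Milvus can handle?\"))"
+   ]
+  },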
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Great! We have successfully built a RAG pipeline with Milvus and Ollama."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}