
Build a chat assistant with Amazon Bedrock

This repository contains the code samples that let participants explore how to use the Retrieval Augmented Generation (RAG) architecture with Amazon Bedrock and Amazon OpenSearch Serverless (AOSS) to quickly build a secure chat assistant that uses the most up-to-date information to converse with users. Participants will also learn how this chat assistant uses dialog-guided information retrieval to respond to users.

Overview

Amazon Bedrock is a fully managed service that offers a choice of high-performing Foundation Models (FMs) from leading AI companies through a single API, along with a broad set of capabilities for building generative AI applications, simplifying development while maintaining privacy and security.

Large Language Models (LLMs) are a type of Foundation Model that take natural language as input, process and understand it, and produce natural language as output. LLMs can also perform tasks like classification, summarization, simplification, and entity recognition.
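
For example, invoking an LLM on Amazon Bedrock is a single API call. The sketch below is a minimal illustration, assuming the AWS SDK for Python (boto3) and access to an Anthropic Claude 3 model in your Region; the Region and model ID are illustrative, not prescribed by this repository:

```python
import json
import boto3

# Illustrative Region; use one where Bedrock and the chosen model are available.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude 3 models on Bedrock use the Anthropic Messages request format.
request_body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{
        "role": "user",
        "content": "Summarize in one sentence: Retrieval Augmented Generation "
                   "retrieves documents relevant to a prompt and adds them to "
                   "the prompt so the model can answer with current information.",
    }],
})

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
    body=request_body,
)
print(json.loads(response["body"].read())["content"][0]["text"])
```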

LLMs are usually trained offline on data available up to a certain point in time, so they have no knowledge of events after that date. Additionally, LLMs are trained on very general domain corpora, making them less effective for domain-specific tasks. Finally, LLMs have a tendency to hallucinate, generating text that is incorrect, nonsensical, or not grounded in reality. A Retrieval Augmented Generation (RAG) mechanism can help mitigate all of these issues. A RAG architecture retrieves data that closely matches the text in the user's prompt from an external datasource and uses it to augment the prompt before sending it to the LLM. This augmentation provides the context the LLM can use to respond to the prompt.
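
In code, the RAG flow reduces to three steps: retrieve, augment, generate. The following is a minimal sketch of that flow against a Knowledge Base, not the repository's exact notebook code; it assumes boto3, a Knowledge Base that has already ingested its data source, and placeholders for the knowledge base ID, Region, and model ID:

```python
import json
import boto3

REGION = "us-east-1"                # assumption: a Region chosen as in the steps below
KB_ID = "<YOUR_KNOWLEDGE_BASE_ID>"  # placeholder

agent_runtime = boto3.client("bedrock-agent-runtime", region_name=REGION)
bedrock_runtime = boto3.client("bedrock-runtime", region_name=REGION)

def answer_with_rag(question: str) -> str:
    # 1. Retrieve: semantic search for passages that closely match the prompt.
    results = agent_runtime.retrieve(
        knowledgeBaseId=KB_ID,
        retrievalQuery={"text": question},
        retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 3}},
    )["retrievalResults"]
    context = "\n\n".join(r["content"]["text"] for r in results)

    # 2. Augment: add the retrieved passages to the prompt as context.
    prompt = (
        f"Use only the following context to answer.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate: send the augmented prompt to the LLM.
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    })
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative
        body=body,
    )
    return json.loads(response["body"].read())["content"][0]["text"]
```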

This repository contains code that walks you through the process of building a chat assistant using a Large Language Model (LLM) hosted on Amazon Bedrock, with Knowledge Bases for Amazon Bedrock used for vectorizing, storing, and retrieving data through semantic search. Amazon OpenSearch Serverless serves as the vector index.
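
Knowledge Bases for Amazon Bedrock can also run the whole retrieve-augment-generate loop in a single managed call, RetrieveAndGenerate. A minimal sketch, with the knowledge base ID as a placeholder and an illustrative model ARN:

```python
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.retrieve_and_generate(
    input={"text": "What does this repository help me build?"},  # example question
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "<YOUR_KNOWLEDGE_BASE_ID>",  # placeholder
            # Illustrative model ARN; any supported text model can be used.
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/"
                        "anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)
print(response["output"]["text"])
```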

To get started

  1. Choose an AWS Account to use and make sure to create all resources in that Account.
  2. Identify an AWS Region where Amazon Bedrock offers the Anthropic Claude 3 and Titan Embeddings G1 - Text models.
  3. In that Region, create a new Amazon S3 bucket or use an existing one of your choice. Make sure that this bucket can be read by AWS CloudFormation.
  4. Create the Lambda layer file named py312_opensearch-py_requests_and_requests-aws4auth.zip using the following procedure, and upload it to the same Amazon S3 bucket as in step 3. (A sketch after this list shows how a Lambda function typically uses these packages together.)
    • On Windows 10 or above:
      1. Make sure Python 3.12 and pip are installed and set in the user's PATH variable.
      2. Download 7-Zip and install it in C:\Program Files\7-Zip\.
      3. Open the Windows command prompt.
      4. Create a new directory and cd into it.
      5. Run lambda_layer_file_create.bat from inside that directory.
      6. This will create the Lambda layer file named py312_opensearch-py_requests_and_requests-aws4auth.zip.
    • On Linux:
      1. Make sure Python 3.12 and pip are installed and set in the user's PATH variable.
      2. Open a terminal.
      3. Create a new directory and cd into it.
      4. Run lambda_layer_file_create.sh from inside that directory.
      5. This will create the Lambda layer file named py312_opensearch-py_requests_and_requests-aws4auth.zip.
  5. Take the provided AWS CloudFormation template standard-rag-cfn.yaml and update the following parameter:
    • DeploymentArtifactsS3BucketName - set this to the name of the Amazon S3 bucket from step 3.
  6. Create an AWS CloudFormation stack with the updated template.
  7. Open the Jupyter notebook named rag-router.ipynb by navigating to the Amazon SageMaker notebook instances console and clicking the Open Jupyter link for the instance named rag-router-instance.
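
The layer built in step 4 bundles opensearch-py, requests, and requests-aws4auth, which is the combination a Lambda function needs to send SigV4-signed requests to an Amazon OpenSearch Serverless collection. The repository's own Lambda code may differ; the following is a minimal sketch of how those packages fit together, with a placeholder collection endpoint and a hypothetical index name:

```python
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

REGION = "us-east-1"  # assumption: the Region chosen in step 2
HOST = "<collection-id>.us-east-1.aoss.amazonaws.com"  # placeholder AOSS endpoint

# Sign requests with the caller's IAM credentials; AOSS uses service name "aoss".
credentials = boto3.Session().get_credentials()
auth = AWS4Auth(
    credentials.access_key,
    credentials.secret_key,
    REGION,
    "aoss",
    session_token=credentials.token,
)

client = OpenSearch(
    hosts=[{"host": HOST, "port": 443}],
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

# Example call: check whether the vector index exists.
print(client.indices.exists(index="rag-vector-index"))  # hypothetical index name
```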

Repository structure

This repository contains the AWS CloudFormation template (standard-rag-cfn.yaml), the Lambda layer build scripts (lambda_layer_file_create.bat and lambda_layer_file_create.sh), and the Jupyter notebook (rag-router.ipynb) referenced in the steps above.

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.