Merge pull request #117 from Azure/sept2024
Updating package versions
MarkWme authored Sep 11, 2024
2 parents db0b463 + a6a8422 commit e585f52
Showing 18 changed files with 436 additions and 122 deletions.
4 changes: 2 additions & 2 deletions .env.example
@@ -1,5 +1,5 @@
-OPENAI_API_VERSION = "2023-09-01-preview"
-OPENAI_EMBEDDING_API_VERSION = "2023-05-15"
+OPENAI_API_VERSION = "2024-07-01-preview"
+OPENAI_EMBEDDING_API_VERSION = "2024-07-01-preview"

AZURE_OPENAI_API_KEY = "<YOUR AZURE OPENAI API KEY - If using Azure AD auth, this can be left empty>"
AZURE_OPENAI_ENDPOINT = "<YOUR AZURE OPENAI ENDPOINT>"
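The labs read these settings from the `.env` file at runtime. As a rough illustration of what that loading step does (the labs themselves presumably use a dotenv package; this stdlib-only parser is a hypothetical stand-in):

```python
import os

def load_env(path=".env"):
    """Parse simple KEY = "value" lines and export them as environment variables.

    A stdlib-only stand-in for a dotenv library; ignores blank lines,
    comments, and lines without an equals sign.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"')

# Demonstrate with a throwaway file shaped like the .env.example above.
with open("demo.env", "w", encoding="utf-8") as f:
    f.write('OPENAI_API_VERSION = "2024-07-01-preview"\n')

load_env("demo.env")
print(os.environ["OPENAI_API_VERSION"])  # 2024-07-01-preview
```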
2 changes: 1 addition & 1 deletion .gitignore
@@ -414,4 +414,4 @@ solutions/AzureOpenAIUtil/

# Qdrant
.lock
-labs/03-orchestration/03-Qdrant/qdrantstorage/**
+labs/03-orchestration/03-VectorStore/qdrantstorage/**
@@ -55,7 +55,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.11.9"
+"version": "3.10.14"
},
"orig_nbformat": 4
},
8 changes: 2 additions & 6 deletions labs/02-integrating-ai/01-AzureOpenAIAPI/azureopenaiapi.ipynb
@@ -7,11 +7,7 @@
"source": [
"# 01 - Working with the Azure OpenAI API directly\n",
"\n",
-"In this lab, we will perform a couple of simple calls to the Azure OpenAI API.\n",
-"- The first call will allow us to find out which Model Deployments are available for the Azure OpenAI API.\n",
-"- The second call will send a prompt to the Azure OpenAI API.\n",
-"\n",
-"This will also prove that everything is setup correctly and working for the rest of the labs.\n",
+"In this lab, we will perform a simple call to the Azure OpenAI API. This will prove that everything is setup correctly and working for the rest of the labs.\n",
"\n",
"## Setup\n",
"\n",
@@ -194,7 +190,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.12.5"
+"version": "3.10.14"
},
"orig_nbformat": 4
},
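The lab above describes calling the Azure OpenAI API directly. A sketch of how such a direct REST call is shaped, using only the standard library and building the request without sending it (the endpoint, deployment name, and key below are placeholders, not values from the repository):

```python
import json
import urllib.request

def build_chat_request(endpoint, deployment, api_version, api_key, messages):
    """Build (but do not send) a chat-completions request for the Azure OpenAI REST API.

    The URL pattern is /openai/deployments/<deployment>/chat/completions
    with the API version passed as a query parameter.
    """
    url = (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={"Content-Type": "application/json", "api-key": api_key},
    )

req = build_chat_request(
    "https://example.openai.azure.com",  # placeholder endpoint
    "my-deployment",                     # placeholder deployment name
    "2024-07-01-preview",
    "dummy-key",
    [{"role": "user", "content": "Hello"}],
)
print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would require a real endpoint and key, which is why the sketch stops at construction.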
6 changes: 3 additions & 3 deletions labs/02-integrating-ai/02-OpenAIPackages/openai.ipynb
@@ -34,7 +34,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"Next, we'll start to bring in the values from our `.env` file."
+"Next, we'll bring in the values from our `.env` file."
]
},
{
@@ -57,7 +57,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"We'll create a new `AzureOpenAI` object and pass in the API key and version and the endpoint URL to be used."
+"We'll create a new `AzureOpenAI` object and pass in the API key, API version and the endpoint URL to be used."
]
},
{
@@ -168,7 +168,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.11.9"
+"version": "3.10.14"
},
"orig_nbformat": 4
},
2 changes: 1 addition & 1 deletion labs/02-integrating-ai/03-Langchain/langchain.ipynb
@@ -272,7 +272,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.11.9"
+"version": "3.10.14"
},
"orig_nbformat": 4
},
34 changes: 28 additions & 6 deletions labs/02-integrating-ai/04-SemanticKernel/semantickernel.ipynb
@@ -32,7 +32,7 @@
},
"outputs": [],
"source": [
-"#r \"nuget: dotenv.net, 3.1.2\"\n",
+"#r \"nuget: dotenv.net, 3.2.0\"\n",
"\n",
"using dotenv.net;\n",
"\n",
@@ -76,7 +76,7 @@
},
"outputs": [],
"source": [
-"#r \"nuget: Microsoft.SemanticKernel, 1.10.0\""
+"#r \"nuget: Microsoft.SemanticKernel, 1.19.0\""
]
},
{
@@ -91,7 +91,7 @@
},
{
"cell_type": "code",
-"execution_count": null,
+"execution_count": 3,
"metadata": {
"dotnet_interactive": {
"language": "csharp"
@@ -125,6 +125,12 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
+"dotnet_interactive": {
+"language": "csharp"
+},
+"polyglot_notebook": {
+"kernelName": "csharp"
+},
"vscode": {
"languageId": "polyglot-notebook"
}
@@ -144,7 +150,7 @@
},
{
"cell_type": "code",
-"execution_count": null,
+"execution_count": 5,
"metadata": {
"dotnet_interactive": {
"language": "csharp"
@@ -213,7 +219,7 @@
},
{
"cell_type": "code",
-"execution_count": null,
+"execution_count": 7,
"metadata": {
"dotnet_interactive": {
"language": "csharp"
@@ -286,10 +292,26 @@
}
],
"metadata": {
+"kernelspec": {
+"display_name": ".NET (C#)",
+"language": "C#",
+"name": ".net-csharp"
+},
"language_info": {
"name": "python"
},
-"orig_nbformat": 4
+"orig_nbformat": 4,
+"polyglot_notebook": {
+"kernelInfo": {
+"defaultKernelName": "csharp",
+"items": [
+{
+"aliases": [],
+"name": "csharp"
+}
+]
+}
+}
},
"nbformat": 4,
"nbformat_minor": 2
16 changes: 8 additions & 8 deletions labs/03-orchestration/01-Tokens/tokens.ipynb
@@ -62,9 +62,9 @@
"\n",
"| Encoding name | Azure OpenAI Service models |\n",
"| ------------- | -------------- |\n",
-"| gpt2 (or r50k_base) | Most GPT-3 models |\n",
+"| cl100k_base | gpt-4, gpt-3.5-turbo, text-embedding-ada-002, text-embedding-3-small, text-embedding-3-large |\n",
"| p50k_base | Code models, text-davinci-002, text-davinci-003 |\n",
-"| cl100k_base | text-embedding-ada-002 |\n",
+"| r50k_base (or gpt2) | Most GPT-3 models |\n",
"\n",
"You can use `tiktoken` as follows to tokenize a string and see what the output looks like."
]
@@ -77,7 +77,7 @@
"source": [
"import tiktoken\n",
"\n",
-"encoding = tiktoken.get_encoding(\"p50k_base\")\n",
+"encoding = tiktoken.get_encoding(\"cl100k_base\")\n",
"encoding.encode(\"Hello world, this is fun!\")"
]
},
@@ -141,7 +141,7 @@
"metadata": {},
"outputs": [],
"source": [
-"def get_num_tokens_from_string(string: str, encoding_name: str='p50k_base') -> int:\n",
+"def get_num_tokens_from_string(string: str, encoding_name: str='cl100k_base') -> int:\n",
" \"\"\"Returns the number of tokens in a text by a given encoding.\"\"\"\n",
" encoding = tiktoken.get_encoding(encoding_name)\n",
" return len(encoding.encode(string))\n",
@@ -183,7 +183,7 @@
"content = open(movie_data, \"r\", encoding=\"utf-8\").read()\n",
"\n",
"# Use tiktoken to tokenize the content and get a count of tokens used.\n",
-"encoding = tiktoken.get_encoding(\"p50k_base\")\n",
+"encoding = tiktoken.get_encoding(\"cl100k_base\")\n",
"print (f\"Token count: {len(encoding.encode(content))}\")"
]
},
@@ -199,7 +199,7 @@
},
{
"cell_type": "code",
-"execution_count": null,
+"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
@@ -245,7 +245,7 @@
"source": [
"You can see we've output the first few lines of the prompt and we've printed the overall size (in characters) of the prompt. Now, let's see what happens if we submit that query to the AI.\n",
"\n",
-"**NOTE:** Don't be surprised to see an error message!"
+"**NOTE:** This exercise is intended to fail, but it does depend on the model you're using. Some newer models have a higher token limit than others. If you're using a model with a high token limit, you might want to increase the size of the `movies.csv` file to make this fail, or you could try using a different model."
]
},
{
@@ -322,7 +322,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.11.9"
+"version": "3.10.14"
},
"orig_nbformat": 4
},
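The updated NOTE in this file explains that the oversized prompt may or may not exceed a model's token limit depending on its context window. The check involved can be sketched with a toy counter; whitespace splitting here is a crude stand-in for a real tokenizer such as tiktoken's `cl100k_base`, and the limits used are illustrative, not tied to any specific deployment:

```python
def fits_budget(prompt, max_tokens, count_tokens=lambda s: len(s.split())):
    """Return whether a prompt fits within a model's token budget.

    count_tokens defaults to whitespace splitting, a rough under-estimate
    of what a BPE tokenizer would report for the same text.
    """
    return count_tokens(prompt) <= max_tokens

# An oversized prompt in the spirit of the movies.csv exercise above.
prompt = "Recommend a movie based on the following data: " + "title genre rating " * 3000

print(fits_budget(prompt, 4096))   # a small context window overflows
print(fits_budget(prompt, 16384))  # a larger window absorbs the same prompt
```

This mirrors why the exercise only fails on some models: the prompt is constant, but the budget it is checked against varies by model.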

This file was deleted.

