Hello, I am working with local data (including reference and response columns), using Azure OpenAI, and running ragas to obtain accuracy metrics, but the evaluation reports an error:
Evaluating: 0%| | 1/301 [00:04<21:46, 4.36s/it]Exception raised in Job[7]: APIConnectionError(Connection error.)
I have checked the ragas documentation and the issues here but could not find an answer to my question.
Your Question
I have two questions.
The first is: what does this error mean and how do I fix it? In the source code below, I have tried setting my configuration everywhere an LLM can be set, but the error is still raised and no other message is printed. (The code is adapted from the example on the official website.)
The second is: to get a Semantic Similarity score, should I use answer_similarity or SemanticSimilarity? Both seem to work, but I do not know which one is correct or what the difference between them is.
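For context, this is how I currently understand the two options (a minimal sketch based on my reading of the docs; whether answer_similarity is just a pre-built instance of the SemanticSimilarity metric is my assumption, and evaluator_embeddings refers to the wrapper defined in the code below):

```python
# Sketch of my understanding only, not verified against the ragas source.
from ragas.metrics import SemanticSimilarity, answer_similarity

# Option A: the ready-made metric object, configured in place
answer_similarity.embeddings = evaluator_embeddings

# Option B: instantiating the class explicitly with the same embeddings
semantic_similarity = SemanticSimilarity(embeddings=evaluator_embeddings)

# I would expect either to be a valid entry in the metrics list passed to
# evaluate(), but I do not know whether they are scored identically.
metrics = [semantic_similarity]
```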
Code Examples
```python
import os

import pandas as pd
from datasets import Dataset
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings
from ragas import evaluate
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import FactualCorrectness, SemanticSimilarity, answer_similarity

os.environ['AZURE_OPENAI_API_KEY'] = 'xxxxxxx'
os.environ['HTTP_PROXY'] = 'http://xxxxx'
os.environ['HTTPS_PROXY'] = 'http://xxxxx'
azure_config = {
"base_url": "https://yaotalai.openai.azure.com/", # your endpoint
"model_deployment": "gpt-35-turbo-16k", # your model deployment name
"model_name": "gpt35", # your model name
"embedding_deployment": "text-embedding-ada-002", # your embedding deployment name
"embedding_name": "embedding", # your embedding name
}
evaluator_llm = LangchainLLMWrapper(AzureChatOpenAI(
openai_api_version="2023-05-15",
azure_endpoint=azure_config["base_url"],
azure_deployment=azure_config["model_deployment"],
model=azure_config["model_name"],
validate_base_url=False,
))
# init the embeddings for answer_relevancy, answer_correctness and answer_similarity
evaluator_embeddings = LangchainEmbeddingsWrapper(AzureOpenAIEmbeddings(
openai_api_version="2023-05-15",
azure_endpoint=azure_config["base_url"],
azure_deployment=azure_config["embedding_deployment"],
model=azure_config["embedding_name"],
))
file_path = 'ts.xlsx'
data = pd.read_excel(file_path)
selected_columns = data[['reference', 'response']]
dataset = Dataset.from_pandas(selected_columns)
# Define the metrics you want to evaluate
answer_similarity.llm = evaluator_llm
answer_similarity.embeddings = evaluator_embeddings
metrics = [
# answer_similarity,
FactualCorrectness(llm=evaluator_llm),
SemanticSimilarity(llm=evaluator_llm, embeddings=evaluator_embeddings),
]
# Evaluate the dataset using the selected metrics
results = evaluate(
dataset = dataset,
llm = evaluator_llm,
embeddings = evaluator_embeddings,
metrics = metrics)
print(results)
```
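In case it helps narrow down whether the proxy or the endpoint is the problem, this is the kind of direct call I would try next, bypassing ragas entirely (a sketch only; the key is the environment variable set above and the deployment name is the one from azure_config):

```python
# Sketch: a direct Azure OpenAI call with the same endpoint and deployment,
# bypassing ragas, to check basic connectivity through the proxy.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",
    azure_endpoint="https://yaotalai.openai.azure.com/",
)

response = client.chat.completions.create(
    model="gpt-35-turbo-16k",  # the deployment name, not the model name
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```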