Exception raised in Job APIConnectionError(Connection error.) #1692

Open
ldzh-97 opened this issue Nov 20, 2024
Labels: bug, question

Comments

ldzh-97 commented Nov 20, 2024

Hello, I am using local data (reference and response columns) with Azure OpenAI, and then using ragas to compute accuracy metrics, but the run reports errors:
Evaluating: 0%| | 1/301 [00:04<21:46, 4.36s/it]Exception raised in Job[7]: APIConnectionError(Connection error.)
I searched the ragas docs and the issues here and couldn't find an answer to my question.
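Before digging into ragas itself, it may be worth checking whether the Azure endpoint is reachable through the proxy at all. A minimal sketch, independent of ragas (endpoint and deployment names copied from the config below; everything else is an assumption):

```python
# Connectivity smoke test: if this single call also raises APIConnectionError,
# the problem is the proxy/endpoint setup, not the evaluation code.
# Assumes AZURE_OPENAI_API_KEY, HTTP_PROXY and HTTPS_PROXY are already set
# as in the snippet below.
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    openai_api_version="2023-05-15",
    azure_endpoint="https://yaotalai.openai.azure.com/",
    azure_deployment="gpt-35-turbo-16k",
)
print(llm.invoke("ping").content)  # expect a short chat completion back
```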

Your Question
I have two questions.
First, what does this error mean and how do I fix it? In the source code below I have already set every place where an LLM can be configured to my own configuration, but the same error is still reported, with no other detail. (The code is adapted from the example on the official website.)
Second, to get a Semantic Similarity value, should I use answer_similarity or SemanticSimilarity? Both seem to work, but I don't know which one is right or what the difference between them is.
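On the second question: as far as I can tell from the ragas source (worth verifying against the installed version), `answer_similarity` is just a pre-built module-level instance of the same embedding-cosine metric that the `SemanticSimilarity` class implements, so the two should produce the same score. A quick sketch to confirm, using the `evaluator_embeddings` defined in the code below:

```python
# Hedged sketch: assumes ragas exports both names from ragas.metrics and that
# answer_similarity is an instance whose class derives from SemanticSimilarity.
from ragas.metrics import SemanticSimilarity, answer_similarity

print(type(answer_similarity).__name__)                   # e.g. AnswerSimilarity
print(isinstance(answer_similarity, SemanticSimilarity))  # expect True

# Constructing the class directly lets you pass your own embeddings up front
# instead of mutating the shared module-level instance.
scorer = SemanticSimilarity(embeddings=evaluator_embeddings)
```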

Code Examples
```python
import os

import pandas as pd
from datasets import Dataset
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings

from ragas import evaluate
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import FactualCorrectness, SemanticSimilarity, answer_similarity

os.environ['AZURE_OPENAI_API_KEY'] = 'xxxxxxx'
os.environ['HTTP_PROXY'] = 'http://xxxxx'
os.environ['HTTPS_PROXY'] = 'http://xxxxx'

azure_config = {
    "base_url": "https://yaotalai.openai.azure.com/",  # your endpoint
    "model_deployment": "gpt-35-turbo-16k",  # your model deployment name
    "model_name": "gpt35",  # your model name
    "embedding_deployment": "text-embedding-ada-002",  # your embedding deployment name
    "embedding_name": "embedding",  # your embedding name
}

evaluator_llm = LangchainLLMWrapper(AzureChatOpenAI(
    openai_api_version="2023-05-15",
    azure_endpoint=azure_config["base_url"],
    azure_deployment=azure_config["model_deployment"],
    model=azure_config["model_name"],
    validate_base_url=False,
))

# init the embeddings for answer_relevancy, answer_correctness and answer_similarity
evaluator_embeddings = LangchainEmbeddingsWrapper(AzureOpenAIEmbeddings(
    openai_api_version="2023-05-15",
    azure_endpoint=azure_config["base_url"],
    azure_deployment=azure_config["embedding_deployment"],
    model=azure_config["embedding_name"],
))

file_path = 'ts.xlsx'
data = pd.read_excel(file_path)
selected_columns = data[['reference', 'response']]
dataset = Dataset.from_pandas(selected_columns)

# Define the metrics you want to evaluate
answer_similarity.llm = evaluator_llm
answer_similarity.embeddings = evaluator_embeddings
metrics = [
    answer_similarity,
    FactualCorrectness(llm=evaluator_llm),
    SemanticSimilarity(llm=evaluator_llm, embeddings=evaluator_embeddings),
]

# Evaluate the dataset using the selected metrics
results = evaluate(
    dataset=dataset,
    llm=evaluator_llm,
    embeddings=evaluator_embeddings,
    metrics=metrics,
)
print(results)
```
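When the job-level message isn't enough, one option (assuming the installed ragas supports the `raise_exceptions` flag on `evaluate()` and a `RunConfig`, as recent releases do) is to rerun with the original exception re-raised and more generous timeouts, which usually exposes the underlying proxy, TLS, or DNS failure:

```python
# Hedged diagnostic rerun: raise_exceptions surfaces the original traceback
# instead of logging "Exception raised in Job[...]", and RunConfig relaxes
# timeouts/retries in case the proxy is just slow. Field names assumed from
# recent ragas releases; adjust to the installed version.
from ragas import evaluate
from ragas.run_config import RunConfig

results = evaluate(
    dataset=dataset,
    metrics=metrics,
    llm=evaluator_llm,
    embeddings=evaluator_embeddings,
    raise_exceptions=True,
    run_config=RunConfig(timeout=180, max_retries=10, max_workers=4),
)
```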

ldzh-97 added the question label Nov 20, 2024
dosubot added the bug label Nov 20, 2024