Replies: 1 comment
Please ask here: https://github.com/langchain-ai/langchain
Hello,
I have just started using LangChain, and I was following this tutorial to build a SQL question-answering chain. My database sits on Postgres, but it is a representation of a Revit CAD model of a building's architecture, so it has quite a complicated structure with lots of tables and columns.
The problem is that the database schema metadata sent to the LLM to formulate the SQL query is way too big, producing a massive number of tokens. This has made calling the ChatGPT APIs super expensive and not viable so far.
Here is my code snippet so far:
It seems like I can pass a custom prompt to create_sql_query_chain, but I am not sure of the best way to strip down the schema description to something that is still good enough for the LLM to formulate the SQL.
My goal is to pass barebones metadata about the database schema to the LLM while still giving it enough to produce an executable SQL statement, so that both cost and speed improve. Any ideas on how to go about this?
P.S.: I understand it would be better to share the actual database schema, but there are so many tables and columns that it would just clutter this conversation.