# Chat with URL API

Welcome to the Chat with URL API! This API uses various tools and libraries to fetch content from a URL, process it, and generate responses using OpenAI's language models. It supports both GET and POST requests.
## Table of Contents

- Getting Started
- Environment Variables
- Endpoints
- Request Parameters
- Example Usage
- Error Handling
- Dependencies
- License
## Getting Started

To get started with this API, follow these steps:

1. Clone the repository:

   ```bash
   git clone https://github.com/RaheesAhmed/chat-with-url.git
   cd chat-with-url
   ```

2. Install the dependencies:

   ```bash
   npm install
   ```

3. Set up the environment variables:

   Create a `.env` file in the root of your project and add your OpenAI API key:

   ```
   OPENAI_API_KEY=your_openai_api_key
   ```

4. Run the development server:

   ```bash
   npm run dev
   ```

5. Open your browser:

   Go to http://localhost:3000 to see the API in action.
## Environment Variables

- `OPENAI_API_KEY`: Your OpenAI API key. This is required for accessing OpenAI's models.
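Because the API cannot work without this key, it can help to validate it at startup. A minimal sketch of such a fail-fast check (in this project the key is loaded from `.env` via `dotenv`; `requireEnv` is a hypothetical helper, not part of the codebase):

```javascript
// Hypothetical startup check: throw immediately if a required
// environment variable is missing, rather than failing later
// on the first OpenAI request.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example: const apiKey = requireEnv("OPENAI_API_KEY");
```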
## Endpoints

### GET

Returns a welcome message:

```json
{
  "response": "Welcome to Chat with URL...."
}
```
### POST

Processes the content from a given URL and generates a response using OpenAI's language models.

## Request Parameters

- `query` (string): The question or query you want to ask based on the URL content.
- `url` (string): The URL of the content you want to process.
- `model` (string, optional): The OpenAI model to use (default: `gpt-3.5-turbo`).
- `temperature` (number, optional): The sampling temperature for the OpenAI model (default: `0.7`).
- `maxTokens` (number, optional): The maximum number of tokens for the OpenAI model (default: `1000`).
- `chunkSize` (number, optional): The chunk size for splitting the content (default: `1000`).
- `chunkOverlap` (number, optional): The chunk overlap for splitting the content (default: `200`).
The response will contain the generated answer based on the input query and URL content:

```json
{
  "result": "Your generated answer based on the query and URL content."
}
```
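The `chunkSize` and `chunkOverlap` parameters control how the fetched page is split before it is passed to the model. A rough sketch of the sliding-window behavior (the API itself uses LangChain's text splitters, which additionally prefer natural break points; `splitIntoChunks` here is illustrative only):

```javascript
// Illustrative sliding-window splitter: each chunk is at most
// chunkSize characters, and consecutive chunks share chunkOverlap
// characters of context.
function splitIntoChunks(text, chunkSize = 1000, chunkOverlap = 200) {
  const chunks = [];
  const step = chunkSize - chunkOverlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

// A 2500-character document with the defaults yields three chunks
// of lengths 1000, 1000, and 900.
const chunks = splitIntoChunks("a".repeat(2500), 1000, 200);
```

A larger `chunkOverlap` preserves more context across chunk boundaries at the cost of sending more duplicated text to the model.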
## Example Usage

```bash
curl -X POST http://localhost:3000/api \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What is the main topic of the article?",
    "url": "https://example.com/article",
    "model": "gpt-3.5-turbo",
    "temperature": 0.7,
    "maxTokens": 1000,
    "chunkSize": 1000,
    "chunkOverlap": 200
  }'
```

Response:

```json
{
  "result": "The main topic of the article is about..."
}
```
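From Node.js, the same request can be built programmatically. A hypothetical helper that fills in the documented defaults before POSTing to the endpoint shown above (`buildChatRequest` is not part of the API itself):

```javascript
// Hypothetical client-side helper: validate required fields and
// apply the documented defaults for the optional parameters.
function buildChatRequest({ query, url, ...options }) {
  if (!query || !url) {
    throw new Error("Both query and url are required");
  }
  return {
    query,
    url,
    model: options.model ?? "gpt-3.5-turbo",
    temperature: options.temperature ?? 0.7,
    maxTokens: options.maxTokens ?? 1000,
    chunkSize: options.chunkSize ?? 1000,
    chunkOverlap: options.chunkOverlap ?? 200,
  };
}

// Usage with fetch:
// const res = await fetch("http://localhost:3000/api", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildChatRequest({
//     query: "What is the main topic of the article?",
//     url: "https://example.com/article",
//   })),
// });
```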
## Error Handling

Errors are logged to the console, and the API returns a JSON response with the error message and a status code of 500.
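A minimal sketch of this pattern, assuming a handler function that produces the response body (`withErrorHandling` is illustrative, not the project's actual code):

```javascript
// Illustrative wrapper: run the handler, log any error to the
// console, and convert it into a JSON error payload with status 500.
async function withErrorHandling(handler) {
  try {
    const body = await handler();
    return { status: 200, body };
  } catch (err) {
    console.error(err);
    return { status: 500, body: { error: err.message } };
  }
}
```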
## Dependencies

- `next`: The React framework for building server-side rendered applications.
- `@langchain/core`: Core abstractions and utilities for building LangChain applications.
- `@langchain/community`: Community-maintained integrations for LangChain.
- `cheerio`: A fast, flexible, and lean implementation of core jQuery designed for the server.
- `dotenv`: Loads environment variables from a `.env` file.
- `openai`: The official OpenAI API client.
- `langchain`: LangChain utilities and components.
## License

This project is licensed under the MIT License. See `License.md` for details.