docs: update doc #69

Merged: 1 commit, Nov 16, 2024

24 changes: 4 additions & 20 deletions README.md

### 🔧 Configuration
`codai` requires a `config.yml` file in the root of your working directory, or you can use environment variables to set the configs below globally.
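
For instance, the API key is supplied through an environment variable rather than in `config.yml`. A minimal sketch of the PowerShell form that appears earlier in the full README (the variable name `API_KEY` comes from the README; `export API_KEY="your_api_key"` would be the POSIX-shell equivalent):

```powershell
# Set the API key for the current PowerShell session (the value is a placeholder).
$env:API_KEY="your_api_key"
```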

By default, codai works with the `openai` provider, and the `config.yml` should look like the following example for your AI provider:

**config.yml - openai sample**
```yml
ai_provider_config:
  provider_name: "openai" # openai | ollama
  chat_completion_url: "https://api.openai.com/v1/chat/completions"
  chat_completion_model: "gpt-4o"
  embedding_url: "https://api.openai.com/v1/embeddings" # (Optional, if you want to use RAG.)
  # … embedding_model, temperature, and threshold (collapsed in this diff view)
theme: "dracula"
rag: true # (Optional, if you want to use RAG.)
```

Similarly, for the `ollama` provider, the `config.yml` contains the following values:

**config.yml - ollama sample**

```yml
ai_provider_config:
  provider_name: "ollama"
  chat_completion_url: "http://localhost:11434/v1/chat/completions"
  chat_completion_model: "llama3.1"
  embedding_url: "http://localhost:11434/v1/embeddings" # (Optional, if you want to use RAG.)
  embedding_model: "all-minilm:l6-v2" # (Optional, if you want to use RAG.)
  temperature: 0.2
  threshold: 0.3 # (Optional, if you want to use RAG.)
theme: "dracula"
rag: true # (Optional, if you want to use RAG.)
```

> Note: For the `ollama` provider, use the base URL `http://localhost:11434` for both the chat and embeddings URLs, and set your desired ollama models for the chat and embeddings models.
> Note: We use the standard integration of the [OpenAI APIs](https://platform.openai.com/docs/api-reference/introduction) and [Ollama APIs](https://github.com/ollama/ollama/blob/main/docs/api.md); you can find more details in the documentation of each API.

If you wish to customize your configuration, you can create your own `config.yml` file and place it in the root directory of each project you want to analyze with codai. If no configuration file is provided, codai will use the default settings.
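
A hypothetical minimal per-project override, assuming (this README does not confirm it) that keys omitted from a project-level `config.yml` fall back to codai's defaults; the values shown are taken from the samples above:

```yml
# Hypothetical per-project config.yml placed at the project root.
# Assumption: omitted keys fall back to codai's default settings.
ai_provider_config:
  provider_name: "ollama"
  chat_completion_url: "http://localhost:11434/v1/chat/completions"
  chat_completion_model: "llama3.1"
theme: "dracula"
```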