FAQ
Nagendra Dhanakeerthi edited this page Oct 30, 2024 · 1 revision
A Ruby gem that provides a simple and intuitive interface to OpenAI's GPT models, supporting both chat and completion endpoints.
Ruby 3.0 and above are supported. The gem is tested on Ruby 3.0, 3.1, and 3.2.
Currently supports:
- GPT-3.5-turbo (default for chat)
- text-davinci-002 (default for completions)
- Other GPT-3.x models
```ruby
# In your Gemfile
gem 'chatgpt-ruby'
```

```shell
# Or install directly
gem install chatgpt-ruby
```
Common reasons:
- API key not set correctly
- Invalid API key
- API key not loaded from environment variables
Solution:
```ruby
# Check your API key
puts ENV['OPENAI_API_KEY'].nil? ? "API key missing" : "API key present"

# Set explicitly
client = ChatGPT::Client.new('your-api-key')
```
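If you prefer to fail fast at boot rather than get an authentication error mid-request, a small guard along these lines can help. The `fetch_api_key` helper is illustrative, not part of the gem:

```ruby
# Hypothetical helper: raise immediately if the key is missing or blank,
# instead of letting a nil key reach the API client.
def fetch_api_key
  key = ENV['OPENAI_API_KEY']
  raise KeyError, 'OPENAI_API_KEY is not set' if key.nil? || key.strip.empty?
  key
end
```

Calling this once at startup surfaces configuration problems before any request is made.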
```ruby
ChatGPT.configure do |config|
  config.api_key = ENV['OPENAI_API_KEY']
  config.request_timeout = 30
  config.max_retries = 3
end
```
```ruby
client = ChatGPT::Client.new

response = client.chat([
  { role: "user", content: "Hello!" }
])
```

```ruby
client.chat_stream(messages) do |chunk|
  print chunk.dig("choices", 0, "delta", "content")
end
```
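If you need the full text after streaming finishes, you can accumulate the chunks yourself. This sketch assumes chunks shaped like the OpenAI-style delta hashes used above; the helper name is illustrative:

```ruby
# Collect streamed delta chunks into a single string. Assumes each chunk
# follows the {"choices" => [{"delta" => {"content" => ...}}]} shape;
# chunks without content (e.g. the final role/stop chunk) are skipped.
def accumulate_chunks(chunks)
  chunks.each_with_object(+'') do |chunk, buffer|
    piece = chunk.dig("choices", 0, "delta", "content")
    buffer << piece if piece
  end
end
```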
- `chat`: Modern endpoint for conversational AI (recommended)
- `completions`: Legacy endpoint for text completion tasks
```ruby
begin
  response = client.chat(messages)
rescue ChatGPT::RateLimitError => e
  sleep(5) # Wait before retrying
  retry
end
```
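A fixed five-second sleep works, but exponential backoff with a retry cap is gentler on the API and avoids retrying forever. A sketch, where the helper name, defaults, and injectable `sleeper` are illustrative choices, not gem behavior; in practice you would pass `ChatGPT::RateLimitError` as the `error_class`:

```ruby
# Retry the block with exponential backoff (1s, 2s, 4s, ...) up to
# max_attempts. The error class is a parameter so this sketch stays
# self-contained; sleeper is injectable so tests can skip real sleeps.
def with_backoff(max_attempts: 3, base_delay: 1,
                 error_class: StandardError, sleeper: ->(s) { sleep(s) })
  attempts = 0
  begin
    yield
  rescue error_class
    attempts += 1
    raise if attempts >= max_attempts # give up after the last attempt
    sleeper.call(base_delay * (2**(attempts - 1)))
    retry
  end
end
```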
```ruby
# Increase timeout in configuration
ChatGPT.configure do |config|
  config.request_timeout = 60 # 60 seconds
end
```

```ruby
begin
  response = client.chat(messages)
rescue ChatGPT::APIError => e
  puts "Status: #{e.status_code}"
  puts "Error: #{e.message}"
  puts "Type: #{e.error_type}"
end
```
- Use an appropriate `max_tokens` value
- Keep messages concise
- Use streaming for long responses
- Implement proper error handling
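For rough token budgeting, a common heuristic is about four characters per token for English text. This is only an approximation, not the tokenizer the API actually uses:

```ruby
# Rough token estimate using the ~4 chars/token heuristic for English.
# For exact counts, use a real tokenizer library instead.
def estimate_tokens(text)
  (text.length / 4.0).ceil
end
```

This is good enough to decide whether a prompt is likely to fit a model's context window, but not for billing-precise counts.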
The gem is thread-safe. Use normal Ruby concurrency patterns:
```ruby
threads = messages.map do |msg|
  Thread.new { client.chat([msg]) }
end
responses = threads.map(&:value)
```
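Spawning one thread per message can trip rate limits when the list is long. Processing in fixed-size batches bounds the number of in-flight requests; the helper name and batch size below are arbitrary illustrations:

```ruby
# Run the block over items with at most `limit` threads in flight at once.
# Each slice is fully completed before the next one starts.
def map_in_batches(items, limit: 5, &work)
  items.each_slice(limit).flat_map do |batch|
    batch.map { |item| Thread.new { work.call(item) } }.map(&:value)
  end
end
```

Results come back in input order, since each batch's thread values are collected in sequence.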
```ruby
# In test_helper.rb
require 'webmock/minitest'

# In your tests
def test_chat
  stub_request(:post, "https://api.openai.com/v1/chat/completions")
    .to_return(status: 200, body: response_json)

  response = @client.chat(messages)
  assert_equal expected_response, response
end
```
```ruby
# Helper method for stubbing
def stub_chat_response(content)
  {
    "choices" => [{
      "message" => {
        "role" => "assistant",
        "content" => content
      }
    }]
  }.to_json
end
```
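When asserting against a stubbed body like the one above, a small accessor keeps tests readable. This helper is illustrative, not part of the gem; it just digs into the standard chat-completions response shape:

```ruby
require 'json'

# Pull the assistant's message text out of a parsed chat-completions
# response hash; returns nil if the shape doesn't match.
def assistant_content(response)
  response.dig("choices", 0, "message", "content")
end
```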
No, always use environment variables or secure credential management:
```ruby
# Good
api_key = ENV['OPENAI_API_KEY']

# Bad
api_key = 'sk-...' # Never do this
```
Use Rails credentials:
```yaml
# config/credentials.yml.enc
openai:
  api_key: your_key_here
```

```ruby
# Usage
ChatGPT.configure do |config|
  config.api_key = Rails.application.credentials.openai[:api_key]
end
```
Common reasons:
- Invalid API key
- Rate limits exceeded
- Invalid request parameters
- Network issues
```ruby
begin
  response = client.chat(messages)
rescue => e
  puts "Error Class: #{e.class}"
  puts "Message: #{e.message}"
  puts "Backtrace: #{e.backtrace.first(5)}"
end
```
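If you report a lot of issues, it can help to collect those same fields in one place. A hypothetical helper, not part of the gem:

```ruby
# Build a compact, copy-pasteable error report string from an exception,
# suitable for a GitHub issue or a log line.
def error_report(error, backtrace_lines: 5)
  [
    "Error Class: #{error.class}",
    "Message: #{error.message}",
    "Backtrace: #{(error.backtrace || []).first(backtrace_lines).join("\n  ")}"
  ].join("\n")
end
```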
Create a GitHub issue with:
- Ruby version (`ruby -v`)
- Gem version
- Minimal reproduction code
- Error message
- Expected behavior