A comprehensive Ruby SDK for OpenAI's GPT APIs, providing a robust, feature-rich interface for AI-powered applications.

Check out the Integration Guide to get started!
## Features

- Full support for GPT-3.5-Turbo and GPT-4 models
- Streaming responses support
- Function calling and JSON mode
- DALL-E image generation
- Fine-tuning capabilities
- Token counting and validation
- Async operations support
- Built-in rate limiting and retries
- Type-safe responses
- Comprehensive logging
## Table of Contents

- Features
- Installation
- Quick Start
- Configuration
- Core Features
- Advanced Usage
- Development
- Contributing
- License
## Installation

Add to your Gemfile:

```ruby
gem 'chatgpt-ruby'
```

Or install directly:

```sh
$ gem install chatgpt-ruby
```
## Quick Start

```ruby
require 'chatgpt'

# Initialize with API key
client = ChatGPT::Client.new(ENV['OPENAI_API_KEY'])

# Chat API (recommended for GPT-3.5-turbo and GPT-4)
response = client.chat([
  { role: "user", content: "What is Ruby?" }
])
puts response.dig("choices", 0, "message", "content")

# Completions API (for GPT-3.5-turbo-instruct)
response = client.completions("What is Ruby?")
puts response.dig("choices", 0, "text")
```
## Configuration

```ruby
ChatGPT.configure do |config|
  config.api_key         = ENV['OPENAI_API_KEY']
  config.api_version     = 'v1'
  config.default_engine  = 'gpt-3.5-turbo'
  config.request_timeout = 30
  config.max_retries     = 3
  config.default_parameters = {
    max_tokens: 16,
    temperature: 0.5,
    top_p: 1.0,
    n: 1
  }
end
```
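
Conceptually, per-request options take precedence over the configured `default_parameters`. A minimal plain-Ruby sketch of that merge behavior (the variable names here are illustrative, not taken from the gem's internals):

```ruby
# Configured defaults, as in the block above.
defaults = { max_tokens: 16, temperature: 0.5, top_p: 1.0, n: 1 }

# Options passed on an individual request (hypothetical).
request_opts = { temperature: 0.9, max_tokens: 256 }

# Request options win; unspecified keys fall back to the defaults.
params = defaults.merge(request_opts)
# params[:temperature] == 0.9, params[:top_p] == 1.0
```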
## Core Features

### Chat Completions

```ruby
# Chat with a system message
response = client.chat([
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello!" }
])

# With streaming
client.chat_stream([
  { role: "user", content: "Tell me a story" }
]) do |chunk|
  print chunk.dig("choices", 0, "delta", "content")
end
```
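
If you need the full message rather than printing as it arrives, you can accumulate the streamed deltas. A self-contained sketch using hypothetical chunk hashes shaped like the streaming payload above (the final chunk typically carries no content):

```ruby
# Hypothetical streamed chunks, mirroring the shape used in the callback above.
chunks = [
  { "choices" => [{ "delta" => { "content" => "Once " } }] },
  { "choices" => [{ "delta" => { "content" => "upon a time" } }] },
  { "choices" => [{ "delta" => {} }] } # terminal chunk: no content key
]

# dig returns nil for the empty delta; compact drops it before joining.
story = chunks.map { |c| c.dig("choices", 0, "delta", "content") }.compact.join
# story == "Once upon a time"
```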
### Function Calling

```ruby
functions = [
  {
    name: "get_weather",
    description: "Get current weather",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] }
      }
    }
  }
]

response = client.chat(
  messages: [{ role: "user", content: "What's the weather in London?" }],
  functions: functions,
  function_call: "auto"
)
```
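
When the model elects to call a function, the response carries the function name and JSON-encoded arguments, which your code parses and dispatches. A runnable sketch with a hand-built response hash (the hash shape mirrors the OpenAI chat payload; the `handlers` table is an illustrative pattern, not part of the gem):

```ruby
require 'json'

# Hypothetical function-call response, shaped like the OpenAI chat payload.
response = {
  "choices" => [{
    "message" => {
      "function_call" => {
        "name"      => "get_weather",
        "arguments" => '{"location":"London","unit":"celsius"}'
      }
    }
  }]
}

call = response.dig("choices", 0, "message", "function_call")
args = JSON.parse(call["arguments"])

# Dispatch to a local handler keyed by function name.
handlers = {
  "get_weather" => ->(a) { "Weather for #{a['location']} in #{a['unit']}" }
}
result = handlers.fetch(call["name"]).call(args)
# result == "Weather for London in celsius"
```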
### Image Generation (DALL-E)

```ruby
# Generate an image
image = client.images.generate(
  prompt: "A sunset over mountains",
  size: "1024x1024",
  quality: "hd"
)

# Create variations of an existing image
variation = client.images.create_variation(
  image: File.read("input.png"),
  n: 1
)
```
### Fine-tuning

```ruby
# Create a fine-tuning job
job = client.fine_tunes.create(
  training_file: "file-abc123",
  model: "gpt-3.5-turbo"
)

# List fine-tuning jobs
jobs = client.fine_tunes.list

# Get job status
status = client.fine_tunes.retrieve(job.id)
```
### Token Management

```ruby
# Count tokens
count = client.tokens.count("Your text here", model: "gpt-4")

# Validate that messages fit within a token limit
client.tokens.validate_messages(messages, max_tokens: 4000)
```
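
For quick back-of-the-envelope budgeting before calling the API, a common rule of thumb is roughly four characters per token for English text. The helper below is an approximation I'm sketching for illustration, not the gem's counter, which should be treated as authoritative:

```ruby
# Rough, model-agnostic estimate: ~4 characters per token for English text.
def approx_token_count(text)
  (text.length / 4.0).ceil
end

approx_token_count("Your text here") # => 4 (14 characters, rounded up)
```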
### Error Handling

```ruby
begin
  response = client.chat(messages: [...])
rescue ChatGPT::RateLimitError => e
  puts "Rate limit hit: #{e.message}"
rescue ChatGPT::APIError => e
  puts "API error: #{e.message}"
rescue ChatGPT::TokenLimitError => e
  puts "Token limit exceeded: #{e.message}"
end
```
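
The built-in retry behavior (see `max_retries` in Configuration) can be pictured as exponential backoff around rate-limit errors. A self-contained sketch, with a local stand-in error class so it runs on its own (`with_retries` and `base_delay` are illustrative names, not the gem's API):

```ruby
# Local stand-in for ChatGPT::RateLimitError so the snippet is self-contained.
class RateLimitError < StandardError; end

# Retry a block with exponential backoff on rate-limit errors.
def with_retries(max_retries: 3, base_delay: 0.01)
  attempts = 0
  begin
    yield
  rescue RateLimitError
    attempts += 1
    raise if attempts > max_retries
    sleep(base_delay * (2**attempts)) # back off: 2x, 4x, 8x the base delay
    retry
  end
end

calls = 0
result = with_retries do
  calls += 1
  raise RateLimitError, "slow down" if calls < 3 # fail twice, then succeed
  "ok"
end
# result == "ok" after two retried failures
```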
## Advanced Usage

### Async Operations

```ruby
client.async do
  response1 = client.chat(messages: [...])
  response2 = client.chat(messages: [...])
  [response1, response2]
end
```
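
The payoff of async execution is that independent requests overlap instead of running back to back. A plain-Ruby illustration with threads, where `fake_request` stands in for a client call (this shows the concurrency idea, not the gem's `async` internals):

```ruby
# Stand-in for a network call: sleeps briefly, then echoes its prompt.
fake_request = ->(prompt) { sleep(0.01); "echo: #{prompt}" }

# Fire both "requests" concurrently, then collect results in order.
threads = ["What is Ruby?", "What is Rails?"].map do |prompt|
  Thread.new { fake_request.call(prompt) }
end
responses = threads.map(&:value)
# responses == ["echo: What is Ruby?", "echo: What is Rails?"]
```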
### Batch Requests

```ruby
responses = client.batch do |batch|
  batch.add_chat(messages: [...])
  batch.add_chat(messages: [...])
end
```
### Response Objects

```ruby
response = client.chat(messages: [...])

response.content       # Main response content
response.usage         # Token usage information
response.finish_reason # Why the response ended
response.model         # Model used
```
## Development

```sh
# Run tests
bundle exec rake test

# Run the linter
bundle exec rubocop

# Generate documentation
bundle exec yard doc
```
## Contributing

1. Fork it
2. Create your feature branch (`git checkout -b feature/my-new-feature`)
3. Add tests for your feature
4. Make your changes
5. Commit your changes (`git commit -am 'Add some feature'`)
6. Push to the branch (`git push origin feature/my-new-feature`)
7. Create a new Pull Request
## License

Released under the MIT License. See LICENSE for details.