ChatGPT Ruby


πŸ€–πŸ’Ž A comprehensive Ruby SDK for OpenAI's GPT APIs, providing a robust, feature-rich interface for AI-powered applications.

πŸ“š Check out the Integration Guide to get started!

Features

  • πŸš€ Full support for GPT-3.5-Turbo and GPT-4 models
  • πŸ“‘ Streaming responses support
  • πŸ”§ Function calling and JSON mode
  • 🎨 DALL-E image generation
  • πŸ”„ Fine-tuning capabilities
  • πŸ“Š Token counting and validation
  • ⚑ Async operations support
  • πŸ›‘οΈ Built-in rate limiting and retries
  • 🎯 Type-safe responses
  • πŸ“ Comprehensive logging

Table of Contents

  • Installation
  • Quick Start
  • Configuration
  • Core Features
  • Error Handling
  • Advanced Usage
  • Development
  • Contributing
  • License

Installation

Add to your Gemfile:

gem 'chatgpt-ruby'

Or install directly:

$ gem install chatgpt-ruby

Quick Start

require 'chatgpt'

# Initialize with API key
client = ChatGPT::Client.new(ENV['OPENAI_API_KEY'])

# Chat API (Recommended for GPT-3.5-turbo, GPT-4)
response = client.chat([
  { role: "user", content: "What is Ruby?" }
])

puts response.dig("choices", 0, "message", "content")

# Completions API (For GPT-3.5-turbo-instruct)
response = client.completions("What is Ruby?")
puts response.dig("choices", 0, "text")
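The `dig` calls above work because responses are plain Ruby Hashes mirroring the OpenAI JSON payload. The sketch below shows this on a sample hash shaped like a chat completion response (the values are illustrative, not from a real API call); `dig` returns `nil` instead of raising when a key or index is missing, which keeps response handling defensive.

```ruby
# A sample payload shaped like an OpenAI chat completion response
# (illustrative values, not a real API call).
response = {
  "id"      => "chatcmpl-123",
  "model"   => "gpt-3.5-turbo",
  "choices" => [
    {
      "index"         => 0,
      "message"       => { "role" => "assistant", "content" => "Ruby is a dynamic language." },
      "finish_reason" => "stop"
    }
  ],
  "usage" => { "prompt_tokens" => 10, "completion_tokens" => 8, "total_tokens" => 18 }
}

content = response.dig("choices", 0, "message", "content")
# => "Ruby is a dynamic language."

# dig returns nil rather than raising when a path is absent:
missing = response.dig("choices", 1, "message", "content")
# => nil
```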

Configuration

ChatGPT.configure do |config|
  config.api_key = ENV['OPENAI_API_KEY']
  config.api_version = 'v1'
  config.default_engine = 'gpt-3.5-turbo'
  config.request_timeout = 30
  config.max_retries = 3
  config.default_parameters = {
    max_tokens: 16,
    temperature: 0.5,
    top_p: 1.0,
    n: 1
  }
end

Core Features

Chat Completions

# Chat with system message
response = client.chat([
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello!" }
])

# With streaming
client.chat_stream([
  { role: "user", content: "Tell me a story" }
]) do |chunk|
  print chunk.dig("choices", 0, "delta", "content")
end
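Each streamed chunk carries only a fragment of the message in its `delta`; the full reply is the concatenation of those fragments. A minimal sketch of that assembly, using sample chunks shaped like the OpenAI streaming format (illustrative values; in real code the chunks come from the `chat_stream` block):

```ruby
# Sample streaming chunks: the first carries only the role, the last
# only a finish_reason -- neither contributes content.
chunks = [
  { "choices" => [{ "delta" => { "role" => "assistant" } }] },
  { "choices" => [{ "delta" => { "content" => "Once upon " } }] },
  { "choices" => [{ "delta" => { "content" => "a time." } }] },
  { "choices" => [{ "delta" => {}, "finish_reason" => "stop" }] }
]

story = +""
chunks.each do |chunk|
  fragment = chunk.dig("choices", 0, "delta", "content")
  story << fragment if fragment  # skip role-only and final chunks
end
story # => "Once upon a time."
```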

Function Calling

functions = [
  {
    name: "get_weather",
    description: "Get current weather",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] }
      }
    }
  }
]

response = client.chat(
  messages: [{ role: "user", content: "What's the weather in London?" }],
  functions: functions,
  function_call: "auto"
)
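When the model decides to call a function, the assistant message carries a `function_call` whose `arguments` field is a JSON-encoded string. A sketch of dispatching that to local Ruby code, assuming a message in the OpenAI format (the `get_weather` handler below is a hypothetical stub):

```ruby
require "json"

# Hypothetical assistant reply requesting a tool call; note that
# "arguments" is a JSON string, not a Hash, and must be parsed.
message = {
  "role"          => "assistant",
  "function_call" => {
    "name"      => "get_weather",
    "arguments" => '{"location": "London", "unit": "celsius"}'
  }
}

# Local implementations keyed by function name (illustrative stub).
handlers = {
  "get_weather" => ->(location:, unit:) { "#{location}: 14 degrees #{unit}" }
}

call   = message["function_call"]
args   = JSON.parse(call["arguments"], symbolize_names: true)
result = handlers.fetch(call["name"]).call(**args)
# => "London: 14 degrees celsius"
```

In a full round trip, `result` would be sent back as a `role: "function"` message so the model can compose its final answer.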

Image Generation (DALL-E)

# Generate image
image = client.images.generate(
  prompt: "A sunset over mountains",
  size: "1024x1024",
  quality: "hd"
)

# Create variations
variation = client.images.create_variation(
  image: File.read("input.png"),
  n: 1
)

Fine-tuning

# Create fine-tuning job
job = client.fine_tunes.create(
  training_file: "file-abc123",
  model: "gpt-3.5-turbo"
)

# List fine-tuning jobs
jobs = client.fine_tunes.list

# Get job status
status = client.fine_tunes.retrieve(job.id)

Token Management

# Count tokens
count = client.tokens.count("Your text here", model: "gpt-4")

# Validate token limits
client.tokens.validate_messages(messages, max_tokens: 4000)
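When an exact tokenizer is unavailable, a common rough heuristic is that English text averages about 4 characters per token. The sketch below uses that approximation for coarse budget checks (the helper names are hypothetical, not part of the gem; accurate counts require a real tokenizer):

```ruby
# Rough heuristic: ~4 characters per token for English text.
# Good enough for coarse budget checks, not for billing.
def approx_token_count(text)
  (text.length / 4.0).ceil
end

def within_budget?(messages, max_tokens:)
  total = messages.sum { |m| approx_token_count(m[:content]) }
  total <= max_tokens
end

messages = [{ role: "user", content: "What is Ruby?" }]
approx_token_count("What is Ruby?")        # 13 chars -> 4
within_budget?(messages, max_tokens: 4000) # => true
```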

Error Handling

begin
  response = client.chat(messages: [...])
rescue ChatGPT::RateLimitError => e
  puts "Rate limit hit: #{e.message}"
rescue ChatGPT::APIError => e
  puts "API error: #{e.message}"
rescue ChatGPT::TokenLimitError => e
  puts "Token limit exceeded: #{e.message}"
end
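Rate-limit errors are usually transient, so a common pattern is to retry with exponential backoff up to the configured `max_retries`. A minimal sketch (the `with_retries` helper is hypothetical, not part of the gem; in real code you would pass `retry_on: ChatGPT::RateLimitError` and wrap the `client.chat` call):

```ruby
# Retry a block with exponential backoff: 1s, 2s, 4s, ... delays.
def with_retries(max_retries: 3, base_delay: 1.0, retry_on: StandardError)
  attempts = 0
  begin
    yield
  rescue retry_on
    attempts += 1
    raise if attempts > max_retries  # give up after max_retries
    sleep(base_delay * (2**(attempts - 1)))
    retry
  end
end

# Demonstration with a call that fails twice, then succeeds
# (base_delay: 0 keeps the demo instant):
calls  = 0
result = with_retries(max_retries: 3, base_delay: 0) do
  calls += 1
  raise "rate limited" if calls < 3
  "ok"
end
result # => "ok"
```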

Advanced Usage

Async Operations

client.async do
  response1 = client.chat(messages: [...])
  response2 = client.chat(messages: [...])
  [response1, response2]
end
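Under the hood, concurrency like this can be achieved with plain Ruby threads: each independent request runs in its own thread, and `Thread#value` collects the results in order. A self-contained sketch (the lambda stands in for a `client.chat` call, which spends most of its time waiting on I/O):

```ruby
# Stand-in for a network call such as client.chat (illustrative stub).
fake_call = ->(prompt) { "answer to: #{prompt}" }

threads = ["What is Ruby?", "What is Rails?"].map do |prompt|
  Thread.new { fake_call.call(prompt) }
end

# Thread#value joins the thread and returns the block's result.
responses = threads.map(&:value)
# => ["answer to: What is Ruby?", "answer to: What is Rails?"]
```

Threads work well here because the standard Ruby interpreter releases its lock during blocking I/O, so concurrent API requests genuinely overlap.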

Batch Operations

responses = client.batch do |batch|
  batch.add_chat(messages: [...])
  batch.add_chat(messages: [...])
end

Response Objects

response = client.chat(messages: [...])

response.content        # Main response content
response.usage          # Token usage information
response.finish_reason  # Why the response ended
response.model          # Model used

Development

# Run tests
bundle exec rake test

# Run linter
bundle exec rubocop

# Generate documentation
bundle exec yard doc

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b feature/my-new-feature)
  3. Add tests for your feature
  4. Make your changes
  5. Commit your changes (git commit -am 'Add some feature')
  6. Push to the branch (git push origin feature/my-new-feature)
  7. Create a new Pull Request

License

Released under the MIT License. See LICENSE for details.