feat: Core implementation of ChatGPT Ruby SDK
- Implemented ChatGPT client with configuration management
- Added support for completions and chat APIs
- Added streaming support
- Implemented robust error handling
- Added comprehensive test suite with 100% coverage
nagstler committed Oct 30, 2024
1 parent cd4a3e8 commit 0424370
Showing 20 changed files with 695 additions and 239 deletions.
.DS_Store (binary file added; not shown)

Gemfile (8 changes: 5 additions & 3 deletions)

@@ -13,9 +13,11 @@ gem "rubocop", "~> 1.21"

 gem 'rest-client'

 # Gemfile
 group :test do
-  gem 'simplecov', require: false
-  gem 'simplecov_json_formatter', require: false
-end
+  gem 'webmock'
+  gem 'simplecov'
+  gem 'simplecov_json_formatter'
+end

Gemfile.lock (18 changes: 16 additions & 2 deletions)

@@ -1,16 +1,23 @@
 PATH
   remote: .
   specs:
-    chatgpt-ruby (1.0.0)
-      rest-client
+    chatgpt-ruby (2.0.0)
+      rest-client (~> 2.1)

 GEM
   remote: https://rubygems.org/
   specs:
+    addressable (2.8.7)
+      public_suffix (>= 2.0.2, < 7.0)
     ast (2.4.2)
+    bigdecimal (3.1.8)
+    crack (1.0.0)
+      bigdecimal
+      rexml
     docile (1.4.0)
     domain_name (0.5.20190701)
       unf (>= 0.0.5, < 1.0.0)
+    hashdiff (1.1.1)
     http-accept (1.7.0)
     http-cookie (1.0.5)
       domain_name (~> 0.5)
@@ -23,6 +30,7 @@ GEM
     parallel (1.22.1)
     parser (3.2.1.1)
       ast (~> 2.4.1)
+    public_suffix (6.0.1)
     rainbow (3.1.1)
     rake (13.0.6)
     regexp_parser (2.7.0)
@@ -55,9 +63,14 @@ GEM
       unf_ext
     unf_ext (0.0.8.2)
     unicode-display_width (2.4.2)
+    webmock (3.24.0)
+      addressable (>= 2.8.0)
+      crack (>= 0.3.2)
+      hashdiff (>= 0.4.0, < 2.0.0)

 PLATFORMS
   arm64-darwin-21
+  arm64-darwin-22

 DEPENDENCIES
   chatgpt-ruby!
@@ -67,6 +80,7 @@ DEPENDENCIES
   rubocop (~> 1.21)
   simplecov
   simplecov_json_formatter
+  webmock

 BUNDLED WITH
    2.3.26
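
The test group now pulls in WebMock alongside SimpleCov, and the commit message cites a test suite with full coverage. That suite is not shown in this excerpt; the sketch below only illustrates how a stubbed request could be exercised with Minitest and WebMock, assuming the client API documented in the README that follows. The endpoint URL and response body are placeholders.

```ruby
# Illustrative test sketch (not from this commit). Assumes Minitest, the
# client API shown in the README below, and a placeholder endpoint/body.
require 'minitest/autorun'
require 'webmock/minitest'
require 'json'
require 'chatgpt'

class ChatGPTClientTest < Minitest::Test
  def test_chat_returns_content_from_stubbed_api
    stub_request(:post, 'https://api.openai.com/v1/chat/completions')
      .to_return(
        status: 200,
        headers: { 'Content-Type' => 'application/json' },
        body: { choices: [{ message: { role: 'assistant', content: 'Hi!' } }] }.to_json
      )

    client = ChatGPT::Client.new(api_key: 'test-key')
    response = client.chat(messages: [{ role: 'user', content: 'Hello' }])

    assert_equal 'Hi!', response.content
  end
end
```
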
README.md (251 changes: 179 additions & 72 deletions)

# ChatGPT Ruby

[![Gem Version](https://badge.fury.io/rb/chatgpt-ruby.svg)](https://badge.fury.io/rb/chatgpt-ruby)
[![License](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Maintainability](https://api.codeclimate.com/v1/badges/08c7e7b58e9fbe7156eb/maintainability)](https://codeclimate.com/github/nagstler/chatgpt-ruby/maintainability)
[![Test Coverage](https://api.codeclimate.com/v1/badges/08c7e7b58e9fbe7156eb/test_coverage)](https://codeclimate.com/github/nagstler/chatgpt-ruby/test_coverage)
[![CI](https://github.com/nagstler/chatgpt-ruby/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/nagstler/chatgpt-ruby/actions/workflows/ci.yml)

A comprehensive Ruby SDK for OpenAI's GPT APIs, providing a robust, feature-rich interface for AI-powered applications.

## Features

- 🚀 Full support for GPT-3.5-Turbo and GPT-4 models
- 📡 Streaming responses support
- 🔧 Function calling and JSON mode
- 🎨 DALL-E image generation
- 🔄 Fine-tuning capabilities
- 📊 Token counting and validation
- ⚡ Async operations support
- 🛡️ Built-in rate limiting and retries
- 🎯 Type-safe responses
- 📝 Comprehensive logging

## Table of Contents

- [Features](#features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#configuration)
- [Core Features](#core-features)
- [Chat Completions](#chat-completions)
- [Function Calling](#function-calling)
- [Image Generation (DALL-E)](#image-generation-dall-e)
- [Fine-tuning](#fine-tuning)
- [Token Management](#token-management)
- [Error Handling](#error-handling)
- [Advanced Usage](#advanced-usage)
- [Async Operations](#async-operations)
- [Batch Operations](#batch-operations)
- [Response Objects](#response-objects)
- [Development](#development)
- [Contributing](#contributing)
- [License](#license)

## Installation

Add to your Gemfile:

```ruby
gem 'chatgpt-ruby'
```

Or install directly:

```bash
$ gem install chatgpt-ruby
```

## Quick Start

```ruby
require 'chatgpt'

# Initialize with API key
client = ChatGPT::Client.new(api_key: 'your-api-key')

# Simple chat completion
response = client.chat(messages: [
  { role: "user", content: "What is Ruby?" }
])

puts response.content
```

## Configuration

```ruby
ChatGPT.configure do |config|
  config.api_key = 'your-api-key'
  config.default_model = 'gpt-4'
  config.timeout = 30
  config.max_retries = 3
  config.api_version = '2024-01'
end
```
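
Hard-coding the key is rarely what you want in an application; a common pattern is to read it from the environment. A minimal sketch assuming the `ChatGPT.configure` block above; the `OPENAI_API_KEY` variable name is an assumption, not something the gem requires:

```ruby
# Pull the API key from the environment instead of embedding it in code.
# The OPENAI_API_KEY name is an assumed convention, not required by the gem.
ChatGPT.configure do |config|
  config.api_key = ENV.fetch('OPENAI_API_KEY')
  config.default_model = 'gpt-4'
end
```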

## Core Features

### Chat Completions

```ruby
# Basic chat
client.chat(messages: [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello!" }
])

# With streaming
client.chat_stream(messages: [...]) do |chunk|
  print chunk.content
end
```
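
Each streamed chunk arrives separately in the block; if the complete reply is needed afterwards, the caller has to accumulate it. A small sketch using only `chat_stream` and `chunk.content` from the block above:

```ruby
# Collect streamed chunks into one string while printing them as they arrive.
full_reply = +""
client.chat_stream(messages: [{ role: "user", content: "Tell me a short story" }]) do |chunk|
  print chunk.content
  full_reply << chunk.content.to_s
end
puts "\n#{full_reply.length} characters received"
```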

### Function Calling

```ruby
functions = [
  {
    name: "get_weather",
    description: "Get current weather",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] }
      }
    }
  }
]

response = client.chat(
  messages: [{ role: "user", content: "What's the weather in London?" }],
  functions: functions,
  function_call: "auto"
)
```
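
The snippet above only issues the request; what comes back when the model decides to call a function is not shown here. A hypothetical sketch of dispatching such a call, assuming the response exposes the function name and JSON-encoded arguments (the `function_call`, `name`, and `arguments` accessors are assumptions, not documented API):

```ruby
require "json"

# Hypothetical handling of a function call returned by the model.
# The response accessors below (function_call, name, arguments) are
# assumptions for illustration, not documented API of this gem.
if (call = response.function_call)
  args = JSON.parse(call.arguments) # arguments typically arrive as a JSON string
  case call.name
  when "get_weather"
    # get_weather here stands for your own application code.
    weather = get_weather(location: args["location"], unit: args["unit"])

    # Send the result back so the model can produce a final answer.
    followup = client.chat(messages: [
      { role: "user", content: "What's the weather in London?" },
      { role: "function", name: "get_weather", content: weather.to_json }
    ])
    puts followup.content
  end
end
```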

### Image Generation (DALL-E)

```ruby
# Generate image
image = client.images.generate(
  prompt: "A sunset over mountains",
  size: "1024x1024",
  quality: "hd"
)

# Create variations
variation = client.images.create_variation(
  image: File.read("input.png"),
  n: 1
)
```

### Fine-tuning

```ruby
# Create fine-tuning job
job = client.fine_tunes.create(
  training_file: "file-abc123",
  model: "gpt-3.5-turbo"
)

# List fine-tuning jobs
jobs = client.fine_tunes.list

# Get job status
status = client.fine_tunes.retrieve(job.id)
```

### Token Management

```ruby
# Count tokens
count = client.tokens.count("Your text here", model: "gpt-4")

# Validate token limits
client.tokens.validate_messages(messages, max_tokens: 4000)
```
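
One practical use of token counting is trimming old turns so a long conversation stays inside the model's context window. A sketch built only on the `client.tokens.count` call shown above; the 4000-token budget and the trimming strategy are arbitrary examples, not gem behaviour:

```ruby
# Keep a running conversation under a token budget before sending it.
# Only client.tokens.count from above is used; the 4000-token budget and
# the "drop the oldest non-system message" strategy are arbitrary examples.
def trim_history(client, messages, max_tokens: 4000)
  count = ->(msgs) { msgs.sum { |m| client.tokens.count(m[:content], model: "gpt-4") } }

  trimmed = messages.dup
  # Index 0 is assumed to be the system prompt; drop the oldest turn after it.
  trimmed.delete_at(1) while count.call(trimmed) > max_tokens && trimmed.size > 1
  trimmed
end
```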

### Error Handling

```ruby
begin
  response = client.chat(messages: [...])
rescue ChatGPT::RateLimitError => e
  puts "Rate limit hit: #{e.message}"
rescue ChatGPT::APIError => e
  puts "API error: #{e.message}"
rescue ChatGPT::TokenLimitError => e
  puts "Token limit exceeded: #{e.message}"
end
```
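
The configuration section sets `max_retries`, and the feature list mentions built-in retries; how the gem applies them is not shown here. As an application-level fallback, rate-limit errors can also be retried by hand. A sketch using only the `ChatGPT::RateLimitError` class from the block above; the backoff schedule is an arbitrary choice:

```ruby
# Manual retry with exponential backoff on rate limits.
# Illustrative only; the gem's built-in retry behaviour (max_retries above)
# may already cover this, and the schedule here is an arbitrary choice.
def chat_with_backoff(client, messages, attempts: 3)
  delay = 1
  begin
    client.chat(messages: messages)
  rescue ChatGPT::RateLimitError
    attempts -= 1
    raise if attempts <= 0
    sleep delay
    delay *= 2
    retry
  end
end
```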

## Advanced Usage

### Async Operations

```ruby
client.async do
  response1 = client.chat(messages: [...])
  response2 = client.chat(messages: [...])
  [response1, response2]
end
```

### Batch Operations

```ruby
responses = client.batch do |batch|
  batch.add_chat(messages: [...])
  batch.add_chat(messages: [...])
end
```

### Response Objects

```ruby
response = client.chat(messages: [...])

response.content       # Main response content
response.usage         # Token usage information
response.finish_reason # Why the response ended
response.model         # Model used
```

## Development

After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake test` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
```bash
# Run tests
bundle exec rake test

# Run linter
bundle exec rubocop

# Generate documentation
bundle exec yard doc
```

To install this gem onto your local machine, run `bundle exec rake install`.
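
The test group in the Gemfile carries SimpleCov and its JSON formatter, and the commit message claims 100% coverage. A minimal coverage setup sketch; the `test/test_helper.rb` file name and the filter are assumptions:

```ruby
# test/test_helper.rb (assumed name): start coverage before loading the gem.
require 'simplecov'
require 'simplecov_json_formatter'

SimpleCov.formatter = SimpleCov::Formatter::JSONFormatter
SimpleCov.start do
  add_filter '/test/' # exclude the tests themselves from the report
end

require 'chatgpt'
```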

## Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/nagstler/chatgpt-ruby. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [code of conduct](https://github.com/nagstler/chatgpt-ruby/blob/main/CODE_OF_CONDUCT.md).
1. Fork it
2. Create your feature branch (`git checkout -b feature/my-new-feature`)
3. Add tests for your feature
4. Make your changes
5. Commit your changes (`git commit -am 'Add some feature'`)
6. Push to the branch (`git push origin feature/my-new-feature`)
7. Create a new Pull Request

## License

Released under the MIT License. See [LICENSE](LICENSE.txt) for details.

## Code of Conduct

Everyone interacting in the Chatgpt::Ruby project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/nagstler/chatgpt-ruby/blob/main/CODE_OF_CONDUCT.md).