Eliminate OpenAI -> Gemini Model Mapping #38

Closed · wants to merge 4 commits
64 changes: 47 additions & 17 deletions README.md
@@ -1,6 +1,8 @@
# Gemini-OpenAI-Proxy

Gemini-OpenAI-Proxy is a proxy designed to convert the OpenAI API protocol to the Google Gemini Pro protocol. This enables seamless integration of OpenAI-powered functionalities into applications using the Gemini Pro protocol.
Gemini-OpenAI-Proxy is a proxy designed to convert the OpenAI API protocol to the Google Gemini protocol. This enables applications built for the OpenAI API to communicate seamlessly with Gemini, with support for the Chat Completions, Embeddings, and Models endpoints.

This is a fork of zhu327/gemini-openai-proxy that eliminates the mapping of OpenAI models to Gemini models and instead exposes the underlying Gemini models directly at the API endpoints. I've also added support for Google's embedding model. This was motivated by my own issues with Google's [OpenAI API-compatible endpoint](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/call-gemini-using-openai-library).
> **Owner:** Remove the steps about forking from the README.


---

@@ -30,11 +32,24 @@ go build -o gemini main.go

We recommend deploying Gemini-OpenAI-Proxy using Docker for a straightforward setup. Follow these steps to deploy with Docker:

```bash
docker run --restart=always -it -d -p 8080:8080 --name gemini zhu327/gemini-openai-proxy:latest
```
You can either do this on the command line:
> **Owner:** GitHub Actions will automatically build Docker images, so we can just use the existing image here.

```bash
docker run --restart=unless-stopped -it -d -p 8080:8080 --name gemini ghcr.io/ekatiyar/gemini-openai-proxy:latest
```

Adjust the port mapping (e.g., `-p 8080:8080`) as needed, and ensure that the Docker image version (`zhu327/gemini-openai-proxy:latest`) aligns with your requirements.
Or with the following docker-compose config:
```yaml
version: '3'
services:
  gemini:
    container_name: gemini
    ports:
      - "8080:8080"
    image: ghcr.io/ekatiyar/gemini-openai-proxy:latest
    restart: unless-stopped
```

Adjust the port mapping (e.g., `-p 5001:8080`) as needed, and ensure that the Docker image version aligns with your requirements. If you only want the added embedding model support while keeping the OpenAI model mapping, use `ghcr.io/ekatiyar/gemini-openai-proxy:embedding` instead.
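
Once the container is up, you can sanity-check the deployment. A minimal check, which assumes the proxy registers the standard OpenAI `/v1/models` route for the `ModelListHandler` shown later in this diff:

```bash
curl http://localhost:8080/v1/models \
  -H "Authorization: Bearer $YOUR_GOOGLE_AI_STUDIO_API_KEY"
```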

---

@@ -51,13 +66,13 @@ Gemini-OpenAI-Proxy offers a straightforward way to integrate OpenAI functionali
3. **Integrate the Proxy into Your Application:**
Modify your application's API requests to target the Gemini-OpenAI-Proxy, providing the acquired Google AI Studio API key as if it were your OpenAI API key.

Example API Request (Assuming the proxy is hosted at `http://localhost:8080`):
Example Chat Completion API Request (Assuming the proxy is hosted at `http://localhost:8080`):
```bash
curl http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $YOUR_GOOGLE_AI_STUDIO_API_KEY" \
-d '{
"model": "gpt-3.5-turbo",
"model": "gemini-1.0-pro-latest",
"messages": [{"role": "user", "content": "Say this is a test!"}],
"temperature": 0.7
}'
@@ -70,7 +85,7 @@ Gemini-OpenAI-Proxy offers a straightforward way to integrate OpenAI functionali
-H "Content-Type: application/json" \
-H "Authorization: Bearer $YOUR_GOOGLE_AI_STUDIO_API_KEY" \
-d '{
"model": "gpt-4-vision-preview",
"model": "gemini-1.5-vision-latest",
"messages": [{"role": "user", "content": [
{"type": "text", "text": "What’s in this image?"},
{
@@ -83,6 +98,7 @@ Gemini-OpenAI-Proxy offers a straightforward way to integrate OpenAI functionali
"temperature": 0.7
}'
```
If you wish to map `gemini-1.5-vision-latest` to `gemini-1.5-pro-latest`, set the environment variable `GEMINI_VISION_PREVIEW=gemini-1.5-pro-latest`; this works because `gemini-1.5-pro-latest` also supports multi-modal data. Otherwise, `gemini-1.5-vision-latest` defaults to the `gemini-1.5-flash-latest` model.
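
For example, with the Docker deployment above, this mapping can be set when starting the container (a sketch; `GEMINI_VISION_PREVIEW` is the variable read by this PR's `ToGenaiModel`):

```bash
docker run --restart=unless-stopped -it -d -p 8080:8080 \
  -e GEMINI_VISION_PREVIEW=gemini-1.5-pro-latest \
  --name gemini ghcr.io/ekatiyar/gemini-openai-proxy:latest
```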

If you already have access to the Gemini 1.5 Pro API, you can use:

@@ -91,22 +107,36 @@ Gemini-OpenAI-Proxy offers a straightforward way to integrate OpenAI functionali
-H "Content-Type: application/json" \
-H "Authorization: Bearer $YOUR_GOOGLE_AI_STUDIO_API_KEY" \
-d '{
"model": "gpt-4-turbo-preview",
"model": "gemini-1.5-pro-latest",
"messages": [{"role": "user", "content": "Say this is a test!"}],
"temperature": 0.7
}'
```

Model Mapping:
Example Embeddings API Request:

| GPT Model | Gemini Model |
|---|---|
| gpt-3.5-turbo | gemini-1.0-pro-latest |
| gpt-4 | gemini-1.5-flash-latest |
| gpt-4-turbo-preview | gemini-1.5-pro-latest |
| gpt-4-vision-preview | gemini-1.0-pro-vision-latest |
```bash
curl http://localhost:8080/v1/embeddings \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $YOUR_GOOGLE_AI_STUDIO_API_KEY" \
-d '{
"model": "text-embedding-004",
"input": "This is a test sentence."
}'
```

You can also pass in multiple input strings as a list:

```bash
curl http://localhost:8080/v1/embeddings \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $YOUR_GOOGLE_AI_STUDIO_API_KEY" \
-d '{
"model": "text-embedding-004",
"input": ["This is a test sentence.", "This is another test sentence"]
}'
```
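
The response follows the OpenAI embeddings format. Judging from the `GenerateEmbedding` conversion added in this PR, it should look roughly like the sketch below (vector values shortened; the `usage` field is not populated by the adapter):

```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [0.0123, -0.0456],
      "index": 0
    }
  ],
  "model": "text-embedding-004"
}
```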

If you wish to map `gpt-4-vision-preview` to `gemini-1.5-pro-latest`, you can configure the environment variable `GPT_4_VISION_PREVIEW = gemini-1.5-pro-latest`. This is because `gemini-1.5-pro-latest` now also supports multi-modal data.

4. **Handle Responses:**
Process the responses from the Gemini-OpenAI-Proxy in the same way you would handle responses from OpenAI.
84 changes: 76 additions & 8 deletions api/handler.go
@@ -26,25 +26,31 @@ func ModelListHandler(c *gin.Context) {
"data": []any{
openai.Model{
CreatedAt: 1686935002,
ID: openai.GPT3Dot5Turbo,
ID: adapter.Gemini1Pro,
Object: "model",
OwnedBy: "openai",
OwnedBy: "google",
},
openai.Model{
CreatedAt: 1686935002,
ID: openai.GPT4,
ID: adapter.Gemini1Dot5Flash,
Object: "model",
OwnedBy: "openai",
OwnedBy: "google",
},
openai.Model{
CreatedAt: 1686935002,
ID: openai.GPT4TurboPreview,
ID: adapter.Gemini1Dot5Pro,
Object: "model",
OwnedBy: "openai",
OwnedBy: "google",
},
openai.Model{
CreatedAt: 1686935002,
ID: openai.GPT4VisionPreview,
ID: adapter.Gemini1Dot5ProV,
Object: "model",
OwnedBy: "google",
},
openai.Model{
CreatedAt: 1686935002,
ID: adapter.TextEmbedding004,
Object: "model",
OwnedBy: "openai",
},
@@ -58,7 +64,7 @@ func ModelRetrieveHandler(c *gin.Context) {
CreatedAt: 1686935002,
ID: model,
Object: "model",
OwnedBy: "openai",
OwnedBy: "google",
})
}

@@ -154,3 +160,65 @@ func setEventStreamHeaders(c *gin.Context) {
c.Writer.Header().Set("Transfer-Encoding", "chunked")
c.Writer.Header().Set("X-Accel-Buffering", "no")
}

func EmbeddingProxyHandler(c *gin.Context) {
// Retrieve the Authorization header value
authorizationHeader := c.GetHeader("Authorization")
// Declare a variable to store the OPENAI_API_KEY
var openaiAPIKey string
// Use fmt.Sscanf to extract the Bearer token
_, err := fmt.Sscanf(authorizationHeader, "Bearer %s", &openaiAPIKey)
if err != nil {
c.JSON(http.StatusBadRequest, openai.APIError{
Code: http.StatusBadRequest,
Message: err.Error(),
})
return
}

req := &adapter.EmbeddingRequest{}
// Bind the JSON data from the request to the struct
if err := c.ShouldBindJSON(req); err != nil {
c.JSON(http.StatusBadRequest, openai.APIError{
Code: http.StatusBadRequest,
Message: err.Error(),
})
return
}

messages, err := req.ToGenaiMessages()
if err != nil {
c.JSON(http.StatusBadRequest, openai.APIError{
Code: http.StatusBadRequest,
Message: err.Error(),
})
return
}

ctx := c.Request.Context()
client, err := genai.NewClient(ctx, option.WithAPIKey(openaiAPIKey))
if err != nil {
log.Printf("new genai client error %v\n", err)
c.JSON(http.StatusBadRequest, openai.APIError{
Code: http.StatusBadRequest,
Message: err.Error(),
})
return
}
defer client.Close()

model := req.ToGenaiModel()
gemini := adapter.NewGeminiAdapter(client, model)

resp, err := gemini.GenerateEmbedding(ctx, messages)
if err != nil {
log.Printf("genai generate content error %v\n", err)
c.JSON(http.StatusBadRequest, openai.APIError{
Code: http.StatusBadRequest,
Message: err.Error(),
})
return
}

c.JSON(http.StatusOK, resp)
}
3 changes: 3 additions & 0 deletions api/router.go
@@ -24,4 +24,7 @@ func Register(router *gin.Engine) {

// openai chat
router.POST("/v1/chat/completions", ChatProxyHandler)

// openai embeddings
router.POST("/v1/embeddings", EmbeddingProxyHandler)
}
36 changes: 36 additions & 0 deletions pkg/adapter/chat.go
@@ -20,6 +20,8 @@ const (
Gemini1Pro = "gemini-1.0-pro-latest"
Gemini1Dot5Pro = "gemini-1.5-pro-latest"
Gemini1Dot5Flash = "gemini-1.5-flash-latest"
Gemini1Dot5ProV = "gemini-1.5-vision-latest" // Mapped to one of the models above in ChatCompletionRequest.ToGenaiModel
TextEmbedding004 = "text-embedding-004"

genaiRoleUser = "user"
genaiRoleModel = "model"
@@ -239,3 +241,37 @@ func setGenaiModelByOpenaiRequest(model *genai.GenerativeModel, req *ChatComplet
},
}
}

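// GenerateEmbedding sends all request contents to the Gemini embedding model
// as a single batch and converts the result into an OpenAI-style
// EmbeddingResponse.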
func (g *GeminiAdapter) GenerateEmbedding(
ctx context.Context,
messages []*genai.Content,
) (*openai.EmbeddingResponse, error) {
model := g.client.EmbeddingModel(g.model)

batchEmbeddings := model.NewBatch()
for _, message := range messages {
batchEmbeddings = batchEmbeddings.AddContent(message.Parts...)
}

genaiResp, err := model.BatchEmbedContents(ctx, batchEmbeddings)
if err != nil {
return nil, errors.Wrap(err, "genai generate embeddings error")
}

openaiResp := openai.EmbeddingResponse{
Object: "list",
Data: make([]openai.Embedding, 0, len(genaiResp.Embeddings)),
Model: openai.EmbeddingModel(g.model),
}

for i, genaiEmbedding := range genaiResp.Embeddings {
embedding := openai.Embedding{
Object: "embedding",
Embedding: genaiEmbedding.Values,
Index: i,
}
openaiResp.Data = append(openaiResp.Data, embedding)
}

return &openaiResp, nil
}
66 changes: 57 additions & 9 deletions pkg/adapter/struct.go
Expand Up @@ -3,7 +3,6 @@ package adapter
import (
"encoding/json"
"os"
"strings"

"github.com/google/generative-ai-go/genai"
"github.com/pkg/errors"
@@ -45,24 +44,22 @@ type ChatCompletionRequest struct {

func (req *ChatCompletionRequest) ToGenaiModel() string {
switch {
case req.Model == openai.GPT4VisionPreview:
if os.Getenv("GPT_4_VISION_PREVIEW") == Gemini1Dot5Pro {
case req.Model == Gemini1Dot5ProV:
if os.Getenv("GEMINI_VISION_PREVIEW") == Gemini1Dot5Pro {
return Gemini1Dot5Pro
}

return Gemini1Dot5Flash
case req.Model == openai.GPT4TurboPreview || req.Model == openai.GPT4Turbo1106 || req.Model == openai.GPT4Turbo0125:
return Gemini1Dot5Pro
case strings.HasPrefix(req.Model, openai.GPT4):
return Gemini1Dot5Flash
default:
return Gemini1Pro
return req.Model
}
}

func (req *ChatCompletionRequest) ToGenaiMessages() ([]*genai.Content, error) {
if req.Model == openai.GPT4VisionPreview {
if req.Model == Gemini1Dot5ProV {
return req.toVisionGenaiContent()
} else if req.Model == TextEmbedding004 {
return nil, errors.New("Chat Completion is not supported for embedding model")
}

return req.toStringGenaiContent()
@@ -176,3 +173,54 @@ type CompletionResponse struct {
Model string `json:"model"`
Choices []CompletionChoice `json:"choices"`
}

type StringArray []string

// UnmarshalJSON implements the json.Unmarshaler interface for StringArray.
func (s *StringArray) UnmarshalJSON(data []byte) error {
// Check if the data is a JSON array
if data[0] == '[' {
var arr []string
if err := json.Unmarshal(data, &arr); err != nil {
return err
}
*s = arr
return nil
}

// Check if the data is a JSON string
var str string
if err := json.Unmarshal(data, &str); err != nil {
return err
}
*s = StringArray{str} // Wrap the string in a slice
return nil
}
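
// Note: StringArray lets the `input` field accept both shapes the OpenAI
// embeddings API allows. A minimal, hypothetical illustration (not part of
// this PR):
//
//	var single, multi StringArray
//	_ = json.Unmarshal([]byte(`"one sentence"`), &single) // StringArray{"one sentence"}
//	_ = json.Unmarshal([]byte(`["one", "two"]`), &multi)  // StringArray{"one", "two"}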

// EmbeddingRequest represents a request structure for embeddings API.
type EmbeddingRequest struct {
Model string `json:"model" binding:"required"`
Messages StringArray `json:"input" binding:"required,min=1"`
}

func (req *EmbeddingRequest) ToGenaiMessages() ([]*genai.Content, error) {
if req.Model != TextEmbedding004 {
return nil, errors.New("Embedding is not supported for chat model " + req.Model)
}

content := make([]*genai.Content, 0, len(req.Messages))
for _, message := range req.Messages {
embedString := []genai.Part{
genai.Text(message),
}
content = append(content, &genai.Content{
Parts: embedString,
})
}

return content, nil
}

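// ToGenaiModel always resolves to the text-embedding-004 model; any other
// requested model name has already been rejected by ToGenaiMessages above.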
func (req *EmbeddingRequest) ToGenaiModel() string {
return TextEmbedding004
}