
Commit

SDK regeneration
fern-api[bot] committed Sep 19, 2024
1 parent cdd01e4 commit 775da0e
Showing 17 changed files with 145 additions and 463 deletions.
90 changes: 10 additions & 80 deletions reference.md
@@ -1603,13 +1603,17 @@ If you want to learn more how to use the embedding model, have a look at the [Se
<dd>

```python
-from cohere import Client
+from cohere import Client, EmbedRequestV2

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
-client.embed()
+client.embed(
+    request=EmbedRequestV2(
+        model="model",
+    ),
+)

```
</dd>
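
The diff above replaces the old flat keyword arguments with a single request object. As a rough migration sketch, assuming the regenerated `EmbedRequestV2` exposes fields mirroring the former keyword arguments (`texts`, `model`, `input_type`), which this diff does not confirm, a call might look like:

```python
from cohere import Client, EmbedRequestV2

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

# Hypothetical migration of an old-style call such as
#   client.embed(texts=["hello world"], model="embed-english-v3.0")
# into the new request-object form; the field names are assumptions, not
# taken from the generated model definition.
response = client.embed(
    request=EmbedRequestV2(
        texts=["hello world"],
        model="embed-english-v3.0",
        input_type="search_document",
    ),
)
print(response)
```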
@@ -1625,80 +1629,7 @@ client.embed()
<dl>
<dd>

**texts:** `typing.Optional[typing.Sequence[str]]` — An array of strings for the model to embed. Maximum number of texts per call is `96`. We recommend reducing the length of each text to be under `512` tokens for optimal quality.

</dd>
</dl>

<dl>
<dd>

**images:** `typing.Optional[typing.Sequence[str]]`

An array of image data URIs for the model to embed. Maximum number of images per call is `1`.

The image must be a valid [data URI](https://developer.mozilla.org/en-US/docs/Web/URI/Schemes/data). The image must be in either `image/jpeg` or `image/png` format and has a maximum size of 5MB.

</dd>
</dl>
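
As an aside, the data-URI requirement described above can be satisfied with the standard library alone; a minimal sketch, with a placeholder file path:

```python
import base64
import mimetypes


def to_data_uri(path: str) -> str:
    """Encode a local JPEG or PNG file as a data URI for the images parameter."""
    mime_type, _ = mimetypes.guess_type(path)
    if mime_type not in ("image/jpeg", "image/png"):
        raise ValueError(f"Unsupported image type: {mime_type}")
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime_type};base64,{payload}"


# Placeholder usage:
# image_uri = to_data_uri("example.png")
```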

<dl>
<dd>

**model:** `typing.Optional[str]`

Defaults to embed-english-v2.0

The identifier of the model. Smaller "light" models are faster, while larger models will perform better. [Custom models](/docs/training-custom-models) can also be supplied with their full ID.

Available models and corresponding embedding dimensions:

* `embed-english-v3.0` 1024
* `embed-multilingual-v3.0` 1024
* `embed-english-light-v3.0` 384
* `embed-multilingual-light-v3.0` 384

* `embed-english-v2.0` 4096
* `embed-english-light-v2.0` 1024
* `embed-multilingual-v2.0` 768

</dd>
</dl>

<dl>
<dd>

**input_type:** `typing.Optional[EmbedInputType]`

</dd>
</dl>

<dl>
<dd>

**embedding_types:** `typing.Optional[typing.Sequence[EmbeddingType]]`

Specifies the types of embeddings you want to get back. Not required and default is None, which returns the Embed Floats response type. Can be one or more of the following types.

* `"float"`: Use this when you want to get back the default float embeddings. Valid for all models.
* `"int8"`: Use this when you want to get back signed int8 embeddings. Valid for only v3 models.
* `"uint8"`: Use this when you want to get back unsigned int8 embeddings. Valid for only v3 models.
* `"binary"`: Use this when you want to get back signed binary embeddings. Valid for only v3 models.
* `"ubinary"`: Use this when you want to get back unsigned binary embeddings. Valid for only v3 models.

</dd>
</dl>

<dl>
<dd>

**truncate:** `typing.Optional[EmbedRequestTruncate]`

One of `NONE|START|END` to specify how the API will handle inputs longer than the maximum token length.

Passing `START` will discard the start of the input. `END` will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.

If `NONE` is selected, when the input exceeds the maximum input token length an error will be returned.
**request:** `EmbedRequestV2`

</dd>
</dl>
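
Connecting the removed `embedding_types` and `truncate` descriptions above to the new request object: a hedged sketch, again assuming these fields carry over onto `EmbedRequestV2` with the same accepted values (this diff only shows `model`):

```python
from cohere import Client, EmbedRequestV2

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)

# embedding_types and truncate are assumed field names. Per the notes above,
# "int8" is only valid for v3 models, and truncate="END" drops tokens from the
# end of over-long inputs instead of returning an error (as "NONE" would).
response = client.embed(
    request=EmbedRequestV2(
        texts=["a long passage that may exceed the model's token limit"],
        model="embed-english-v3.0",
        input_type="search_document",
        embedding_types=["float", "int8"],
        truncate="END",
    ),
)
```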
@@ -2858,16 +2789,15 @@ If you want to learn more how to use the embedding model, have a look at the [Se
<dd>

```python
-from cohere import Client, ImageEmbedRequestV2
+from cohere import Client, EmbedRequestV2

client = Client(
    client_name="YOUR_CLIENT_NAME",
    token="YOUR_TOKEN",
)
client.v2.embed(
-    request=ImageEmbedRequestV2(
-        images=["string"],
-        model="string",
+    request=EmbedRequestV2(
+        model="model",
    ),
)
```

20 changes: 2 additions & 18 deletions src/cohere/__init__.py
@@ -80,7 +80,6 @@
CitationStartEventDelta,
CitationStartEventDeltaMessage,
CitationStartStreamedChatResponseV2,
ClassificationEmbedRequestV2,
ClassifyDataMetrics,
ClassifyExample,
ClassifyRequestTruncate,
@@ -89,7 +88,6 @@
ClassifyResponseClassificationsItemClassificationType,
ClassifyResponseClassificationsItemLabelsValue,
ClientClosedRequestErrorBody,
ClusteringEmbedRequestV2,
CompatibleEndpoint,
Connector,
ConnectorAuthStatus,
@@ -119,8 +117,8 @@
EmbedJob,
EmbedJobStatus,
EmbedJobTruncate,
EmbedRequestTruncate,
EmbedRequestV2,
EmbedRequestV2Truncate,
EmbedResponse,
EmbeddingType,
EmbeddingsByTypeEmbedResponse,
@@ -141,8 +139,6 @@
Generation,
GetConnectorResponse,
GetModelResponse,
ImageEmbedRequestV2,
Images,
JsonObjectResponseFormat,
JsonObjectResponseFormatV2,
JsonResponseFormat,
@@ -169,9 +165,7 @@
RerankerDataMetrics,
ResponseFormat,
ResponseFormatV2,
SearchDocumentEmbedRequestV2,
SearchQueriesGenerationStreamedChatResponse,
SearchQueryEmbedRequestV2,
SearchResultsStreamedChatResponse,
SingleGeneration,
SingleGenerationInStream,
@@ -200,8 +194,6 @@
TextResponseFormatV2,
TextSystemMessageContentItem,
TextToolContent,
Texts,
TextsTruncate,
TokenizeResponse,
TooManyRequestsErrorBody,
Tool,
@@ -359,7 +351,6 @@
"CitationStartEventDelta",
"CitationStartEventDeltaMessage",
"CitationStartStreamedChatResponseV2",
"ClassificationEmbedRequestV2",
"ClassifyDataMetrics",
"ClassifyExample",
"ClassifyRequestTruncate",
@@ -372,7 +363,6 @@
"ClientClosedRequestErrorBody",
"ClientEnvironment",
"ClientV2",
"ClusteringEmbedRequestV2",
"CompatibleEndpoint",
"Connector",
"ConnectorAuthStatus",
@@ -408,8 +398,8 @@
"EmbedJob",
"EmbedJobStatus",
"EmbedJobTruncate",
"EmbedRequestTruncate",
"EmbedRequestV2",
"EmbedRequestV2Truncate",
"EmbedResponse",
"EmbeddingType",
"EmbeddingsByTypeEmbedResponse",
@@ -432,8 +422,6 @@
"Generation",
"GetConnectorResponse",
"GetModelResponse",
"ImageEmbedRequestV2",
"Images",
"InternalServerError",
"JsonObjectResponseFormat",
"JsonObjectResponseFormatV2",
@@ -464,9 +452,7 @@
"ResponseFormat",
"ResponseFormatV2",
"SagemakerClient",
"SearchDocumentEmbedRequestV2",
"SearchQueriesGenerationStreamedChatResponse",
"SearchQueryEmbedRequestV2",
"SearchResultsStreamedChatResponse",
"ServiceUnavailableError",
"SingleGeneration",
@@ -496,8 +482,6 @@
"TextResponseFormatV2",
"TextSystemMessageContentItem",
"TextToolContent",
"Texts",
"TextsTruncate",
"TokenizeResponse",
"TooManyRequestsError",
"TooManyRequestsErrorBody",