Add a method that calculates the number of tokens a request will use, for both the text completion and chat APIs.
This is useful when consuming the library in an application that needs to truncate prompts.
It would also help avoid each implementation rolling its own, different calculation.
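A method along these lines might look like the sketch below (in Python for illustration). The names `count_tokens` and `truncate_to_budget` are hypothetical, and the 4-characters-per-token ratio is only a rough heuristic for English text; a real implementation would use a BPE tokenizer such as tiktoken:

```python
def count_tokens(text: str) -> int:
    """Rough token estimate (hypothetical helper).

    Roughly 4 characters per token is a common rule of thumb for
    English text; this is an approximation, not the exact BPE count
    a real tokenizer would return.
    """
    return max(1, len(text) // 4) if text else 0


def truncate_to_budget(prompt: str, budget: int) -> str:
    """Trim the prompt so its estimated token count fits the budget."""
    if count_tokens(prompt) <= budget:
        return prompt
    # Cut by the same chars-per-token heuristic used above.
    return prompt[: budget * 4]
```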
That would be very helpful. I keep getting exceptions because the input size plus the `max_tokens` parameter exceeds the model's maximum token count.
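The failure mode described above can be sketched as a pre-flight check. The default limit of 4096 here is only an illustrative value, and `prompt_tokens` stands in for whatever count the proposed method would return:

```python
def fits_model(prompt_tokens: int, max_tokens: int, model_limit: int = 4096) -> bool:
    """Return True if the prompt tokens plus the requested completion
    budget (max_tokens) fit within the model's context window.
    The API rejects requests where this sum exceeds the limit."""
    return prompt_tokens + max_tokens <= model_limit
```

With a 4096-token context, a 4000-token prompt leaves at most 96 tokens for the completion; asking for more triggers the exception.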
A bit late to the party, but you can use TikTokenSharp for this (a port of the tiktoken Python library).