Prep v0.11.0 release
benbrandt committed Apr 18, 2024
1 parent 2d68cf0 commit 2af1dc5
Showing 4 changed files with 15 additions and 4 deletions.
11 changes: 11 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,16 @@
# Changelog

+## v0.11.0
+
+### Breaking Changes
+
+- Bump tokenizers from 0.15.2 to 0.19.1
+
+### Other updates
+
+- Bump either from 1.10.0 to 1.11.0
+- Bump pyo3 from 0.21.1 to 0.21.2

## v0.10.0

### Breaking Changes
4 changes: 2 additions & 2 deletions Cargo.lock

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion Cargo.toml
@@ -2,7 +2,7 @@
members = ["bindings/*"]

[workspace.package]
-version = "0.10.0"
+version = "0.11.0"
authors = ["Ben Brandt <benjamin.j.brandt@gmail.com>"]
edition = "2021"
description = "Split text into semantic chunks, up to a desired chunk size. Supports calculating length by characters and tokens, and is callable from Rust and Python."
2 changes: 1 addition & 1 deletion README.md
@@ -184,7 +184,7 @@ There are lots of methods of determining sentence breaks, all to varying degrees
| Dependency Feature | Version Supported | Description |
| ------------------ | ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `tiktoken-rs` | `0.5.8` | Enables `(Text/Markdown)Splitter::new` to take `tiktoken_rs::CoreBPE` as an argument. This is useful for splitting text for OpenAI models. |
-| `tokenizers`         | `0.15.2`          | Enables `(Text/Markdown)Splitter::new` to take `tokenizers::Tokenizer` as an argument. This is useful for splitting text for models that have a Hugging Face-compatible tokenizer. |
+| `tokenizers`         | `0.19.1`          | Enables `(Text/Markdown)Splitter::new` to take `tokenizers::Tokenizer` as an argument. This is useful for splitting text for models that have a Hugging Face-compatible tokenizer. |

## Inspiration
