2.2.0
FontaineRiant released this 22 Mar 13:41
- Hard limit on VRAM usage (by limiting history) for 24 GB AMD GPUs (sketch below)
- EOS token is now determined from the tokenizer config rather than the model config, since some models don't declare it the same way (sketch below)
- Forced the tokenizer to add a leading space when that isn't the model's default behavior (as with Mistral) (sketch below)
- Switched from PyInquirer to InquirerPy, making wrAIter compatible with Python 3.10
- Added ROCm requirements for AMD GPUs
- Merged short sentences together during TTS to reduce hallucinations (sketch below)
- Removed the need for TTS to write temporary audio files to disk; audio is now played directly from memory (sketch below)
- Better crash handling and story recovery after a crash
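The VRAM cap works by bounding how much story history goes into the prompt. A minimal sketch of that idea, assuming a Hugging Face tokenizer and a hypothetical `trim_history` helper with an illustrative token budget (not wrAIter's actual limit):

```python
def trim_history(tokenizer, history, max_tokens=1024):
    """Keep only the most recent story entries that fit within a fixed
    token budget, so prompt length, and with it VRAM use during
    generation, stays bounded."""
    kept, total = [], 0
    for entry in reversed(history):      # walk from newest to oldest
        n = len(tokenizer.encode(entry, add_special_tokens=False))
        if total + n > max_tokens:
            break                        # older entries are dropped
        kept.append(entry)
        total += n
    return list(reversed(kept))          # restore chronological order
```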
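Reading the end-of-sequence token from the tokenizer rather than the model config looks roughly like this; `gpt2` is only a stand-in model name and the snippet is a sketch, not wrAIter's actual code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Take the EOS id from the tokenizer config; some models omit it from
# (or declare it differently in) the model config.
eos_id = tokenizer.eos_token_id

inputs = tokenizer("Once upon a time", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40, eos_token_id=eos_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```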
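One way to force a leading space only when the tokenizer doesn't insert one itself is to compare how it encodes text with and without a prefix space. This is a heuristic sketch, not the exact check wrAIter performs:

```python
def encode_with_leading_space(tokenizer, text):
    """Prepend a space before encoding when the tokenizer does not add
    a prefix space on its own, so generated continuations join cleanly
    onto the existing story text."""
    # Heuristic: if " a" and "a" encode identically, the tokenizer
    # already normalizes a prefix space and nothing needs to be added.
    adds_prefix_space = (
        tokenizer.encode(" a", add_special_tokens=False)
        == tokenizer.encode("a", add_special_tokens=False)
    )
    if not adds_prefix_space and not text.startswith(" "):
        text = " " + text
    return tokenizer.encode(text, add_special_tokens=False)
```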
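Merging short sentences before TTS can be as simple as folding any sentence that follows a too-short one into that neighbour. The 40-character threshold below is illustrative, not wrAIter's actual value:

```python
def merge_short_sentences(sentences, min_chars=40):
    """Join very short sentences with the following one before sending
    them to the TTS model; tiny inputs tend to produce hallucinated
    audio."""
    merged = []
    for sentence in sentences:
        if merged and len(merged[-1]) < min_chars:
            merged[-1] = merged[-1] + " " + sentence  # fold into previous
        else:
            merged.append(sentence)
    return merged
```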
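Playing synthesized speech straight from memory avoids the temporary .wav round-trip. A sketch using the `sounddevice` package (one possible backend; wrAIter's audio stack may differ), assuming the TTS model returns a float waveform and its sample rate:

```python
import numpy as np
import sounddevice as sd

def play_from_memory(waveform, sample_rate=22050):
    """Play a synthesized waveform directly from memory instead of
    writing it to a temporary audio file first."""
    audio = np.asarray(waveform, dtype=np.float32)
    sd.play(audio, samplerate=sample_rate)
    sd.wait()  # block until playback finishes
```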
Full Changelog: v2.1.0...2.2.0