v0.3.0

@dakinggg released this 27 Sep 22:06

🚀 LLM Foundry v0.3.0

LLM Foundry is an efficient codebase for training, evaluating, and deploying Large Language Models (LLMs) and serves as the foundation for the MPT model series. This release includes lots of bug fixes, stability improvements, and improved error messages, in addition to all the new features listed below!

Features

Llama-2 (#485, #520, #533)

Adds support for training Llama-2 models with optimized flash attention. To enable flash attention, set the attention_patch_type in your yaml like so:

model:
    ...
    attention_patch_type: triton
    ...

See the example yaml for a full example of how to finetune Llama-2 on the MosaicML platform.

8-bit Lion (#514)

We have implemented an 8-bit version of the Lion optimizer. This reduces the memory needed per parameter from 12 bytes to 9 bytes. To switch from Lion to 8-bit Lion, simply change the optimizer name from decoupled_lionw to decoupled_lionw_8b!
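As a sketch, the only change to an existing optimizer section is the name; the hyperparameter values below are illustrative, not recommendations:

```yaml
optimizer:
  # was: name: decoupled_lionw
  name: decoupled_lionw_8b
  lr: 1.0e-4            # illustrative
  betas: [0.9, 0.95]    # illustrative
  weight_decay: 0.0     # illustrative
```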

Checkpoint conversion (#526, #519, #594)

We've greatly improved our utilities for checkpoint conversion, including:

- Generalizing the Composer to Hugging Face conversion script to support all causal LMs
- Adding a callback to perform the conversion to Hugging Face format during the training job
- Adding support for Faster Transformer conversion from a Composer MPT checkpoint

To enable the new callback, add the hf_checkpointer callback to your yaml like so:

callbacks:
    ...
    hf_checkpointer:
        # Save a Hugging Face formatted checkpoint at the end of each epoch
        save_interval: 1ep
        # The Hugging Face formatted checkpoints will be saved inside a subfolder called huggingface, 
        # so this folder will likely be the same as your overall save_folder
        save_folder: ./{run_name}/checkpoints 
        # Set the precision you want the checkpoint saved in
        precision: bfloat16

Code evaluation (#587)

We have added support for running HumanEval (code evaluation) using LLM Foundry! See the evaluation readme for a more detailed description and the tasks yaml for an ICL yaml that can be used to run the HumanEval evaluation task.

Transformer Engine support (#432)

Adds support for using NVIDIA's Transformer Engine to enable FP8 training. To enable, set fc_type='te' and/or ffn_config['ffn_type']='te_ln_mlp' and precision='amp_fp8'.
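Putting those three settings together in a config, a sketch might look like the following (the surrounding model fields are elided, and the keys shown are exactly those named above):

```yaml
model:
    ...
    fc_type: te
    ffn_config:
        ffn_type: te_ln_mlp
    ...

precision: amp_fp8
```

Note that FP8 training requires hardware with FP8 support (e.g. NVIDIA H100 GPUs).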

MLflow (#475)

Adds support for using MLflow as an experiment tracker. To enable, simply add mlflow to the loggers section of your yaml. See the Composer docs for more configuration options for MLflow. Stay tuned for automatic model logging to MLflow for easy deployment.
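A minimal sketch of the loggers section; the keys under mlflow are illustrative examples of Composer's MLFlowLogger options, so check the Composer docs for the full set:

```yaml
loggers:
    mlflow:
        # Both keys are optional; these values are illustrative
        experiment_name: my-llm-experiment
        tracking_uri: ./mlruns
```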

Updated streaming version/defaults (#503, #573, #580, #602)

Updates to the latest release of MosaicML Streaming and sets better defaults for improved shuffling quality and training throughput. Check out the Streaming release notes for the full details of all the new options!

Grouped Query Attention (#492)

Implements Grouped Query Attention, which can strike a good balance between the quality of Multi Head Attention and the speed of Multi Query Attention. To enable, set attn_config['attn_type']='grouped_query_attention' and attn_config['kv_n_heads'] to the desired number of kv heads.
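As a sketch, the two settings named above live in the model's attn_config (surrounding fields elided; the value of kv_n_heads is illustrative):

```yaml
model:
    ...
    attn_config:
        attn_type: grouped_query_attention
        # kv_n_heads must evenly divide the number of attention heads.
        # kv_n_heads equal to n_heads recovers multi-head attention;
        # kv_n_heads: 1 recovers multi-query attention.
        kv_n_heads: 8
    ...
```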

MPT quality of life improvements (#559, #599)

Thanks to @tdoublep and @lorabit110 for making MPT a bit easier to use with other parts of the NLP ecosystem!

Eval gauntlet during training, inference API eval wrapper (#501, #494)

Improvements to our evaluation setup, including the ability to run the eval gauntlet during training, and a wrapper that allows using inference APIs with our eval gauntlet. The ICL tasks and gauntlet can be specified as shown [here](https://github.com/mosaicml/llm-foundry/blob/fd36398dad5ac9fde085af679514189ce9439be4/scripts/eval/yamls/hf_eval.yaml#L46-L47).

tiktoken support (#610)

We have enabled training with tiktoken tokenizers with a thin wrapper around the tiktoken library for compatibility with all the tooling built around Hugging Face tokenizers. You can enable this with a simple change to the tokenizer section of your yaml:

tokenizer:
    name: tiktoken
    kwargs:
        model_name: gpt-4

LoRA eval (#515)

Allows the use of our evaluation script with a model trained using LoRA. See this yaml for an example of evaluating a model trained using LoRA. Stay tuned for full LoRA support with FSDP!

Finetuning API

Lastly, we are building a finetuning API on top of LLM Foundry, Composer, and Streaming. Please reach out if you might be interested in using this API as a customer!

What's Changed

New Contributors

Full Changelog: v0.2.0...v0.3.0