FineTune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text)
-
Updated
Jan 13, 2024 - Python
Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs).
Unlock the potential of finetuning Large Language Models (LLMs). Learn from an industry expert, and discover when to apply finetuning, how to prepare data, and how to effectively train and evaluate LLMs.
A Gradio web UI for Large Language Models. Supports LoRA/QLoRA finetuning, RAG (retrieval-augmented generation), and chat.
This project enhances the LLaMA-2 model using Quantized Low-Rank Adaptation (QLoRA) and other parameter-efficient fine-tuning techniques to optimize its performance for specific NLP tasks. The improved model is demonstrated through a Streamlit application, showcasing its capabilities in real-time interactive settings.
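The LoRA idea underlying QLoRA can be illustrated without any libraries: the frozen base weight W is augmented with a trainable low-rank product B·A, scaled by alpha/r, so only the small matrices A and B are updated during finetuning. A minimal pure-Python sketch (all names here are illustrative, not taken from the project above):

```python
def matmul(X, Y):
    """Multiply two matrices represented as lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """Compute (W + (alpha/r) * B @ A) @ x for a column vector x.

    W is the frozen d x d base weight; A (r x d) and B (d x r) are the
    trainable low-rank adapter matrices. In LoRA, B starts at zero, so the
    adapted model initially matches the base model exactly.
    """
    BA = matmul(B, A)            # low-rank update, same shape as W
    scale = alpha / r
    Wx = matmul(W, x)
    BAx = matmul(BA, x)
    return [[w[0] + scale * u[0]] for w, u in zip(Wx, BAx)]
```

With B initialized to zero the forward pass reproduces the base model, which is why LoRA training starts from the pretrained model's behavior; QLoRA adds 4-bit quantization of the frozen W on top of this scheme.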
Jupyter notebooks for course Finetuning Large Language Models, taught by Sharon Zhou (Lamini) and Andrew Ng (DeepLearning.AI).
Finetuning Starcoder2-3B for Code Completion on a single A100 GPU
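Code-completion finetuning of this kind is usually framed as next-token prediction over packed sequences: tokenized source files are concatenated (separated by an end-of-sequence token) and chunked into fixed-length blocks. A dependency-free sketch of that packing step, with hypothetical names not taken from the repository above:

```python
def pack_sequences(examples, block_size, eos_id):
    """Concatenate tokenized examples, separated by an EOS token, and chunk
    the stream into fixed-length blocks for causal-LM training.

    Trailing tokens that do not fill a full block are dropped, as is common
    when preparing code corpora for next-token-prediction finetuning.
    """
    stream = []
    for ex in examples:
        stream.extend(ex)
        stream.append(eos_id)
    blocks = [stream[i:i + block_size] for i in range(0, len(stream), block_size)]
    return [b for b in blocks if len(b) == block_size]
```

Each block then serves as both input and label, shifted by one position, so the model learns to continue partial code, which is exactly the completion behavior being finetuned for.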
This repository contains code for fine-tuning the Llama 3 8B model with Alpaca-style prompts to generate Java code. The code is based on a Google Colab notebook.
A script to fine-tune open-source models with Unsloth through a simple point-and-click UI with progress tracking.
An AI chat model that converts natural language to code: it takes natural language as input and generates the required embedded-platform code as output.
Jupyter notebooks from "Finetune LLMs" course at deeplearning.ai
(In-progress) Finetuning OpenAI's GPT-3.5-Turbo as a base model on open-source data about the Tampa Bay region to create a chatbot specializing in information on the area!
Finetuning-LLM
This repo collects influential papers that apply finetuning techniques to LLMs for domain-specific tasks.
A Slope Analysis Trainer for AI Models based on Crypto Price Charts
Finetuning RoBERTa on your own dataset
A research engine designed to deliver actionable insights by analyzing and interpreting multimodal inputs, including text, images, and other data types. It integrates diverse information sources to provide comprehensive and contextually relevant responses.
This project uses BERT to build a QA system fine-tuned on the SQuAD dataset, improving the accuracy and efficiency of question-answering tasks. We address challenges in contextual understanding and ambiguity handling to enhance user experience and system performance.
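The decoding step of such an extractive QA system is straightforward to sketch: the model emits per-token start and end logits, and the predicted answer is the span that maximizes their sum, subject to the span being valid and not too long. A minimal illustration (function and parameter names are hypothetical, not from the project above):

```python
def best_answer_span(start_logits, end_logits, max_len=30):
    """Pick the (start, end) token pair maximizing start_logit + end_logit,
    with start <= end and a cap on answer length — the standard decoding
    step for extractive QA models such as BERT finetuned on SQuAD.
    """
    best, best_score = (0, 0), float("-inf")
    for s, s_logit in enumerate(start_logits):
        # only consider ends within max_len tokens of the start
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best
```

Real systems refine this with n-best lists, a no-answer threshold (for SQuAD 2.0), and mapping token indices back to character offsets, but the span-scoring core is the same.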