
# LLM-Lora-PEFT_accumulate

Welcome to the LLM-Lora-PEFT_accumulate repository!

This repository contains implementations and experiments related to Large Language Models (LLMs) using PEFT (Parameter-Efficient Fine-Tuning), LoRA (Low-Rank Adaptation of Large Language Models), and QLoRA (quantized LLMs with low-rank adapters).
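To make the QLoRA part concrete, below is a minimal loading sketch using `transformers` and `bitsandbytes` 4-bit NF4 quantization. It is only an illustration: the model name and every hyperparameter are placeholders, not the exact setup used in this repository.

```python
# Minimal QLoRA-style loading sketch (assumes transformers, bitsandbytes, accelerate are installed).
# The model name and all hyperparameters below are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization, as used by QLoRA
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # run compute in bf16 for speed/stability
)

model_name = "facebook/opt-350m"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place layers on available devices
)
```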

Loading a model in 8-bit precision can save up to 4x memory compared to loading it in full (32-bit) precision.

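As a rough sketch of how that saving shows up in practice (the model name is a placeholder and the exact numbers depend on the model), one can compare the memory footprint of a full-precision load with an 8-bit load:

```python
# Sketch: load the same model in full precision and in 8-bit with bitsandbytes.
# "facebook/opt-1.3b" is just a placeholder; footprints vary by model.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_name = "facebook/opt-1.3b"

# Full-precision (fp32) load: roughly 4 bytes per parameter.
model_fp32 = AutoModelForCausalLM.from_pretrained(model_name)
print(f"fp32 footprint: {model_fp32.get_memory_footprint() / 1e9:.2f} GB")

# 8-bit load: roughly 1 byte per parameter for the quantized weights,
# which is where the "up to 4x" memory saving comes from.
model_int8 = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
print(f"int8 footprint: {model_int8.get_memory_footprint() / 1e9:.2f} GB")
```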

## What does PEFT do?

PEFT lets you add small trainable adapters on top of a frozen 8-bit model, so only a small fraction of the parameters are trained and the memory required for optimizer states drops accordingly.

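A minimal sketch of that workflow with the `peft` library is shown below; the model name, target modules, and LoRA hyperparameters are illustrative placeholders rather than this repository's exact configuration.

```python
# Sketch: attach LoRA adapters to a frozen 8-bit base model with peft.
# Model name, target modules, and LoRA hyperparameters are placeholders.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder model
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Keep the quantized weights frozen and prepare the model for k-bit training
# (e.g. casting layer norms to fp32 and enabling gradient checkpointing).
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(
    r=8,                     # rank of the low-rank update matrices
    lora_alpha=16,           # scaling factor applied to the update
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (model-specific)
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Because only the LoRA matrices receive gradients, the optimizer states cover just that small trainable fraction, which is what keeps fine-tuning memory low.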

## Resources

### 🌐 Websites

### 📺 YouTube Videos

### 📄 Papers

### 🐙 GitHub Repositories

### 🐍 Python Notebooks

## SWOT of LLMs

Go to LLM Analysis with SWOT for more details.