# Reward-Model

Reward Model training framework for LLM RLHF. For an in-depth understanding of reward modeling, check out our blog.
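Reward models for RLHF are commonly trained on human preference pairs with a pairwise (Bradley-Terry style) ranking loss, which pushes the score of the chosen response above the score of the rejected one. A minimal pure-Python sketch of that objective (illustrative only, not necessarily this repo's exact implementation):

```python
import math

def pairwise_ranking_loss(chosen_rewards, rejected_rewards):
    """-log sigmoid(r_chosen - r_rejected), averaged over preference pairs."""
    losses = [
        -math.log(1.0 / (1.0 + math.exp(-(c - r))))
        for c, r in zip(chosen_rewards, rejected_rewards)
    ]
    return sum(losses) / len(losses)

# Loss is high when rejected responses outscore chosen ones,
# and approaches zero as the margin (chosen - rejected) grows.
print(pairwise_ranking_loss([2.0, 1.5], [0.0, 0.5]))
```

When both responses score equally the loss is ln 2 ≈ 0.693; training drives the scalar reward head to separate the pairs.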

## Quick Start

- **Inference**

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "shahules786/Reward-model-gptneox-410M"

model = AutoModelForSequenceClassification.from_pretrained(MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)

# Score a prompt/response pair; a higher logit means a more preferred response.
# (Check the model card for the exact expected input format.)
inputs = tokenizer(prompt, response, return_tensors="pt")
reward = model(**inputs).logits[0]
```
- **Training**

```shell
python src/training.py --config-name <your-config-name>
```

## Contributions

- All contributions are welcome. Check out the open issues.