Support for Passing in Tokenized Data to One-Shot #2202

Merged: 7 commits merged into main from tokenized_inputs on Apr 10, 2024

Conversation

@Satrat commented Mar 28, 2024

Adds support for passing a tokenized dataset to sparseml.transformers.oneshot. See the OneShot UX Enhancement doc for feature details. If the user passes in a dataset smaller than num_calibration_samples, we log a warning and use the smaller dataset.
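
For reference, the small-dataset fallback described above amounts to something like the following sketch (function and variable names are illustrative, not sparseml's actual internals):

import logging

def select_calibration_samples(dataset, num_calibration_samples):
    # Fall back to the full dataset, with a warning, when the user provides
    # fewer rows than the requested number of calibration samples
    if len(dataset) < num_calibration_samples:
        logging.warning(
            f"Requested {num_calibration_samples} calibration samples, but the "
            f"dataset only contains {len(dataset)} rows; using all of them."
        )
        return dataset
    return dataset.select(range(num_calibration_samples))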

Previously we were handling dataset shuffling within our code. I added a new shuffle_calibration_samples flag to disable this, to support cases where a user wants to shuffle the data themselves and keep that ordering; a usage sketch follows. This wasn't in the original UX requirements, so I'm definitely open to feedback on removing or adjusting it. See the unit test for an example.
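
A minimal usage sketch of the new flag (the oneshot arguments mirror the example script below):

# Shuffle the data yourself, then keep that ordering by disabling
# oneshot's internal shuffling with the flag added in this PR
tokenized_dataset = tokenized_dataset.shuffle(seed=42)

oneshot(
    model=model,
    dataset=tokenized_dataset,
    recipe=recipe_stub,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    shuffle_calibration_samples=False,
    output_dir=OUTPUT_DIR,
)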

Example Script

from datasets import load_dataset
from transformers import AutoTokenizer
from sparseml.transformers import SparseAutoModelForCausalLM, oneshot
import torch

ALPACA_TEMPLATE = {
    "prompt_input": "Below is an instruction that describes a task, paired with an "
    "input that provides further context. Write a response that appropriately "
    "completes the request.\n\n### Instruction:\n{instruction}\n\n### Input:\n"
    "{input}\n\n### Response:\n",
    "prompt_no_input": "Below is an instruction that describes a task. Write a "
    "response that appropriately completes the request.\n\n### Instruction:\n{"
    "instruction}\n\n### Response:\n",
}

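# NOTE: the regex backslash below is escaped twice: Python collapses "\\\d" to
# "\\d" in the YAML text, and the YAML double-quoted scalar yields the regex "\d"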
recipe_stub = """
test_stage:
  obcq_modifiers:
    SparseGPTModifier:
      sparsity: 0.5
      block_size: 128
      sequential_update: False
      quantize: False
      targets: [
        "re:model.layers.\\\d+$"
      ]
"""

MODEL_STUB = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
OUTPUT_DIR = "./test_oneshot_tokenized_input"
NUM_CALIBRATION_SAMPLES = 1024

model = SparseAutoModelForCausalLM.from_pretrained(
    MODEL_STUB, device_map="auto", torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_STUB)

dataset = load_dataset("garage-bAInd/Open-Platypus")["train"]
dataset = dataset.shuffle(seed=42).select(range(256))

def preprocess_for_oneshot(sample):
    # "input" and "output" exist as columns on every Open-Platypus row, so test
    # for a non-empty value rather than key membership
    if sample.get("input"):
        concat_text = ALPACA_TEMPLATE["prompt_input"].format(
            instruction=sample["instruction"], input=sample["input"]
        )
    else:
        concat_text = ALPACA_TEMPLATE["prompt_no_input"].format(
            instruction=sample["instruction"]
        )
    if sample.get("output"):
        concat_text += sample["output"]

    return tokenizer(
        concat_text,
        padding=False,
        max_length=512,
        truncation=True,
    )

tokenized_dataset = dataset.map(
    preprocess_for_oneshot,
    remove_columns=["input", "output", "instruction", "data_source"],
)

oneshot(
    model=model,
    dataset=tokenized_dataset,
    recipe=recipe_stub,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    output_dir=OUTPUT_DIR,
    overwrite_output_dir=True,
)

@robertgshaw2-neuralmagic (Contributor) commented Mar 28, 2024

UX LGTM

Can you validate it works with a model that does not accept labels? e.g. swap in the following

model = AutoModel.from_pretrained("BAAI/bge-small-en-v1.5")

recipe = """
test_stage:
  obcq_modifiers:
    SparseGPTModifier:
      sparsity: 0.5
      targets: ["re:encoder.layer.\\d+$"]
"""

..

@mgoin self-requested a review April 1, 2024 13:39
@Satrat (Author) commented Apr 1, 2024

> UX LGTM
>
> Can you validate it works with a model that does not accept labels? e.g. swap in the following
>
> model = AutoModel.from_pretrained("BAAI/bge-small-en-v1.5")
>
> recipe = """
> test_stage:
>   obcq_modifiers:
>     SparseGPTModifier:
>       sparsity: 0.5
>       targets: ["re:encoder.layer.\\d+$"]
> """
>
> ..

@robertgshaw2-neuralmagic Confirmed this worked with the following recipe:

recipe_stub = """
test_stage:
  obcq_modifiers:
    SparseGPTModifier:
      sparsity: 0.5
      targets: ["re:bert.encoder.layer.\\\d+$"]
"""

MODEL_STUB = "BAAI/bge-small-en-v1.5"
OUTPUT_DIR = "./test_oneshot_tokenized_input"
NUM_CALIBRATION_SAMPLES = 1024

model = SparseAutoModelForCausalLM.from_pretrained(
    MODEL_STUB, device_map="cuda:0", torch_dtype=torch.float16
)

# (rest of the script is the same as the example above)

The only hiccup: "BAAI/bge-small-en-v1.5" doesn't support device_map="auto", but it's small enough to fit on a single GPU.

@bfineran previously approved these changes Apr 1, 2024
@Satrat requested a review from bfineran April 9, 2024 19:17
@bfineran merged commit e9a6866 into main Apr 10, 2024
13 of 17 checks passed
@bfineran deleted the tokenized_inputs branch April 10, 2024 14:41