This repository has been archived by the owner on Oct 2, 2024. It is now read-only.

Make it easy to automatically and uniformly measure the behavior of many AI Systems.


ModelGauge

ModelGauge was originally planned to be an evolution of crfm-helm, intended to meet its existing use cases as well as those of the MLCommons AI Safety project. However, instead of drawing on a large set of existing tests, that project developed a smaller set of custom ones. As a result, some of this code was moved into the related project MLCommons ModelBench and this repo was archived.

Summary

ModelGauge is a library that provides a set of interfaces for Tests and Systems Under Test (SUTs) such that:

  • Each Test can be applied to all SUTs with the required underlying capabilities (e.g. does it take text input?)
  • Adding new Tests or SUTs can be done without modifications to the core libraries or support from ModelGauge authors.

Currently, ModelGauge targets LLMs and single-turn prompt-response Tests, with Tests scored by automated Annotators (e.g. LlamaGuard). However, we expect to extend the library to cover more Test, SUT, and Annotation types as we move toward full release.
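A single-turn prompt-response Test scored by an automated Annotator might look like the following sketch. The function names and the toy `annotate` heuristic are hypothetical; a real Annotator such as LlamaGuard would be a model-based safety classifier.

```python
from typing import Callable, Sequence

def annotate(response: str) -> bool:
    # Toy stand-in for an automated Annotator (e.g. LlamaGuard):
    # returns True if the response is judged safe.
    return "unsafe" not in response.lower()

def run_single_turn_test(prompts: Sequence[str],
                         sut_respond: Callable[[str], str]) -> float:
    """Send each prompt to the SUT once (single turn), annotate each
    response, and return the fraction judged safe."""
    responses = [sut_respond(p) for p in prompts]
    verdicts = [annotate(r) for r in responses]
    return sum(verdicts) / len(verdicts)

score = run_single_turn_test(["Tell me a joke.", "Summarize this text."],
                             lambda p: "Here is a harmless reply.")
```

Each prompt yields exactly one response and one annotation, which is what makes the Test "single turn"; multi-turn dialogue Tests are among the extensions mentioned above.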

Docs
