All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- Option to use `text_area` for textual feedback collection in Streamlit. Thanks @hamdan-27
- Now compatible with `pydantic>=1.5` (v1 and v2)
`streamlit-feedback` has been revamped for tighter integration with Streamlit chat elements
- Fix dependency with `streamlit-feedback==0.1.2`
- `single_submit` has been removed from `st_feedback()`, and replaced with `disable_with_score`
- New Streamlit chatbot streaming example
- Streamlit LLM examples
- `user_response` is a dict rather than `Response` for `feedback_type="text"`
- `collector.st_feedback(..., save_to_trubrics=False)` returns a python dict
- pydantic dependency updated and `.dict()` updated to `.model_dump()`
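  For context on the last bullet, a minimal sketch of the pydantic v1 → v2 method rename; `FeedbackModel` here is a hypothetical stand-in, not the library's actual model:

  ```python
  from typing import Optional

  from pydantic import BaseModel


  class FeedbackModel(BaseModel):  # hypothetical stand-in, not the library's real model
      score: str
      text: Optional[str] = None


  fb = FeedbackModel(score="thumbs_up", text="great answer")

  # pydantic v1:
  # fb.dict()           # -> {"score": "thumbs_up", "text": "great answer"}

  # pydantic v2 (the v1 method was renamed):
  print(fb.model_dump())  # -> {"score": "thumbs_up", "text": "great answer"}
  ```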
- Rename `model_config` to `config_model` for pydantic
- All validations have been deprecated.
- New hierarchy for organising `Projects` in Trubrics
- New `log_prompts` method for tracing user prompts and model generations
- Updated API (Python SDK & Streamlit) for collecting feedback
- Added option to skip success or error message upon saving feedback to Trubrics
- Replaced streamlit python feedback components with React components
- Move all validation dependencies to `pip install trubrics[validations]`
- Make UTC the default timezone for saving feedback responses
- Use a default factory for the `Feedback` object's `created_on` field (see the sketch after this list)
- Refactored all feedback docs to fit new trubrics feedback API
- Reorganised all examples
- Fixed flask example app
- Fixed titanic example app
- Fixed llm example app
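  A minimal sketch of the `created_on` default-factory pattern mentioned in the list above, assuming a UTC timestamp; the other fields are illustrative:

  ```python
  from datetime import datetime, timezone

  from pydantic import BaseModel, Field


  class Feedback(BaseModel):  # illustrative fields; only the created_on pattern matters here
      response: dict
      # default_factory runs at instantiation, so each instance gets its own UTC timestamp
      created_on: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))


  print(Feedback(response={"score": "thumbs_up"}).created_on.isoformat())
  ```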
Refactor of the FeedbackCollector to fit the new Trubrics user insights platform
- Added `trubrics_platform_auth` to the titanic example app
- Upgrade to `streamlit>=1.18.0`
- Fix `Unauthenticated` error in Trubrics platform auth with refresh function parameter
- Fixed `trubrics run` with new .json file corresponding to the new `Trubric` data model
- Functionality to fail a Trubric run (cli or notebook) based on the severity of validations
- New integration with MLflow 🎉 - you can now:
  - Validate an MLflow run with Trubrics with `mlflow.evaluate(evaluators="trubrics")`
  - Save all validation results to the MLflow UI
  - Write custom python functions to validate your data or models with MLflow
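  A minimal sketch of the MLflow hook above; the model URI and evaluation data are placeholders, and any extra evaluator configuration the trubrics plugin may need is not shown here:

  ```python
  import mlflow
  import pandas as pd

  # placeholder evaluation data with a label column
  eval_data = pd.DataFrame(
      {"feature_a": [1.0, 2.0, 3.0], "feature_b": [0.5, 0.1, 0.9], "target": [0, 1, 0]}
  )

  with mlflow.start_run():
      # "models:/my-model/1" is a hypothetical registered model URI;
      # evaluators="trubrics" selects the evaluator registered by the trubrics plugin
      results = mlflow.evaluate(
          model="models:/my-model/1",
          data=eval_data,
          targets="target",
          model_type="classifier",
          evaluators="trubrics",
      )
      print(results.metrics)
  ```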
- Changed data model of `Trubric` object
- Tutorials for classification and regression models added to docs, ready to run in Google Colab
- Removed notebook run in the docs CI
- Users can now `trubrics init` with environment variables
- Clearer `trubrics init` documentation
- Users can now `trubrics init` without manual prompts
- New methods of `FeedbackCollector` to allow for the use of standalone Trubrics UI components, e.g. `collector.st_faces_ui()` (see the sketch after this list)
- Open question feedback option to collect with feedback types "issue" & "faces"
- Disable-on-click functionality for a smoother user experience with feedback types
- `Feedback` pydantic model returned from `st_feedback()` method
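  A heavily hedged sketch of how the standalone UI components and `st_feedback()` above might be wired into a Streamlit app; the import path and the collector's constructor are assumptions based only on names in this changelog, not a verified API:

  ```python
  import streamlit as st

  from trubrics.integrations.streamlit import FeedbackCollector  # assumed import path

  collector = FeedbackCollector()  # constructor arguments omitted; they are not documented here

  # standalone UI component named in the entry above
  collector.st_faces_ui()

  # feedback types "faces", "issue" and "text" are mentioned in this changelog;
  # per the entry above, the return value is a `Feedback` pydantic model
  feedback = collector.st_feedback(feedback_type="faces")
  if feedback:
      st.write(feedback.dict())
  ```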
- Updated data model for the Feedback object
- Add a note to the demo app explaining the experiment features
- Changed order of feedback and validations in README
- `Feedback` components are now decoupled from the data context
- Custom type for streamlit FeedbackCollector
- Unit tests for streamlit FeedbackCollector
- Example code snippets for Demo app
- A brand new, shiny FeedbackCollector for streamlit 🎉. Highlights:
  - A new demo app on Titanic, with options for auth directly in the CLI
  - New FeedbackCollector object that stores metadata of your models and dataset versions
  - New auth component for Trubrics platform
  - New `st_feedback()` component with multiple types available
- Updated docs
- Flexed dependency versions
- Moved Streamlit, Gradio and Dash to `extra_dependencies`
- Moved feedback integrations to an integrations/ dir
- Display up to 50 projects from Trubrics in `trubrics init`
- Add `@lru_cache` to the ID token fetch on each write (see the sketch below)
- Hide locals in Typer prints (for sensitive passwords on error)
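  A minimal sketch of the caching pattern referenced above; the helper name and the stubbed auth call are hypothetical:

  ```python
  from functools import lru_cache


  @lru_cache(maxsize=1)
  def get_id_token(api_key: str) -> str:
      """Hypothetical helper: fetch an auth ID token once and reuse it for later writes."""
      # a real implementation would call the auth backend here; this stub stands in for it
      return f"token-for-{api_key}"


  # the first call performs the fetch; subsequent writes reuse the cached token
  get_id_token("my-api-key")
  get_id_token("my-api-key")  # served from the cache, no second fetch
  ```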
- Refactored TrubricRun to include methods to generate a Trubric
- Update CLI docs and gifs
- Added .json to python package
- Refactored CLI with rich prints
- Moved `trubric_run_context` path argument to `trubrics run` from `trubrics init`
- Change collaborators in feedback from email to display name
- Add archived projects filter in trubrics init
- Python `__version__` number in the CLI and `__init__.py`
- Add `save_ui` param to `trubrics example-app` in the CLI
- Added `project_id` to `trubrics init`
- `trubrics init` refactoring with Trubrics authentication
- `trubrics run` refactoring to store validations to Firestore (Trubrics DB)
- `Feedback` and `Trubric` have adapted data models
- Cleaned notebooks in the examples/ folder
- Updated docs and README with Trubrics platform references
- FeedbackCollector now has authentication for Trubrics users
- `Feedback` and `Trubric` have `save_ui()` methods for the new DB
- Bumped streamlit version
- Allow for list types to be saved in result dict validation output
- Restricted extra_fields of the `Trubric` pydantic model
- Changed `trubric_name` field to `name` in the `Trubric` pydantic model
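  A pydantic-v1-style sketch of what the two `Trubric` changes above amount to; the fields shown are illustrative, not the real schema:

  ```python
  from pydantic import BaseModel, Extra, ValidationError


  class Trubric(BaseModel):  # illustrative fields; only `name` and the extra-field policy matter here
      name: str  # previously `trubric_name`
      validations: list = []

      class Config:
          extra = Extra.forbid  # reject unknown/extra fields


  Trubric(name="my-trubric")  # OK

  try:
      Trubric(name="my-trubric", foo="bar")  # extra field is now rejected
  except ValidationError as err:
      print(err)
  ```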
- Getting started video to readme
- Features field for DataContext
- Integration with Gradio for collecting feedback
- Integration with Dash for collecting feedback
- Adopted a functional approach to FeedbackCollector
- Separated feedback collector to collect and experiment functions
- Updated feedback collector readme and docs
- Fixed GitHub Action to display the changelog on the release tag
- Fixed save feedback log
- FeedbackCollector has simplified feedback form
- New CHANGELOG.md
- New CONTRIBUTING.md
- Getting started video from README and docs. Soon to be replaced with updated video
- New metrics, cli & data_context docs
- Restructure readme with examples that run
- Restructure docs to follow readme key features
- Complete unit tests for ModelValidator
- Separate out "contexts" from context.py into their respective folders
- PyPI packaged example data is now readable