
Commit

release v0.3.3 (#1116)
dworthen authored Sep 10, 2024
1 parent 1b55972 commit e7ee8cb
Showing 18 changed files with 438 additions and 414 deletions.
66 changes: 66 additions & 0 deletions .semversioner/0.3.3.json
@@ -0,0 +1,66 @@
{
"changes": [
{
"description": "Add entrypoints for incremental indexing",
"type": "patch"
},
{
"description": "Clean up and organize run index code",
"type": "patch"
},
{
"description": "Consistent config loading. Resolves #99 and Resolves #1049",
"type": "patch"
},
{
"description": "Fix circular dependency when running prompt tune api directly",
"type": "patch"
},
{
"description": "Fix default settings for embedding",
"type": "patch"
},
{
"description": "Fix img for auto tune",
"type": "patch"
},
{
"description": "Fix img width",
"type": "patch"
},
{
"description": "Fixed a bug in prompt tuning process",
"type": "patch"
},
{
"description": "Refactor text unit build at local search",
"type": "patch"
},
{
"description": "Update Prompt Tuning docs",
"type": "patch"
},
{
"description": "Update create_pipeline_config.py",
"type": "patch"
},
{
"description": "Update prompt tune command in docs",
"type": "patch"
},
{
"description": "add querying from azure blob storage",
"type": "patch"
},
{
"description": "fix setting base_dir to full paths when not using file system.",
"type": "patch"
},
{
"description": "fix strategy config in entity_extraction",
"type": "patch"
}
],
"created_at": "2024-09-10T19:51:24+00:00",
"version": "0.3.3"
}
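For context, a release file like the one above maps directly onto the changelog sections shown further down: each entry in "changes" becomes a "- <type>: <description>" bullet under a "## <version>" heading. The following is a minimal sketch of that mapping, not semversioner's actual implementation, only an illustrative rendering of the file shown above.

```python
# Minimal sketch (not semversioner's implementation): render a release file
# such as .semversioner/0.3.3.json into the "## 0.3.3" changelog section
# that appears in CHANGELOG.md below.
import json
from pathlib import Path

release = json.loads(Path(".semversioner/0.3.3.json").read_text())

lines = [f"## {release['version']}", ""]
for change in release["changes"]:
    lines.append(f"- {change['type']}: {change['description']}")

print("\n".join(lines))
```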
4 changes: 0 additions & 4 deletions .semversioner/next-release/minor-20240909192217829240.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240712071506108985.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240814063732868394.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240827203354884800.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240827212041426794.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240829175336332224.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240829213842840703.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240829222117086645.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240829223855375571.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240829230018473667.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240830151802543194.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240830181135475287.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240903205022597458.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240904161252783119.json

This file was deleted.

4 changes: 0 additions & 4 deletions .semversioner/next-release/patch-20240904173227165702.json

This file was deleted.
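The fifteen deleted next-release files and the new 0.3.3.json above reflect the usual semversioner release flow: each pending change lives in its own small JSON file, and cutting a release consolidates them into a single versioned file. A hedged sketch of that consolidation follows; the per-change file shape ({"type": ..., "description": ...}) is assumed from the entries in 0.3.3.json, not taken from the deleted files themselves, and this is not the tool's real code.

```python
# Hedged sketch of the release consolidation this commit performs.
# Assumption: each .semversioner/next-release/*.json held one pending change
# shaped like {"type": "patch", "description": "..."} -- inferred from the
# entries in 0.3.3.json, not read from the deleted files themselves.
import json
from datetime import datetime, timezone
from pathlib import Path

next_release = Path(".semversioner/next-release")
pending = sorted(next_release.glob("*.json"))
changes = [json.loads(p.read_text()) for p in pending]

release = {
    "changes": sorted(changes, key=lambda c: c["description"]),
    "created_at": datetime.now(timezone.utc).isoformat(),
    "version": "0.3.3",
}

Path(".semversioner/0.3.3.json").write_text(json.dumps(release, indent=4) + "\n")
for p in pending:
    p.unlink()  # remove the per-change files once they are consolidated
```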

194 changes: 106 additions & 88 deletions CHANGELOG.md
@@ -1,88 +1,106 @@
# Changelog
Note: version releases in the 0.x.y range may introduce breaking changes.

## 0.3.3

- patch: Add entrypoints for incremental indexing
- patch: Clean up and organize run index code
- patch: Consistent config loading. Resolves #99 and Resolves #1049
- patch: Fix circular dependency when running prompt tune api directly
- patch: Fix default settings for embedding
- patch: Fix img for auto tune
- patch: Fix img width
- patch: Fixed a bug in prompt tuning process
- patch: Refactor text unit build at local search
- patch: Update Prompt Tuning docs
- patch: Update create_pipeline_config.py
- patch: Update prompt tune command in docs
- patch: add querying from azure blob storage
- patch: fix setting base_dir to full paths when not using file system.
- patch: fix strategy config in entity_extraction

## 0.3.2

- patch: Add context data to query API responses.
- patch: Add missing config parameter documentation for prompt tuning
- patch: Add neo4j community notebook
- patch: Ensure entity types to be str when running prompt tuning
- patch: Fix weight casting during graph extraction
- patch: Patch "past" dependency issues
- patch: Update developer guide.
- patch: Update query type hints.
- patch: change-lancedb-placement

## 0.3.1

- patch: Add preflight check to check LLM connectivity.
- patch: Add streaming support for local/global search to query cli
- patch: Add support for both float and int on schema validation for community report generation
- patch: Avoid running index on gh-pages publishing
- patch: Implement Index API
- patch: Improves filtering for data dir inferring
- patch: Update to nltk 3.9.1

## 0.3.0

- minor: Implement auto templating API.
- minor: Implement query engine API.
- patch: Fix file dumps using json for non ASCII chars
- patch: Stabilize smoke tests for query context building
- patch: fix query embedding
- patch: fix sort_context & max_tokens params in verb

## 0.2.2

- patch: Add a check if there is no community record added in local search context
- patch: Add separate workflow for Python Tests
- patch: Docs updates
- patch: Run smoke tests on 4o

## 0.2.1

- patch: Added default columns for vector store at create_pipeline_config. No change for other cases.
- patch: Change json parsing error in the map step of global search to warning
- patch: Fix Local Search breaking when loading Embeddings input. Defaulting overwrite to True as in the rest of the vector store config
- patch: Fix json parsing when LLM returns faulty responses
- patch: Fix missing community reports and refactor community context builder
- patch: Fixed a bug that erased the vector database, added a new parameter to specify the config file path, and updated the documentation accordingly.
- patch: Try parsing json before even repairing
- patch: Update Prompt Tuning meta prompts with finer examples
- patch: Update default entity extraction and gleaning prompts to reduce hallucinations
- patch: add encoding-model to entity/claim extraction config
- patch: add encoding-model to text chunking config
- patch: add user prompt to history-tracking llm
- patch: update config reader to allow for zero gleans
- patch: update config-reader to allow for empty chunk-by arrays
- patch: update history-tracking LLM to use 'assistant' instead of 'system' in output history.
- patch: use history argument in hash key computation; add history input to cache data

## 0.2.0

- minor: Add content-based KNN for selecting prompt tune few shot examples
- minor: Add dynamic community report rating to the prompt tuning engine
- patch: Add Minute-based Rate Limiting and fix rpm, tpm settings
- patch: Add N parameter support
- patch: Add cli flag to overlay default values onto a provided config.
- patch: Add exception handling on file load
- patch: Add language support to prompt tuning
- patch: Add llm params to local and global search
- patch: Fix broken prompt tuning link on docs
- patch: Fix delta none on query calls
- patch: Fix docsite base url
- patch: Fix encoding model parameter on prompt tune
- patch: Fix for --limit exceeding the dataframe length
- patch: Fix for Ruff 0.5.2
- patch: Fixed an issue where base OpenAI embeddings can't work with Azure OpenAI LLM
- patch: Modify defaults for CHUNK_SIZE, CHUNK_OVERLAP and GLEANINGS to reduce time and LLM calls
- patch: fix community_report doesn't work in settings.yaml
- patch: fix llm response content is None in query
- patch: fix the organization parameter is ineffective during queries
- patch: remove duplicate file read
- patch: support non-open ai model config to prompt tune
- patch: use binary io processing for all file io operations

## 0.1.0

- minor: Initial Release
