Bug: Response Generation Completes Without Proper Indicator #42
Labels
bug (Something isn't working)
Comments
Hello, thank you for reporting this issue. We are aware of it and plan to resolve it in an upcoming release.
You're doing a great job. Thanks!
NorbertKlockiewicz added a commit that referenced this issue on Dec 20, 2024
…els (#73)

## Description

There was a problem when the user switched between multiple LLMs in one component. To fix this issue I removed the code related to handling strict-mode problems (strict mode was also causing problems with event listeners, so I think we should ignore it, as it doesn't play well with background tasks). I've also removed the `deleteModule` native function, as it wasn't really doing anything. Now the user can download multiple LLMs within one component and seamlessly switch between them without bugs. The problem was mentioned in issue #42.

### Type of change

- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update (improves or adds clarity to existing documentation)

### Tested on

- [x] iOS
- [x] Android

### Checklist

- [x] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have updated the documentation accordingly
- [ ] My changes generate no new warnings
Description
There is a case where `llama.response` returns data and `isModelGenerating` is false, yet the model is still generating a response. Because of this inconsistency, it is impossible to determine when response generation has fully completed, which makes it difficult to know when it is safe to persist the response to the database.
Steps to reproduce
Expected Behavior
The response generation process should have a clear indicator when it has fully completed, ensuring data is stored to the database only after generation finishes.
Actual Behavior
The condition `llama.response && !llama.isModelGenerating` triggers prematurely, so the response gets saved multiple times while generation is still in progress.
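The premature flip can be sketched outside React. The snippet below is an illustrative model of the reported event sequence, not the library's actual API: `GenerationEvent` and the hard-coded event stream are assumptions made for the example. It contrasts a naive handler, which saves on every "completed"-looking state, with a guard that discards a completion candidate whenever generation resumes, so only the final response is persisted once.

```typescript
// Hypothetical shape of the hook state at each update (assumption for
// illustration; not the real react-native-executorch types).
type GenerationEvent = { response: string; isModelGenerating: boolean };

// Simulated event stream reproducing the bug: the "completed" condition
// (non-empty response + isModelGenerating === false) fires twice.
const events: GenerationEvent[] = [
  { response: "Hello", isModelGenerating: true },
  { response: "Hello, wor", isModelGenerating: false }, // premature flip
  { response: "Hello, wor", isModelGenerating: true },  // generation resumes
  { response: "Hello, world!", isModelGenerating: false }, // real completion
];

// Naive handler: treats every flip to false as completion, so it saves twice.
function naiveSaves(stream: GenerationEvent[]): string[] {
  const saved: string[] = [];
  for (const e of stream) {
    if (e.response && !e.isModelGenerating) saved.push(e.response);
  }
  return saved;
}

// Workaround: remember a completion *candidate* instead of saving immediately,
// and discard it if generation resumes; only the last stable candidate is saved.
function saveOnceOnFinalCompletion(stream: GenerationEvent[]): string[] {
  let candidate: string | null = null;
  for (const e of stream) {
    if (e.response && !e.isModelGenerating) {
      candidate = e.response;      // looks complete, but don't save yet
    } else if (e.isModelGenerating) {
      candidate = null;            // previous flip was premature
    }
  }
  return candidate !== null ? [candidate] : [];
}
```

In a real app the same idea would be implemented asynchronously (e.g. debouncing the save until the response has been stable for a short quiet period after the flag goes false), but a proper completion indicator from the library would remove the need for this workaround entirely.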
React Native Executorch version
0.1.2
React Native version
0.76.3
Platforms
Android, iOS
JavaScript runtime
Hermes
Workflow
Expo Dev Client
Architecture
Fabric (New Architecture)
Build type
Debug mode
Device
Real device
Device model
iPhone 14 Pro
AI model
llama-3.2-1B/QLoRA/llama3_2_qat_lora.pte , v0.1.0/llama-3.2-3B/spinquant/llama3_2_3B_spinquant.pte
Performance logs
No response
Acknowledgements
Yes