
Bug: Response Generation Completes Without Proper Indicator #42

Open
ajmalp5 opened this issue Dec 4, 2024 · 2 comments
Labels
bug Something isn't working

Comments


ajmalp5 commented Dec 4, 2024

Description

There is a case where llama.response returns data and llama.isModelGenerating is false, but the model is still generating a response. Because of this inconsistency, it is impossible to determine when response generation has fully completed, which makes it difficult to know when it is safe to persist the response to the database.

Steps to reproduce

  1. Start with an initial model where llama.response and llama.isModelGenerating work as expected.
  2. Switch to a different model.
  3. Generate responses multiple times after switching models.
  4. Observe that llama.response returns data and llama.isModelGenerating is false, even though the response is still being generated.

Expected Behavior
The response generation process should have a clear indicator when it has fully completed, ensuring data is stored to the database only after generation finishes.
Actual Behavior
llama.response is populated and llama.isModelGenerating flips to false before generation actually finishes, so the condition llama.response && !llama.isModelGenerating triggers prematurely and the partial response is saved multiple times during generation.

useEffect(() => {
  const { response, isModelGenerating } = llama;
  console.log(response, isModelGenerating);
  // This fires prematurely: isModelGenerating can be false mid-generation.
  if (response && !isModelGenerating) {
    console.log('Message is stored to DB after finishing generating');
    addMessage(db, parseInt(chatIdRef.current, 10), {
      content: response,
      role: Role.Bot,
    });
  }
}, [llama.response, llama.isModelGenerating]);
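
Until the library exposes an explicit completion signal, one possible workaround is to save only on the transition from "generating" to "not generating", rather than whenever the flag happens to be false. The sketch below is a hypothetical helper (not part of react-native-executorch) that makes that decision from the previous and current hook snapshots; in a component, the previous snapshot could be tracked with a useRef.

```typescript
// Hypothetical helper: decides whether a pair of hook snapshots represents a
// *completed* generation. Saving is allowed only on the true -> false
// transition of isModelGenerating, with a non-empty response to persist.
type LlamaSnapshot = { response: string; isModelGenerating: boolean };

function isGenerationComplete(
  prev: LlamaSnapshot | null,
  curr: LlamaSnapshot
): boolean {
  return (
    prev !== null &&
    prev.isModelGenerating && // we were generating before...
    !curr.isModelGenerating && // ...and we are not anymore
    curr.response.length > 0 // and there is something to save
  );
}
```

This only narrows the window rather than closing it: if the library reports isModelGenerating === false mid-generation (as described above), a spurious transition can still slip through, so a proper completion indicator from the library is still needed.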

React Native Executorch version

0.1.2

React Native version

0.76.3

Platforms

Android, iOS

JavaScript runtime

Hermes

Workflow

Expo Dev Client

Architecture

Fabric (New Architecture)

Build type

Debug mode

Device

Real device

Device model

iPhone 14 Pro

AI model

llama-3.2-1B/QLoRA/llama3_2_qat_lora.pte , v0.1.0/llama-3.2-3B/spinquant/llama3_2_3B_spinquant.pte

Performance logs

No response

Acknowledgements

Yes

@NorbertKlockiewicz NorbertKlockiewicz added the bug Something isn't working label Dec 4, 2024
@NorbertKlockiewicz
Contributor

Hello,

Thank you for reporting this issue. We are aware of it and plan to resolve it in an upcoming release.

@ajmalp5
Author

ajmalp5 commented Dec 4, 2024

You're doing a great job. Thanks!

NorbertKlockiewicz added a commit that referenced this issue Dec 20, 2024
…els (#73)

## Description
There was a problem when the user was switching between multiple LLMs in one
component. To fix it, I removed the code related to handling Strict Mode
problems (Strict Mode was also causing problems with event listeners, so I
think we should ignore it, as it doesn't play well with background tasks).
I've also removed the deleteModule native function, as it wasn't really doing
anything. Now the user can download multiple LLMs within one component and
seamlessly switch between them without bugs.
The problem was mentioned in issue #42 
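
The Strict Mode interaction mentioned above comes from React intentionally mounting, unmounting, and remounting components in development, which double-invokes effects. Event-listener registration survives that cycle only if subscribing returns a matching unsubscribe that the effect's cleanup calls. The sketch below illustrates that pattern with hypothetical names (it is not the library's actual native-event API):

```typescript
// Minimal sketch of the subscribe/unsubscribe pattern that double-mounted
// effects (e.g. under React Strict Mode) rely on. Names are hypothetical.
type Listener = (token: string) => void;

class TokenEmitter {
  private listeners = new Set<Listener>();

  // Returning an unsubscribe function lets an effect register on mount
  // and cleanly deregister in its cleanup on unmount/remount.
  subscribe(listener: Listener): () => void {
    this.listeners.add(listener);
    return () => {
      this.listeners.delete(listener);
    };
  }

  emit(token: string): void {
    for (const l of this.listeners) l(token);
  }

  get listenerCount(): number {
    return this.listeners.size;
  }
}
```

In an effect this would look like `useEffect(() => emitter.subscribe(onToken), [])` — the returned unsubscribe doubles as the cleanup, so the remount never leaks a stale listener.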

### Type of change
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] Documentation update (improves or adds clarity to existing
documentation)

### Tested on
- [x] iOS
- [x] Android


### Checklist
- [x] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have updated the documentation accordingly
- [ ] My changes generate no new warnings
