No requirements.txt for pip modules #1

Open
bradgillap opened this issue Mar 26, 2023 · 2 comments

Comments

@bradgillap

Hello,

I tried to install this but I'm getting hung up on the imported modules. A requirements.txt file would speed up the setup process.

I'm having some trouble resolving this dependency despite having installed PyQt5 with pip.

  File "D:\github\ChatLLaMA-and-ChatGPT-Desktop-App\launch_gui.py", line 29, in <module>
    from PyQt5.QtWebEngineWidgets import QWebEngineView
ModuleNotFoundError: No module named 'PyQt5.QtWebEngineWidgets'
@rioncarter

I was able to get past this by running:
pip install PyQtWebEngine
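
Based on that, a minimal requirements.txt covering the GUI imports would presumably start with something like this (untested; the rest of the project almost certainly needs more entries):

    PyQt5
    PyQtWebEngine

which could then be installed in one step with pip install -r requirements.txt.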

Currently stumped by this error:

Traceback (most recent call last):
  File "/media/user/src-repos/serp-ai/ChatLLaMA-and-ChatGPT-Desktop-App/launch_gui.py", line 32, in <module>
    from .assistant import OpenAIAssistant, LocalAssistant
ImportError: attempted relative import with no known parent package
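
My guess is that launch_gui.py uses package-relative imports but is being run as a plain script. Since assistant.py sits next to launch_gui.py in the repo root, dropping the leading period should work, roughly:

    # fails when launch_gui.py is executed directly as a script
    from .assistant import OpenAIAssistant, LocalAssistant

    # works when launch_gui.py is run from the repo root
    from assistant import OpenAIAssistant, LocalAssistant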

@rioncarter commented Mar 27, 2023

By modifying the import statements to remove the leading periods, I've been able to get a step further and am now stuck on this error:

  File "/media/user/src-repos/serp-ai/ChatLLaMA-and-ChatGPT-Desktop-App/launch_gui.py", line 1525, in <module>
    floating_icon = FloatingIcon(chat_config=config['chat_config'], text2audio_api_key=config['text2audio_api_key'], text2audio_voice=config['text2audio_voice'], wolfram_app_id=config['wolfram_app_id'], mode=config['mode'])
  File "/media/user/src-repos/serp-ai/ChatLLaMA-and-ChatGPT-Desktop-App/launch_gui.py", line 65, in __init__
    self.chat_dialog = ChatDialog(text2audio=TTSElevenlabs(api_key=text2audio_api_key) if text2audio_api_key != None else None, text2audio_voice='Jarvis' if text2audio_voice is None else text2audio_voice, assistant=LocalAssistant(memory_manager=MemoryManager(), **chat_config), wolfram_app_id=wolfram_app_id, mode='local')
  File "/media/user/src-repos/serp-ai/ChatLLaMA-and-ChatGPT-Desktop-App/assistant.py", line 479, in __init__
    from quantization.utils.llama_wrapper import LlamaClass
  File "/media/user/src-repos/serp-ai/ChatLLaMA-and-ChatGPT-Desktop-App/quantization/utils/llama_wrapper.py", line 5, in <module>
    from transformers import LlamaForCausalLM
ImportError: cannot import name 'LlamaForCausalLM' from 'transformers' (/home/user/.local/lib/python3.10/site-packages/transformers/__init__.py)

I'm having a hard time figuring out where LlamaForCausalLM can be pulled from.
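
If I had to guess, LlamaForCausalLM was only added to transformers fairly recently, so an older installed release won't export it. Upgrading transformers, or installing from the main branch if the latest release still lacks the class, might be worth a try:

    pip install --upgrade transformers
    # or, if the current release still doesn't include LLaMA support:
    pip install git+https://github.com/huggingface/transformers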
