[2024-05-04 20:46:41] [4304 ] [INFO ] Starting request_processor for groq
[2024-05-04 20:46:41] [4304 ] [ERROR ] Error processing queue
Traceback (most recent call last):
  File "C:\GPT-Telegramus\queue_handler.py", line 186, in _queue_processing_loop
    request_process.start()
  File "C:\laragon\bin\python\python-3.10\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\laragon\bin\python\python-3.10\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\laragon\bin\python\python-3.10\lib\multiprocessing\context.py", line 336, in _Popen
    return Popen(process_obj)
  File "C:\laragon\bin\python\python-3.10\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\laragon\bin\python\python-3.10\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'weakref.ReferenceType' object
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\laragon\bin\python\python-3.10\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\laragon\bin\python\python-3.10\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
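
For context, here is a minimal sketch of what this traceback pattern usually means; it is not GPT-Telegramus code, and the `Holder` class and `worker()` function are made-up stand-ins for whatever request object `queue_handler.py` hands to the process. On Windows, multiprocessing uses the "spawn" start method, so every argument passed to `Process()` must be picklable; an object that carries a weakref fails to serialize in the parent, and the freshly spawned child then dies with `EOFError: Ran out of input` when it finds nothing on the pipe.

```python
# Minimal reproduction sketch (assumed scenario, not the project's actual code).
import multiprocessing
import weakref


class Holder:
    def __init__(self):
        # Hypothetical unpicklable state; the exact TypeError wording for a
        # weakref differs slightly between Python versions.
        self.ref = weakref.ref(self)


def worker(obj):
    print("child got", obj)


if __name__ == "__main__":
    # "spawn" is already the default on Windows; forcing it here just makes
    # the reproduction behave the same on Linux/macOS.
    multiprocessing.set_start_method("spawn", force=True)
    p = multiprocessing.Process(target=worker, args=(Holder(),))
    p.start()  # parent raises TypeError: the weakref cannot be pickled
    p.join()
```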