Describe the bug
On Unity 6, with the latest 3.0 release of mlagents, any attempt to run the example environments with the threaded parameter set to true produces runtime errors in the console after a few seconds, and the environment hangs permanently. Training terminates with:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
This occurs with both poca and ppo (I have not tested SAC). The behaviour did not occur in an older environment on Unity 2022.3.5f1 with mlagents v3.0.0-exp.1.
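For context, the error class itself is generic PyTorch behaviour: a forward pass through a layer fails with exactly this RuntimeError whenever the module's parameters and its input tensor live on different devices. A minimal sketch (the helper function is mine for illustration, not mlagents code):

```python
import torch

def tensors_on_same_device(module: torch.nn.Module, x: torch.Tensor) -> bool:
    """True when every parameter of `module` lives on `x`'s device.

    When this is False, a forward pass through e.g. nn.Linear raises the
    same "Expected all tensors to be on the same device" RuntimeError
    reported above.
    """
    return all(p.device == x.device for p in module.parameters())

layer = torch.nn.Linear(4, 2)   # parameters live on CPU by default
x_cpu = torch.randn(1, 4)       # CPU input
print(tensors_on_same_device(layer, x_cpu))  # True: forward pass is safe

if torch.cuda.is_available():
    layer_gpu = layer.cuda()    # parameters moved to cuda:0, input still on CPU
    print(tensors_on_same_device(layer_gpu, x_cpu))  # False: forward would raise
```

The traceback below suggests the trainer thread is feeding CPU-resident trajectory tensors into a critic whose weights are on cuda:0.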
To Reproduce
1. Open the config yyy.yaml.
2. Insert the parameter threaded: true.
Example of the yaml file:
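A minimal sketch of where the parameter sits, assuming the standard mlagents trainer-config layout (the behavior name and hyperparameter values are illustrative, not from the actual repro file):

```yaml
behaviors:
  SoccerTwos:              # behavior name is illustrative
    trainer_type: poca
    threaded: true         # <- the parameter that triggers the error
    hyperparameters:
      batch_size: 2048
      buffer_size: 20480
    network_settings:
      hidden_units: 512
      num_layers: 2
    max_steps: 50000000
```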
Console output (reconstructed from the interleaved per-thread output):

Exception in thread Thread-2 (trainer_update_func):
Traceback (most recent call last):
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\mlagents\trainers\trainer_controller.py", line 297, in trainer_update_func
    trainer.advance()
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\mlagents\trainers\trainer\rl_trainer.py", line 293, in advance
    self._process_trajectory(t)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\mlagents\trainers\poca\trainer.py", line 92, in _process_trajectory
    ) = self.optimizer.get_trajectory_and_baseline_value_estimates(
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\mlagents\trainers\poca\optimizer_torch.py", line 645, in get_trajectory_and_baseline_value_estimates
    value_estimates, next_value_mem = self.critic.critic_pass(
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\mlagents\trainers\poca\optimizer_torch.py", line 133, in critic_pass
    encoding, memories = self.network_body(
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\mlagents\trainers\torch_entities\networks.py", line 404, in forward
    self_attn_inputs.append(self.obs_encoder(None, g_inp))
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\mlagents\trainers\torch_entities\attention.py", line 170, in forward
    encoded_entities = self.self_ent_encoder(entities)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\mlagents\trainers\torch_entities\layers.py", line 169, in forward
    return self.seq_layers(input_tensor)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

The same traceback is raised concurrently by Thread-4, Thread-6, Thread-8, Thread-10, Thread-12, Thread-16, Thread-18, and Thread-20 (all trainer_update_func).
return forward_call(*args, **kwargs)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\mlagents\trainers\torch_entities\layers.py", line 169, in forward
return self._call_impl(*args, **kwargs)
return self._call_impl(*args, **kwargs)
return forward_call(*args, **kwargs)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
input = module(input)
return self.seq_layers(input_tensor)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return forward_call(*args, **kwargs)
return self._call_impl(*args, **kwargs)
input = module(input)
input = module(input)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
return forward_call(*args, **kwargs)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
return self._call_impl(*args, **kwargs)
return self._call_impl(*args, **kwargs)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
input = module(input)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return F.linear(input, self.weight, self.bias)
return self._call_impl(*args, **kwargs)
return self._call_impl(*args, **kwargs)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
return forward_call(*args, **kwargs)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
return forward_call(*args, **kwargs)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
return forward_call(*args, **kwargs)
return F.linear(input, self.weight, self.bias)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
return forward_call(*args, **kwargs)
input = module(input)
return self._call_impl(*args, **kwargs)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
return forward_call(*args, **kwargs)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return F.linear(input, self.weight, self.bias)
return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
return self._call_impl(*args, **kwargs)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
return F.linear(input, self.weight, self.bias)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
return forward_call(*args, **kwargs)
return F.linear(input, self.weight, self.bias)
File "C:\Users\anonuser\miniconda3\envs\mlagents\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)```
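The failure mode shown in the traceback is a standard PyTorch device mismatch: `F.linear` receives an input tensor (`mat1`) that is still on the CPU while the layer's weights live on `cuda:0`. As a generic illustration of that pattern (not ML-Agents internals; the layer sizes are arbitrary placeholders), moving the input to the model's parameter device before the forward call avoids this class of error:

```python
import torch
import torch.nn as nn

# In the real setup the model would be on CUDA (e.g. model.to("cuda")); it is
# left on CPU here so this sketch runs anywhere.
model = nn.Linear(4, 2)
x = torch.randn(3, 4)  # tensor produced elsewhere, possibly on another device

# Defensive pattern: move the input to whatever device the parameters are on.
device = next(model.parameters()).device
y = model(x.to(device))  # input and weights now share a device
```

The traceback suggests that with `threaded: true` the trainer thread builds its input batch without such a transfer, while the model has already been moved to `cuda:0`.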
**Screenshots**
Console stack trace included above.
**Environment**
- Unity Version: Unity 6000.0.27f1
- OS + version: Windows 11
- _ML-Agents version_: ML-Agents v3.0.0
- _Torch version_: 2.2.2+cu121
- _Environment_: all environments
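
For reference, a minimal illustrative trainer config (hypothetical behavior name and hyperparameter values; only the `threaded` line matters for reproducing the issue):

```yaml
behaviors:
  MyBehavior:              # hypothetical behavior name
    trainer_type: ppo
    hyperparameters:
      batch_size: 1024
      buffer_size: 10240
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    max_steps: 500000
    threaded: true         # enabling this triggers the errors and hang
```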