
Can not take Tutorial 03 and 04 with Flow #1058

Open
TrinhTuanHung2021 opened this issue Jan 9, 2022 · 11 comments
@TrinhTuanHung2021 commented Jan 9, 2022

Hello all,

I followed the instructions but I couldn't install Flow (https://flow.readthedocs.io/en/latest/flow_setup.html#remote-installation-using-docker).

After a while of searching and following online instructions, I changed the requirements.txt file: I replaced redis~=2.10.6 with redis, and I finally installed Flow.

But when running the models in Tutorials 03 and 04, this error appears:
AttributeError: 'numpy.ndarray' object has no attribute 'keys'

I've seen a few other people get this error as well, but they didn't fix it. Does anyone know how to fix it?
Thank you

@xiedanmu

My bro, have you fixed it?

@TrinhTuanHung2021

> My bro, have you fixed it?

Not yet. Do you have any idea?
I tried to reinstall Flow with other versions of ray and redis, but it still could not run Tutorial 04.

@xiedanmu

> Not yet. Do you have any idea? I tried to reinstall Flow with other versions of ray and redis, but it still could not run Tutorial 04.

I just followed your step of replacing redis~=2.10.6 with redis, and I haven't tried Tutorials 03 and 04 yet. If I try them, I will tell you whether it works or not.

@TrinhTuanHung2021

> I just followed your step of replacing redis~=2.10.6 with redis, and I haven't tried Tutorials 03 and 04 yet. If I try them, I will tell you whether it works or not.

About Tutorial 03: I installed it from this source code, https://github.com/lcipolina/flow (ray==0.7.3, redis==2.10.6), and it could run.

TrinhTuanHung2021 changed the title from "Can not run Tutorial 03 and 04 with Flow" to "Can not take Tutorial 03 and 04 with Flow" on Jan 16, 2022
@ShuxinLee

Hey, bro, has the problem been solved?

@wencanmao

I think I have finally found the reason for this error. The earlier version of Flow was built on ray==0.7.3. The trained weights file "trained_ring 200" was generated three years ago (around 2019). They then upgraded the package to ray==0.8.0 about a year ago (around 2021), which cannot read those weights as a dictionary (it sees them as a NumPy array instead). Nevertheless, if you use the weights generated in "ray_results/training_example" from Tutorial 03 for visualization, it will not show this error (AttributeError: 'numpy.ndarray' object has no attribute 'keys'). So, if you use ray==0.8.0 and the newest redis, the code can still be used for visualization if you train by yourself.
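If you want to verify this on a particular checkpoint, here is a rough diagnostic sketch (not from the Flow repo; the path below is a placeholder and the exact pickle layout differs between ray versions). It unpickles a checkpoint file and prints the type of everything inside, so you can see whether the policy weights come out as a dict of named arrays or as a single NumPy array:

```python
import pickle
import numpy as np

# Placeholder path to an RLlib checkpoint file.
CHECKPOINT_PATH = "trained_ring/checkpoint_200/checkpoint-200"


def describe(obj, prefix=""):
    """Recursively print the types stored in the unpickled checkpoint object."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            describe(value, prefix=prefix + str(key) + ".")
    elif isinstance(obj, np.ndarray):
        print(prefix.rstrip("."), "-> ndarray, shape", obj.shape)
    else:
        print(prefix.rstrip("."), "->", type(obj).__name__)


with open(CHECKPOINT_PATH, "rb") as f:
    describe(pickle.load(f))

# If the stored weights show up as one bare ndarray instead of a dict of
# layer-name -> array, ray==0.8.0 fails with
#   AttributeError: 'numpy.ndarray' object has no attribute 'keys'
# when it calls .keys() on them; weights saved by retraining under the
# installed ray version come back in the expected dict format.
```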

@TrinhTuanHung2021

> I think I have finally found the reason for this error. [...] So, if you use ray==0.8.0 and the newest redis, the code can still be used for visualization if you train by yourself.

Now I am using DRL with DQN or DDPG.
Flow has no staff supporting it, so it is too difficult to use.

@wencanmao

Are you still using SUMO for vehicular simulation? How can you connect the DRL environment with TraCI?

@TrinhTuanHung2021

> Are you still using SUMO for vehicular simulation? How can you connect the DRL environment with TraCI?

I saw a lot of DRL models with SUMO on GitHub.

For example:

https://github.com/AndreaVidali/Deep-QLearning-Agent-for-Traffic-Signal-Control.git
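For reference, one common pattern in projects like that is a Gym-style environment that owns a TraCI connection: reset() starts SUMO, step() applies the action through the traci API and advances the simulation by one step. Below is only a rough sketch (the ring.sumocfg file, the vehicle id rl_0, and the observation/reward are placeholders, not taken from any repository mentioned here):

```python
import numpy as np
import gym
from gym import spaces
import traci


class SumoSpeedEnv(gym.Env):
    """Minimal sketch: one RL vehicle whose speed is set through TraCI each step."""

    def __init__(self, sumo_cfg="ring.sumocfg", horizon=500):
        self.sumo_cfg = sumo_cfg   # placeholder SUMO config file
        self.horizon = horizon
        self.step_count = 0
        # Action: target speed in m/s; observation: [own speed, mean network speed].
        self.action_space = spaces.Box(low=0.0, high=30.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(low=0.0, high=30.0, shape=(2,), dtype=np.float32)

    def reset(self):
        if traci.isLoaded():
            traci.close()
        traci.start(["sumo", "-c", self.sumo_cfg])   # use "sumo-gui" to watch it
        self.step_count = 0
        traci.simulationStep()
        return self._observe()

    def step(self, action):
        # Apply the action to the (placeholder) RL vehicle and advance SUMO by one step.
        if "rl_0" in traci.vehicle.getIDList():
            traci.vehicle.setSpeed("rl_0", float(action[0]))
        traci.simulationStep()
        self.step_count += 1
        obs = self._observe()
        reward = float(obs[1])                        # illustrative reward: mean speed
        done = self.step_count >= self.horizon
        return obs, reward, done, {}

    def _observe(self):
        ids = traci.vehicle.getIDList()
        speeds = [traci.vehicle.getSpeed(v) for v in ids]
        own = traci.vehicle.getSpeed("rl_0") if "rl_0" in ids else 0.0
        mean = float(np.mean(speeds)) if speeds else 0.0
        return np.array([own, mean], dtype=np.float32)
```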

@sjtulhm commented Jun 30, 2023

> I think I have finally found the reason for this error. [...] So, if you use ray==0.8.0 and the newest redis, the code can still be used for visualization if you train by yourself.

When I run "python flow/visualize/visualizer_rllib.py ~/ray_results/test0629/PPO 400 --horizon 500 --gen_emission" in the terminal, it can open the sumo-gui, but it runs for only about one second before errors appear. I have installed ray==0.8.0. Here are the errors:
2023-06-30 09:45:40,604 INFO resource_spec.py:216 -- Starting Ray with 4.69 GiB memory available for workers and up to 2.35 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2023-06-30 09:45:41,335 INFO trainer.py:371 -- Tip: set 'eager': true or the --eager flag to enable TensorFlow eager execution
2023-06-30 09:45:41,340 INFO trainer.py:512 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
2023-06-30 09:45:41,344 WARNING ppo.py:168 -- Using the simple minibatch optimizer. This will significantly reduce performance, consider simple_optimizer=False.
2023-06-30 09:45:44,264 INFO trainable.py:346 -- Restored from checkpoint: /home/a906/ray_results/test0629/PPO_MultiAgentlhmNetwork1POEnv-v0_f48e2fe4_2023-06-29_20-17-46cjrol_re/checkpoint_400/checkpoint-400
2023-06-30 09:45:44,264 INFO trainable.py:353 -- Current state after restoring: {'_iteration': 400, '_timesteps_total': 28777138, '_time_total': 42873.71559095383, '_episodes_total': 1777}
Traceback (most recent call last):
  File "/home/a906/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/models/preprocessors.py", line 62, in check_shape
    if not self._obs_space.contains(observation):
  File "/home/a906/anaconda3/envs/flow/lib/python3.7/site-packages/gym-0.14.0-py3.7.egg/gym/spaces/box.py", line 102, in contains
    return x.shape == self.shape and np.all(x >= self.low) and np.all(x <= self.high)
AttributeError: 'dict' object has no attribute 'shape'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "flow/visualize/visualizer_rllib.py", line 386, in <module>
    visualizer_rllib(args)
  File "flow/visualize/visualizer_rllib.py", line 229, in visualizer_rllib
    action = agent.compute_action(state)
  File "/home/a906/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 643, in compute_action
    policy_id].transform(observation)
  File "/home/a906/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/models/preprocessors.py", line 166, in transform
    self.check_shape(observation)
  File "/home/a906/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/models/preprocessors.py", line 69, in check_shape
    "should be an np.array, not a Python list.", observation)
ValueError: ('Observation for a Box/MultiBinary/MultiDiscrete space should be an np.array, not a Python list.', {})
/home/a906/flow/flow/visualize/test_time_rollout/test0629_20230630-0945411688089541.3611593-0_emission.csv /home/a906/flow/flow/visualize/test_time_rollout/

So, what should I do? Thank you very much!
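Not an official fix, but the second traceback suggests that compute_action received the whole multi-agent observation dict (here an empty {}) where a single agent's array is expected. Under ray==0.8.0 the Trainer API accepts a policy_id argument, so handling a multi-agent state dict yourself looks roughly like the sketch below (policy_for is a placeholder for however your multiagent config maps agent ids to policy ids):

```python
# Rough sketch (not Flow's visualizer code): compute one action per agent when
# the environment returns a dict of observations (multi-agent RLlib, ray==0.8.0).

def policy_for(agent_id):
    # Placeholder: many setups use one shared policy for every agent.
    return "default_policy"


def compute_actions(agent, state):
    """state is either a single np.ndarray or a dict of agent_id -> np.ndarray."""
    if isinstance(state, dict):
        # Multi-agent case: one compute_action call per agent, with its policy id.
        return {
            agent_id: agent.compute_action(obs, policy_id=policy_for(agent_id))
            for agent_id, obs in state.items()
        }
    # Single-agent case: the observation can be passed straight through.
    return agent.compute_action(state)
```

Since the dict is empty in your traceback, it may also be worth checking that the env returned observations at all on that step, i.e. that the restored env configuration matches the one used for training.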

@bonchoe commented Jul 2, 2024

Hi bro, did you ever fix it?
I am reviving the same issue because I am stuck on the same execution.

Even though this just reproduces the example with RL, it won't run.
I tried downgrading the Ray package version from the docker image, but in the end it didn't help.

I know a long time has passed since this was posted, but can you share your status on this? Thank you.
