Gym Cellular Automata




Semantic Versioning · MIT License · Code style: black · Gitmoji

Cellular Automata Environments for Reinforcement Learning


Gym Cellular Automata is a collection of Reinforcement Learning Environments (RLEs) that follow the Gym API.

The available RLEs are based on Cellular Automata (CAs). In each environment, an agent interacts with a CA by changing its cell states, in an attempt to drive the emergent properties of the grid.

Installation

git clone https://github.com/elbecerrasoto/gym-cellular-automata
pip install -e gym-cellular-automata
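
A quick sanity check that the install worked (a minimal sketch; it only assumes the gymca.envs tuple described in the Usage section below):

import gym_cellular_automata as gymca

# The tuple of registered environment IDs should be non-empty.
print(gymca.envs)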

Usage

🎠 🎠 🎠

Prototype & Benchmark, the two modes of gymca...

import gymnasium as gym
import gym_cellular_automata as gymca

# benchmark mode
env_id = gymca.envs[0]
env = gym.make(env_id)

# prototype mode
ProtoEnv = gymca.prototypes[0]
env = ProtoEnv(nrows=42, ncols=42)

The tuple gymca.envs contains the ID strings used to call gym.make.

gym.make generates an instance of a registered environment.
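
For example, a small sketch that lists the registered IDs and inspects one benchmark instance (the exact IDs depend on the installed version):

import gymnasium as gym
import gym_cellular_automata as gymca

# Print every registered environment ID.
for env_id in gymca.envs:
    print(env_id)

# Instantiate the first one and inspect its spaces.
env = gym.make(gymca.envs[0])
print(env.observation_space)
print(env.action_space)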

A registered environment is inflexible: it cannot be customized. This is intentional, since the gym library is about benchmarking RL algorithms, and a benchmark must not change if it is to provide meaningful comparisons.

CA environments are experimental; they still need to mature into RL tasks worth solving. That requires fast prototyping, which involves tweaking parameters and combining modules.

gym-cellular-automata strives to be an environment-design library. This is the motivation behind prototype mode, which does not register the environment but instead exposes it for configuration.

Grid size (nrows, ncols) is one of the most frequently changed parameters, so it is required. Other parameters are optional and differ from class to class. Grid size is a proxy for task difficulty: bigger grids are usually harder.
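
A sketch of prototype mode with two grid sizes (assuming, as in the snippet above, that the first prototype class needs only nrows and ncols; other keyword arguments vary per class):

import gym_cellular_automata as gymca

ProtoEnv = gymca.prototypes[0]

# A small grid for quick iteration and a larger, presumably harder, one.
easy_env = ProtoEnv(nrows=16, ncols=16)
hard_env = ProtoEnv(nrows=256, ncols=256)

# Prototype environments still follow the Gym API.
obs, info = easy_env.reset()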

Random Policy

import gymnasium as gym
import gym_cellular_automata as gymca

env_id = gymca.envs[0]
env = gym.make(env_id, render_mode="human")

obs, info = env.reset()

total_reward = 0.0
done = False
step = 0
threshold = 12

# Random Policy for at most "threshold" steps
while not done and step < threshold:
    action = env.action_space.sample()  # Your agent goes here!
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += reward
    step += 1

print(f"{env_id}")
print(f"Total Steps: {step}")
print(f"Total Reward: {total_reward}")

Gallery

Helicopter

Forest Fire Helicopter

Bulldozer

Forest Fire Bulldozer

Documentation

👷 Documentation is in progress.

Releases

🥁

Contributing

🌲 🔥

For contributions, check the contributing guide and the to-do list.

Contributions to Gym Cellular Automata are always welcome. Feel free to open pull requests.

This project adheres to the following practices:

- Semantic Versioning
- Code style: black
- Gitmoji
