Commit

Make pre-commit match Gymnasium (add many more pre-commit hook checks) (
elliottower authored Jul 6, 2023
1 parent 4a2be56 commit 110333f
Showing 42 changed files with 191 additions and 178 deletions.
12 changes: 6 additions & 6 deletions .github/ISSUE_TEMPLATE/question.yml
Original file line number Diff line number Diff line change
@@ -6,13 +6,13 @@ body:
  - type: markdown
    attributes:
      value: >
        If you have basic questions about reinforcement learning algorithms, please ask on
        [r/reinforcementlearning](https://www.reddit.com/r/reinforcementlearning/) or in the
        [RL Discord](https://discord.com/invite/xhfNqQv) (if you're new please use the beginners channel).
        Basic questions that are not bugs or feature requests will be closed without reply, because GitHub
        issues are not an appropriate venue for these. Advanced/nontrivial questions, especially in areas where
        documentation is lacking, are very much welcome.
  - type: textarea
    id: question
    attributes:
30 changes: 23 additions & 7 deletions .pre-commit-config.yaml
@@ -1,5 +1,24 @@
---
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v4.4.0
+    hooks:
+      - id: check-symlinks
+      - id: destroyed-symlinks
+      - id: trailing-whitespace
+      - id: end-of-file-fixer
+      - id: check-yaml
+      - id: check-toml
+      - id: check-ast
+      - id: check-added-large-files
+      - id: check-merge-conflict
+      - id: check-executables-have-shebangs
+      - id: check-shebang-scripts-are-executable
+      - id: detect-private-key
+      - id: debug-statements
+      - id: mixed-line-ending
+        args: [ "--fix=lf" ]
  - repo: https://github.com/python/black
    rev: 23.3.0
    hooks:
@@ -28,15 +47,10 @@ repos:
      - id: isort
        args: ["--profile", "black"]
  - repo: https://github.com/asottile/pyupgrade
-    rev: v3.3.1
+    rev: v3.3.2
    hooks:
      - id: pyupgrade
        args: ["--py37-plus"]
-  - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.4.0
-    hooks:
-      - id: mixed-line-ending
-        args: ["--fix=lf"]
  - repo: https://github.com/pycqa/pydocstyle
    rev: 6.3.0
    hooks:
@@ -59,3 +73,5 @@ repos:
        pass_filenames: false
        types: [python]
        additional_dependencies: ["pyright"]
+        args:
+          - --project=pyproject.toml
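Two of the hooks added above rewrite file contents directly. As a rough, stdlib-only sketch of what they enforce (illustrative only, not the hooks' actual implementation):

```python
def fix_line_endings(text: str) -> str:
    # mixed-line-ending --fix=lf: normalize CRLF and bare CR to LF
    return text.replace("\r\n", "\n").replace("\r", "\n")

def fix_end_of_file(text: str) -> str:
    # end-of-file-fixer: files end with exactly one newline
    return text.rstrip("\n") + "\n"

print(repr(fix_end_of_file(fix_line_endings("a\r\nb\r"))))  # 'a\nb\n'
```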
1 change: 0 additions & 1 deletion CODE_OF_CONDUCT.rst
@@ -65,4 +65,3 @@ Attribution
-----------
This Code of Conduct is adapted from `Python's Code of Conduct <https://www.python.org/psf/conduct/>`_, which is under a `Creative Commons License
<https://creativecommons.org/licenses/by-sa/3.0/>`_.

2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -58,7 +58,7 @@ Tutorials are a crucial way to help people learn how to use PettingZoo and we gr
- You should make a `.md` file for each tutorial within the above directory.
- Each `.md` file should have an "Environment Setup" section and a "Code" section. The title should be of the format `<TUTORIAL_THEME>: <TUTORIAL_TOPIC>`.
- The Environment Setup section should reference the `requirements.txt` file you created using `literalinclude`.
- The Code section should reference the `.py` file you created using `literalinclude`.
- `/docs/index.md` should be modified to include every new tutorial.

### Testing your tutorial
4 changes: 2 additions & 2 deletions LICENSE
@@ -1,13 +1,13 @@
This repository is licensed as follows:
All assets in this repository are the copyright of the Farama Foundation, except
where prohibited. Contributors to the repository transfer copyright of their work
to the Farama Foundation.

Some code in this repository has been taken from other open source projects
and was originally released under the MIT or Apache 2.0 licenses, with
copyright held by another party. We've attributed these authors and they
retain their copyright to the extent required by law. Everything else
is owned by the Farama Foundation. The Secret Code font was also released under
the MIT license by Matthew Welch (http://www.squaregear.net/fonts/).
The MIT and Apache 2.0 licenses are included below.

4 changes: 2 additions & 2 deletions README.md
@@ -20,15 +20,15 @@ PettingZoo includes the following families of environments:

To install the base PettingZoo library: `pip install pettingzoo`.

This does not include dependencies for all families of environments (some environments can be problematic to install on certain systems).

To install the dependencies for one family, use `pip install pettingzoo[atari]`, or use `pip install pettingzoo[all]` to install all dependencies.

We support Python 3.7, 3.8, 3.9 and 3.10 on Linux and macOS. We will accept PRs related to Windows, but do not officially support it.

## Getting started

For an introduction to PettingZoo, see [Basic Usage](https://pettingzoo.farama.org/content/basic_usage/). To create a new environment, see our [Environment Creation Tutorial](https://pettingzoo.farama.org/tutorials/environmentcreation/1-project-structure/) and [Custom Environment Examples](https://pettingzoo.farama.org/content/environment_creation/).
For examples of training RL models using PettingZoo see our tutorials:
* [CleanRL: Implementing PPO](https://pettingzoo.farama.org/tutorials/cleanrl/implementing_PPO/): train multiple PPO agents in the [Pistonball](https://pettingzoo.farama.org/environments/butterfly/pistonball/) environment.
* [Tianshou: Training Agents](https://pettingzoo.farama.org/tutorials/tianshou/intermediate/): train DQN agents in the [Tic-Tac-Toe](https://pettingzoo.farama.org/environments/classic/tictactoe/) environment.
2 changes: 1 addition & 1 deletion docs/_static/img/doc_icon.svg
2 changes: 1 addition & 1 deletion docs/_static/img/environment_icon.svg
2 changes: 1 addition & 1 deletion docs/_static/img/github_icon.svg
2 changes: 1 addition & 1 deletion docs/_static/img/menu_icon.svg
2 changes: 1 addition & 1 deletion docs/_static/img/tutorials/rllib-stack.svg
23 changes: 11 additions & 12 deletions docs/api/aec.md
@@ -24,20 +24,20 @@ env.reset(seed=42)

for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        action = env.action_space(agent).sample() # this is where you would insert your policy
    env.step(action)
env.close()
```

### Action Masking
AEC environments often include action masks, in order to mark valid/invalid actions for the agent.

To sample actions using action masking:
```python
from pettingzoo.classic import chess_v6

@@ -49,17 +49,17 @@ for agent in env.agent_iter():

    if termination or truncation:
        action = None
    else:
        # invalid action masking is optional and environment-dependent
        if "action_mask" in info:
            mask = info["action_mask"]
        elif isinstance(observation, dict) and "action_mask" in observation:
            mask = observation["action_mask"]
        else:
            mask = None
        action = env.action_space(agent).sample(mask) # this is where you would insert your policy
    env.step(action)
env.close()
```

@@ -68,7 +68,7 @@ Note: action masking is optional, and can be implemented using either `observati
* [PettingZoo Classic](https://pettingzoo.farama.org/environments/classic/) environments store action masks in the `observation` dict:
  * `mask = observation["action_mask"]`
* [Shimmy](https://shimmy.farama.org/)'s [OpenSpiel environments](https://shimmy.farama.org/environments/open_spiel/) store action masks in the `info` dict:
  * `mask = info["action_mask"]`

To implement action masking in a custom environment, see [Environment Creation: Action Masking](https://pettingzoo.farama.org/tutorials/environmentcreation/3-action-masking/)

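The mask convention above can be sketched without an environment at all; a toy, stdlib-only illustration of sampling under a mask (`sample_valid_action` is a hypothetical helper — in PettingZoo itself, `env.action_space(agent).sample(mask)` does this for you):

```python
import random

def sample_valid_action(n_actions: int, mask=None) -> int:
    # mask: per-action flags, 1 = legal, 0 = illegal; None means all legal
    if mask is None:
        return random.randrange(n_actions)
    legal = [a for a, flag in enumerate(mask) if flag]
    return random.choice(legal)

random.seed(0)
print(sample_valid_action(4, mask=[0, 1, 0, 1]) in (1, 3))  # True
```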
@@ -158,4 +158,3 @@ For more information on action masking, see [A Closer Look at Invalid Action Mas
.. automethod:: AECEnv.close
```

4 changes: 2 additions & 2 deletions docs/api/parallel.md
@@ -22,8 +22,8 @@ observations = parallel_env.reset(seed=42)

while env.agents:
    # this is where you would insert your policy
    actions = {agent: parallel_env.action_space(agent).sample() for agent in parallel_env.agents}

    observations, rewards, terminations, truncations, infos = parallel_env.step(actions)
env.close()
```
6 changes: 3 additions & 3 deletions docs/api/wrappers.md
@@ -6,7 +6,7 @@ title: Wrapper

## Using Wrappers

A wrapper is an environment transformation that takes in an environment as input, and outputs a new environment that is similar to the input environment, but with some transformation or validation applied.

The following wrappers can be used with PettingZoo environments:

@@ -16,12 +16,12 @@ The following wrappers can be used with PettingZoo environments:

[Supersuit Wrappers](/api/wrappers/supersuit_wrappers/) include commonly used pre-processing functions such as frame-stacking and color reduction, compatible with both PettingZoo and Gymnasium.

[Shimmy Compatibility Wrappers](/api/wrappers/shimmy_wrappers/) allow commonly used external reinforcement learning environments to be used with PettingZoo and Gymnasium.

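The wrapper pattern itself is plain composition: hold the inner environment and intercept calls on the way through. A toy, stdlib-only sketch of the idea (the class names here are invented for illustration and are not PettingZoo's actual wrappers):

```python
class ToyEnv:
    """Stand-in environment: step just doubles the action."""
    def step(self, action):
        return action * 2

class ClipActionsWrapper:
    """Wraps an env and clips incoming actions into [low, high]."""
    def __init__(self, env, low, high):
        self.env = env
        self.low, self.high = low, high

    def step(self, action):
        clipped = max(self.low, min(self.high, action))
        return self.env.step(clipped)  # delegate to the wrapped env

env = ClipActionsWrapper(ToyEnv(), low=-1, high=1)
print(env.step(5))  # 2 (5 is clipped to 1, then doubled by the toy env)
```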

```{toctree}
:hidden:
wrappers/pz_wrappers
wrappers/supersuit_wrappers
wrappers/shimmy_wrappers
```
4 changes: 2 additions & 2 deletions docs/api/wrappers/pz_wrappers.md
@@ -4,7 +4,7 @@ title: PettingZoo Wrappers

# PettingZoo Wrappers

PettingZoo includes the following types of wrappers:
* [Conversion Wrappers](#conversion-wrappers): wrappers for converting environments between the [AEC](/api/aec/) and [Parallel](/api/parallel/) APIs
* [Utility Wrappers](#utility-wrappers): a set of wrappers which provide convenient reusable logic, such as enforcing turn order or clipping out-of-bounds actions.

@@ -105,4 +105,4 @@ Note: Most AEC environments include TerminateIllegalWrapper in their initializat
.. autoclass:: ClipOutOfBoundsWrapper
.. autoclass:: OrderEnforcingWrapper
```
2 changes: 1 addition & 1 deletion docs/api/wrappers/supersuit_wrappers.md
@@ -4,7 +4,7 @@ title: Supersuit Wrappers

# Supersuit Wrappers

The [SuperSuit](https://github.com/Farama-Foundation/SuperSuit) companion package (`pip install supersuit`) includes a collection of pre-processing functions which can be applied to both [AEC](/api/aec/) and [Parallel](/api/parallel/) environments.

To convert [space invaders](https://pettingzoo.farama.org/environments/atari/space_invaders/) to a greyscale observation space and stack the last 4 frames:

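The frame-stacking idea itself can be sketched with a `deque`; a toy illustration of the concept only (SuperSuit's real wrappers also handle observation spaces, vectorization, and array stacking):

```python
from collections import deque

class FrameStack:
    """Keep the k most recent observations, padding with the first frame."""
    def __init__(self, k: int):
        self.k = k
        self.frames = deque(maxlen=k)

    def observe(self, frame):
        if not self.frames:  # pad on the very first observation
            for _ in range(self.k - 1):
                self.frames.append(frame)
        self.frames.append(frame)
        return list(self.frames)

stack = FrameStack(4)
print(stack.observe("f0"))  # ['f0', 'f0', 'f0', 'f0']
print(stack.observe("f1"))  # ['f0', 'f0', 'f0', 'f1']
```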
2 changes: 1 addition & 1 deletion docs/content/basic_usage.md
@@ -7,7 +7,7 @@ title: API

To install the base PettingZoo library: `pip install pettingzoo`.

This does not include dependencies for all families of environments (some environments can be problematic to install on certain systems).

To install the dependencies for one family, use `pip install pettingzoo[atari]`, or use `pip install pettingzoo[all]` to install all dependencies.

4 changes: 2 additions & 2 deletions docs/environments/atari.md
@@ -66,8 +66,8 @@ for agent in env.agent_iter():
        action = None
    else:
        action = env.action_space(agent).sample() # this is where you would insert your policy
    env.step(action)
env.close()
```

11 changes: 5 additions & 6 deletions docs/environments/butterfly.md
@@ -16,7 +16,7 @@ butterfly/pistonball
:file: butterfly/list.html
```

Butterfly environments are challenging scenarios created by Farama, using Pygame with visual Atari spaces.

All environments require a high degree of coordination and require learning of emergent behaviors to achieve an optimal policy. As such, these environments are currently very challenging to learn.

@@ -25,7 +25,7 @@ Environments are highly configurable via arguments specified in their respective
[Knights Archers Zombies](https://pettingzoo.farama.org/environments/butterfly/knights_archers_zombies/),
[Pistonball](https://pettingzoo.farama.org/environments/butterfly/pistonball/).

### Installation
The unique dependencies for this set of environments can be installed via:

````bash
pip install 'pettingzoo[butterfly]'
````
@@ -43,7 +43,7 @@ observations = env.reset()

while env.agents:
    # this is where you would insert your policy
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}

    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()
@@ -63,15 +63,14 @@ manual_policy = knights_archers_zombies_v10.ManualPolicy(env)
for agent in env.agent_iter():
    clock.tick(env.metadata["render_fps"])
    observation, reward, termination, truncation, info = env.last()

    if agent == manual_policy.agent:
        # get user input (controls are WASD and space)
        action = manual_policy(observation, agent)
    else:
        # this is where you would insert your policy (for non-player agents)
        action = env.action_space(agent).sample()
    env.step(action)
env.close()
```

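A manual policy like the one above is essentially a key-to-action lookup plus a fallback. A toy sketch of the idea (the key bindings and action indices below are invented for illustration — see the environment's `ManualPolicy` source for the real mapping):

```python
# Hypothetical bindings for a WASD-plus-space control scheme
KEY_TO_ACTION = {"w": 1, "a": 2, "s": 3, "d": 4, " ": 5}
NOOP = 0  # assumed "do nothing" action index

def manual_action(pressed_key: str) -> int:
    # An unmapped (or absent) key press falls back to the no-op action
    return KEY_TO_ACTION.get(pressed_key, NOOP)

print(manual_action("w"))  # 1
print(manual_action("x"))  # 0
```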
10 changes: 5 additions & 5 deletions docs/environments/classic.md
@@ -23,7 +23,7 @@ classic/tictactoe
:file: classic/list.html
```

Classic environments represent implementations of popular turn-based human games and are mostly competitive.


### Installation
@@ -45,14 +45,14 @@ env.reset(seed=42)

for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()

    if termination or truncation:
        break

    mask = observation["action_mask"]
    action = env.action_space(agent).sample(mask) # this is where you would insert your policy
    env.step(action)
env.close()
```

6 changes: 3 additions & 3 deletions docs/environments/mpe.md
@@ -43,13 +43,13 @@ env = simple_tag_v3.env(render_mode='human')
env.reset()
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        action = env.action_space(agent).sample() # this is where you would insert your policy
    env.step(action)
env.close()
```

6 changes: 3 additions & 3 deletions docs/environments/sisl.md
@@ -36,13 +36,13 @@ env = waterworld_v4.env(render_mode='human')
env.reset()
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        action = env.action_space(agent).sample() # this is where you would insert your policy
    env.step(action)
env.close()
```


0 comments on commit 110333f
