RL-Project-Tennis

Train a pair of agents to solve the Tennis environment

Part 1: Environment Introduction

The environment of this project is similar but not identical to the Unity Tennis environment.

Trained agents

Two trained agents playing tennis by controlling rackets to bounce a ball over a net. Image source

  • Observation space and action space

    Number of agents: 2

    Observation space: a single observation is an 8-variable array corresponding to the position and velocity of the ball and racket. 3 consecutive single observations are stacked together to form the stacked observation at each environment step, so each agent receives its own 24-variable stacked observation.

    Action space: 2 continuous actions for each agent. One action corresponds to moving toward / away from the net; the other corresponds to jumping.

  • Reward setup

    The task is episodic; agents receive rewards during the episode according to the table below.

    | Condition                         | Reward |
    | --------------------------------- | ------ |
    | agent hits the ball over the net  | +0.1   |
    | ball hits the ground              | -0.01  |
    | agent hits the ball out of bounds | -0.01  |

    Each agent receives its own reward during an episode. After each episode, we add up the rewards that each agent received (without discounting) to get a score for each agent. This yields 2 (potentially different) scores. We then take the maximum of these 2 scores as the score of the episode.

    The Tennis environment is considered solved when the average episode score over 100 consecutive episodes reaches +0.5. A minimal sketch of this scoring scheme follows below.
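Here is a minimal sketch of the scoring scheme described above; the variable names are illustrative and not taken from the repository code:

```python
import numpy as np
from collections import deque

scores_window = deque(maxlen=100)          # scores of the last 100 episodes

# at the end of each episode:
episode_rewards = np.array([1.30, 1.19])   # illustrative undiscounted reward sums, one per agent
episode_score = np.max(episode_rewards)    # the episode score is the max over the 2 agents
scores_window.append(episode_score)

# the environment counts as solved once this running average reaches +0.5
solved = len(scores_window) == 100 and np.mean(scores_window) >= 0.5
```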

Part 2: Getting started

  1. Install the Python dependencies following the instructions in the Udacity Deep Reinforcement Learning GitHub repo

  2. Download the Tennis environment (see the loading sketch after this list)

(For Windows users) Check out this link if you need help with determining if your computer is running a 32-bit version or 64-bit version of the Windows operating system.

(For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link to obtain the "headless" version of the environment. You will not be able to watch the agent without enabling a virtual screen, but you will be able to train the agent. (To watch the agent, you should follow the instructions to enable a virtual screen, and then download the environment for the Linux operating system above.)
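Once the environment binary is downloaded, it can be loaded with the unityagents package installed in step 1. The path below is only an example for the 64-bit Linux build; adjust it to your own download location:

```python
from unityagents import UnityEnvironment

# path to the downloaded Tennis binary (example path; change to your own)
env = UnityEnvironment(file_name="./Tennis_Linux/Tennis.x86_64")

# the Tennis environment exposes a single default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

# reset in training mode and inspect the spaces
env_info = env.reset(train_mode=True)[brain_name]
num_agents = len(env_info.agents)               # 2 agents
action_size = brain.vector_action_space_size    # 2 continuous actions per agent
states = env_info.vector_observations           # shape (2, 24): 3 stacked 8-variable observations
print(num_agents, action_size, states.shape)
```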

Part 3: Idea for solving the task

I approached the Tennis environment with 2 different methods.

  • Self play method

    Create a single ddpg network. The actions of both tennis agents are chosen from this one network.

  • Multi-agent method

    Create a maddpg agent with 2 separate actor networks and a centralized critic.

For task-solving details, see the Report. A minimal sketch contrasting the two methods is given below.
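As a rough sketch of the difference in action selection (the method and attribute names below are assumptions for illustration, not the actual interfaces of the packages):

```python
import numpy as np

# Self play: one ddpg agent (one actor network) picks the actions of both rackets.
# `agent.act(state)` is assumed to return a 2-dimensional continuous action.
def select_actions_self_play(agent, states):
    # states has shape (2, 24); the same network is queried once per racket
    return np.vstack([agent.act(state) for state in states])

# Multi-agent: maddpg keeps one actor network per racket; a single centralized
# critic (not shown) sees the joint observations and actions during training only.
def select_actions_maddpg(actors, states):
    return np.vstack([actor.act(state) for actor, state in zip(actors, states)])
```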

Part 4: Repository code usage

This repository contains 6 different solution packages; please disregard the "draft" packages, which are failed trials. The useful packages are "self_play", "self-play-test" and "maddpg_agent"; each of these 3 packages contains a successful solution that can be used directly to solve the environment.

| solution package | model file | agent file(s) | main file | saved weights |
| --- | --- | --- | --- | --- |
| self_play | model.py | ddpg_agent.py (Replaybuffer and OUNoise included in the same file) | training.py | actor: checkpoint_actor.pth; critic: checkpoint_critic.pth |
| self-play-test | model.py | ddpg_agent.py (Replaybuffer and OUNoise included in the same file) | training.py | actor: checkpoint_actor.pth; critic: checkpoint_critic.pth |
| maddpg_agent | model.py | ddpg_actor.py; maddpg_agent.py; replaybuffer.py; OUNoise.py; param_update.py | training.py | actor 0: checkpoint_actor0.pth; actor 1: checkpoint_actor1.pth; critic: checkpoint_centralized_critic.pth |

Note: remember to change "file_location" to the path where you stored the Tennis environment before use. file_location is defined in the function "create_env()" in the main file of all 3 packages.
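For example (an illustrative sketch only, not the exact body of create_env() in the repository):

```python
from unityagents import UnityEnvironment

def create_env():
    # change file_location to wherever you stored the Tennis binary
    file_location = "/home/your_name/Tennis_Linux/Tennis.x86_64"  # example path only
    return UnityEnvironment(file_name=file_location)
```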

To see the difference between self_play and self-play-test, please refer to the Report.

If your agent just cannot solve the environment (hopefully that's not the case), my Report includes my hypothesis that adding a batchnorm layer to the agent model may prevent the agent from solving the environment, even when the rest of the code is correct.

Part 5: Demo of the trained agents

Demo of my trained maddpg agents

Demo of the trained maddpg agents

self_play

self_play agent solves the Tennis environment in 238 episodes

maddpg_agent

maddpg_agents solve the Tennis environment in 1225 episodes

Part 6: References

| reference | reason |
| --- | --- |
| maddpg algorithm | better understand the maddpg algorithm |
| nunesma's reinforcement learning file | referenced for my implementation of self-play |
| udacity maddpg lab | referenced for my implementation of maddpg |
