# prioritized-experience-replay

Here are 95 public repositories matching this topic...

PyTorch implementation of Soft Actor-Critic (SAC) with Prioritized Experience Replay (PER), Emphasizing Recent Experience (ERE), Munchausen RL, D2RL, and parallel environments.

  • Updated Feb 24, 2021
  • Python

DQN-Atari-Agents: Modularized and parallel PyTorch implementation of several DQN agents, including DDQN, Dueling DQN, Noisy DQN, C51, Rainbow, and DRQN

  • Updated Dec 18, 2020
  • Jupyter Notebook

PyTorch Implementation of Implicit Quantile Networks (IQN) for Distributional Reinforcement Learning with additional extensions like PER, Noisy layer, N-step bootstrapping, Dueling architecture and parallel env support.

  • Updated Mar 4, 2023
  • Jupyter Notebook

This repository contains a series of Google Colab notebooks which I created to help people dive into deep reinforcement learning. These notebooks contain both theory and implementations of different algorithms.

  • Updated Apr 24, 2021
  • Jupyter Notebook

PyTorch implementation of the state-of-the-art distributional reinforcement learning algorithm Fully Parameterized Quantile Function (FQF) and Extensions: N-step Bootstrapping, PER, Noisy Layer, Dueling Networks, and parallelization.

  • Updated Oct 10, 2020
  • Jupyter Notebook
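The repositories above all build on the same core idea from Schaul et al.'s prioritized experience replay: sample transitions with probability proportional to their TD error (raised to an exponent alpha) and correct the resulting bias with importance-sampling weights (exponent beta). As a rough orientation, here is a minimal, dependency-free sketch of a proportional PER buffer; it uses a linear scan over priorities instead of the sum-tree that the listed implementations typically use, and all names and default hyperparameters are illustrative, not taken from any repository above.

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional PER sketch (linear scan, not a sum-tree)."""

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-5):
        self.capacity = capacity
        self.alpha, self.beta, self.eps = alpha, beta, eps
        self.data, self.priorities = [], []
        self.pos = 0  # next write position (ring buffer)

    def add(self, transition):
        # New transitions get the current max priority so each is
        # replayed at least once before its TD error is known.
        p = max(self.priorities, default=1.0)
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(p)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # P(i) = p_i / sum_j p_j  (proportional prioritization)
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        # Importance-sampling weights w_i = (N * P(i))^(-beta),
        # normalized by the batch maximum for stability.
        n = len(self.data)
        weights = [(n * probs[i]) ** (-self.beta) for i in idxs]
        w_max = max(weights)
        weights = [w / w_max for w in weights]
        return idxs, [self.data[i] for i in idxs], weights

    def update_priorities(self, idxs, td_errors):
        # p_i = (|delta_i| + eps)^alpha after each learning step.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + self.eps) ** self.alpha
```

In practice the linear scan makes `sample` O(N) per draw; the repositories above replace it with a sum-tree for O(log N) sampling, and many anneal `beta` toward 1 over training as in the original paper.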
