What is it about?

MicroRacer, an open-source environment, draws inspiration from car racing and is specifically designed for teaching Deep Reinforcement Learning. The environment's complexity has been deliberately tuned to enable users to explore various methods, networks, and hyperparameter settings without the need for intricate software or excessively long training times. Additionally, baseline agents for prominent learning algorithms like DDPG, PPO, SAC, TD3, and DSAC are readily available, accompanied by an initial comparison of training time and performance.
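To give a concrete feel for how such an environment is used, here is a minimal interaction loop in the spirit of MicroRacer. The class and method names (Racer, reset, step), the observation size, and the two-dimensional action (acceleration, steering) are assumptions modelled on common RL environment conventions, not the repository's actual interface; a stub environment is included only to make the sketch self-contained and runnable.

```python
# Minimal sketch of an agent-environment loop, MicroRacer-style.
# The Racer stub below is a placeholder: the real environment returns
# car-racing observations (e.g. lidar-like distances and speed).
import numpy as np

class Racer:  # hypothetical stand-in for the real environment class
    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return np.zeros(5)  # placeholder observation

    def step(self, action):
        # action: (acceleration, steering); returns (obs, reward, done)
        self.t += 1
        done = self.t >= 50  # stub episode ends after 50 steps
        return np.zeros(5), 0.0, done

env = Racer()
obs = env.reset()
total_reward = 0.0
for _ in range(200):
    # random policy: sample acceleration and steering in [-1, 1]
    action = np.random.uniform(-1.0, 1.0, size=2)
    obs, reward, done = env.step(action)
    total_reward += reward
    if done:
        break
print("episode return:", total_reward)
```

The baseline agents mentioned above (DDPG, PPO, SAC, TD3, DSAC) would replace the random policy with a learned one trained on such episodes.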

Why is it important?

Deep Reinforcement Learning is well known for its potentially lengthy training process, which relies on acquiring a substantial number of unbiased environment observations. Moreover, since agents collect these observations dynamically, the challenge of striking the right balance between exploration and exploitation arises. The need for extended training times, the difficulty of monitoring and debugging the agent's evolution, and the inherent difficulty of understanding and explaining why a learning process fails all make DRL more challenging than traditional Deep Learning tasks.

MicroRacer is a simple environment explicitly meant for the didactics of DRL. It is inspired by car racing and has a stimulating competitive nature. Its complexity has been deliberately calibrated to allow students to experiment with many different methods, networks, and hyperparameter settings without requiring sophisticated software or exceedingly long training times.
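To illustrate the exploration/exploitation trade-off mentioned above, the sketch below shows one common technique used with deterministic-policy algorithms such as DDPG and TD3: Gaussian noise is added to the policy's action during data collection and gradually annealed, so early episodes explore while later ones mostly exploit. All names and values here are illustrative, not taken from the paper's code.

```python
# Hedged sketch: annealed Gaussian exploration noise on a continuous action.
import numpy as np

def noisy_action(policy_action, episode,
                 start_std=0.3, end_std=0.05, decay_episodes=500):
    # linearly anneal the noise scale from start_std to end_std
    frac = min(episode / decay_episodes, 1.0)
    std = start_std + frac * (end_std - start_std)
    action = policy_action + np.random.normal(0.0, std, size=policy_action.shape)
    # keep the perturbed action inside the valid range [-1, 1]
    return np.clip(action, -1.0, 1.0)

# usage: the same "policy" output perturbed early vs. late in training
a = np.array([0.5, -0.2])
print(noisy_action(a, episode=0))     # large noise: exploration
print(noisy_action(a, episode=1000))  # small noise: mostly exploitation
```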

Read the Original

This page is a summary of: MicroRacer: A Didactic Environment for Deep Reinforcement Learning, January 2023, Springer Science + Business Media,
DOI: 10.1007/978-3-031-25599-1_18.
