Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents (Extended Abstract)

Marlos C. Machado, Marc G. Bellemare, Erik Talvitie, Joel Veness, Matthew J. Hausknecht, Michael Bowling
2018
2 references

Abstract

The Arcade Learning Environment (ALE) is an evaluation platform that poses the challenge of building AI agents with general competency across dozens of Atari 2600 games. It supports a variety of problem settings and has been receiving increasing attention from the scientific community. In this paper we take a big-picture look at how the ALE is being used by the research community. We focus on how diverse the evaluation methodologies in the ALE have become, and we highlight some key concerns that arise when evaluating agents on this platform. We use this discussion to present what we consider to be the best practices for future evaluations in the ALE. To further progress in the field, we also introduce a new version of the ALE that supports multiple game modes and provides a form of stochasticity we call sticky actions.
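The sticky-actions mechanism mentioned in the abstract can be sketched as a thin environment wrapper: with some probability, the previously executed action is repeated in place of the agent's newly chosen one. The wrapper interface below (`reset`/`step`, the `repeat_prob` parameter) is an illustrative assumption for this sketch, not the ALE's actual API.

```python
import random


class StickyActionEnv:
    """Illustrative sticky-actions wrapper (not the ALE API itself).

    With probability repeat_prob, the previously executed action is
    repeated instead of the action the agent just selected, injecting
    stochasticity into an otherwise deterministic environment.
    """

    def __init__(self, env, repeat_prob=0.25, seed=None):
        self.env = env
        self.repeat_prob = repeat_prob
        self.prev_action = None
        self.rng = random.Random(seed)

    def reset(self):
        # A new episode starts with no previous action to repeat.
        self.prev_action = None
        return self.env.reset()

    def step(self, action):
        # With probability repeat_prob, ignore the agent's chosen action
        # and execute the previous one instead.
        if self.prev_action is not None and self.rng.random() < self.repeat_prob:
            action = self.prev_action
        self.prev_action = action
        return self.env.step(action)
```

Because the repetition is applied inside the environment, agents cannot exploit the platform's determinism by memorizing fixed action sequences; they must react to what actually happened.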


Code References

▶ ray-project/ray (2 files)
▶ rllib/examples/algorithms/dreamerv3/atari_100k_dreamerv3.py
# [2]: "We follow the evaluation protocol of Machado et al. (2018) with 200M
▶ rllib/examples/algorithms/dreamerv3/atari_200M_dreamerv3.py
# [2]: "We follow the evaluation protocol of Machado et al. (2018) with 200M