
Researchers at the University of Oxford Introduce Craftax: A Machine Learning Benchmark for Open-Ended Reinforcement Learning

Mar 7, 2024

Building and using appropriate benchmarks is a major driver of progress in RL algorithms. For value-based deep RL there is the Arcade Learning Environment; for continuous control, MuJoCo; and for multi-agent RL, the StarCraft Multi-Agent Challenge. As the field moves toward more general agents, benchmarks exhibiting more open-ended dynamics have emerged, emphasizing procedural world generation, skill acquisition and reuse, long-term dependencies, and continual learning. This shift has produced tools such as MiniHack, Crafter, MALMO, and the NetHack Learning Environment.

Unfortunately, these environments have long runtimes, which makes them impractical for researchers who do not have access to large-scale compute. At the same time, JAX-based RL environments have boomed as the speed of an end-to-end compiled RL pipeline has been fully realized: thanks to effective parallelization, compilation, and the elimination of CPU-GPU transfers, experiments that once took days on a large compute cluster can now finish in minutes on a single GPU.
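
The speedup comes from keeping the entire rollout on the accelerator. The toy example below is not from the paper; `toy_step` and all other names are illustrative. It sketches the general pattern: a pure, jittable transition function is vectorized over thousands of environments with `jax.vmap`, unrolled over time with `jax.lax.scan`, and compiled once with `jax.jit`, so no per-step Python loop or CPU-GPU copy remains.

```python
# Minimal sketch (not the authors' code) of an end-to-end compiled,
# vectorized RL rollout in JAX. The "environment" here is a toy stand-in.
import jax
import jax.numpy as jnp

def toy_step(state, action):
    # Hypothetical environment transition: pure and jittable.
    new_state = state + action
    reward = -jnp.abs(new_state)
    return new_state, reward

@jax.jit
def rollout(initial_states, actions):
    # Vectorize one step over a batch of environments, then scan over time.
    batched_step = jax.vmap(toy_step)

    def scan_fn(states, actions_t):
        states, rewards = batched_step(states, actions_t)
        return states, rewards

    final_states, rewards = jax.lax.scan(scan_fn, initial_states, actions)
    return final_states, rewards.sum(axis=0)  # per-environment returns

# Example: 4096 parallel "environments" stepped for 100 timesteps on one device.
key = jax.random.PRNGKey(0)
init = jnp.zeros(4096)
acts = jax.random.normal(key, (100, 4096))
final_states, returns = rollout(init, acts)
```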

To unite these two strands of work, a recent study from the University of Oxford and University College London introduces the Craftax benchmark, a JAX-based environment that runs orders of magnitude faster than comparable ones while exhibiting complex, open-ended dynamics. One concrete example is Craftax-Classic, a JAX reimplementation of Crafter that runs roughly 250 times faster than the original Python version.

The researchers demonstrate that, with easy access to far more timesteps, a basic PPO agent can solve Craftax-Classic (reaching 90% of the maximum return) in 51 minutes. Accordingly, they also offer the primary Craftax environment, a far more difficult setting that borrows mechanics from NetHack and, more broadly, the roguelike genre; it is designed to pose a more compelling challenge while retaining a fast runtime, and it introduces a wide variety of new game mechanics. Since many of the qualities Crafter examines (exploration, memory) do not depend on the precise form of the observation, and pixel observations merely add another layer of representation learning to the problem, the team provides both symbolic and pixel-based variants of Craftax, with the symbolic version running around ten times faster.
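
For readers who want to try it, the snippet below sketches how a Craftax environment might be created and stepped through a gymnax-style functional interface. The import path, environment name, and method signatures are assumptions based on the project's documentation and should be verified against the Craftax GitHub.

```python
# Hedged sketch of interacting with a Craftax-style environment.
# The names below (make_craftax_env_from_name, "Craftax-Symbolic-v1", etc.)
# are assumptions; consult the Craftax GitHub for the actual API.
import jax

from craftax.craftax_env import make_craftax_env_from_name  # assumed entry point

rng = jax.random.PRNGKey(0)
rng, reset_rng, sample_rng, step_rng = jax.random.split(rng, 4)

# Create the (assumed) symbolic-observation variant of the environment.
env = make_craftax_env_from_name("Craftax-Symbolic-v1", auto_reset=True)
env_params = env.default_params

# reset/step are pure functions of (rng, state), so they jit and vmap cleanly.
obs, state = env.reset(reset_rng, env_params)
action = env.action_space(env_params).sample(sample_rng)
obs, state, reward, done, info = env.step(step_rng, state, action, env_params)
```

Because the environment state is an explicit, immutable object, the same reset/step functions can be wrapped in `jax.vmap` and `jax.jit` to run thousands of environments in parallel, which is what makes the PPO training runs reported above so fast.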

Their experiments reveal that currently available approaches perform poorly on Craftax. The team therefore hopes the benchmark will enable experimentation under constrained computational resources while posing a substantial challenge for future RL research.

The team also hopes that Craftax-Classic will offer a smooth introduction to Craftax for those already familiar with the Crafter benchmark.


Check out the Paper, GitHub, and Project. All credit for this research goes to the researchers of this project.



