An Ecosystem to Generate, Share, and Use Datasets in Reinforcement Learning


Most reinforcement learning (RL) and sequential decision making algorithms require an agent to generate training data through large amounts of interaction with its environment to achieve optimal performance. This is highly inefficient, especially when generating those interactions is difficult, such as collecting data with a real robot or interacting with a human expert. This issue can be mitigated by reusing external sources of data, for example, the RL Unplugged Atari dataset, which includes data of a synthetic agent playing Atari games.

However, there are very few of these datasets, and a wide variety of tasks and ways of generating data in sequential decision making (e.g., expert data or noisy demonstrations, human or synthetic interactions, etc.), making it unrealistic, and not even desirable, for the whole community to work on a small number of representative datasets, because these will never be representative enough. Moreover, some of these datasets are released in a form that only works with certain algorithms, which prevents researchers from reusing the data. For example, rather than including the sequence of interactions with the environment, some datasets provide a set of random interactions, making it impossible to reconstruct the temporal relation between them, while others are released in slightly different formats, which can introduce subtle bugs that are very difficult to identify.

In this context, we introduce Reinforcement Learning Datasets (RLDS) and release a suite of tools for recording, replaying, manipulating, annotating and sharing data for sequential decision making, including offline RL, learning from demonstrations, and imitation learning. RLDS makes it easy to share datasets without any loss of information (e.g., keeping the sequence of interactions instead of randomizing them) and to remain agnostic to the underlying original format, enabling users to quickly test new algorithms on a wider range of tasks. Additionally, RLDS provides tools for collecting data generated by either synthetic agents (EnvLogger) or humans (RLDS Creator), as well as for inspecting and manipulating the collected data. Finally, integration with TensorFlow Datasets (TFDS) facilitates sharing RL datasets with the research community.

With RLDS, users can record interactions between an agent and an environment in a lossless and standard format. Then, they can use and transform this data to feed different RL or sequential decision making algorithms, or to perform data analysis.

Dataset Structure

Algorithms in RL, offline RL, or imitation learning may consume data in very different formats, and, if the format of the dataset is unclear, it is easy to introduce bugs caused by misinterpretations of the underlying data. RLDS makes the data format explicit by defining the contents and meaning of each field of the dataset, and provides tools to re-align and transform this data to fit the format required by any algorithm implementation. In order to define the data format, RLDS takes advantage of the inherently standard structure of RL datasets: sequences (episodes) of interactions (steps) between agents and environments, where agents can be, for example, rule-based/automation controllers, formal planners, humans, animals, or a combination of these. Each of these steps contains the current observation, the action applied to the current observation, the reward obtained as a result of applying the action, and the discount obtained together with the reward. Steps also include additional information to indicate whether the step is the first or last of the episode, or whether the observation corresponds to a terminal state. Each step and episode may also contain custom metadata that can be used to store environment-related or model-related data.
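As a concrete illustration, an episode can be pictured as a dictionary holding custom metadata plus an ordered sequence of step dictionaries. This is a minimal plain-Python sketch, not the actual RLDS storage format: the step field names mirror the standard fields described above, while the `episode_id` metadata key is invented for the example.

```python
# A minimal sketch of the nested structure of an RLDS episode, using plain
# Python dictionaries. Step field names follow the standard RLDS fields;
# the metadata key is invented for illustration.
episode = {
    "episode_id": 0,  # custom episode-level metadata
    "steps": [
        {"observation": [0.0, 1.0], "action": 1, "reward": 0.0,
         "discount": 1.0, "is_first": True, "is_last": False, "is_terminal": False},
        {"observation": [0.5, 0.9], "action": 0, "reward": 1.0,
         "discount": 1.0, "is_first": False, "is_last": False, "is_terminal": False},
        # Final step: is_last marks the end of the episode, while is_terminal
        # distinguishes a true terminal state from, e.g., a time-limit cutoff.
        {"observation": [1.0, 0.0], "action": 0, "reward": 0.0,
         "discount": 0.0, "is_first": False, "is_last": True, "is_terminal": True},
    ],
}

# Basic invariants of the format: exactly one first step, and the last
# step is flagged as such.
assert sum(s["is_first"] for s in episode["steps"]) == 1
assert episode["steps"][-1]["is_last"]
```

Because the temporal ordering of steps is preserved, consumers can reconstruct transitions, n-step returns, or full trajectories from the same stored data.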

Generating the Data

Researchers produce datasets by recording the interactions with an environment made by any kind of agent. To maintain its usefulness, raw data is ideally stored in a lossless format, by recording all the information that is produced, keeping the temporal relation between the data items (e.g., the ordering of steps and episodes), and without making any assumptions about how the dataset will be used in the future. For this, we release EnvLogger, a software library to log agent-environment interactions in an open format.

EnvLogger is an environment wrapper that records agent-environment interactions and saves them in long-term storage. Although EnvLogger is seamlessly integrated in the RLDS ecosystem, we designed it to be usable as a stand-alone library for greater modularity.
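The wrapper pattern behind this can be sketched in a few lines of plain Python. The code below is not the EnvLogger API, just a minimal stand-in showing the idea: every call is forwarded to the wrapped environment, and the interaction is appended, in order, to a backing store. The `TinyEnv` and `Recorder` names are invented for the example.

```python
class TinyEnv:
    """A toy deterministic environment used only for illustration."""
    def reset(self):
        self.t = 0
        return self.t  # initial observation
    def step(self, action):
        self.t += 1
        done = self.t >= 3
        return self.t, 1.0, done  # observation, reward, done

class Recorder:
    """Minimal stand-in for an EnvLogger-style wrapper: it forwards every
    call to the wrapped environment and logs each interaction losslessly,
    preserving the ordering of steps."""
    def __init__(self, env):
        self.env = env
        self.log = []  # a real logger would write to long-term storage
    def reset(self):
        obs = self.env.reset()
        self.log.append({"observation": obs, "is_first": True})
        return obs
    def step(self, action):
        obs, reward, done = self.env.step(action)
        self.log.append({"observation": obs, "action": action,
                         "reward": reward, "is_last": done})
        return obs, reward, done

# The agent interacts with the wrapper exactly as with the raw environment.
env = Recorder(TinyEnv())
obs, done = env.reset(), False
while not done:
    obs, reward, done = env.step(0)

assert len(env.log) == 4  # 1 reset step + 3 environment steps, in order
```

Because the wrapper exposes the same interface as the environment it wraps, recording can be added to an existing training loop without changing the agent code.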

As in most machine learning settings, collecting human data for RL is a time-consuming and labor-intensive process. The common approach to address this is to use crowd-sourcing, which requires user-friendly access to environments that may be difficult to scale to large numbers of participants. Within the RLDS ecosystem, we release a web-based tool called RLDS Creator, which provides a universal interface to any human-controllable environment through a browser. Users can interact with the environments, e.g., play the Atari games online, and the interactions are recorded and stored such that they can be loaded back later using RLDS for analysis or to train agents.

Sharing the Data

Datasets are often onerous to produce, and sharing them with the wider research community not only enables reproducibility of former experiments, but also accelerates research, as it makes it easier to run and validate new algorithms on a range of scenarios. For that purpose, RLDS is integrated with TensorFlow Datasets (TFDS), an existing library for sharing datasets within the machine learning community. Once a dataset is part of TFDS, it is indexed in the global TFDS catalog, making it accessible to any researcher via tfds.load(name_of_dataset), which loads the data in either TensorFlow or NumPy formats.
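The consumption pattern is a nested iteration over episodes and their steps. To keep the sketch below runnable anywhere, a stub loader stands in for the real call, which would be tfds.load(name_of_dataset); the `load_dataset` helper, the hard-coded episode, and the dataset name are all invented for the example.

```python
def load_dataset(name):
    """Stand-in for tfds.load(name): returns an iterable of episode dicts.
    In real use, the data would come from the global TFDS catalog; the
    single hard-coded episode here is fake and exists only so the
    iteration pattern below can run."""
    return [
        {"steps": [
            {"observation": 0, "action": 1, "reward": 0.0, "is_last": False},
            {"observation": 1, "action": 0, "reward": 1.0, "is_last": True},
        ]},
    ]

# Episodes are sequences of steps, so dataset-wide quantities (here, the
# total reward) can be computed with a simple nested loop.
total_reward = 0.0
for episode in load_dataset("my_dataset"):  # hypothetical dataset name
    for step in episode["steps"]:
        total_reward += step["reward"]

assert total_reward == 1.0
```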

TFDS is independent of the underlying format of the original dataset, so any existing dataset in an RLDS-compatible format can be used with RLDS, even if it was not originally generated with EnvLogger or RLDS Creator. Also, with TFDS, users keep ownership and full control over their data, and all datasets include a citation to credit the dataset authors.

Consuming the Data

Researchers can use the datasets to analyze, visualize, or train a variety of machine learning algorithms, which, as noted above, may consume data in formats different from how it was stored. For example, some algorithms, like R2D2 or R2D3, consume full episodes; others, like Behavioral Cloning or ValueDice, consume batches of randomized steps. To enable this, RLDS provides a library of transformations for RL scenarios. These transformations have been optimized, taking into account the nested structure of the RL datasets, and they include auto-batching to accelerate some of these operations. Using these optimized transformations, RLDS users have full flexibility to easily implement high-level functionalities, and the resulting pipelines are reusable across RLDS datasets. Example transformations include statistics across the full dataset for selected step fields (or sub-fields), and flexible batching that respects episode boundaries. You can explore the existing transformations in this tutorial and see more complex real examples in this Colab.
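As one example of such a transformation, here is a plain-Python sketch of batching steps while respecting episode boundaries. It does not use the actual RLDS transformations API, and the `batch_within_episodes` helper name is invented; the point is only to show what "respecting episode boundaries" means in practice.

```python
def batch_within_episodes(episodes, batch_size):
    """Yield fixed-size batches of steps without ever mixing steps from
    different episodes (a sketch of batching that respects episode
    boundaries; trailing partial batches are dropped for simplicity)."""
    for episode in episodes:
        steps = episode["steps"]
        for i in range(0, len(steps) - batch_size + 1, batch_size):
            yield steps[i:i + batch_size]

episodes = [
    {"steps": [{"reward": r} for r in (1.0, 0.0, 1.0)]},  # 3 steps
    {"steps": [{"reward": r} for r in (0.0, 1.0)]},       # 2 steps
]

batches = list(batch_within_episodes(episodes, batch_size=2))
# One full batch per episode; no batch spans the episode boundary.
assert len(batches) == 2
assert all(len(b) == 2 for b in batches)
```

An algorithm that consumes full episodes would instead iterate over `episodes` directly, while one that consumes randomized steps would flatten and shuffle; both pipelines can be built from the same stored data.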

Available Datasets

At the moment, the following datasets (compatible with RLDS) are in TFDS:

Our team is committed to quickly expanding this list in the near future, and external contributions of new datasets to RLDS and TFDS are welcome.


The RLDS ecosystem not only improves reproducibility of research in RL and sequential decision making problems, but also enables new research by making it easier to share and reuse data. We hope the capabilities offered by RLDS will initiate a trend of releasing structured RL datasets, holding all the information and covering a wider range of agents and tasks.


Besides the authors of this post, this work has been done by Google Research teams in Paris and Zurich in collaboration with DeepMind, in particular by Sertan Girgin, Damien Vincent, Hanna Yakubovich, Daniel Kenji Toyama, Anita Gergely, Piotr Stanczyk, Raphaël Marinier, Jeremiah Harmsen, Olivier Pietquin and Nikola Momchev. We also want to thank the other engineers and researchers who provided feedback and contributed to the project, in particular George Tucker, Sergio Gomez, Jerry Li, Caglar Gulcehre, Pierre Ruyssen, Etienne Pot, Anton Raichuk, Gabriel Dulac-Arnold, Nino Vieillard, Matthieu Geist, Alexandra Faust, Eugene Brevdo, Tom Granger, Zhitao Gong, Toby Boyd and Tom Small.

