State representation learning for control: An overview

Abstract

Representation learning algorithms are designed to learn abstract features that characterize data. State representation learning (SRL) focuses on a particular kind of representation learning in which the learned features are low-dimensional, evolve through time, and are influenced by the actions of an agent. The representation is learned to capture the variation in the environment generated by the agent’s actions; this makes it particularly suitable for robotics and control scenarios. In particular, the low dimensionality of the representation helps to overcome the curse of dimensionality, makes the representation easier for humans to interpret and use, and can improve the performance and speed of policy learning algorithms such as reinforcement learning. This survey covers the state of the art in state representation learning in recent years. It reviews different SRL methods that involve interaction with the environment, their implementations, and their applications in robotics control tasks (simulated or real). In particular, it highlights how generic learning objectives are exploited differently by the reviewed algorithms. Finally, it discusses evaluation methods for assessing the learned representations and summarizes current and future lines of research.
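
As a concrete illustration of the kind of generic learning objective the survey reviews, below is a minimal sketch of one common SRL setup: an encoder maps each observation to a low-dimensional state, and a forward model is trained to predict the next state from the current state and action. This is not the authors' implementation; the network sizes, the synthetic data, and the choice to stop gradients through the target encoding are illustrative assumptions (PyTorch is used here for brevity).

import torch
import torch.nn as nn

obs_dim, action_dim, state_dim = 64, 4, 3   # illustrative sizes, not from the paper

# Encoder phi: observation o_t -> low-dimensional state s_t.
encoder = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, state_dim))
# Forward model f: (s_t, a_t) -> predicted s_{t+1}.
forward_model = nn.Sequential(nn.Linear(state_dim + action_dim, 32), nn.ReLU(),
                              nn.Linear(32, state_dim))

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(forward_model.parameters()), lr=1e-3)

# Synthetic batch of transitions (o_t, a_t, o_{t+1}) standing in for agent experience.
o_t = torch.randn(128, obs_dim)
a_t = torch.randn(128, action_dim)
o_next = torch.randn(128, obs_dim)

for step in range(200):
    s_t = encoder(o_t)
    s_next_pred = forward_model(torch.cat([s_t, a_t], dim=1))
    with torch.no_grad():                   # one common choice: treat the next-state
        s_next_target = encoder(o_next)     # encoding as a fixed regression target
    loss = nn.functional.mse_loss(s_next_pred, s_next_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In practice, the methods reviewed in the survey combine such a forward-model loss with other objectives (e.g., reconstruction, inverse models, or robotic priors), since a forward loss alone admits trivial solutions such as mapping every observation to a constant state.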


BibTeX

@article{Lesort18State,
title = "State representation learning for control: An overview",
journal = "Neural Networks",
year = "2018",
issn = "0893-6080",
doi = "https://doi.org/10.1016/j.neunet.2018.07.006",
author = "Timoth{\'{e}}e Lesort and Natalia D{\'{\i}}az-Rodr{\'{\i}}guez and Jean-Fran{\c{c}}ois Goudou and David Filliat",
keywords = "State representation learning, Low dimensional embedding learning, Learning disentangled representations, Disentanglement of control factors, Robotics, Reinforcement learning"
}