Machine Learning (ML) algorithms are traditionally designed to learn one task at a time. In recent years, many research directions have focused on extending ML algorithms to transfer solutions learned in one task to other tasks drawn from the same domain, in order to improve learning performance. More precisely, Transfer Learning (TL) refers to the problem of retaining and applying the knowledge learned in one or more tasks to efficiently develop an effective hypothesis for a new task. This challenging goal has been pursued from many different perspectives (e.g., meta-learning, multi-task learning, learning to learn, continual learning), and many empirical and theoretical results have shown that learning algorithms can actually benefit from the transfer of knowledge across related tasks. Research on transfer has obtained significant successes in supervised learning problems, such as recommender systems, medical decision making, text classification, and general game playing. On the other hand, the possibility of knowledge transfer in sequential decision-making problems (e.g., portfolio management, computer games, automatic control) has received relatively little attention so far. Reinforcement Learning (RL) represents the most mature paradigm for the formalization of sequential decision-making problems, and recent works have focused on enabling RL algorithms to transfer knowledge across tasks. The talk will be divided into two main parts. In the first part I will provide an introduction to transfer in RL and a classification of the main approaches to transfer proposed so far. In the second part I will present the main results of my PhD thesis, "Transfer of Knowledge in Reinforcement Learning".
Transfer Learning, Reinforcement Learning
Alessandro Lazaric (2008). Knowledge Transfer in Reinforcement Learning. Ph.D. Thesis, Politecnico di Milano.
This is a draft; the final version will be available in a few weeks.