CueLearner: Bootstrapping and local policy adaptation from relative feedback

Giulio Schiavi1, Andrei Cramariuc2, Lionel Ott1, Roland Siegwart1
1 Autonomous Systems Lab, ETH Zurich, Switzerland
2 Robotics Systems Lab, ETH Zurich, Switzerland
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2025
We use relative feedback from a human to bootstrap off-policy reinforcement learning in sparse-reward scenarios, and to adapt existing policies to external disturbances and user preferences.

Abstract

Human guidance has emerged as a powerful tool for enhancing reinforcement learning (RL). However, conventional forms of guidance such as demonstrations or binary scalar feedback can be challenging to collect or have low information content, motivating the exploration of other forms of human input. Among these, relative feedback (i.e., feedback on how to improve an action, such as "more to the left") offers a good balance between usability and information richness. Previous research has shown that relative feedback can be used to enhance policy search methods. However, these efforts have been limited to specific policy classes and use feedback inefficiently. In this work, we introduce a novel method to learn from relative feedback and combine it with off-policy reinforcement learning. Through evaluations on two sparse-reward tasks, we demonstrate our method can be used to improve the sample efficiency of reinforcement learning by guiding its exploration process. Additionally, we show it can adapt a policy to changes in the environment or the user's preferences. Finally, we demonstrate real-world applicability by employing our approach to learn a navigation policy in a sparse reward setting.
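To make the notion of relative feedback concrete, the sketch below shows one simple way such a cue (e.g., "more to the left") could be applied as a directional correction to a continuous action. This is a hypothetical illustration of the general idea only; the function name, the unit-direction encoding of feedback, and the fixed step size are all assumptions, not the paper's actual update rule.

```python
import numpy as np

def apply_relative_feedback(action, feedback_direction, step_size=0.1):
    """Nudge a continuous action along a human-given direction cue.

    Illustrative sketch: the cue is encoded as a direction vector,
    normalized, and added to the action with a fixed step size.
    """
    action = np.asarray(action, dtype=float)
    direction = np.asarray(feedback_direction, dtype=float)
    norm = np.linalg.norm(direction)
    if norm == 0.0:
        return action  # no cue given, leave the action unchanged
    return action + step_size * direction / norm

# Example: a 2-D steering action, with the cue "more to the left" (-x).
adjusted = apply_relative_feedback([0.5, 0.2], [-1.0, 0.0], step_size=0.1)
# adjusted is [0.4, 0.2]
```

Corrections of this kind could then bias the exploration of an off-policy learner toward regions the human indicates, which is the intuition behind using relative feedback to improve sample efficiency in sparse-reward settings.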

BibTeX

@INPROCEEDINGS{11247163,
  author={Schiavi, Giulio and Cramariuc, Andrei and Ott, Lionel and Siegwart, Roland},
  booktitle={2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, 
  title={CueLearner: Bootstrapping and local policy adaptation from relative feedback}, 
  year={2025},
  volume={},
  number={},
  pages={7917-7924},
  keywords={Training;Adaptation models;Navigation;Annotations;Search methods;Reinforcement learning;Robot learning;Usability;Intelligent robots},
  doi={10.1109/IROS60139.2025.11247163}}