Giulio Schiavi

I am a PhD student at the Autonomous Systems Lab at ETH Zurich, working in the mobile manipulation group.

My research explores how robots can learn from and collaborate effectively with humans. I am particularly interested in how robots can leverage human guidance during training, and continue adapting during deployment through ongoing feedback. In practice, my work touches on many interesting areas of robotics, including reinforcement learning with human feedback, continual learning, and learning from demonstrations.


Education
  • ETH Zurich
    PhD in Robotics
    2023-present
  • ETH Zurich
    MSc Robotics, Systems and Control
    5.93 / 6.0, graduated with distinction
    2019-2022
  • Politecnico di Milano
    BSc Mechanical Engineering
    110 / 110
    2016-2019
Honors & Awards
  • ICRA 2024 Best Conference Paper Award
    2024
  • ETH Medal for MSc Thesis
    2022
  • Politecnico di Milano: Scholarship for particularly deserving off-campus students
    2018
  • Politecnico di Milano: Best freshman of 2016 award
    2016
Selected Publications
CueLearner: Bootstrapping and Local Policy Adaptation from Relative Feedback

Giulio Schiavi, Andrei Cramariuc, Lionel Ott, Roland Siegwart

To appear in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2025

We propose a novel method to integrate relative human feedback (e.g. "throw the ball more to the left") with off-policy reinforcement learning. We show that our method can be used to bootstrap reinforcement learning in sparse-reward scenarios, and to adapt a policy a posteriori to new environment constraints or user preferences.

Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Open X-Embodiment Collaboration

IEEE International Conference on Robotics and Automation (ICRA) 2024

As part of a large multi-institution collaboration, I contributed and cleaned data for a dataset spanning 22 robots, 21 institutions, and 527 skills, which was used to train the high-capacity RT-X model. The resulting generalist policy transfers capabilities across embodiments, improving downstream robots by leveraging demonstrations collected on other platforms.

Learning Agent-Aware Affordances for Closed-Loop Interaction with Articulated Objects

Giulio Schiavi*, Paula Wulkop*, Giuseppe Rizzi, Lionel Ott, Roland Siegwart, Jen Jen Chung (* equal contribution)

IEEE International Conference on Robotics and Automation (ICRA) 2023

We propose a closed-loop manipulation pipeline that combines agent-aware affordance prediction with sampling-based whole-body control to tackle interactions with articulated objects. Conditioning affordances on the full embodiment lets the robot recover from failures and execute multi-stage tasks such as opening and closing household appliances with significantly higher success rates.
