Xiao Liu (Leo)

Arizona State University | Honda Research Institute


I am currently a Postdoctoral Scientist in the Physical Interaction Group at Honda Research Institute - US, where we are developing AI-assisted robotic systems. My research focuses on Robot Learning, Representation Learning, and their applications in Embodied AI and Human-Robot Interaction (HRI). My work has been presented at conferences such as CoRL, ICRA, and IROS. I earned my Ph.D. in Computer Science from the School of Computing and Augmented Intelligence at Arizona State University, supervised by Prof. Heni Ben Amor, and completed my Master’s at Case Western Reserve University under the guidance of Prof. Kiju Lee. Please feel free to review my curriculum vitae for further details.


Work Experiences

Postdoctoral Scientist, Honda Research Institute

HRI-US | 2024 - Present

Develop and improve AI-assisted functions for a dexterous teleoperation avatar through robot learning.

Research Associate, Interactive Robotics Lab

ASU IRL | 2020 - 2024

Research topics: Robot Learning via Deep State-Space Modeling

  • Embodied AI: Proposed Diff-Control, an action diffusion policy that brings ControlNet from the domain of image generation to robot action generation.
  • Created a multimodal learning framework (α-MDF) using attention mechanisms and differentiable filtering, which performs state estimation in latent space with multiple modalities. Experimented on real-world tasks and validated the system on both rigid-body robots and soft robots.
  • Developed a differentiable Ensemble Kalman Filter framework incorporating algorithmic priors for robot imitation learning, e.g., learning system dynamics, handling missing observations, and learning representations in high-dimensional spaces.
  • Deployed the differentiable filtering framework with a smartwatch for ubiquitous robot control tasks, e.g., remote teleoperation and drone piloting.

Research Engineer (part-time), RadiusAI

RadiusAI | 2020 - 2024

Worked on multiple projects spanning computer vision, differentiable filters, and optimization.

  • Developed and refined multi-object tracking (MOT) algorithms using Bayes filters for video analytics on indoor and outdoor cameras.
  • Developed monocular depth prediction models with advanced architectures, including Vision Transformers (ViT) and multi-scale local planar guidance (LPG) blocks.
  • Developed a multi-objective optimization technique based on the Frank-Wolfe algorithm for training across multiple datasets.
  • The proposed depth model achieves 0.117 Abs Rel, 0.416 RMS, and 0.868 on the d1 metric on the NYU Depth test set.

Research Assistant, CWRU

DIRL (now ART) | 2017 - 2019

Led the social robot projects “Woody” and “Philos,” collaborated as a research assistant on the “e-Cube” project for human cognitive skill assessment, and developed computer vision algorithms for a broad range of applications.

  • Real-time human facial emotion recognition for human-robot interaction using deep learning and machine learning techniques; featured on Case Western Daily
  • Human-centered biomedical device “e-Cube” for cognitive skills assessment
  • Developed the social robots “Philos” and “Woody,” from kinematics to high-level control; featured on Ideastream

Publications, Presentations, …

Papers
Woody: Low-Cost Open-source Humanoid Torso Robot
Daniel Hayosh, Xiao Liu, Kiju Lee
IEEE 17th International Conference on Ubiquitous Robots (UR), pp. 247–252, 2020
Project Page / Paper
Presentations

Xiao Liu, Xiangyi Cheng, Kiju Lee. “e-Cube: Vision-based Interactive Block Games for Assessing Cognitive Skills: Design and Preliminary Evaluation,” CWRU ShowCASE.

Xiao Liu, Daniel Hayosh, Kiju Lee. “Woody: A New Prototype of Social Robotic Platform Integrated with Facial Emotion Recognition for Real-time Application,” CWRU ShowCASE.