I am a Ph.D. student in Computer Science at the National University of Singapore, advised by Prof. David Hsu. My research primarily focuses on developing reasoning algorithms and interactive systems for robots to adaptively perform assistive tasks in dynamic and open human-centered environments.
Previously, I worked as a Research Assistant with Prof. Max Q.-H. Meng. I received my B.E. in Automation from the Harbin Institute of Technology, Shenzhen in 2021. During my junior year, I was a visiting student at UC Berkeley, working with Prof. Koushil Sreenath.
I love playing basketball 🏀 and table tennis 🏓 in my free time. I am also open to collaborating with people to explore the possibilities of robotics in various fields (e.g., HCI, IoT, materials). Additionally, we always welcome undergraduate students who are interested in robotics and have strong relevant experience to join our research projects at NUS or Tsinghua SIGS.
CV   /   Email   /   Google Scholar   /   Github   /   Twitter   /   Personality
Robotic Autonomous Trolley Collection with Progressive Perception and Nonlinear Model Predictive Control
Anxing Xiao*, Hao Luan*, Ziqi Zhao*, Yue Hong, Jieting Zhao, Jiankun Wang, Max Q.-H. Meng
IEEE International Conference on Robotics and Automation (ICRA) 2022.
arXiv / Video
PUTN: A Plane-fitting based Uneven Terrain Navigation Framework
Zhuozhu Jian, Zihong Lu, Xiao Zhou, Bin Lan, Anxing Xiao, Xueqian Wang, Bin Liang
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2022.
arXiv / Video / Code
Robotic Guide Dog: Leading Human with Leash-Guided Hybrid Physical Interaction
Anxing Xiao*, Wenzhe Tong*, Lizhi Yang*, Jun Zeng, Zhongyu Li and Koushil Sreenath
IEEE International Conference on Robotics and Automation (ICRA) 2021.
This paper was a finalist for the ICRA Best Paper Award in Service Robotics.
arXiv / Video
Media: New Scientist / Daily Mail / The Independent / Tech Xplore / Daily Californian
Autonomous Navigation with Optimized Jumping through Constrained Obstacles on Quadrupeds
Scott Gilroy, Derek Lau, Lizhi Yang, Ed Izaguirre, Kristen Biermayer, Anxing Xiao, Mengti Sun, Ayush Agrawal, Jun Zeng, Zhongyu Li, Koushil Sreenath
IEEE International Conference on Automation Science and Engineering (CASE) 2021.
arXiv / Video
* co-first author
First Author | Mobile Manipulation Workshop @ ICRA 2024
In this paper, we introduce Robi Butler, a novel household robotic system that enables multimodal interaction with the user.
Leveraging its communication interfaces, Robi Butler lets users monitor the robot's status, give text or voice instructions, and select target objects by hand pointing.
At the core of the system are a high-level behavior module, powered by Large Language Models (LLMs), that interprets the received multimodal instructions to generate plans, and open-vocabulary primitives, supported by Vision-Language Models (VLMs), that execute the planned actions from text and pointing queries.
Integrating these components allows Robi Butler to ground remote multimodal instructions in real-world home environments in a zero-shot manner.
Paper
Video
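For readers who want a concrete picture of this kind of pipeline, here is a minimal sketch of an LLM planner dispatching VLM-grounded primitives. The function names, the toy plan, and the grounding stub are placeholders of mine, not Robi Butler's actual implementation or APIs.

```python
# Minimal, hypothetical sketch of an LLM-planner / VLM-primitive loop.
# The function names and the toy plan are placeholders, not Robi Butler's API.

def call_llm(prompt: str) -> list[str]:
    # Placeholder for the high-level behavior module: a real system would query an
    # LLM to decompose the multimodal instruction into open-vocabulary primitives.
    return ["goto('kitchen')", "pick('the red mug')", "place('dining table')"]

def ground_with_vlm(primitive: str, pointing_hint: str | None) -> bool:
    # Placeholder for VLM-supported grounding of text / pointing queries.
    return True

def run(instruction: str, pointing_hint: str | None = None) -> None:
    plan = call_llm(f"Decompose into primitives: {instruction} (pointed: {pointing_hint})")
    for step in plan:
        if not ground_with_vlm(step, pointing_hint):
            print(f"Grounding failed at {step}; replan or ask the user for clarification.")
            return
        print(f"Executed: {step}")

run("Bring me the mug I'm pointing at", pointing_hint="image crop of pointed mug")
```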
Collaboration | Accepted by RSS 2024
In this work, we investigate combining tactile perception with language, which enables embodied systems to obtain physical properties through interaction and apply common-sense reasoning. We contribute a new dataset PHYSICLEAR, which comprises both physical/property reasoning tasks and annotated tactile videos obtained using a GelSight tactile sensor. We then introduce OCTOPI, a system that leverages both tactile representation learning and large vision-language models to predict and reason about tactile inputs with minimal language fine-tuning. Our evaluations on PHYSICLEAR show that OCTOPI is able to effectively use intermediate physical property predictions to improve physical reasoning in both trained tasks and for zero-shot reasoning.
Paper
Website
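Read loosely, the pipeline is: encode the tactile video, predict intermediate physical properties, then reason over them in language. The stub below sketches that flow under my own simplifying assumptions; the function names and toy values are placeholders, not OCTOPI's code or API.

```python
# Placeholder sketch of a "predict properties, then reason in language" flow;
# the encoders and the language-model call are stubs, not OCTOPI's implementation.

def encode_tactile_video(frames) -> list[float]:
    # Stub for a learned tactile representation (e.g., from GelSight frames).
    return [0.1, 0.7, 0.3]

def predict_properties(embedding) -> dict[str, str]:
    # Stub: map the tactile embedding to coarse physical property labels.
    return {"hardness": "soft", "roughness": "smooth", "bumpiness": "low"}

def reason_with_llm(question: str, properties: dict[str, str]) -> str:
    # Stub: a real system would prompt a vision-language model with the predicted
    # properties as intermediate evidence before answering the question.
    return f"Given {properties}, the object is likely safe to squeeze."

props = predict_properties(encode_tactile_video(frames=["frame0", "frame1"]))
print(reason_with_llm("Is this object safe to squeeze?", props))
```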
Collaboration | Preprint
We propose a novel, expandable state representation that continuously expands and updates object attributes by drawing on the language model's inherent capabilities for context understanding and historical action reasoning. The proposed representation maintains a comprehensive record of an object's attributes and changes, enabling a robust retrospective summary of the sequence of actions leading to the current state. We validate our model through experiments across simulated and real-world task planning scenarios, demonstrating significant improvements over baseline methods on a variety of tasks requiring long-horizon state tracking and reasoning.
Paper
Video
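A toy sketch of what such an expandable object-state record could look like, under my own simplified reading: each object keeps an open-ended attribute dictionary plus an action history, with the LLM update step stubbed out. This is illustrative only, not the paper's data structure.

```python
# Toy sketch of an expandable object-centric state representation; the schema and
# the stubbed LLM update are illustrative assumptions, not the paper's implementation.

class ObjectState:
    def __init__(self, name: str):
        self.name = name
        self.attributes: dict[str, str] = {}   # open-ended, expands over time
        self.history: list[str] = []           # actions that touched this object

    def apply_action(self, action: str) -> None:
        self.history.append(action)
        # Stub: a real system would prompt an LLM with the action and the current
        # attributes and parse the proposed attribute updates from its reply.
        if "open" in action:
            self.attributes["is_open"] = "true"
        if "fill" in action:
            self.attributes["contains"] = "water"

    def summary(self) -> str:
        # Retrospective summary of how the object reached its current state.
        return f"{self.name}: {self.attributes} after {' -> '.join(self.history)}"

world = {"kettle": ObjectState("kettle")}
world["kettle"].apply_action("open kettle lid")
world["kettle"].apply_action("fill kettle with water")
print(world["kettle"].summary())
```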
Research Mentor | IROS 2023
This paper presents an autonomous nonholonomic multi-robot system and a hierarchical autonomy framework for collaborative luggage trolley transportation. The framework finds kinematically feasible paths, computes online motion plans, and provides feedback that enables the multi-robot system to handle long lines of luggage trolleys and to navigate around obstacles and pedestrians while dealing with multiple inherently complex and coupled constraints. We demonstrate the designed collaborative trolley transportation system through practical transportation tasks in complex and dynamic environments.
Paper
Video
Research Mentor | ICRA 2023
We propose a novel guidance robot system built around a comfort-based concept.
To guide humans safely and more comfortably to the target position in complex environments, the proposed force planner plans the forces experienced by the human using a force-based human motion model, and the proposed motion planner generates the corresponding motion commands for the robot and its controllable leash to track the planned force.
The system has been deployed on a Unitree Laikago quadrupedal platform and validated in real-world scenarios.
Paper
Video
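As a rough, simplified illustration of the force-based idea, the snippet below rolls out a point-mass human model under a guiding force capped at a comfort limit. The model, parameters, and planning rule are my own stand-ins, not the paper's planner.

```python
# Simplified sketch of a force-based human motion model with a comfort-limited
# guiding force; parameters and the planning rule are illustrative assumptions.
import numpy as np

def plan_guidance(start, goal, f_max=15.0, damping=8.0, mass=60.0, dt=0.05, steps=400):
    """Roll out a point-mass 'human' pulled toward the goal by a bounded leash force."""
    pos, vel = np.array(start, float), np.zeros(2)
    trajectory, forces = [pos.copy()], []
    for _ in range(steps):
        direction = goal - pos
        dist = np.linalg.norm(direction)
        if dist < 0.05:
            break
        # Planned force: pull toward the goal, capped at a comfort threshold f_max.
        force = min(f_max, 30.0 * dist) * direction / dist
        # Force-based human motion model: mass-damper point particle.
        acc = (force - damping * vel) / mass
        vel += acc * dt
        pos += vel * dt
        trajectory.append(pos.copy())
        forces.append(np.linalg.norm(force))
    return np.array(trajectory), np.array(forces)

traj, f = plan_guidance(start=(0.0, 0.0), goal=np.array([4.0, 2.0]))
print(f"Reached {traj[-1].round(2)} in {len(traj)} steps, peak force {f.max():.1f} N")
```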
First Author | ICRA 2022
We propose a novel mobile manipulation system with applications in luggage trolley collection.
The system integrates a compact hardware design, a progressive perception strategy, and an MPC-based planning framework, enabling it to collect trolleys efficiently and robustly in dynamic and complex environments.
We demonstrate the design and framework by deploying the system on actual trolley collection tasks, experimentally validating their effectiveness and robustness.
Paper
Video
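For intuition about the MPC-based planning component, here is a bare-bones receding-horizon sketch on a unicycle model using a generic optimizer. The dynamics, cost weights, and target pose are illustrative assumptions, not the paper's NMPC formulation.

```python
# Bare-bones receding-horizon MPC sketch for approaching a target position with a
# unicycle model; cost weights and dynamics are illustrative, not the paper's.
import numpy as np
from scipy.optimize import minimize

DT, H = 0.2, 10                        # time step and horizon length
TARGET = np.array([2.0, 1.0])          # trolley position (hypothetical)

def rollout(state, controls):
    x, y, th = state
    traj = []
    for v, w in controls.reshape(H, 2):
        x += v * np.cos(th) * DT
        y += v * np.sin(th) * DT
        th += w * DT
        traj.append((x, y))
    return np.array(traj)

def cost(controls, state):
    traj = rollout(state, controls)
    goal_cost = np.sum((traj - TARGET) ** 2)      # track the trolley
    effort_cost = 0.1 * np.sum(controls ** 2)     # prefer small, smooth inputs
    return goal_cost + effort_cost

state = np.array([0.0, 0.0, 0.0])
u0 = np.zeros(2 * H)
bounds = [(-0.8, 0.8), (-1.5, 1.5)] * H           # v and omega limits per step
for step in range(20):                            # receding-horizon loop
    sol = minimize(cost, u0, args=(state,), bounds=bounds, method="L-BFGS-B")
    v, w = sol.x[:2]                              # apply only the first input
    state += np.array([v * np.cos(state[2]) * DT, v * np.sin(state[2]) * DT, w * DT])
    u0 = np.roll(sol.x, -2)                       # warm-start the next solve
print("Final base position:", state[:2].round(2))
```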
First Author | ICRA 2021
We propose a hybrid physical Human-Robot Interaction model that incorporates leash tension to describe the dynamic relationship between the guiding robot and the human. This hybrid model is used in a mixed-integer programming problem to develop a reactive planner that exploits slack-taut switching to guide a blindfolded person safely through a confined space.
The proposed leash-guided robot framework is deployed on a Mini Cheetah quadrupedal robot and validated in experiments.
Paper
Video
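To give a flavor of the slack-taut hybrid idea, the toy script below simulates the two leash modes and brute-forces over short mode sequences. This deliberately replaces the paper's mixed-integer program with naive enumeration, and every number in it is made up for illustration.

```python
# Toy illustration of slack-taut switching: the leash applies tension only when
# taut, and we enumerate short mode sequences instead of solving the paper's MIP.
import itertools
import numpy as np

DT = 0.2
GOAL = np.array([3.0, 0.0])

def step(human, robot, taut):
    """Hybrid dynamics: when taut, the leash pulls the human toward the robot."""
    if taut:
        direction = robot - human
        dist = np.linalg.norm(direction)
        if dist > 1e-6:
            human = human + 0.6 * DT * direction / dist   # tension pulls the human
    return human

def plan(human, robot, horizon=6):
    """Pick the slack/taut sequence (with a fixed robot motion) minimizing goal distance."""
    best_seq, best_cost = None, np.inf
    for seq in itertools.product([0, 1], repeat=horizon):   # 2^horizon mode sequences
        h, r = human.copy(), robot.copy()
        for taut in seq:
            r = r + np.array([0.3 * DT, 0.0])               # robot walks toward the goal
            h = step(h, r, taut)
        cost = np.linalg.norm(h - GOAL) + 0.05 * sum(seq)   # prefer fewer taut phases
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost

seq, cost = plan(np.array([0.0, 0.5]), np.array([0.5, 0.0]))
print("Best slack(0)/taut(1) sequence:", seq, "cost:", round(cost, 3))
```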
The best way to reach me is via email:
anxingxiao [at] gmail.com (Primary)
anxingxiao [at] u.nus.edu (Academic)
anxingx [at] comp.nus.edu.sg (Research)
You can often find me at:
Innovation 4.0, #06-01C, 3 Research Link, Singapore 117602
NUS CS5340 Uncertainty Modelling in AI
Utilized Markov Random Fields (MRFs) to denoise forward-looking sonar (FLS) images and Bayesian optimization to estimate odometry; implemented the Monte Carlo Localization algorithm for Autonomous Underwater Vehicles (AUVs) on data collected from vehicles operating in open water.
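For context, the core Monte Carlo Localization loop is predict, weight, resample. The sketch below shows a generic version with a stand-in motion and measurement model, not the AUV-specific models from the course project.

```python
# Generic Monte Carlo Localization sketch: predict / weight / resample.
# The motion and measurement models are simple stand-ins, not the AUV project's code.
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform(low=[0.0, 0.0], high=[10.0, 10.0], size=(N, 2))  # pose hypotheses

def predict(particles, odom, noise=0.1):
    # Propagate every particle with the odometry increment plus noise.
    return particles + odom + rng.normal(scale=noise, size=particles.shape)

def weight(particles, pos_fix, sigma=0.5):
    # Likelihood of a noisy position fix (stand-in for the real sensor models).
    d2 = np.sum((particles - pos_fix) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / sigma**2) + 1e-12
    return w / w.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

true_pose = np.array([2.0, 3.0])
for _ in range(30):
    odom = np.array([0.2, 0.1])                      # commanded motion this step
    true_pose = true_pose + odom
    particles = predict(particles, odom)
    pos_fix = true_pose + rng.normal(scale=0.3, size=2)
    particles = resample(particles, weight(particles, pos_fix))
print("Estimate:", particles.mean(axis=0).round(2), "True pose:", true_pose.round(2))
```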
HIT Auto2012 Introduction to Machine Learning
Designed training pipelines to solve reach-avoid games using the Soft Actor-Critic (SAC) algorithm;
The model was trained in Robotarium simulations and transferred to real-world experiments;
The learned policy performed better than the baseline MPC method and a human policy in both defense and attack tasks.
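A minimal reach-avoid setup in that spirit could look like the sketch below, which defines a toy Gymnasium environment and trains it with the off-the-shelf SAC from stable-baselines3. This is a generic illustration assuming those libraries, not the Robotarium pipeline used in the project.

```python
# Minimal reach-avoid environment plus an off-the-shelf SAC trainer; a generic
# illustration assuming gymnasium and stable-baselines3, not the project pipeline.
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import SAC

class ReachAvoidEnv(gym.Env):
    """Point agent must reach a goal region while avoiding a circular 'defender' region."""
    def __init__(self):
        self.observation_space = spaces.Box(low=-2.0, high=2.0, shape=(2,), dtype=np.float32)
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
        self.goal, self.avoid, self.avoid_r = np.array([1.5, 1.5]), np.array([0.7, 0.7]), 0.4

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = np.zeros(2, dtype=np.float32)
        self.t = 0
        return self.pos.copy(), {}

    def step(self, action):
        self.t += 1
        self.pos = np.clip(self.pos + 0.1 * np.asarray(action), -2.0, 2.0).astype(np.float32)
        d_goal = float(np.linalg.norm(self.pos - self.goal))
        in_avoid = bool(np.linalg.norm(self.pos - self.avoid) < self.avoid_r)
        reward = -d_goal - (10.0 if in_avoid else 0.0)   # reach term plus avoid penalty
        terminated = d_goal < 0.1 or in_avoid
        truncated = self.t >= 200
        return self.pos.copy(), float(reward), bool(terminated), bool(truncated), {}

model = SAC("MlpPolicy", ReachAvoidEnv(), verbose=0)
model.learn(total_timesteps=2_000)   # short run just to exercise the pipeline
```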
Berkeley ME102B Mechatronics Design
Completed 3D model design in SolidWorks and manufactured parts of the scanner by 3D printing and laser cutting;
Integrated electronic components to achieve autonomous page turning and scanning;
Processed the scanned page images with perspective transformation and adaptive thresholding using OpenCV.
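A condensed version of that OpenCV post-processing step might look like the following; the corner coordinates, output size, and threshold parameters are placeholders for whatever the page detector actually produced.

```python
# Condensed sketch of the scanner post-processing: perspective rectification
# followed by adaptive thresholding. Corner coordinates are placeholders.
import cv2
import numpy as np

def rectify_and_binarize(image, page_corners, out_w=800, out_h=1100):
    """Warp the detected page to a flat rectangle, then binarize it."""
    src = np.array(page_corners, dtype=np.float32)            # 4 corners: TL, TR, BR, BL
    dst = np.array([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(src, dst)                  # 3x3 homography
    flat = cv2.warpPerspective(image, M, (out_w, out_h))
    gray = cv2.cvtColor(flat, cv2.COLOR_BGR2GRAY)
    # Adaptive thresholding handles uneven lighting across the page.
    return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, blockSize=31, C=10)

# Example usage with a synthetic image and made-up corner points:
page = np.full((480, 640, 3), 200, dtype=np.uint8)
corners = [(80, 40), (560, 60), (540, 440), (60, 420)]
scan = rectify_and_binarize(page, corners)
print("Binarized scan shape:", scan.shape)
```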