RSS 2025 WORKSHOP

LARGE FOUNDATION MODEL FOR INTERACTIVE ROBOT LEARNING




​ROOM SGM123
June 25, 2025 (Wed)
Description
Large foundation models (LFMs), spanning language, vision, and multimodal systems, are revolutionizing interactive robot learning. These models enable robots to learn from human interactions, environments, and other robots, advancing capabilities such as path planning, grasping, walking, swarm coordination, and manipulation. LFMs hold the potential to enhance robot autonomy in dynamic and unstructured environments, making significant strides in applications such as autonomous vehicles, AI-driven manufacturing, healthcare robotics, and advanced air mobility. This workshop will focus on cutting-edge research that explores the integration of LFMs into robotic systems, addressing both their potential and the challenges they pose. 


Discussion Topics
  • Enhancing robot learning from multimodal data using LFMs. 
  • LFM applications in path planning, manipulation, and swarm robotics. 
  • Multi-robot coordination and advanced teaming strategies. 
  • LFMs used in real-world applications (self-driving, manufacturing, and healthcare). 
  • Integration of LFMs with robotic hardware for perception and control. 
  • Ethical and safety concerns in deploying LFMs in critical environments. 

Important Dates
Submission Deadline: May 31, 2025
Notification Date: June 4, 2025


Submission Contents and Format
This workshop welcomes research reports on the latest preliminary results, recently submitted drafts, and recently published work. The page limit is two pages, including references. Submissions should use the standard IEEE conference format. Reviews will be conducted by the organizing committee as well as domain scholars. Please email your draft to [email protected] with the email subject "submission for RSS workshop". After committee review, authors will be notified by the notification date.

Talk & Poster Schedule
Highlights:
  • Time: 8:20 - 18:30
  • 16 invited talks (20 mins per talk, including Q&A), grouped into 5 topics (LLM for Robot Reasoning, Human Engagement, Scalable Collaboration, Interaction, and Robot Learning)
  • 2 Spotlight Talk Sessions (40 mins per session, including Q&A)
  • 2 Coffee Break & Poster Sessions (30 mins per session, immediately after the spotlight talk sessions)
  • 1 Panel Discussion (50 mins, 17:30 - 18:20)
  • Others: Lunch break (80 mins), Opening/Closing Remarks (10 mins each)

Detailed Schedule:
8:20 - 8:30 Opening Remarks 
8:30 - 9:50 Group 1: LLM for Robot Reasoning
8:30 - 8:50 Lawson Wong (Northeastern Univ) - Sense-Plan-Act with Foundation Models
8:50 - 9:10 Wen Sun (Cornell) - Imitation Learning and Reinforcement Learning via Diffusion Models
9:10 - 9:30 Yilun Du (Harvard) - Learning Compositional Models of the World
9:30 - 9:50 Sergey Levine (UC Berkeley) - Robotic Foundation Models

9:50 - 10:30 Spotlight Session I (5 minutes each)
1. ActionRAG: Generalizing Instruction-to-Code via Action Graph Retrieval for Robotic Manipulation
2. QA-VLM: Providing Human-Interpretable Vision-based Quality Assessment for Wire-Feed Laser Additive Manufacturing with Vision-Language Models
3. DEXOS: Scaling Data Collection of Dexterous Manipulation for Robotic Foundation Model
4. LaMMA-P: Generalizable Multi-Agent Long-Horizon Task Allocation and Planning with LM-Driven PDDL Planner
5. ReWiND: Language-Guided Rewards Teach Robot Policies without New Demonstrations
6. Large Language Model Driven Situation Awareness Assessment in Human-Swarm Systems
7. Self-Guided Action Diffusion
10:30 - 11:00 Coffee break, Poster Session I
11:00 - 11:40 Group 2: LLM for Human Engagement
11:00 - 11:20 Roberto Martin-Martin (UT Austin) - Adaptive Mixed-Initiative Language Interactions between Robots and Humans for Mobile Manipulation Collaborative Tasks
11:20 - 11:40 Jesse Thomason (USC) - Embracing Language as Grounded Communication
11:40 - 12:40 Group 3: LLM for Scalable Collaboration
11:40 - 12:00 Amanda Prorok (Cambridge) - Foundation Models for Multi-Robot Perception and Control
12:00 - 12:20 Fei Miao (UConn) - Multi-agent Reinforcement Learning and LLM for Embodied AI
12:20 - 12:40 Chuchu Fan (MIT) - Robot Planning with Natural Language Inputs and LLMs
12:40 - 14:00 Lunch break
14:00 - 15:20 Group 4: LLM for Interaction
14:00 - 14:20 Shuran Song (Stanford) - Generative Video Modeling for Robotics
14:20 - 14:40 Yunzhu Li (Columbia) - Foundation Models for Robotic Manipulation: Opportunities and Challenges
14:40 - 15:00 Beomjoon Kim (KAIST) - Hierarchical and Modular Neural Network for Manipulation Skill Discovery
15:00 - 15:20 Brian Ichter (Physical Intelligence) - Leveraging Language for Robotic Foundation Models
15:20 - 16:00 Spotlight Session II (5 minutes each)
1. CASPER: Inferring Diverse Intents for Assistive Teleoperation with Vision Language Models
2. Finding 3D Scene Analogies with Multimodal Foundation Models
3. CuriousBot: Interactive Mobile Exploration via Actionable 3D Relational Object Graph 
4. Deep Reactive Policy: Learning Reactive Manipulator Motion Planning for Dynamic Environments 
5. Enhancing Safety of Foundation Models for Visual Navigation through Collision Avoidance via Repulsive Estimation
6. Guiding Data Collection via Factored Scaling Curves
7. IMPACT: Intelligent Motion Planning with Acceptable Contact Trajectories via Vision-Language Models
8. Towards a VLM Benchmark for Simulated Robotics
16:00 - 16:30 Coffee break, Poster Session II
16:30 - 17:30 Group 5: LLM for Robot Learning
16:30 - 16:50 Deepak Pathak (CMU) - Recipes for Full-Stack Robot Learning at Scale
16:50 - 17:10 Jason Ma (UPenn) - Foundation Model Supervision for Robot Learning
17:10 - 17:30 Sebastian Sartor (CDTM&MIT) - Scaling Laws in Robotics: More Data, Bigger Models, Better Robots

17:30 - 18:20 Panel Discussion: Bridging Theory and Practice 
18:20 - 18:30 Closing Remarks

Invited Speakers 

Sergey Levine
UC Berkeley
https://rail.eecs.berkeley.edu/
Talk: "Responsive and Interactive Vision-Language-Action Models"
Abstract: Vision-language-action (VLA) models combine the generalization capabilities of web-scale pretraining from VLMs with the ability to control robots end to end. This not only enables robots to generalize more broadly, but it also makes it possible to use language interaction with humans as a source of supervision. In this talk, I'll discuss some recent progress on VLA policies that enables them to leverage natural verbal feedback and instructions from people to improve the robot's performance.
Bio: ​Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as applications in other decision-making domains. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.

Yilun Du
Harvard & Google DeepMind
https://yilundu.github.io/
Talk: "Learning Compositional Models of the World"
Abstract: I'll discuss how large pretrained video models can be useful for robotics. I'll first describe how they can be used as universal policies, enabling us to learn a model for many tasks. I'll then show how they can be used as planners, enabling us to synthesize long-horizon plans for new tasks. Finally, I'll discuss how they can be used with MPC, allowing us to improve the robustness of imitation learning policies.
Bio: Yilun Du is an Assistant Professor at Harvard University in the Kempner Institute and the Department of Computer Science. He received his Ph.D. from MIT EECS and was previously a senior research scientist at Google DeepMind; he has also worked at OpenAI and FAIR. His research focuses on generative models, decision-making, robot learning, and embodied agents, with a particular interest in developing intelligent embodied agents in the physical world.

Brian Ichter
Physical Intelligence
https://www.physicalintelligence.company/
Talk: "Large Language Models for Learning"
Abstract: This talk will explore language as a medium for robotic foundation models, from learning it to leveraging it. I'll start by describing what ingredients matter for learning to follow language commands. Then given this recipe, I'll describe how language can be used to interact with models, probe understanding, and inherit knowledge from web-scale data.
Bio: Brian Ichter is a co-founder of Physical Intelligence (Pi), a company focused on integrating general-purpose AI into the physical world. His interests lie in leveraging machine learning and large-scale models to enable robots to perform general tasks in real-world environments.

Chuchu Fan
MIT
https://aeroastro.mit.edu/realm/
"Robot Planning with Natural Language Inputs and LLMs"
Abstract: To facilitate human-robot collaboration, we want non-expert users to intuitively interact with teams of robots to accomplish complex, long-horizon tasks. Classic planning methods can find feasible, or even optimal, solutions to complex tasks; however, these methods require tasks to be specified in a formal representation. Alternatively, natural language is both highly expressive and intuitive but needs to be grounded for robots to understand and execute the commands correctly. In this talk, we introduce a framework that translates natural language task descriptions to formal intermediate representations, thereby bridging the gap between users and the power of existing planning and optimization methods. This translation-based framework for language-driven planning uses an autoregressive re-prompting technique to identify and correct errors made by LLMs or human users.
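To make the translate-then-correct idea above concrete, here is a minimal, hypothetical Python sketch of such a re-prompting loop. The llm_translate and check_spec functions are invented placeholders, not the framework described in the talk: a real system would call a language model and a formal-spec verifier or planner.

def llm_translate(task, feedback=""):
    # Placeholder: a real system would prompt an LLM with the task and any checker feedback.
    if not feedback:
        return "F (reach_goal & G (~collision)"   # deliberately malformed first attempt
    return "F (reach_goal) & G (~collision)"

def check_spec(spec):
    # Toy check; a real verifier would parse the spec and try to plan against it.
    return "" if spec.count("(") == spec.count(")") else "unbalanced parentheses"

def translate_with_reprompting(task, max_rounds=3):
    feedback = ""
    for _ in range(max_rounds):
        spec = llm_translate(task, feedback)
        feedback = check_spec(spec)
        if not feedback:            # spec passed the checker; hand it to a classic planner
            return spec
    raise RuntimeError("could not produce a valid spec: " + feedback)

print(translate_with_reprompting("visit the goal region while avoiding obstacles"))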
Bio: Dr. Chuchu Fan is a pre-tenure associate professor in AeroAstro and LIDS at MIT. Before that, she was a postdoctoral researcher at Caltech and received her Ph.D. from ECE at the University of Illinois at Urbana-Champaign. Her research group, Realm at MIT, works on using rigorous mathematics, including formal methods, machine learning, and control theory, for the design, analysis, and verification of safe autonomous systems. Chuchu is the recipient of an NSF CAREER Award, an AFOSR Young Investigator Program (YIP) Award, an ONR Young Investigator Program (YIP) Award, and the 2020 ACM Doctoral Dissertation Award.

Amanda Prorok
Cambridge
https://proroklab.org/
"Foundation Models for Multi-Robot Perception and Control"
Abstract: How are we to orchestrate large teams of robots? How do we distill global goals into local robot policies? And how do we seamlessly integrate human-led fleet control? Machine learning has revolutionized the way in which we address these questions by enabling us to automatically synthesize agent policies from high-level objectives. In this presentation, I describe how we leverage off-the-shelf foundation models, and how we embed them into our controllers for coordinated and cooperative robot behaviors. I present experimental results with mobile robots engaged in cooperative perception, formation control, and human-led path-finding.
Bio: Dr. Amanda Prorok is a Professor of Collective Intelligence and Robotics in the Department of Computer Science and Technology at the University of Cambridge, where she leads the Prorok Lab. Her research focuses on developing coordination strategies for multi-agent and multi-robot systems, integrating machine learning, planning, and control methodologies. Dr. Prorok has made significant contributions to the field, including pioneering methods for differentiable communication between learning agents and advancing algorithms for cooperative perception and coordinated path planning. Her work has broad applications in automated transport, logistics, and environmental monitoring.

Deepak Pathak
CMU
https://www.ri.cmu.edu/robotics-groups/pathak-research-group/
"Recipes for Full-Stack Robot Learning at Scale"
Abstract: Despite tremendous successes in language and vision, large-scale learning is yet to make a dent in robotics. The biggest reason is that, unlike vision or language, there is almost no robotics data on the internet! In this talk, I will outline the foundational principles and key challenges involved in constructing a general-purpose model for robotics. These include: (1) how to scale data from diverse sources, including simulation and videos; (2) how to address optimization issues in learning to act, which become more prominent at scale; and (3) the hardware-related challenges, including functionality and cost-efficiency, which must be resolved to enable scalable and democratized deployment of robotic systems.
Bio: Deepak Pathak is the Raj Reddy Assistant Professor in the School of Computer Science at Carnegie Mellon University and CEO/co-founder of Skild AI, which is developing an AI foundation model for robotics. He received his Ph.D. from UC Berkeley and his Bachelor's from IIT Kanpur with a Gold Medal in Computer Science. His research spans computer vision, machine learning, and robotics with the goal of building general-purpose embodied intelligence. He is a recipient of the Sloan Fellowship, the MIT TR35 Award, the Okawa Research Award, the IIT Kanpur Young Alumnus Award, Best Paper Awards at ICRA'24 and CoRL'22, and faculty awards from Google, Samsung, Sony, and GoodAI. Deepak's research has been featured in popular press outlets, including The Economist, The Wall Street Journal, Forbes, Quanta Magazine, Washington Post, CNET, Wired, and MIT Technology Review, among others. Earlier, he also co-founded VisageMap Inc., which was later acquired by FaceFirst Inc.
For more details: https://www.cs.cmu.edu/~dpathak/

Jason Ma
UPenn
https://jasonma2016.github.io/
Title: “Foundation Model Supervision for Robot Learning”
Abstract: The availability of internet-scale data has led to impressive large-scale AI models in various domains, such as vision and language. For learning robot skills, despite recent efforts in crowd-sourcing robot data, robot-specific datasets remain orders of magnitude smaller. Rather than focusing on scaling robot data, my research takes the alternative path of directly using available internet data and models as supervision for robots -- in particular, learning general feedback models for robot actions. I will discuss two complementary approaches in this talk. First, I will present a novel reinforcement learning algorithm that can directly use in-the-wild human videos to learn value functions, producing zero-shot dense rewards for manipulation tasks specified in images and text. Second, I will demonstrate how grounding large language model code search with simulator feedback enables automated reward design for sim-to-real transfer of complex robot skills, such as a quadruped robot dog balancing on a yoga ball.
Bio: Jason Ma is co-founder at Dyna Robotics. He received his PhD from University of Pennsylvania. His research has been recognized with honors such as RSS Pioneers, Apple and OpenAI Fellowship, Best Paper Finalist at ICRA 2024, Top 10 NVIDIA Research Projects of 2023, and covered by popular media such as the Economist, Fox, Yahoo, and TechCrunch.


Fei Miao
UConn
http://feimiao.org/
"Multi-agent reinforcement learning and LLM for Embodied AI"
Abstract: With the rapid evolution of sensing, communication, and computation, integrating learning and control presents significant Embodied AI opportunities. However, current decision-making frameworks lack a comprehensive understanding of the relationship between uncertainty quantification and robust decision-making for multi-agent systems in complex environments. To address these challenges, we first design an uncertainty quantification method for collaborative perception and trajectory prediction models. Building upon this, we develop a safe and robust deep multi-agent reinforcement learning (MARL) framework that leverages control theory to address the trade-off between safety guarantees and the efficiency of multi-agent systems. Additionally, we present our theoretical analysis of robust MARL methods under state uncertainties, such as uncertainty in the perception modules or worst-case adversarial state perturbations. We validate the benefits of robust MARL with safety guarantees for multi-agent systems in the context of connected and autonomous vehicles, especially in challenging mixed-traffic scenarios. In the second part of the talk, we briefly outline our recent exploration of “YOLO-MARL: You Only LLM Once for MARL”, a novel framework that leverages the high-level task planning capabilities of LLMs to improve the policy learning process of multiple agents in cooperative games.
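As a rough illustration of the "LLM once" idea (a hypothetical sketch, not the YOLO-MARL implementation), the snippet below queries a planner stub a single time before training and reuses the returned per-agent subgoals as a small reward-shaping bonus during MARL rollouts; llm_plan and shaping_bonus are invented placeholder names.

def llm_plan(task, agents):
    # Placeholder for a single LLM call that returns one high-level subgoal per agent.
    return {agent: "cover region %d" % i for i, agent in enumerate(agents)}

def shaping_bonus(agent, observation, plan):
    # Toy shaping term: small bonus when the agent's observation matches its assigned subgoal.
    return 0.1 if observation.get("region") == plan[agent] else 0.0

agents = ["agent_0", "agent_1"]
plan = llm_plan("cooperative coverage", agents)            # the LLM is queried only once
observations = {
    "agent_0": {"region": "cover region 0"},
    "agent_1": {"region": "cover region 5"},
}
for agent in agents:
    env_reward = 1.0                                       # reward from the environment
    total_reward = env_reward + shaping_bonus(agent, observations[agent], plan)
    print(agent, total_reward)                             # fed to the MARL learner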
Bio: Dr. Fei Miao is the Pratt & Whitney Associate Professor in the School of Computing at the University of Connecticut, with a courtesy appointment in Electrical & Computer Engineering. She leads research at the intersection of control theory, machine learning, and game theory, focusing on the safety, efficiency, and security of cyber-physical systems, particularly in autonomous vehicles and intelligent transportation. 

Wen Sun
Cornell 
https://wensun.github.io/
Title: “Scaling Offline Reinforcement Learning at Test-time”
Abstract: Diffusion and flow models have emerged as powerful generative approaches capable of modeling diverse and multimodal behavior. However, applying these models to offline reinforcement learning (RL) remains challenging due to the iterative nature of their noise sampling processes, making policy optimization difficult. We introduce Scalable Offline Reinforcement Learning (SORL), a new offline RL algorithm that leverages shortcut models—a novel class of generative models—to scale both training and inference (test time). SORL’s policy can capture complex data distributions and can be trained simply and efficiently in a one-stage training procedure. At test time, SORL introduces both sequential and parallel test-time scaling by using the learned Q-function as a verifier. We demonstrate that SORL achieves strong performance across a range of offline RL tasks and exhibits positive scaling behavior with increased test-time compute. 
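As a toy illustration of using a learned Q-function as a test-time verifier (a sketch under assumed interfaces, not the SORL code), the snippet below samples several candidate actions from a stand-in generative policy and keeps the one the Q-function scores highest; more samples correspond to more test-time compute.

import numpy as np

rng = np.random.default_rng(0)

def sample_policy(state, n):
    # Stand-in for a shortcut/diffusion policy: draw n candidate 2-D actions.
    return rng.normal(loc=0.0, scale=1.0, size=(n, 2))

def q_value(state, action):
    # Stand-in for a learned Q(s, a); here just an arbitrary quadratic score.
    return -float(np.sum((action - state[:2]) ** 2))

def best_of_n(state, n=16):
    candidates = sample_policy(state, n)                   # parallel samples
    scores = [q_value(state, a) for a in candidates]       # verifier scores each candidate
    return candidates[int(np.argmax(scores))]              # keep the highest-scoring action

state = np.array([0.3, -0.1, 0.0])
print(best_of_n(state, n=16))                              # larger n = more test-time compute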
Bio: Dr. Wen Sun is an Assistant Professor in the Computer Science Department at Cornell University, leading the Reinforcement Learning group. His research focuses on developing novel reinforcement learning algorithms with applications in real-world problems. Dr. Sun completed his Ph.D. at Carnegie Mellon University's Robotics Institute and was a postdoctoral researcher at Microsoft Research NYC.

Shuran Song
Stanford
https://real.stanford.edu/
Title: “Generative Video Modeling for Robotics”
Abstract: Video models offer a promising foundation for robot learning by capturing rich real-world dynamics and scaling with massive online datasets, making them a compelling alternative to vision-language models. However, extracting precise and executable robot actions from generated videos remains difficult, as even small inaccuracies can lead to physical failure. In this talk, I’ll introduce the Unified Video Action Model (UVA), which aims to combine the scalability and intuition of video models with the precision and efficiency required for real-world robot control.
Bio: Dr. Shuran Song is an Assistant Professor of Electrical Engineering at Stanford University, where she leads the Robotics and Embodied AI Lab (REAL). Her research lies at the intersection of computer vision and robotics, aiming to develop algorithms that enable intelligent systems to learn from interactions with the physical world.​

Yunzhu Li
Columbia
https://yunzhuli.github.io/
Title: "Foundation Models for Robotic Manipulation: Opportunities and Challenges"
Abstract: Foundation models, such as GPT, have marked significant achievements in the fields of natural language and vision, demonstrating exceptional abilities to adapt to new tasks and scenarios. However, physical interaction—such as cooking, cleaning, or caregiving—remains a frontier where foundation models and robotic systems have yet to achieve the desired level of adaptability and generalization. In this talk, I will discuss the opportunities for incorporating foundation models into classic robotic pipelines to endow robots with capabilities beyond those achievable with traditional robotic tools. The talk will focus on two key improvements in (1) task specification and (2) task-level planning. The central idea behind this research is to translate the commonsense knowledge embedded in foundation models into structural priors that can be integrated into robot learning systems. This approach leverages the strengths of different modules (e.g., VLM for task interpretation and constrained optimization for motion planning), achieving the best of both worlds. I will demonstrate how such integration enables robots to interpret instructions provided in free-form natural language to handle a wide range of real-world manipulation tasks. Toward the end of the talk, I will discuss the limitations of the current foundation models, challenges that still lie ahead, and potential avenues to address these challenges.
Bio: Dr. Yunzhu Li is an Assistant Professor of Computer Science at Columbia University. His research focuses on incorporating foundation models into robotic manipulation, aiming to enhance task specification and scene modeling in robotics. Dr. Li completed his Ph.D. at MIT and was a postdoctoral researcher at Stanford's Vision and Learning Lab.

Jesse Thomason
USC
https://glamor-usc.github.io/
Title: "Embracing Language as Grounded Communication"
Abstract: Language is not text data; it is a human medium for communication. The larger part of the natural language processing (NLP) community has doubled down on treating digital text as a sufficient approximation of language, scaling datasets and corresponding models to fit that text. I have argued that experience in the world grounds language, tying it to objects, actions, and concepts. In fact, I believe that language carries meaning only when considered alongside that world, and that the zeitgeist in NLP research currently misses the mark on truly interesting questions at the intersection of human language and machine computation. In this talk, I’ll highlight some of the ways my lab enables agents and robots to better understand and respond to human communication.
Bio: Dr. Jesse Thomason is an Assistant Professor at the University of Southern California, leading the GLAMOR (Grounding Language in Actions, Multimodal Observations, and Robots) Lab. His research integrates natural language processing and robotics to connect language to the world, focusing on language grounding and lifelong learning through interaction.

Beomjoon Kim
Korea Advanced Institute of Science and Technology (KAIST)
https://hugelab.org/
Title: "Hierarchical and Modular Neural Network for Manipulation Skill Discovery"
Abstract: The idea currently in vogue is big models, big data, and end-to-end training for developing a general-purpose robot. But here is the problem with this approach: it consumes too much power. For instance, the LLaMA 8-billion-parameter model uses 250-300 watts just to make a single inference, and that is for a language model, which only has to process a discrete set of symbols. We can only expect the power requirement to be larger for robotics, which has to process a continuous stream of high-dimensional sensory data to output a sequence of continuous actions. In contrast, humans on average use only 20 watts of power. This tells us that there is something wrong with how we are building our AI models. In this talk, I will describe our lab's recent efforts to discover useful inductive biases for robot manipulation so that we can do more with less data and smaller models, much like what CNNs did for images.
Bio: ​Dr. Beomjoon Kim is an Associate Professor in the Graduate School of AI at the Korea Advanced Institute of Science and Technology (KAIST). He directs the Humanoid Generalization (HuGe) Lab, focusing on creating general-purpose humanoids capable of efficient decision-making in complex environments.

Lawson Wong
Northeastern
https://www.ccs.neu.edu/home/lsw/grail.html
Title: "Sense-Plan-Act with Foundation Models"
Abstract: Large foundation models are remarkably capable at planning both high-level skill sequences and low-level robot actions. However, their performance is still limited in complex domains, including those with long horizon and partial observability. Instead of letting the large foundation model handle everything end-to-end, we propose an approach that integrates foundation models within a traditional sense-estimate-plan-act pipeline. This enables us to retain the guarantees of conventional planning, while affording us the superior perception and reasoning capabilities of large foundation models.
Bio: Dr. Lawson L.S. Wong is an Assistant Professor in the Khoury College of Computer Sciences at Northeastern University, based in Boston. He leads the Generalizable Robotics and Artificial Intelligence Laboratory (GRAIL), focusing on learning, representing, and estimating knowledge about the world that autonomous robots can utilize.

Roberto Martín-Martín
UT Austin
https://robin-lab.cs.utexas.edu/
Title: "Mixed-Initiative LLM-powered Dialogue for Collaborative Human-Robot Mobile Manipulation Tasks"
Abstract: Despite recent advances, robots still struggle to autonomously perform complex, multi-step tasks in home environments, such as packing and wrapping a gift. A more realistic approach today involves enabling flexible human-robot collaboration, where the robot undertakes as much of the task as possible while adjusting its actions based on the availability and preferences of its human partners. Achieving seamless collaboration fundamentally relies on clear, bidirectional communication, allowing human and robot partners to negotiate roles, propose strategies, and assign tasks that leverage their respective strengths. In this talk, I will present ongoing efforts from my lab focused on enabling effective collaboration between humans and robots through MICoBot, a novel mixed-initiative dialog system specifically designed for collaborative mobile manipulation tasks. Unlike traditional one-way communication approaches, where the human simply commands the robot, MICoBot supports active, two-way dialog, empowering both parties to suggest task divisions, request help, and negotiate responsibilities dynamically, and allows the robot to adjust to the inferred availability of the human partner. MICoBot achieves this through a heterogeneous agent capable of performing both verbal and physical actions based on reactive task-allocation strategies resulting from autonomously generated code. We believe solutions like MICoBot represent an ideal bridge to integrating robots into human-centric environments today, evolving their roles dynamically in response to improving robotic capabilities and changing human needs.
Bio: Dr. Roberto Martín-Martín is an Assistant Professor of Computer Science at the University of Texas at Austin. His research integrates robotics, computer vision, and machine learning, focusing on enabling robots to operate autonomously in human-centric, unstructured environments such as homes and offices.

Sebastian Sartor
MIT
Talk title: Scaling Laws in Robotics: More Data, Bigger Models, Better Robots
Abstract: Recent progress in language and vision has been fueled by scaling data, models, and compute. But what about robotics? Do the same principles hold? How far are we in understanding scaling laws for embodied intelligence? This talk explores current insights and open questions around scaling behavior in robotics, how it diverges from other domains, and what challenges emerge as we scale further. I’ll argue for a more rigorous science of scaling laws to advance toward generalist robotic systems.
Bio: Sebastian Sartor is an incoming PhD student in Mechanical Engineering at MIT, advised by Neil Thompson and affiliated with MIT CSAIL’s FutureTech Lab. His research focuses on scaling laws in robot foundation models, exploring their opportunities, challenges, and limitations for building generalist robotic systems. More broadly, he is interested in the technical and economic foundations driving progress in robotics and computing, from hardware to deployment. Previously, he completed his Master’s at the Technical University of Munich and studied abroad at UC Berkeley.





Organizing Committee

Rui Liu, Ph.D. 
Assistant Professor 
College of Aeronautics and Engineering
Kent State University, Ohio USA 
[email protected] 

https://ruiliurobotics.weebly.com/
Carlo Pinciroli, Ph.D. 
Associate Professor 
Department of Robotics Engineering 
Worcester Polytechnic Institute
[email protected] 

https://carlo.pinciroli.net/
Changjoo Nam, Ph.D. 
Associate Professor 
Department of Electronic Engineering 
Sogang University, Seoul South Korea 
[email protected] 

https://sites.google.com/site/changjoonam/
Wenhao Luo, Ph.D. 
Assistant Professor 
Department of Computer Science
University of Illinois Chicago, IL USA 
[email protected]

https://www.cs.uic.edu/~wenhao/
Xiaoli Zhang, Ph.D. 
Associate Professor 
Department of Mechanical Engineering 
Colorado School of Mines, Colorado USA 
[email protected] 

https://xzhanglab.mines.edu/
Jiaoyang Li, Ph.D. 
Assistant Professor 
Robotics Institute 
Carnegie Mellon University, PA USA 
[email protected] 
https://jiaoyangli.me/
Anqi Li, Ph.D. 
Research Scientist 
Nvidia, California USA 
[email protected]

https://anqili.github.io/