Embodied AI Center

Kiel University

VISION

Artificial Intelligence for the Physical World

Artificial intelligence is rapidly evolving from purely digital, text- and image-based systems toward Embodied AI: intelligence that perceives, acts, and learns in the physical world. This new phase of AI research is emerging from the convergence of robotics, machine perception, and large multimodal AI models.

The Embodied AI Center (E-AI-C) is intended to bring together the CAU's existing expertise in surgical robotics, in marine robotics (in cooperation with GEOMAR), and in humanoid and quadruped robotics within a shared structure.

By closely interlinking physical embodiment with the development of modern AI methods for robots, the center will lay the foundation for new scientific and technological breakthroughs, with far-reaching momentum for medical applications, marine research and marine infrastructures, and human-robot collaboration.

Objectives

Advance embodied learning in real-world environments

Develop AI systems that can perceive, reason, plan, and act continuously in dynamic physical settings, enabling robots to learn from interaction rather than from static data alone.

Support data-driven business transformation

Enable organizations to integrate intelligent embodied systems, automation, simulation, and AI-assisted decision-making into their workflows to improve efficiency, resilience, and competitiveness.

Develop coordinated multi-agent AI systems

Create teams of robots, software agents, and autonomous systems that can communicate, coordinate, and jointly solve complex tasks in dynamic real-world environments.

Enable safe, trustworthy, and domain-adaptive robotic autonomy

Build reliable autonomous systems for surgical, marine, humanoid, and quadruped robotics, with a strong focus on safety, robustness, human-robot collaboration, and transfer across application domains.

Develop multimodal Vision-Language-Action models for robotics

Create and evaluate AI models that connect language, visual perception, sensor data, and physical motion, allowing robots to understand instructions and translate them into safe, context-aware actions.

Selected Publications

  • A. R. Wagner, M. Balaji Rao, H. Wrede, S. Pirk, X. Xiao: Fire as a Service: Augmenting Robot Simulators with Thermally and Visually Accurate Fire Dynamics, arXiv preprint, 2026

  • A. R. Wagner, M. Balaji Rao, X. Xiao, S. Pirk: Understanding Fire Through Thermal Radiation Fields for Mobile Robots, arXiv preprint, 2026

  • S. Huber, K. Pelzer, D. Nguyen, X. Xiao, S. Pirk: HUMEMBR: Learning Human Routines for Predictive Embodied Navigation, under submission, 2026

  • ...

Members

Prof. Dr. Sören Pirk

Visual Computing and Artificial Intelligence

Spokesperson and Founding Member

Prof. Dr. Ralf Krestel

Information Profiling and Retrieval

Founding Member

Prof. Dr. Ann-Kristin Cordes

Digital Innovation

Founding Member

Prof. Dr. Dirk Nowotka

Dependable Systems

Founding Member

Prof. Dr.-Ing. Kevin Köser

Marine Data Science

Founding Member

Prof. Dr. Sven Tomforde

Intelligent Systems

Founding Member