DC12 - Explaining and learning new grounded robot knowledge

Marie Bauer 

Short Bio

Marie is a doctoral candidate at Universität Hamburg, in the Knowledge Technology Group (WTM). She joined the SWEET project in fall 2025, after graduating from a research master's in Computational Linguistics at Paris Cité University. She is researching how to foster more transparency in human-robot (HR) collaborations by integrating explainable AI (XAI) into robotic platforms. Her interests also include Natural Language Processing (NLP) and Continual Learning (CL).

Topic

This research investigates how a robot should explain acquired knowledge, as well as behavioural change, to its user. Reasoning in robotic systems relies on specific, e.g. compositional and symbolic, representations of objects, space, tasks, actions and the agent. Since post-hoc reasoning can change a large language model's (LLM) decision, a recursive in-depth reasoning process based on chain-of-thought prompting will be performed before any decision is accepted. The reasoning trace will be stored in short-term memory and reconciled with any required post-hoc reasoning. In case of conflict, any additionally considered facts, such as externally accessed knowledge, as well as differences in belief and in the decisions made, will be communicated to the user.

Host Institution: Universität Hamburg – Hamburg, Germany

PhD Enrollment: Universität Hamburg

PI: Prof. Stefan Wermter