Get to know the PILLAR-Robots project: extracting and grounding purposes in robots

WP2 is crucial to developing this understanding of purpose: it harnesses perception, allowing robots to learn much as humans do, by observing their environment.

Here’s what we plan to do in WP2:

  1. Purpose Classification: We begin by identifying and formalizing different classes of purposes. This first step gives the robot a foundation for understanding its role in varied scenarios, much like understanding the different jobs in an office.
  2. Visual Scene Understanding: Next, we plan to develop components that allow robots to understand visual scenes. These components let the robot describe its environment in terms of high-level entities such as objects and actions, much as we distinguish the different elements of a scene.
  3. Audiovisual Modules: Building on this visual understanding, we are designing audiovisual modules that ground a robot's purpose using human cues and visual examples. This stage is akin to learning by watching how a task is done, for instance watching a chef to learn how to cook a dish.
  4. Natural Language Encoding: Lastly, we plan to develop purpose encodings based on natural language. By interpreting language interactions, our robots will be better equipped to understand their purpose and role, a bit like learning a new job by reading its manual.
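To make the four steps above more concrete, here is a minimal sketch of how they might fit together in code. Every name in it (`PurposeClass`, `SceneDescription`, `ground_purpose`, the purpose classes themselves) is an illustrative assumption of ours, not part of the PILLAR-Robots codebase, and the keyword matching merely stands in for the learned audiovisual and language models the project plans to build.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Hypothetical sketch only: these names and classes are our own
# illustrations of the WP2 pipeline, not PILLAR-Robots code.

class PurposeClass(Enum):          # step 1: formalized purpose classes
    TIDY_WORKSPACE = auto()
    INSPECT_OBJECT = auto()
    ASSIST_HUMAN = auto()

@dataclass
class SceneEntity:                 # step 2: a high-level entity in a scene
    label: str                     # e.g. "cup", "table"
    confidence: float

@dataclass
class SceneDescription:            # step 2: the scene as entities + actions
    entities: list[SceneEntity] = field(default_factory=list)
    actions: list[str] = field(default_factory=list)   # e.g. "human grasps cup"

def ground_purpose(scene: SceneDescription, human_cue: str) -> PurposeClass:
    """Steps 3-4: map an observed scene plus a natural-language human cue
    onto a purpose class. A real system would use learned audiovisual and
    language models; simple keyword matching stands in for them here."""
    cue = human_cue.lower()
    if "tidy" in cue or "clean" in cue:
        return PurposeClass.TIDY_WORKSPACE
    if "inspect" in cue or "check" in cue:
        return PurposeClass.INSPECT_OBJECT
    return PurposeClass.ASSIST_HUMAN

scene = SceneDescription(entities=[SceneEntity("cup", 0.9)],
                         actions=["human points at cup"])
print(ground_purpose(scene, "please tidy the desk"))   # PurposeClass.TIDY_WORKSPACE
```

The point of the sketch is the data flow, not the toy logic: a scene description built from perception (steps 1 and 2) is combined with a human cue (steps 3 and 4) to select a purpose.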

By making progress in these areas, WP2 will contribute substantially to the wider project, bringing us closer to a future in which robots understand their purpose and can interact effectively with their environment.