In this work, we support experts in the safety domain in the safer dismantling of drug labs by deploying robots for the initial inspection. Being able to act on the discovered environment is key to enabling this (semi-)autonomous inspection, e.g. to open doors or take a closer look at suspicious items. Our approach addresses this with a novel environmental representation, the Behavior-Oriented Situational Graph, in which we extend the classical situational graph by merging a perception-driven backbone with prior actionable knowledge via a situational affordance schema. Linking situations to robot behaviors facilitates both autonomous mission planning and situational understanding for the operator. Planning over the graph is easier and faster, since it directly incorporates actionable information, which is critical for online mission systems. Moreover, the representation allows the human operator to transition seamlessly between different levels of robot autonomy, from remote control to behavior execution to fully autonomous exploration. We test the effectiveness of our approach in a real-world drug lab scenario at a Dutch police training facility using a mobile Spot robot, and use the results to iterate on the system design.
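To make the idea concrete, here is a minimal sketch of how such a behavior-oriented situational graph could be organized; all names are hypothetical illustrations, not the paper's implementation. Perceived situations are graph nodes, and a situational affordance schema maps each situation class to the robot behaviors it affords, so a planner can read actionable options directly off the graph:

```python
# Hypothetical sketch of a behavior-oriented situational graph.
from dataclasses import dataclass, field

# Prior actionable knowledge: which behaviors each situation class affords.
AFFORDANCE_SCHEMA = {
    "door": ["open_door", "inspect"],
    "suspicious_item": ["inspect", "report_to_operator"],
    "room": ["explore"],
}

@dataclass
class Situation:
    node_id: int
    category: str                     # perceived class, e.g. "door"
    position: tuple                   # (x, y) in the map frame
    neighbors: list = field(default_factory=list)

    def behaviors(self):
        """Behaviors afforded by this situation, looked up in the schema."""
        return AFFORDANCE_SCHEMA.get(self.category, [])

def plan_next_behavior(graph, goal_category):
    """Pick a situation of the goal class and its first afforded behavior."""
    for situation in graph:
        if situation.category == goal_category and situation.behaviors():
            return situation, situation.behaviors()[0]
    return None, None

# Example: perception has added a room, a door, and a suspicious item.
graph = [
    Situation(0, "room", (0.0, 0.0)),
    Situation(1, "door", (2.5, 1.0)),
    Situation(2, "suspicious_item", (3.0, 4.2)),
]
node, behavior = plan_next_behavior(graph, "door")
print(node.node_id, behavior)  # -> 1 open_door
```

Because the afforded behaviors are stored with each node, planning reduces to a lookup over the graph rather than a separate reasoning step, which is the property the abstract highlights for online mission systems.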
This paper addresses the challenge of scaling Large Multimodal Models (LMMs) to expansive 3D environments. Solving this open problem is especially relevant for robot deployment in many first-responder scenarios, such as search-and-rescue missions that cover vast spaces. The use of LMMs in these settings is currently hampered by the strict context windows that limit the LMM’s input size. We therefore introduce a novel approach that utilizes a datagraph structure, which allows the LMM to iteratively query smaller sections of a large environment. Using the datagraph in conjunction with graph traversal algorithms, we can prioritize the locations most relevant to the query, thereby improving the scalability of 3D scene-language tasks. We illustrate the datagraph using 3D scenes, but these can easily be substituted with other dense modalities that represent the environment, such as point clouds or Gaussian splats. We demonstrate the potential of the datagraph on two 3D scene-language use cases within a search-and-rescue mission example.
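As a rough illustration of the traversal idea (a hypothetical sketch under assumed names; the paper's datagraph and its API are not reproduced here), a best-first search over per-section summaries can prioritize the environment sections most relevant to a query, so that only small, relevant chunks are handed to the LMM:

```python
# Hypothetical sketch: best-first traversal of a datagraph of environment
# sections, selecting the sections most relevant to a language query.
import heapq

def traverse_datagraph(datagraph, query, relevance, budget=5):
    """Expand up to `budget` sections in order of query relevance.

    datagraph: dict node_id -> {"summary": str, "neighbors": [node_id]}
    relevance: callable (query, summary) -> float score
    """
    start = next(iter(datagraph))
    frontier = [(-relevance(query, datagraph[start]["summary"]), start)]
    visited, selected = set(), []
    while frontier and len(selected) < budget:
        _, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        selected.append(node)
        for nb in datagraph[node]["neighbors"]:
            if nb not in visited:
                heapq.heappush(
                    frontier, (-relevance(query, datagraph[nb]["summary"]), nb)
                )
    return selected  # feed these sections to the LMM, one context window at a time

# Toy relevance via keyword overlap; a real system would likely use
# embedding similarity between the query and section descriptions.
def keyword_overlap(query, summary):
    return len(set(query.lower().split()) & set(summary.lower().split()))

datagraph = {
    "hall":   {"summary": "entrance hall with stairs", "neighbors": ["lab", "office"]},
    "lab":    {"summary": "room with chemical equipment", "neighbors": ["hall"]},
    "office": {"summary": "desk and cabinets", "neighbors": ["hall"]},
}
print(traverse_datagraph(datagraph, "find chemical equipment", keyword_overlap))
# -> ['hall', 'lab', 'office'], with the matching "lab" section expanded early
```

The key design choice sketched here is that relevance scoring happens on lightweight section summaries, keeping each individual LMM query well within its context window.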
Affordance Perception by a Knowledge-Guided Vision-Language Model with Efficient Error Correction
Gertjan Burghouts, Marianne Schaaphok, Michael Bekkum, and 3 more authors
In ICPRAI’24: International Conference on Pattern Recognition and Artificial Intelligence, 2024
Towards Mobile Robot Deployments with Goal Autonomy in Search-and-Rescue: Discovering Tasks and Constructing Actionable Environment Representations using Situational Affordances