Supervisor: Mark Adamik (email@example.com), Ilaria Tiddi (firstname.lastname@example.org)
Mobile robotics has gained popularity in recent years. Localization and SLAM on occupancy maps are the current mainstream approaches to robot navigation. The next step, however, is scene understanding, so that the robot can adjust its behavior to the context (i.e. the perceived environment). A possible use case is a scenario where robots help elderly people in their apartments; semantic mapping could help the robots locate and fetch objects.
In this project, you will be using the LoCoBot, a mobile robot equipped with multiple sensors. Your task will be to integrate object recognition methods (e.g. YOLO), localization and mapping (SLAM), path planning, and knowledge representation & reasoning methods to solve planning problems.
A literature review of state-of-the-art methods integrating knowledge representation and reasoning with mobile robotics.
Familiarizing yourself with the LoCoBot platform, ROS, and the SLAM packages.
Building a knowledge graph representing the chosen use case and combining it with the map.
Developing an integrated robot control system that is able to recognize and reason over objects.
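To give a feel for the last two tasks, here is a minimal sketch of combining detections with a map via a knowledge graph. It uses plain Python and a toy triple store; the object IDs, classes, coordinates, and predicate names are all hypothetical, and a real system would instead use RDF tooling (e.g. rdflib), live YOLO detections, and poses projected into the SLAM map frame.

```python
# Minimal triple store: (subject, predicate, object) facts about the scene.
class SceneGraph:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        # Return all triples matching the pattern (None acts as a wildcard).
        return [(ts, tp, to) for (ts, tp, to) in self.triples
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

# Hypothetical detections from an object recognizer, with (x, y) map
# coordinates assumed to come from projecting each detection into the
# SLAM occupancy map frame.
detections = [
    ("cup_1", "Cup", (2.1, 0.4)),
    ("remote_1", "RemoteControl", (3.0, 1.2)),
]

kg = SceneGraph()
for obj_id, obj_class, (x, y) in detections:
    kg.add(obj_id, "rdf:type", obj_class)
    kg.add(obj_id, "locatedAt", (x, y))

# "Where is the cup?" -> find instances of Cup, then look up their locations.
cups = [s for (s, _, _) in kg.query(p="rdf:type", o="Cup")]
locations = [o for c in cups for (_, _, o) in kg.query(s=c, p="locatedAt")]
print(locations)  # [(2.1, 0.4)]
```

Once such a graph is in place, the reasoning layer can answer fetch requests ("bring me the cup") by resolving the object class to an instance and handing its map coordinates to the path planner.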
Supervision will be provided by Mark Adamik and Ilaria Tiddi.
Some knowledge of graphs
Knowledge of the Robot Operating System (ROS) is a nice-to-have; willingness to learn it is a must
I. Kostavelis and A. Gasteratos, "Semantic mapping for mobile robotics tasks: A survey," Robotics and Autonomous Systems, Elsevier, 2015.
S. Garg et al., “Semantics for Robotic Mapping, Perception and Interaction: A Survey,” FNT in Robotics, vol. 8, no. 1–2, pp. 1–224, 2020, doi: 10.1561/2300000059.