The problem of service robots in domestic environments is not new, but many challenges remain open. Two properties of the environments where service robots are used make these challenges particularly hard: (1) domestic environments are semi-structured and unpredictable, e.g., the location of relevant objects and obstacles may change unexpectedly, and (2) humans are around, expecting the robots to interact naturally with them. In order to solve these challenges, various technical and scientific problems have to be addressed, such as navigation, perception, decision-making, planning, execution, manipulation, and human-robot interaction (HRI).
HARODE focuses on some of these problems, targeting advances beyond the state of the art while employing commercial off-the-shelf (COTS) components to implement the others. The problems we plan to focus on are: (a) semantic mapping, that is, the problem of perceiving and representing relevant aspects of a physical environment, such as the locations of certain objects and of humans to interact with, as well as of dynamic obstacles, e.g., closed doors and moved furniture; (b) human-aware planning and execution, comprising the problem of performing tasks where close human involvement is expected, while being capable of detecting and coping with unexpected events; and (c) benchmarking, in the sense of assessing the performance of the robot system against a reference performance.
The research team’s extensive experience provides the basis for solving these problems, using available robot platforms, a ROS-based operational software architecture, COTS components for the sub-systems outside the scope of this project, and a testbed for domestic robots, enabling an integrated system. This experience is also manifested through SocRob, a team of students and researchers highly motivated to test new benchmark methods for evaluating challenging new aspects of state-of-the-art robots.
Even though the research results are applicable to domains other than domestic robots, the project targets a functioning integrated system, including its evaluation against a reference benchmark, both in a domestic testbed and through participation in scientific competitions, namely RoboCup@Home.
The Institute for Systems and Robotics (IST/ISR) of Instituto Superior Técnico (Lisbon, Portugal) is a university-based R&D institution where multidisciplinary advanced research activities are developed in the areas of Robotics and Information Processing, including Systems and Control Theory, Signal Processing, Computer Vision, Optimization, AI and Intelligent Systems, and Biomedical Engineering. Applications include Autonomous Ocean Robotics, Search and Rescue, Mobile Communications, Multimedia, Satellite Formation, and Robotic Aids.
The MBot is composed of two main parts: body and head. The head can pan and has an LED backlight to express emotions through a drawn mouth, eyes, and cheeks. The body houses all of the CPU devices (two motherboards with i7 processors), a touchscreen, and all of the navigation mechanics, based on a four-wheel omnidirectional mecanum drive.
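A four-wheel mecanum drive achieves omnidirectional motion by combining wheel speeds according to the standard inverse kinematics. The sketch below illustrates this; the geometric parameters (`L`, `W`, `r`) are illustrative placeholders, not the MBot's actual dimensions.

```python
import numpy as np

def mecanum_wheel_speeds(vx, vy, wz, L=0.25, W=0.20, r=0.05):
    """Standard inverse kinematics for a four-wheel mecanum platform.

    vx, vy: desired body-frame linear velocities (m/s); wz: yaw rate (rad/s).
    L, W: half wheelbase and half track width (m); r: wheel radius (m).
    All geometric values are illustrative, not the MBot's real parameters.
    Returns wheel angular speeds [front-left, front-right, rear-left, rear-right].
    """
    k = L + W
    return np.array([
        (vx - vy - k * wz) / r,  # front-left
        (vx + vy + k * wz) / r,  # front-right
        (vx + vy - k * wz) / r,  # rear-left
        (vx - vy + k * wz) / r,  # rear-right
    ])
```

Pure forward motion drives all four wheels at the same speed, while lateral or rotational commands mix the signs, which is what gives the platform its holonomic mobility.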
Regarding additional sensors and actuators needed specifically for @Home competitions, a 7-DoF Robai Cyton 1500 arm was attached to the left side of the body for manipulation capabilities, and a microphone was placed on top of the head for voice recognition.
This robot replaced ISR-Cobot for our @Home missions, as it offers more computational power, greater robustness, and aesthetics better suited to our goal.
The experimental methodology uses scientific robot competitions both to push for progress and to evaluate it. In this respect, the enormous effort and progress attained during the second year in terms of robot skills should be highlighted, recognized by the increasingly better results obtained in these competitions. Topics of focus in 2018:
[Omnidirectional vision based semantic mapping]
On top of the work developed in the first year, we developed a technique for robust detection using deep reinforcement learning and omnidirectional systems, along with fundamental algorithms for omnidirectional cameras, addressing problems such as self-localization/navigation (we consider the cases useful for a domestic robot, namely fisheye, catadioptric, and multi-perspective camera systems). The work developed in this topic was published in two conferences. We also focused on algorithms that, given an image bounding box (obtained by a neural network) and a depth map, robustly estimate the object’s position in the environment. These algorithms were tested in a real scenario on two main tasks of the RoboCup 2018 challenge for service robots: 1) object detection, recognition and grasping; and 2) people following in challenging scenarios.
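The bounding-box-plus-depth estimation step can be sketched as a back-projection through the pinhole camera model: take a robust depth statistic inside the box and lift the box centre into 3D. This is a minimal illustration, not the published algorithm; the function name and the use of the median are assumptions.

```python
import numpy as np

def object_position_from_bbox(depth_map, bbox, fx, fy, cx, cy):
    """Estimate an object's 3D position (camera frame) from a detection
    bounding box and a depth map, via the pinhole model.

    bbox: (u_min, v_min, u_max, v_max) in pixels.
    fx, fy, cx, cy: camera intrinsics (focal lengths, principal point).
    Illustrative sketch only; the real pipeline may differ.
    """
    u_min, v_min, u_max, v_max = bbox
    roi = depth_map[v_min:v_max, u_min:u_max]
    valid = roi[np.isfinite(roi) & (roi > 0)]
    z = np.median(valid)  # median rejects background and sensor dropouts
    u = (u_min + u_max) / 2.0  # box centre, pixels
    v = (v_min + v_max) / 2.0
    x = (u - cx) * z / fx      # back-project through the pinhole model
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

Using the median over the box interior makes the estimate robust to background pixels leaking into the bounding box, which matters for both grasping and people following.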
[Human-aware planning and execution under uncertainty]
We have integrated a task planning architecture into the MBot robot that is able to execute commands given to the robot either via voice or text. The input audio is converted into text by speech recognition software and then fed to a custom Natural Language Understanding component, which converts it into semantic goals that trigger the planning and execution pipeline. A planner receives the problem instance information along with the domain model to produce a plan, i.e., a sequence of actions that the robot needs to execute to accomplish the given task. The last component, the plan executor, receives the plan and iterates over the actions, executing one at a time using re-factored finite state machines that encapsulate robust execution behavior.
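The executor's behaviour can be sketched as a loop that dispatches each plan action to a skill and retries on failure, loosely mirroring the robust finite state machines described above. This is a hypothetical simplification; the function and skill names are illustrative, not the project's actual API.

```python
def execute_plan(plan, skills, max_retries=2):
    """Execute a plan (list of (action, args) pairs) one action at a time.

    skills maps action names to callables that return True on success.
    Each action is retried up to max_retries extra times before the plan
    is reported as failed, approximating the robust-execution wrappers.
    Illustrative sketch, not the project's actual executor.
    """
    for action, args in plan:
        for _attempt in range(max_retries + 1):
            if skills[action](*args):
                break  # action succeeded, move on to the next one
        else:
            return False  # retries exhausted: report failure upstream
    return True
```

On failure the real system can replan from the current state rather than simply aborting, which is where the planner in the loop pays off.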
We have built on the work described in the first year to develop our probabilistic benchmarking approach, which is independent of the metrics used to assess the performance of the subsystems composing a robot system. The approach uses probability theory as the common language to quantify the performance of distinct functionalities of a robot system and their impact on the performance of a task carried out by that system. The approach can be used to analyse the performance of a task plan from the performances of its composing functionalities, or to (re)plan when a performance degradation in a functionality is predicted to cause performance degradation of the task plan beyond acceptable limits.
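As a toy illustration of the idea, per-functionality success probabilities can be combined into a predicted plan-level performance, triggering replanning when it drops below a limit. The independence assumption and the names below are simplifications for illustration, not the published model.

```python
def plan_success_probability(action_sequence, p_success):
    """Combine per-functionality success probabilities into a plan-level
    estimate, assuming independent outcomes (an illustrative simplification):
    the plan succeeds only if every action in the sequence succeeds."""
    p = 1.0
    for action in action_sequence:
        p *= p_success[action]
    return p

def needs_replanning(action_sequence, p_success, threshold=0.8):
    """Flag the plan for replanning when the predicted plan performance
    falls below an acceptable limit (threshold is illustrative)."""
    return plan_success_probability(action_sequence, p_success) < threshold
```

For example, a plan chaining a navigation step at 0.9 and a grasp at 0.8 yields a predicted 0.72 success probability, so a 0.8 acceptance threshold would trigger replanning.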
[System integration, evaluation, and dissemination] The experimental part of this project is based on the SocRob@Home team, a team that has participated in several competitions, which both pushed for progress on the integrated architecture and allowed us to evaluate our approach. All tests feature an apartment-like scenario where an owner requests some task to be accomplished by the robot. The given tasks are diverse and include storing groceries, finding objects in the apartment, actuating remote devices such as blinds or lights, etc. The main task on which we focused our efforts from a research perspective was the General Purpose Service Robot, where the robot needs to integrate all of its available behaviours: navigation, people following, object recognition, manipulation, speech synthesis, etc. Watch some of the tests take place in the SocRob team playlist.
- Rute Luz, Guilherme Lawless, Oscar Lima, Rodrigo Ventura. Small Obstacle Detection and Avoidance Using a Depth Camera: a Case Study in RoboCup@Home. In Workshop on Robots for Assisted Living, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 2018.
- Mithun Kinarullathil, Pedro H. Martins, Carlos Azevedo, Oscar Lima, Guilherme Lawless, Pedro U. Lima, Luís Custódio, Rodrigo Ventura. From User Spoken Commands to Robot Task Plans: a Case Study in RoboCup@Home. In Workshop on Language and Robotics, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 2018.
- Oscar Lima, Rodrigo Ventura, Iman Awaad. Integrating Classical Planning and Real Robots in Industrial and Service Robotics Domains. In Workshop on Planning and Robotics (PlanRob), International Conference on Automated Planning and Scheduling (ICAPS), Netherlands, 2018. URL: http://users2.isr.tecnico.ulisboa.pt/~yoda/papers/Lima18.pdf
- João Cartucho, Rodrigo Ventura, Manuela Veloso. Robust Object Recognition Through Symbiotic Deep Learning in Mobile Robots. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 2018.
- Pedro U. Lima. A Probabilistic Approach to Benchmarking and Performance Evaluation of Robot Systems. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 2018.
- Pedro Miraldo, Francisco Eiras, Srikumar Ramalingam. Analytical Modeling of Vanishing Points and Curves in Catadioptric Cameras. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, USA, 2018. URL: https://arxiv.org/abs/1804.09460
- Pedro Miraldo, Tiago Dias, Srikumar Ramalingam. A Minimal Closed-Form Solution for Multi-Perspective Pose Estimation using Points and Lines. In European Conference on Computer Vision (ECCV), Munich, Germany, 2018. URL: https://arxiv.org/abs/1807.09970
- Gonçalo Pais, Jacinto C. Nascimento, Pedro Miraldo. OmniDRL: Robust Pedestrian Detection using Omnidirectional Cameras and Deep Reinforcement Learning. In PMLR Conference on Robot Learning (CoRL), Zurich, Switzerland, 2018.
- Gonçalo Pais, Pedro Miraldo, Jacinto C. Nascimento. Multi-task Learning for Pedestrian Detection in Omnidirectional Vision Systems. International Journal on Computer Vision (IJCV), 2018.
- José Iglésias, Pedro Miraldo, Rodrigo Ventura. Towards an Omnidirectional Catadioptric RGB-D Camera. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 2016. URL: http://users2.isr.tecnico.ulisboa.pt/~yoda/papers/Iglesias16.pdf
- Oscar Lima, Rodrigo Ventura. A Case Study on Automatic Parameter Optimization of a Mobile Robot Localization Algorithm. In IEEE 17th International Conference on Autonomous Robot Systems and Competitions (ICARSC), Coimbra, Portugal, 2017. URL: http://users2.isr.tecnico.ulisboa.pt/~yoda/papers/Lima17.pdf
rodrigo.ventura (at) isr.tecnico.ulisboa.pt
+351 21 841 8289
Torre Norte, Av. Rovisco Pais 1, 1049-001 Lisboa