HARODE: Human-aware service robots for domestic environments

Funding

Proposing Institution: IST

Main Research Unit: ISR

FCT – PTDC/EEI-SII/4698/2014


The problem of service robots in domestic environments is not new, but a substantial number of challenges remain open. Two properties of the environments where service robots are used make these challenges particularly hard: (1) domestic environments are semi-structured and unpredictable, e.g., the location of relevant objects and obstacles may change unexpectedly, and (2) humans are around, expecting the robots to interact naturally with them. To address these challenges, various technical and scientific problems have to be tackled, such as navigation, perception, decision-making, planning, execution, manipulation, and human-robot interaction (HRI).

 

HARODE focuses on some of these problems, targeting advances beyond the state of the art while employing commercial off-the-shelf (COTS) components to implement the others. The problems we plan to focus on are: (a) semantic mapping, that is, the problem of perceiving and representing relevant aspects of a physical environment, such as the locations of certain objects and of humans to interact with, as well as of dynamic obstacles, e.g., closed doors and moved furniture; (b) human-aware planning and execution, comprising the problem of performing tasks where close human involvement is expected, while being capable of detecting and coping with unexpected events; and (c) benchmarking, in the sense of assessing the performance of the robot system against a reference performance.
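For illustration only, the sketch below shows one possible way such a semantic map could be represented, associating symbolic labels with metric poses and flagging dynamic elements; all class and field names are hypothetical and do not correspond to the project's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# Hypothetical semantic map entry: a symbolic label (e.g. "kitchen_table",
# "front_door") tied to a metric pose, with a flag marking whether the
# element may move or change state over time.
@dataclass
class SemanticEntry:
    pose: Tuple[float, float, float]   # (x, y, yaw) in the map frame
    dynamic: bool = False              # e.g. closed doors, moved furniture
    last_seen: float = 0.0             # timestamp of the last observation

@dataclass
class SemanticMap:
    entries: Dict[str, SemanticEntry] = field(default_factory=dict)

    def update(self, label, pose, dynamic=False, stamp=0.0):
        # Overwrite (or create) the entry whenever the perception
        # pipeline re-observes the labelled element.
        self.entries[label] = SemanticEntry(pose, dynamic, stamp)

    def lookup(self, label):
        return self.entries[label]

# Example: the robot re-observes the living-room door at a new timestamp.
smap = SemanticMap()
smap.update("living_room_door", (2.1, 0.4, 1.57), dynamic=True, stamp=42.0)
print(smap.lookup("living_room_door"))
```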

 

The research team's extensive experience provides the basis for tackling all these problems, using available robot platforms, a ROS-based operational software architecture, COTS components for the subsystems outside the scope of this project, and a testbed for domestic robots, enabling an integrated system. This also manifests through SocRob, a team composed of students and researchers highly motivated to test new benchmark methods for evaluating challenging new aspects of state-of-the-art robots.

Even though the research results are applicable to domains other than domestic robots, the project targets a functioning integrated system, including its evaluation against a reference benchmark, both in a domestic testbed and through participation in scientific competitions, namely RoboCup@Home.

Partners

The Institute for Systems and Robotics (IST/ISR), at Instituto Superior Técnico (Lisbon, Portugal), is a university-based R&D institution where multidisciplinary advanced research activities are developed in the areas of Robotics and Information Processing, including Systems and Control Theory, Signal Processing, Computer Vision, Optimization, AI and Intelligent Systems, and Biomedical Engineering. Applications include Autonomous Ocean Robotics, Search and Rescue, Mobile Communications, Multimedia, Satellite Formation, and Robotic Aids.

People

Rodrigo Ventura
Principal Investigator, IST/ISR

Pedro Lima
Full Professor, IST/ISR

Pedro Miraldo
Post-Doc Researcher

Oscar Lima
PhD Student - Team Leader

Tiago Dias
PhD Student

Enrico Piazza
PhD Student

Guilherme Lawless
PhD Student - Team Leader

Carlos Azevedo
PhD Student

Mithun Kinarullathil
Research Engineer

Jhielson Pimentel
Research Engineer

Rute Luz
MSc Student

Diogo Serra
Lab Technician

Louis Chopot
MSc Student

João Gonçalves
MSc Student

João Cartucho
MSc Student

Miguel Silva
MSc Student

Inês Alexandre
MSc Student

Robot Description

The MBot is composed of two main parts: body and head. The head can pan and has an LED backlight to express emotions through a drawn mouth, eyes, and cheeks. The body houses all of the computing hardware (two motherboards with i7 processors), a touchscreen, and all of the navigation mechanics, based on a four-wheel omnidirectional Mecanum drive.
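To make the drive concrete, the sketch below gives the standard inverse kinematics of a four-wheel Mecanum base, mapping a desired body velocity to wheel speeds; the wheel radius and base dimensions are placeholder values, not the MBot's actual parameters.

```python
import numpy as np

def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.20, ly=0.20):
    """Standard inverse kinematics of a four-wheel Mecanum base.

    vx, vy : desired body velocity [m/s] (forward, left)
    wz     : desired yaw rate [rad/s]
    r      : wheel radius [m]                 (placeholder value)
    lx, ly : half wheelbase / half track [m]  (placeholder values)

    Returns wheel angular speeds [rad/s] in the order
    front-left, front-right, rear-left, rear-right.
    """
    k = lx + ly
    return np.array([
        (vx - vy - k * wz) / r,   # front-left
        (vx + vy + k * wz) / r,   # front-right
        (vx + vy - k * wz) / r,   # rear-left
        (vx - vy + k * wz) / r,   # rear-right
    ])

# Example: pure sideways motion at 0.3 m/s, only possible with an
# omnidirectional drive such as the MBot's.
print(mecanum_wheel_speeds(0.0, 0.3, 0.0))
```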

Regarding the additional sensors and actuators needed specifically for @Home competitions, a Cyton 1500 Robai arm with 7 DoF was attached to the left side of the body for manipulation, and a microphone was placed on top of the head for voice recognition.

This robot replaced ISR-Cobot for our @Home missions, as it offers more computational power, greater robustness, and aesthetics better suited to our goals.

Objectives

The experimental methodology relies on scientific robot competitions both to push for progress and to evaluate it. In this respect, the considerable effort and progress attained during the second year in terms of robot skills should be highlighted, as recognized by the increasingly better results obtained in these competitions. Topics of focus in 2018:

[Omnidirectional vision-based semantic mapping]

Building on the work developed in the first year, we developed a technique for robust detection using deep reinforcement learning on omnidirectional cameras, together with fundamental algorithms for omnidirectional cameras addressing problems such as self-localization and navigation (we consider the camera geometries most useful for a domestic robot, namely fisheye, catadioptric, and multi-perspective camera systems). The work developed on this topic was published in two conferences. We also focused on algorithms that, given an image bounding box (obtained from a neural network) and a depth map, robustly estimate the object's position in the environment. These algorithms were tested in a real scenario on two main tasks of the RoboCup 2018 challenge for service robots: 1) object detection, recognition, and grasping; and 2) people following in challenging scenarios.
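As an illustration of the bounding-box plus depth-map idea, the following sketch back-projects the centre of a detection box using the median depth inside it, assuming a calibrated pinhole depth camera; this is a simplified stand-in for the project's actual algorithms, and all parameter values in the example are made up.

```python
import numpy as np

def object_position_from_bbox(depth, bbox, fx, fy, cx, cy):
    """Estimate an object's 3D position in the camera frame.

    depth : HxW depth image in metres
    bbox  : (u_min, v_min, u_max, v_max) pixel bounding box from a detector
    fx, fy, cx, cy : pinhole intrinsics of the depth camera

    Uses the median depth inside the box to reject pixels belonging to the
    background or to other objects, then back-projects the box centre.
    """
    u_min, v_min, u_max, v_max = bbox
    roi = depth[v_min:v_max, u_min:u_max]
    valid = roi[np.isfinite(roi) & (roi > 0.0)]
    if valid.size == 0:
        return None                      # no usable depth in the box
    z = float(np.median(valid))          # robust depth estimate
    u = 0.5 * (u_min + u_max)            # box centre in pixel coordinates
    v = 0.5 * (v_min + v_max)
    x = (u - cx) * z / fx                # pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with made-up intrinsics and a synthetic, constant-depth image.
depth = np.full((480, 640), 1.2)
print(object_position_from_bbox(depth, (300, 200, 340, 260),
                                fx=525.0, fy=525.0, cx=319.5, cy=239.5))
```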

[Human-aware planning and execution under uncertainty]

We have integrated into the MBot robot a task planning architecture that can execute commands given to the robot via voice or text. The input audio is converted into text by speech recognition software and then fed to a custom Natural Language Understanding component, which translates it into semantic goals that, once set, trigger the planning and execution pipeline. A planner receives the problem instance together with the domain model and produces a plan, i.e., the sequence of actions the robot needs to execute to accomplish the given task. The last component, the plan executor, receives the plan and iterates over the actions, executing them one at a time using refactored finite state machines that encapsulate robust execution behaviour.
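The sketch below illustrates the last stage of such a pipeline, a plan executor that dispatches each action of a plan to a handler and stops on failure; all action names and handlers are hypothetical placeholders, not the project's actual ROS components.

```python
# Illustrative plan executor: iterates over a plan (a sequence of grounded
# actions) and dispatches each one to a handler that reports success/failure.
from typing import Callable, Dict, List, Tuple

Action = Tuple[str, List[str]]          # e.g. ("move_base", ["kitchen"])

def execute_plan(plan: List[Action],
                 handlers: Dict[str, Callable[..., bool]]) -> bool:
    for name, args in plan:
        handler = handlers.get(name)
        if handler is None:
            print(f"no handler for action '{name}', aborting")
            return False
        print(f"executing {name}({', '.join(args)})")
        if not handler(*args):
            # A real executor would trigger recovery behaviours or
            # replanning here; this sketch simply reports the failure.
            print(f"action '{name}' failed")
            return False
    return True

# Hypothetical handlers standing in for the robot's actual skills.
handlers = {
    "move_base": lambda place: True,
    "grasp":     lambda obj: True,
    "say":       lambda text: True,
}

plan = [("move_base", ["kitchen"]), ("grasp", ["coke_can"]),
        ("say", ["Here is your drink"])]
print("plan succeeded:", execute_plan(plan, handlers))
```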

[Benchmarking methodology]

We have built on the work described in the first year to develop our probabilistic benchmarking approach, which is independent of the metrics used to assess the performance of the subsystems composing a robot system. The approach uses probability theory as a common language to quantify the performance of distinct functionalities of a robot system and their impact on the performance of a task carried out by that system. It can be used to analyse the performance of a task plan from the performances of its composing functionalities, or to (re)plan when a degradation in some functionality is predicted to degrade the performance of the task plan beyond acceptable limits.
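The toy example below illustrates the idea: given measured success probabilities for the functionalities a plan relies on, and assuming independent failures, the plan's predicted success probability is their product and can be checked against an acceptability threshold to trigger replanning. The numbers, the threshold, and the independence assumption are illustrative only, not the project's actual benchmarking model.

```python
from math import prod

# Benchmarked success probabilities of the functionalities a given task
# plan relies on -- illustrative values only.
functionality_success = {
    "navigation": 0.97,
    "object_recognition": 0.90,
    "grasping": 0.85,
    "speech_understanding": 0.92,
}

def plan_success_probability(used, perf):
    """Success probability of a plan that needs every functionality in
    `used` to succeed, assuming independent failures."""
    return prod(perf[f] for f in used)

def needs_replanning(used, perf, threshold=0.6):
    """Flag the plan for replanning when its predicted success probability
    drops below an acceptable limit."""
    return plan_success_probability(used, perf) < threshold

used = ["navigation", "object_recognition", "grasping", "speech_understanding"]
print("P(plan succeeds) =",
      round(plan_success_probability(used, functionality_success), 3))

# Simulate a degradation in grasping performance and re-check the plan.
functionality_success["grasping"] = 0.40
print("replan needed:", needs_replanning(used, functionality_success))
```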

Media

[System integration, evaluation, and dissemination] The experimental part of this project is based on the SocRob@Home team, which has participated in several competitions that both pushed for progress on the integrated architecture and allowed us to evaluate our approach. All tests feature an apartment-like scenario in which an owner requests some task to be accomplished by the robot. The given tasks are diverse and include storing groceries, finding objects in the apartment, actuating remote devices such as blinds or lights, etc. From a research perspective, the main task on which we focused our efforts was the General Purpose Service Robot, where the robot needs to integrate all of its available behaviours: navigation, people following, object recognition, manipulation, speech synthesis, etc. Watch some of the tests take place in the SocRob team playlist.

Publications

 

  • Rute Luz, Guilherme Lawless, Oscar Lima, Rodrigo Ventura. Small Obstacle Detection and Avoidance Using a Depth Camera: a Case Study in RoboCup@Home. In Workshop on Robots for Assisted Living, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 2018.
  • Mithun Kinarullathil, Pedro H. Martins, Carlos Azevedo, Oscar Lima, Guilherme Lawless, Pedro U. Lima, Luís Custódio, Rodrigo Ventura. From User Spoken Commands to Robot Task Plans: a Case Study in RoboCup@Home. In Workshop on Language and Robotics, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 2018.
  • Oscar Lima, Rodrigo Ventura, Iman Awaad. Integrating Classical Planning and Real Robots in Industrial and Service Robotics Domains. In Workshop on Planning and Robotics (PlanRob), International Conference on Automated Planning and Scheduling (ICAPS), Netherlands, 2018. URL: http://users2.isr.tecnico.ulisboa.pt/~yoda/papers/Lima18.pdf
  • João Cartucho, Rodrigo Ventura, Manuela Veloso. Robust Object Recognition Through Symbiotic Deep Learning in Mobile Robots. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 2018.
  • Pedro U. Lima. A Probabilistic Approach to Benchmarking and Performance Evaluation of Robot Systems. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 2018.
  • Pedro Miraldo, Francisco Eiras, Srikumar Ramalingam. Analytical Modeling of Vanishing Points and Curves in Catadioptric Cameras. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, USA, 2018. URL: https://arxiv.org/abs/1804.09460
  • Pedro Miraldo, Tiago Dias, Srikumar Ramalingam. A Minimal Closed-Form Solution for Multi-Perspective Pose Estimation Using Points and Lines. In European Conference on Computer Vision (ECCV), Munich, Germany, 2018. URL: https://arxiv.org/abs/1807.09970
  • José Iglésias, Pedro Miraldo, Rodrigo Ventura. Towards an Omnidirectional Catadioptric RGB-D Camera. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 2016. URL: http://users2.isr.tecnico.ulisboa.pt/~yoda/papers/Iglesias16.pdf
  • Oscar Lima, Rodrigo Ventura. A Case Study on Automatic Parameter Optimization of a Mobile Robot Localization Algorithm. In IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Coimbra, Portugal, 2017. URL: http://users2.isr.tecnico.ulisboa.pt/~yoda/papers/Lima17.pdf
  • G. Pais, T. J. Dias, J. Nascimento, P. Miraldo. OmniDRL: Robust Pedestrian Detection Using Deep Reinforcement Learning on Omnidirectional Cameras. In IEEE International Conference on Robotics and Automation (ICRA), 2019. URL: https://arxiv.org/abs/1903.00676

Contacts

Rodrigo Ventura

rodrigo.ventura (at) isr.tecnico.ulisboa.pt

+351 21 841 8289

Torre Norte, Av. Rovisco Pais 1, 1049-001 Lisboa
