Award Abstract # 1734362
NRI: INT: COLLAB: Robust, Scalable, Distributed Semantic Mapping for Search-and-Rescue and Manufacturing Co-Robots

NSF Org: IIS
Div Of Information & Intelligent Systems
Recipient: RUTGERS, THE STATE UNIVERSITY
Initial Amendment Date: August 3, 2017
Latest Amendment Date: March 7, 2019
Award Number: 1734362
Award Instrument: Standard Grant
Program Manager: James Donlon
jdonlon@nsf.gov
 (703)292-8074
IIS
 Div Of Information & Intelligent Systems
CSE
 Direct For Computer & Info Scie & Enginr
Start Date: September 1, 2017
End Date: August 31, 2020 (Estimated)
Total Intended Award Amount: $426,161.00
Total Awarded Amount to Date: $442,161.00
Funds Obligated to Date: FY 2017 = $426,161.00
FY 2019 = $16,000.00
History of Investigator:
  • Dario Pompili (Principal Investigator)
    pompili@rutgers.edu
Recipient Sponsored Research Office: Rutgers University New Brunswick
3 RUTGERS PLZ
NEW BRUNSWICK
NJ  US  08901-8559
(848)932-0150
Sponsor Congressional District: 12
Primary Place of Performance: Rutgers University New Brunswick
94 Brett Road
Piscataway
NJ  US  08854-3925
Primary Place of Performance Congressional District: 06
Unique Entity Identifier (UEI): M1LVPE5GLSD9
Parent UEI:
NSF Program(s): NRI-National Robotics Initiative
Primary Program Source: 01001718DB NSF RESEARCH & RELATED ACTIVITIES
01001920DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 8086, 9251
Program Element Code(s): 801300
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

The goal of this project is to enable multiple co-robots to map and understand the environment they are in so that they can efficiently collaborate among themselves and with human operators in education, medical assistance, agriculture, and manufacturing applications. The first distinctive characteristic of this project is that the environment will be modeled semantically: it will contain human-interpretable labels (e.g., object category names) in addition to geometric data. This will be achieved through a novel, robust integration of methods from computer vision and robotics, allowing easier communication between robots and humans in the field. The second distinctive characteristic is that the increased computational load due to the addition of human-interpretable information will be handled by judiciously approximating and spreading the computations across the entire network. The methods developed will be evaluated by emulating real-world scenarios in manufacturing and search-and-rescue operations, leading to potential benefits for large segments of society. The project will include opportunities for training students at the high-school, undergraduate, and graduate levels by promoting the development of marketable skills.

The project will advance the state of the art in robust semantic mapping from multiple robots by 1) developing a new optimization framework that can handle large, dynamic, uncertain environments under significant measurement errors, 2) explicitly allowing and studying interactions and information exchanges with humans through a hybrid discrete-continuous extension of the optimization framework, and 3) enabling intelligent use and sharing of the limited computational resources possessed by the network of co-robots as a whole through approximation and balancing of the computations. These developments will be driven by two case studies: a job-shop (small factory) scenario, where robots and fixed cameras are used to track and assist human workers during production and assembly of parts; and a classic search-and-rescue scenario, where operators use a heterogeneous team of robots to quickly assess damage and discover survivors. These two applications, considered together, highlight the limitations of the currently prevalent geometric mapping solutions and will be used as benchmarks for the project's results.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Zachary Serlin, Brandon Sookraj, "Consistent Multi-Robot Object Matching via QuickMatch," International Symposium on Experimental Robotics, 2020. DOI: 10.1007/978-3-030-33950-0_64

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

This project enabled networks of co-robots to collaborate among themselves and with humans to understand their surrounding environment at a semantic level, allowing co-robots to work alongside human operators in a wide range of environments and applications. The project advanced the state of the art in robust semantic mapping from multiple robots by introducing a new framework that handles large, dynamic, and uncertain environments and explicitly supports interactions and information exchanges with humans; the framework also enables intelligent use and sharing of the limited computational resources possessed by the network of co-robots as a whole. These developments were driven by two case studies: a job-shop (small factory) scenario, where robots and fixed cameras track and assist human workers during production and assembly of parts; and a classic search-and-rescue scenario, where operators use a heterogeneous team of robots to quickly assess damage and discover survivors.

In terms of intellectual merit, this project achieved three goals: 1) robust, scalable, semantic mapping of static environments, 2) semantic mapping of dynamic environments with multiple robots, and 3) computation control. By semantic mapping, we mean solutions that use computer-vision algorithms to detect and classify meaningful entities, such as objects and geometric primitives (lines and planes), and incorporate them into a geometric map while respecting semantically meaningful constraints (e.g., objects should rest on top of horizontal planes). The complexity of non-linear measurements from multiple robots and the combinatorial nature of the semantic constraints prevented previous solutions from being robust, flexible, and scalable to large problems. Our work started where previous geometric SLAM algorithms left off and provided novel, robust, and scalable algorithms while characterizing uncertainties and ambiguities in the produced maps.
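As a toy illustration of the idea (not the project's actual solver), a semantic constraint can be folded into a geometric estimate as an extra weighted residual. Here, noisy height measurements of an object are fused with a soft prior that the object rests on a table plane of known height; all numbers and the weight `w_sem` are made up for the sketch.

```python
# Illustrative only: fuse geometric observations with a semantic constraint
# ("object sits on a known horizontal plane") via weighted least squares.
import numpy as np

obs = np.array([0.78, 0.83, 0.74])   # noisy geometric measurements of height z
plane_z, w_sem = 0.80, 4.0           # semantic prior: plane height and weight

# Minimize sum_i (z - obs_i)^2 + w_sem * (z - plane_z)^2; closed form:
z_hat = (obs.sum() + w_sem * plane_z) / (len(obs) + w_sem)
print(round(z_hat, 3))
```

A larger `w_sem` pulls the estimate toward the semantic prior; in a full SLAM system the same residual would appear as one more factor in the optimization.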

Besides supporting MS and PhD students, this project exposed undergraduate students to research and allowed them to contribute to building, debugging, and testing a testbed that consists of three main parts: 1) CPU-based computational units, 2) GPU-based computational units, and 3) robotic platforms. The testbed was used to verify the feasibility of our approach on a single CPU-based computational unit, such as a Raspberry Pi, and then to validate it on a team of Raspberry Pis; we then extended our solution to GPU-based computational units. We carefully profiled the resource utilization, in terms of both memory and processing power, of the different computational units across different locations, input sizes, application deadlines, and conditions such as illumination and background clutter.

Specific achievements accomplished during this project are summarized below:

-- Implementation of an end-to-end object-detection algorithm on the Raspberry Pi. This involved implementing both traditional computer-vision algorithms and convolutional neural network (CNN)-based algorithms, in both exact and approximate versions. Furthermore, a Markov Decision Process (MDP)-based decision framework for adaptive selection of computer-vision algorithms was implemented.

-- Establishing a connection between the Raspberry Pi and the quadcopter using the Robot Operating System (ROS), and running experiments in different locations and conditions to study the performance of the proposed solutions.

-- Implementation of a distributed computing framework to establish communication between multiple quadcopters. This activity involved building a resource-task mapper that decides which tasks of the object-detection algorithm each quadcopter in the team executes, based on the available resources.
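One simple way to realize such a mapper is greedy load balancing; the sketch below is an assumption-laden illustration (stage names, costs, and robot capacities are invented), not the project's implementation. Expensive pipeline stages are placed first, each on the robot with the most free capacity.

```python
# Illustrative resource-task mapper (all names and numbers hypothetical):
# assign pipeline stages to the team member with the most free capacity.

def map_tasks(tasks, robots):
    """tasks: {stage_name: cost}; robots: {robot_name: free_capacity}."""
    assignment = {}
    free = dict(robots)
    # Place the most expensive stages first so they claim the big slots.
    for task, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        fit = [r for r, cap in free.items() if cap >= cost]
        if not fit:
            raise RuntimeError(f"no robot can run {task}")
        best = max(fit, key=lambda r: free[r])   # least loaded robot
        assignment[task] = best
        free[best] -= cost
    return assignment

stages = {"capture": 1, "preprocess": 2, "cnn_inference": 6, "postprocess": 1}
team = {"quad_A": 8, "quad_B": 4}
print(map_tasks(stages, team))
```

Here the heavyweight `cnn_inference` stage lands on the higher-capacity robot, while lighter stages spread across the remaining slack; a real system would also weigh communication cost between stages.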

-- Finally, we extended our analysis to GPUs, whose parallel structure makes them more effective than general-purpose CPUs for algorithms, such as deep learning, that process large blocks of visual data in parallel. We showed that it is possible to achieve a many-fold increase in overall processing speed by running an application in parallel (e.g., by exploiting the inherent parallel structure of the application or by executing a computer-vision algorithm on multiple images in parallel) in combination with approximation.
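The batching and approximation ideas can be illustrated in miniature, with NumPy standing in for a GPU; the function names and the toy "inference" below are assumptions for the sketch, not the project's code. One vectorized call over a stacked batch mirrors batched GPU inference, and the approximate path trades resolution for work.

```python
# NumPy stand-in for batched GPU processing (illustrative only).
import numpy as np

def detect_one(img, kernel):
    # Per-image "inference": a single feature score per image.
    return float((img * kernel).sum())

def detect_batch(batch, kernel):
    # Same computation over the whole stacked batch in one vectorized call,
    # analogous to running inference on many images in parallel.
    return (batch * kernel).sum(axis=(1, 2))

def detect_batch_approx(batch, kernel, stride=2):
    # Approximation: subsample pixels; roughly stride**2 less work per image.
    return (batch[:, ::stride, ::stride] * kernel[::stride, ::stride]).sum(axis=(1, 2))

rng = np.random.default_rng(0)
batch = rng.random((16, 64, 64))
kernel = np.ones((64, 64))

seq = np.array([detect_one(img, kernel) for img in batch])   # one image at a time
par = detect_batch(batch, kernel)                            # whole batch at once
approx = detect_batch_approx(batch, kernel)                  # cheaper, coarser
assert np.allclose(seq, par)  # batching changes cost, not results
```

On real hardware the batched call maps to one fused GPU kernel launch instead of a Python loop, which is where the many-fold speedup comes from; the approximate path stacks a further reduction on top.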


Last Modified: 01/29/2021
Modified by: Dario Pompili
