COVID-19 (coronavirus) cases have spiked across the world. There were more than a million confirmed cases in the US as of May 6, 2020, an increase of more than 25,000 cases from the day before. To slow the spread of COVID-19, the CDC and WHO are encouraging people to practice "social distancing," i.e. staying at least 2 meters away from other people. The goal of social distancing is to slow the outbreak in order to reduce the chance of infection among high-risk populations and to reduce the burden on the health care system. Some epidemiologists predict that the need to enforce social distancing is even greater in densely populated areas, like New York City. In fact, some countries, such as Singapore, have recently adopted new laws to enforce social distancing in public places. Furthermore, many cities are investigating the use of drones to monitor pedestrians, but that approach is limited to large open spaces.
The need for social distancing: The incubation period – the time between infection and symptoms appearing – has been found to be around five days on average for COVID-19, although it can take up to 14 days for symptoms to appear. Furthermore, during this time period people can be asymptomatic carriers. At the moment, it appears that social distancing is the only viable option to stop the spread of this infection. Staying at least two meters away from other people reduces the chances of catching COVID-19.
To address the need for social distancing enforcement, members of the GAMMA Group at the University of Maryland, led by Professors Dinesh Manocha (ECE/CS/UMIACS/ISR/Robotics) and Aniket Bera (UMIACS/CS/Robotics), are developing novel COVID-19 Prevention Robots (CPR) using mobile robots and commodity sensors. The NSF EAGER project will monitor pedestrian movements using cameras and other sensors, automatically check for vital signs to gather reliable data, and investigate techniques for using robots to influence pedestrians to change their social behavior. CPR will automatically monitor pedestrian movements and detect whether pedestrians are maintaining social distances. CPR will also combine prior work in social psychology and behavior modeling to develop new robot-based methods to influence pedestrians' behavior with respect to social distancing.
We are developing novel COVID-19 Prevention Robots (CPR) using mobile robots and commodity sensors to help deal with the challenges arising from COVID-19. Our goals include:
1. Automatically monitor pedestrian movements and detect whether they are maintaining social distances.
2. Use cameras, thermal sensors, and microphones on the robots to automatically check vital signs such as body temperature, respiratory rate, heart rate, and blood pressure. Our goal is to develop methods to gather reliable data with these sensors.
3. Investigate techniques for using robots to influence pedestrians to change their social behavior (i.e. maintain social distances).
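The first goal, detecting social distance violations, reduces to a simple geometric check once pedestrian positions are available. The sketch below assumes a hypothetical upstream detector that projects each pedestrian onto ground-plane coordinates in meters (the detector itself is out of scope here); it then flags every pair closer than the 2-meter threshold.

```python
import math

# The 2-meter social distancing threshold mentioned above.
SOCIAL_DISTANCE_M = 2.0

def find_violations(positions, threshold=SOCIAL_DISTANCE_M):
    """Return index pairs of pedestrians standing closer than `threshold` meters.

    `positions` is a list of (x, y) ground-plane coordinates in meters,
    assumed to come from an upstream pedestrian detector (hypothetical here).
    """
    violations = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            # Euclidean distance between the two pedestrians.
            if math.hypot(dx, dy) < threshold:
                violations.append((i, j))
    return violations

# Three pedestrians: the first two are 1.5 m apart (a violation),
# the third is far from both.
pedestrians = [(0.0, 0.0), (1.5, 0.0), (5.0, 5.0)]
print(find_violations(pedestrians))  # [(0, 1)]
```

In a deployed system the pairwise loop would run per frame on tracked detections; the counts of flagged pairs are exactly the kind of aggregate "social distance violations" statistic the project plans to report.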
We will monitor pedestrian environments in realtime, give feedback to pedestrians who show symptoms, and refer them to get tested. We will also send aggregate data (social distance violations, cough and fever symptoms) to local authorities after de-identifying it. The authorities will then be able to take more direct measures to clean and disinfect the area and/or enforce stronger social distancing.
Privacy Protection: De-Identification of Pedestrians
In the context of video surveillance or live video processing, a significant threat to privacy is facial data, which can be misused. We will take the utmost care in terms of privacy protection. We will use visual image redaction methods for face de-identification. No identifiable information will be stored on our servers or used for any kind of detection. We will only use non-identifiable information, like gaits and gestures, for our detections. Our work will use a realtime algorithm to protect individuals' privacy in video surveillance data by de-identifying faces so that they cannot be reliably recognized. Since our detection algorithms don't require facial information, we will detect facial regions in realtime, and whenever a face is found, it will be blurred with Gaussian kernels before the visual data is sent to our computational models. Similar techniques will be used on gesture and gait data.
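The redaction step described above can be sketched as follows. This is a minimal illustration, not the project's actual pipeline: it assumes face bounding boxes are supplied by some detector (the detector is omitted), and it applies a Gaussian blur to each box using `scipy.ndimage.gaussian_filter` before the frame would be passed on to any downstream model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_regions(frame, boxes, sigma=8):
    """Return a copy of `frame` with each (x, y, w, h) box Gaussian-blurred.

    `boxes` would come from a face detector in the real pipeline; here the
    detector is out of scope, so boxes are passed in directly. `sigma` is
    the Gaussian kernel width in pixels; larger values blur more strongly.
    """
    redacted = frame.copy().astype(float)
    for (x, y, w, h) in boxes:
        region = redacted[y:y + h, x:x + w]
        # Blur spatially but not across color channels (sigma 0 on axis 2),
        # so the facial region becomes unrecognizable.
        redacted[y:y + h, x:x + w] = gaussian_filter(region, sigma=(sigma, sigma, 0))
    return redacted.astype(frame.dtype)

# Synthetic 64x64 RGB frame with a sharp white square standing in for a face.
frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[20:40, 20:40] = 255
out = blur_regions(frame, [(16, 16, 32, 32)])
```

Only the boxed region is altered; pixels outside the detected faces pass through untouched, which preserves the gait and gesture cues the detection algorithms actually use.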