Presentation Title

Indoor Search and Rescue Using Unmanned Aerial Systems

Faculty Mentor

Subodh Bhandari

Start Date

18-11-2017 1:45 PM

End Date

18-11-2017 2:00 PM

Location

9-245

Session

Engineering/CS 2

Type of Presentation

Oral Talk

Subject Area

engineering_computer_science

Abstract

Unmanned Aerial Systems (UASs) can be a cost-effective and efficient means of conducting indoor search and rescue missions. These environments pose dangerous, risky scenarios for rescue personnel. During a natural disaster, UASs can locate and assist victims in need with increased safety and a short response time, without endangering rescuers. However, the lack of GPS signal in indoor environments makes the navigation and use of these systems difficult. A team from Cal Poly Pomona is using two small unmanned aerial systems, one for search and one for rescue, to help mitigate this problem. The search UAS, a quadcopter, uses a front-facing camera for victim detection, a Pixhawk flight controller, and ultrasonic sensors for collision detection. Using computer vision and machine learning, the search quadcopter navigates indoor environments and identifies disaster survivors, then relays this information to the rescue UAS, also a quadcopter, via a ground control station (GCS). The rescue quadcopter then navigates to the victim's location and releases its payload. The use of multiple unmanned aerial systems allows for smaller, lighter, and more agile vehicles and a better distribution of tasks. This presentation will discuss how the UASs fly autonomously within GPS-denied environments while detecting victims using artificial neural networks.

Summary of research results to be presented

Over the summer, three components of the indoor search and rescue using unmanned aerial systems research were completed: a neural network, autonomous control for a quadrotor vehicle, and a method for localization and mapping. For image processing, a convolutional neural network was used. Its input was a live camera video feed, which the network classified into one of five trained image types: faces, hallways with individuals present, and three hallway conditions (wall on the left-hand side, wall on the right-hand side, and centered in a hallway). The network's accuracy was above 90% after testing over 2,000 iterations. Autonomous control of the quadrotor was achieved with a Linux-based off-board control station, which communicated with the quadrotor's flight controller through MAVLink messages sent from a Python script. Although there was no user input for control, the quadrotor retained the capability to move with six degrees of freedom. Lastly, localization and mapping used the Hector SLAM method: a two-dimensional map was constructed using an infrared sensor, while the inertial measurement unit and gyroscope on the flight controller determined the quadrotor's orientation, and the quadrotor was localized within the map generated by Hector SLAM.
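As an illustration of how the five-class network output could feed the off-board Python control script, the sketch below maps a set of class scores to a simple navigation decision. The class names, score format, and yaw-rate values are hypothetical placeholders, not the team's actual implementation; a real system would translate the chosen action into MAVLink setpoint messages to the Pixhawk.

```python
# Hypothetical sketch: mapping the five trained image classes to simple
# navigation decisions. Class names and command values are illustrative
# assumptions, not the actual flight code.

CLASSES = ("face", "hallway_with_people",
           "wall_left", "wall_right", "hallway_center")

def navigation_command(class_scores):
    """Pick the highest-scoring class and return an (action, yaw_rate) pair.

    class_scores: dict mapping each class name to a softmax score in [0, 1].
    """
    label = max(class_scores, key=class_scores.get)
    if label in ("face", "hallway_with_people"):
        # Possible victim detected: hover and report the location to the GCS.
        return ("report_victim", 0.0)
    if label == "wall_left":
        return ("forward", 0.2)    # yaw right, away from the left wall
    if label == "wall_right":
        return ("forward", -0.2)   # yaw left, away from the right wall
    return ("forward", 0.0)        # centered in the hallway: fly straight

print(navigation_command({"face": 0.05, "hallway_with_people": 0.02,
                          "wall_left": 0.88, "wall_right": 0.03,
                          "hallway_center": 0.02}))
# -> ('forward', 0.2)
```

Keeping the perception-to-command mapping in one small function like this makes it easy to replace the rule set with a different policy without touching the MAVLink communication layer.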
