This research theme comprises two research projects: Aerial Sensing using Single and Multiple UAVs, and Underwater Sensing for the Red Sea Exploratorium.
Aerial Sensing using Single and Multiple UAVs
This project aims to develop algorithms and software for collecting 3D image data with unmanned aerial vehicles (UAVs), for the purposes of reconstructing and monitoring urban areas. The project supports the many applications in urban planning that require 3D models of large urban areas or other sites, such as the reconstruction of archeological or historical sites for conservation. It also supports a wide range of applications for monitoring large urban areas containing dynamic objects (e.g., humans), including crowd monitoring and the automatic detection and tracking of nefarious activities (e.g., riots). In many of these applications, static cameras in the scene are infeasible, for example when reconstructing an entire city neighborhood or tall buildings. Likewise, in many monitoring applications it is not feasible to set up static cameras over the whole area: at a concert in a park, for instance, it is impractical to instrument the entire park with cameras, whereas UAVs can monitor only the areas of interest.
With these applications as motivation, this project aims to construct the computer vision and control algorithms that enable urban reconstruction and monitoring from multiple UAVs. The project will focus on developing online computer vision and control algorithms designed for low-cost hardware on inexpensive off-the-shelf UAVs. On the computer vision side, the project develops techniques for 3D reconstruction from moving cameras: automatic algorithms for image collection, simultaneous 3D map construction and localization of the UAV, motion detection of objects relative to the ground reference frame, and activity detection and recognition. The key aspect of this project is to develop these computer vision techniques in an online manner and in a way that can be implemented on inexpensive hardware.
This is in contrast to traditional algorithms, which are designed for batch processing of an entire video. The project will also focus on algorithms for the automatic control of multiple UAVs, and on distributed control techniques that share the relevant visual information extracted by the computer vision algorithms among the UAVs so that an area can be monitored effectively.
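The online/batch distinction above can be illustrated with a toy sketch (hypothetical code, not the project's actual pipeline): a batch reconstructor waits for the full video before building a map, while an online reconstructor folds each frame into a bounded running state as it arrives, which is what makes processing on low-cost onboard hardware feasible.

```python
# Toy illustration of online vs. batch processing of a video stream.
# "Map points" here are just abstract feature IDs; a real SLAM system
# would jointly estimate camera poses and 3D structure.

def batch_reconstruct(frames):
    """Batch: wait until all frames are available, then build the map."""
    map_points = set()
    for frame in frames:
        map_points.update(frame)
    return map_points

class OnlineReconstructor:
    """Online: incorporate each frame as it arrives, keeping only a
    running map, suitable for low-cost onboard hardware."""

    def __init__(self):
        self.map_points = set()
        self.frames_seen = 0

    def process(self, frame):
        # Identify observations not yet in the map and merge them in.
        new_points = set(frame) - self.map_points
        self.map_points.update(new_points)
        self.frames_seen += 1
        return new_points  # could trigger map updates or UAV replanning

# Simulated stream: each frame observes a few "features" (integer IDs).
stream = [{1, 2, 3}, {2, 3, 4}, {4, 5}]

online = OnlineReconstructor()
for frame in stream:
    online.process(frame)
# The online reconstructor ends with the same map as the batch pass,
# but never needed the whole video in memory at once.
```

The design point is that the online variant's per-frame cost and memory footprint do not grow with video length, whereas the batch variant must buffer everything first.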
Underwater Sensing for the Red Sea Exploratorium
Lying between Africa and Asia, the coastal zone of the Red Sea is a unique marine ecosystem that houses one of the most extensive, diverse, and colorful coral reefs in the world. Mapping this vibrant underwater world in 3D and analyzing its biological content (e.g., fish species) are important goals for Saudi Arabia, whose longest oceanic border is with the Red Sea. Given our unique access to the Red Sea, we are also interested in this endeavor from a research point of view, as it poses serious technical problems (e.g., underwater 3D reconstruction and object recognition/detection).
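One reason underwater 3D reconstruction and recognition are hard is wavelength-dependent light attenuation: red light is absorbed much faster than blue, so colors shift with distance. The sketch below illustrates a simple Beer-Lambert attenuation model and its inversion; the coefficients and the known-distance assumption are illustrative only, not measured Red Sea values or the project's method.

```python
import math

# Illustrative Beer-Lambert attenuation model for underwater imaging:
#   observed = surface * exp(-beta * distance)
# with a per-channel attenuation coefficient beta (red attenuates
# fastest). These coefficients are made up for demonstration.
BETA = {"r": 0.6, "g": 0.2, "b": 0.1}  # hypothetical per-meter values

def attenuate(color, distance_m):
    """Simulate color loss of an RGB triple over a water path."""
    return {ch: v * math.exp(-BETA[ch] * distance_m)
            for ch, v in zip("rgb", color)}

def restore(observed, distance_m):
    """Invert the model, assuming the path length is known
    (e.g., estimated from the 3D reconstruction itself)."""
    return {ch: v * math.exp(BETA[ch] * distance_m)
            for ch, v in observed.items()}

surface = (0.8, 0.5, 0.4)        # true RGB reflectance of a surface
seen = attenuate(surface, 5.0)   # what a camera 5 m away would record
fixed = restore(seen, 5.0)       # recovered colors
```

In this toy model the red channel, though brightest at the surface, is the dimmest after 5 m of water, which is why uncorrected underwater imagery looks blue-green and why color correction matters for recognition.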
The task of exploring and monitoring oceanic resources remains expensive and challenging because it requires human divers, who can only explore underwater environments for short periods and at limited depths. While underwater vehicles have proven very useful for safely exploring oceans at greater depths, they lack human dexterity, which is necessary for performing fine manipulation tasks such as sample collection and in-situ experimentation. Furthermore, existing underwater vehicles are large and cumbersome, with mechanical characteristics that make them extremely difficult to operate in confined, fragile spaces or turbulent fluid environments.
For this project, we plan to use a robotic platform dubbed O2, jointly developed by RSRC at KAUST, the AI Lab at Stanford, and Meka Robotics. O2 will play the role of a virtual ocean explorer that is remotely tele-operated and tele-manipulated by a human observer. Much thought has been invested in the design of O2 to enable and facilitate the manipulation of its components while a specific task (e.g., underwater navigation or haptic exploration of a coral reef) is underway. Like a human diver, this underwater robot has two arms, which allow it to explore and map coral reefs while safely placing sensors and collecting samples. O2 has multi-modal sensors, unique manipulation skills, and the ability to gain different lines of sight by repositioning itself. This presents an unprecedented opportunity to obtain the rich visual, auditory, and tactile information required to digitally reconstruct the Red Sea's marine environment. The design of O2 has been finalized, and the final stages of its manufacturing and assembly are in progress.