August 30, 2025
Why drones and AI cannot quickly find missing flood victims

AI is not more accurate than people at search and rescue, but it is much faster.

Recent successes in applying computer vision and machine learning to drone imagery for rapidly assessing building and road damage after hurricanes, or for mapping shifting wildfire lines, suggest that artificial intelligence could be valuable in searching for missing persons after a flood.

Machine learning systems typically take less than one second to scan a high-resolution image from a drone, versus one to three minutes for a person. In addition, drones routinely produce more images than people can possibly review in the critical first hours of a search, when survivors may still be alive.

Unfortunately, today’s AI systems are not up to the task.

We are robotics researchers who study the use of drones in disasters. Our experience searching for victims of flooding and numerous other events shows that current implementations of AI fall short.

However, the technology does have a role to play in searching for flood victims. The key is AI-human collaboration.

Drones have become standard equipment for first responders, but floods pose unique challenges. Eric Smalley, CC BY-ND

AI's potential

Searching for flood victims is a type of wilderness search and rescue that poses unique challenges. The goal for machine learning scientists is to rank the images by the likelihood that they contain signs of victims, and to indicate where in those images search-and-rescue personnel should focus. When a responder sees signs of a victim, they pass the GPS location of that spot in the image to search teams in the field to check.

The ranking is done by a classifier, which is an algorithm that learns to identify similar instances of objects – cats, cars, trees – from training data in order to recognize those objects in new images. In a search-and-rescue context, a classifier would spot instances of human activity, such as debris or backpacks, to pass along to wilderness search-and-rescue teams, or it might even identify the missing person.

A classifier is needed because of the sheer volume of images that drones can produce. For example, a single 20-minute flight can generate over 800 high-resolution images. With 10 flights – a small number – there would be over 8,000 images. If a responder spends only 10 seconds looking at each image, it would take over 22 hours of effort. Even if the task is divided among a group of "squinters", people tend to miss areas of images and suffer cognitive fatigue.
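As a sanity check on the workload arithmetic, a short script using the figures from the article (the 10-second review time is the article's own estimate):

```python
# Back-of-the-envelope review workload, using the numbers from the text.
IMAGES_PER_FLIGHT = 800   # one 20-minute flight
FLIGHTS = 10              # a small search effort
SECONDS_PER_IMAGE = 10    # a responder's quick look at each image

total_images = IMAGES_PER_FLIGHT * FLIGHTS
total_hours = total_images * SECONDS_PER_IMAGE / 3600

print(f"{total_images} images -> {total_hours:.1f} hours of review")
# 8000 images -> 22.2 hours of review
```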

The ideal solution is an AI system that scans every image, prioritizes the images with the strongest signs of victims and highlights the area of each image for a responder to inspect. It could also decide whether a location should be flagged for special attention by search-and-rescue crews.
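That triage workflow can be sketched in code. This is a minimal sketch assuming a classifier that returns scored candidate regions per image; the `classify_regions` function and all scores below are hypothetical placeholders, not a real detector:

```python
# Hypothetical triage: rank images by their strongest candidate region,
# then surface only the images worth a responder's attention.

def classify_regions(image_id):
    """Placeholder for a real detector; returns (region, confidence) pairs."""
    fake_scores = {"img_001": [("r1", 0.91), ("r2", 0.40)],
                   "img_002": [("r1", 0.12)],
                   "img_003": [("r1", 0.77)]}
    return fake_scores.get(image_id, [])

def prioritize(image_ids, threshold=0.5):
    ranked = []
    for img in image_ids:
        regions = [r for r in classify_regions(img) if r[1] >= threshold]
        if regions:
            best = max(score for _, score in regions)
            ranked.append((best, img, regions))
    # Strongest evidence first, so responders inspect those images first.
    return sorted(ranked, reverse=True)

queue = prioritize(["img_001", "img_002", "img_003"])
for score, img, regions in queue:
    print(f"{img}: inspect {[r for r, _ in regions]} (top score {score:.2f})")
```

The threshold is the knob the next section worries about: lowering it catches more victims but floods the queue with false candidates.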

Where AI falls short

While this seems like a perfect opportunity for computer vision and machine learning, modern systems have a high error rate. If a system is programmed to err on the side of flagging more candidate locations in hopes of not missing any victims, it will likely produce too many false candidates. This would overwhelm the squinters or, worse, the search-and-rescue teams, who would have to navigate through debris and muck to check each candidate location.

Developing computer vision and machine learning systems for finding flood victims is difficult for three reasons.

One is that while existing computer vision systems are certainly capable of identifying people who are clearly visible in aerial imagery, flood victims are often obscured, camouflaged, entangled in debris or submerged in water. These visual challenges increase the likelihood that existing classifiers will miss victims.

Second, machine learning requires training data, but there are no datasets of aerial imagery of people entangled in debris, covered in mud and not in normal postures. This lack also increases the likelihood of classification errors.

Third, many of the drone images that searchers capture are taken at oblique angles rather than looking straight down. This means the GPS location of a candidate area is not the same as the GPS position of the drone. It is possible to calculate the candidate's GPS location if the drone's altitude and camera angle are known, but unfortunately those attributes are rarely recorded. The imprecise GPS location means teams have to spend extra time searching.
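The geometry behind that calculation is straightforward when the metadata exists: under a flat-ground assumption, a camera tilted θ degrees off nadir at altitude h sees the image center at a horizontal distance of h·tan(θ). A sketch of projecting that offset to a ground coordinate (the drone pose values are made-up examples, and a real pipeline would also account for terrain and lens geometry):

```python
import math

def ground_point(lat, lon, altitude_m, pitch_off_nadir_deg, heading_deg):
    """Project the image-center ground point from drone pose (flat-ground model).

    pitch_off_nadir_deg: 0 means the camera looks straight down.
    heading_deg: compass direction the camera faces (0 = north).
    """
    EARTH_RADIUS_M = 6_371_000
    dist = altitude_m * math.tan(math.radians(pitch_off_nadir_deg))
    # Convert the metric offset to degrees of latitude/longitude.
    dlat_rad = (dist * math.cos(math.radians(heading_deg))) / EARTH_RADIUS_M
    dlon_rad = (dist * math.sin(math.radians(heading_deg))) / (
        EARTH_RADIUS_M * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat_rad), lon + math.degrees(dlon_rad), dist

# Example: drone 100 m up, camera 30 degrees off nadir, facing due north.
lat, lon, dist = ground_point(30.0, -96.0, 100.0, 30.0, 0.0)
print(f"ground point ~{dist:.0f} m north of the drone")  # ~58 m
```

Without the altitude and gimbal angle, that ~58-meter offset is exactly the error teams inherit when they treat the drone's own GPS fix as the candidate location.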

How AI can help

Fortunately, with humans and AI working together, search-and-rescue teams can successfully use existing systems to help narrow down and prioritize images for further inspection.

In a flood, human remains can become entangled in vegetation and debris. A system could therefore identify clumps of debris big enough to contain remains. A common search strategy is to identify the GPS locations where flotsam has collected, because victims may be part of those same deposits.

A machine learning algorithm identified piles of debris large enough to conceal a body in an aerial view of a flooded area. Center for Robot-Assisted Search and Rescue and University of Maryland

An AI classifier could find debris that is often associated with remains, such as unnatural colors and construction debris with straight lines or 90-degree corners. Responders find these signs as they systematically walk the riverbanks and flooded areas, but a classifier could help prioritize areas in the first hours and days, when survivors may still be alive, and later ensure that teams have not missed anything as they cover the difficult landscape on foot.
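A crude illustration of the "unnatural colors" cue: in a flood scene dominated by browns, grays and greens, strongly saturated pixels can flag man-made material such as tarps or clothing. This is a toy sketch on a synthetic four-pixel scene, not a field-tested method:

```python
def unnatural_color_fraction(pixels, sat_threshold=0.5):
    """Fraction of pixels that are strongly saturated, a crude proxy for
    man-made color. pixels: list of (r, g, b) floats in [0, 1]. Mud, water
    and vegetation are low-saturation; bright tarps and clothing are not."""
    flagged = 0
    for r, g, b in pixels:
        mx, mn = max(r, g, b), min(r, g, b)
        saturation = (mx - mn) / mx if mx > 0 else 0.0
        if saturation > sat_threshold:
            flagged += 1
    return flagged / len(pixels)

# Synthetic "scene": three muddy-brown pixels and one bright-blue tarp pixel.
scene = [(0.45, 0.40, 0.35), (0.40, 0.38, 0.30),
         (0.50, 0.45, 0.40), (0.10, 0.20, 0.95)]
score = unnatural_color_fraction(scene)
print(f"{score:.2f} of pixels look artificial")  # 0.25
```

A real system would pair simple color cues like this with learned detectors for straight lines and right angles, which rarely occur in natural flood debris.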

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Robin R. Murphy, Texas A&M University, and Thomas Manzini, Texas A&M University.

Robin R. Murphy receives funding from the National Science Foundation. She is affiliated with the Center for Robot-Assisted Search and Rescue.

Thomas Manzini is affiliated with the Center for Robot Assisted Search & Rescue (CRASAR), and his work is funded by the National Science Foundation's AI Institute for Societal Decision Making (AI-SDM).
