Air Force Secretary Gets in Cockpit of Self-Flying Fighter Plane
May 6, 2024

The X-62A VISTA, a modified F-16 testbed aircraft, is helping the Air Force explore artificial intelligence applications in combat aircraft.

U.S. Air Force Secretary Frank Kendall is putting his money where his mouth is.

Last week, Kendall got in the cockpit of a self-flying fighter plane during a historic flight at Edwards Air Force Base (KEDW) in California. The aircraft—called the X-62A Variable In-flight Simulator Test Aircraft, or VISTA for short—is a modified F-16 testbed and represents the Air Force’s first foray into aircraft flown entirely by machine learning AI models.

As Kendall and a safety pilot observed, the X-62A completed “a variety of tactical maneuvers utilizing live agents” during a series of test runs. Incredibly, the aircraft was able to simulate aerial dogfighting in real time, without Kendall or the safety pilot ever touching the controls. According to the Associated Press, VISTA flew at more than 550 mph and within 1,000 feet of its opponent—a crewed F-16—during the hourlong simulated battle.

“Before the flight, there was no shortage of questions from teammates and family about flying in this aircraft,” Kendall said. “For me, there was no apprehension, even as the X-62 began to maneuver aggressively against the hostile fighter aircraft.”

It wasn’t VISTA’s first rodeo. In September, the Air Force for the first time flew the uncrewed aircraft in a simulated dogfight versus a piloted F-16 at the Air Force Test Pilot School at Edwards. The department said autonomous demonstrations are continuing at the base through 2024. But Kendall’s decision to get into the cockpit himself represents a new vote of confidence from Air Force leadership.

“The potential for autonomous air-to-air combat has been imaginable for decades, but the reality has remained a distant dream up until now,” said Kendall. “In 2023, the X-62A broke one of the most significant barriers in combat aviation. This is a transformational moment, all made possible by breakthrough accomplishments of the ACE team.”

ACE stands for Air Combat Evolution, a Defense Advanced Research Projects Agency (DARPA) program that seeks to team human pilots with AI and machine-learning systems. The Air Force, an ACE participant, believes the technology could complement or supplement pilots even in complex and potentially dangerous scenarios—such as close-quarters dogfighting.

“AI is really taking the most capable technology you have, putting it together, and using it on problems that previously had to be solved through human decision-making,” said Kendall. “It’s automation of those decisions and it’s very specific.”

ACE developed VISTA in 2020, imbuing it with the unique ability to simulate another aircraft’s flying characteristics. The aircraft received an upgrade in 2022, turning it into a test vehicle for the Air Force’s AI experiments. 

VISTA uses machine learning-based AI agents to test maneuvers and capabilities in real time. These contrast with the heuristic or rules-based AI systems seen on many commercial and military aircraft, which are designed to be predictable and repeatable. Machine learning AI systems, despite being less predictable, are more adept at analyzing complex scenarios on the fly.
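
To make the contrast concrete, here is a minimal sketch, in Python, of the two philosophies. Nothing here is VISTA’s actual software; the function names, state encoding, and thresholds are all illustrative assumptions.

```python
# A minimal sketch contrasting the two control philosophies described above.

def rules_based_advisory(range_ft: float, closing: bool) -> str:
    """Deterministic logic: the same inputs always yield the same advisory,
    which is what makes rules-based systems predictable and repeatable."""
    if closing and range_ft < 1_000:
        return "CLIMB"
    return "MAINTAIN"

class LearnedPolicy:
    """A trained model maps a state vector to a maneuver. Its behavior is
    shaped by training data rather than hand-written rules, so it is harder
    to predict but can generalize to scenarios no rule anticipated."""

    def __init__(self, weights: list[float]):
        self.weights = weights  # produced offline by training, not by hand

    def advise(self, state: list[float]) -> str:
        score = sum(w * x for w, x in zip(self.weights, state))
        return "CLIMB" if score > 0 else "MAINTAIN"

print(rules_based_advisory(800, True))            # -> CLIMB, every time
print(LearnedPolicy([0.4, -1.2]).advise([3, 1]))  # depends on learned weights
```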

“Think of a simulator laboratory that you would have at a research facility,” said Bill Gray, chief test pilot at the Test Pilot School, which leads program management for VISTA. “We have taken that entire simulator laboratory and crammed it into an F-16, and that is VISTA.”

Using machine learning, VISTA picks up on maneuvers in a simulator before applying them to the real world, repeating the process to train itself. DARPA called the aircraft’s first human-AI dogfight in September “a fundamental paradigm shift,” likening it to the inception of AI computers that can defeat human opponents in a game of chess.
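
That rehearse-update-fly cycle can be sketched as a toy loop. The `Simulator` class and the skill-update rule below are stand-ins chosen for illustration, not DARPA’s or the Air Force’s interfaces.

```python
import random

class Simulator:
    """Toy stand-in environment: a rehearsed maneuver succeeds with
    probability equal to the agent's current skill level."""

    def attempt(self, skill: float) -> bool:
        return random.random() < skill

def train_in_sim(skill: float = 0.2, rounds: int = 200) -> float:
    """Rehearse repeatedly in simulation, nudging the policy after each
    failure; the improved policy is then flown on the real aircraft, and
    flight data feeds the next round of simulator training."""
    sim = Simulator()
    for _ in range(rounds):
        if not sim.attempt(skill):
            skill = min(1.0, skill + 0.01)  # learn from the failed rehearsal
    return skill

print(f"skill after training: {train_in_sim():.2f}")
```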

Since that first dogfight, VISTA has completed a few dozen similar demonstrations, advancing to the point that it can now defeat human pilots in air combat. The technology is not yet ready for real combat. But the Air Force-led Collaborative Combat Aircraft (CCA) and Next Generation Air Dominance programs are developing thousands of uncrewed aircraft for that purpose, the first of which may be operational by 2028.

The goal of these initiatives is to reduce costs and take humans out of situations where AI could perform equally well. Some aircraft may even be commanded by crewed fighter jets. The self-flying systems could serve hundreds of different purposes, according to Kendall.

Even within ACE, dogfighting is viewed as only one use case. The idea is that if AI can successfully operate in one of the most dangerous settings in combat, human pilots could trust it to handle other, less dangerous maneuvers. Related U.S. military projects, such as the recently announced Replicator initiative, are exploring AI applications in other aircraft, like drones.

However, autonomous weapons, such as AI-controlled combat aircraft, have raised concerns from various nations, scientists, and humanitarian groups. Even the U.S. Army itself acknowledged the risks of the technology in a 2017 report published in the Army University Press.

“Autonomous weapons systems will find it very hard to determine who is a civilian and who is a combatant, which is difficult even for humans,” researchers wrote. “Allowing AI to make decisions about targeting will most likely result in civilian casualties and unacceptable collateral damage.”

The report further raised concerns about accountability for AI-determined strikes, pointing out that it would be difficult for observers to assign blame to a single human.

The Air Force has countered that AI-controlled aircraft will always have at least some level of human oversight. It also argues that developing the technology is necessary to keep pace with rival militaries designing similar systems, which could be devastating to U.S. airmen.

Notably, China too is developing AI-controlled fighter jets. In March 2023, Chinese military researchers reportedly conducted their own human-AI dogfight, but the human-controlled aircraft was piloted remotely from the ground.

Leading U.S. defense officials in recent years have sounded the alarm on China’s People’s Liberation Army’s growing capabilities, characterizing it as the U.S. military’s biggest “pacing challenge.” The country’s AI flight capabilities are thought to be behind those of the U.S. But fears persist that it may soon catch up.

“In the not too distant future, there will be two types of Air Forces—those who incorporate this technology into their aircraft and those who do not and fall victim to those who do,” said Kendall. “We are in a race—we must keep running, and I am confident we will do so.”

Leonardo Tests AI-Enabled Visual Awareness System for Rotorcraft
February 26, 2024

The manufacturer collaborates with artificial intelligence provider Daedalean on a yearlong trial using its SW4 and SW4 Solo RUAS/OPH helicopters.

Leonardo, one of the world’s largest manufacturers of rotorcraft, believes artificial intelligence (AI) could be a game-changer for civil aviation.

The Italian firm and Daedalean, a Swiss developer of AI systems for situational awareness and flight control, on Monday announced they completed a flight test campaign that evaluated AI capabilities for advanced navigation of rotorcraft.

Daedalean’s AI-enabled visual awareness system was installed on Leonardo’s SW4 and SW4 Solo RUAS/OPH (Rotorcraft Unmanned Air System/Optionally Piloted Helicopter) models, which flew out of the manufacturer’s PZL-Świdnik facility in Świdnik, near Lublin, Poland. PZL-Świdnik is the largest helicopter production plant in the country.

Daedalean claims its PilotEye visual traffic detection system will be the first AI-based cockpit application to be certified for civil aviation. The company is looking to serve general aviation, commercial air transport, urban air mobility (UAM), and uncrewed air vehicles (UAVs).

The Swiss company will certify its neural network-based system with both the FAA and the European Union Aviation Safety Agency (EASA). To get the ball rolling, the FAA has released an Issue Paper for the technology, while EASA has issued a Certification Review Item.

[Video: Daedalean]

“Leonardo is working towards prudently integrating AI in its products and services through both in-house developments and cooperations,” said Mattia Cavanna, head of technology and innovation at Leonardo Helicopters. “By collaborating with emerging companies on predefined use cases, we keep maturing our technology road maps towards a safer, affordable, and sustainable flight experience. Improving situational awareness…could contribute to further prevent aviation accidents and progressively enable higher degrees of autonomy to our platform.”

Daedalean and Leonardo collaborated on the yearlong project under a grant from Eureka Eurostars, the largest international funding program for small- and medium-sized enterprises (SMEs) looking to partner on R&D projects.

The partners equipped Leonardo helicopters with Daedalean’s aircraft-mounted cameras, computer, and interface display. Testing ran from July to September 2023, and based on its analysis of the flights, Leonardo said the campaign delivered “outstanding results.”

An enclosed Daedalean camera is mounted on Leonardo’s SW4 helicopter. [Courtesy: Daedalean]

“Daedalean is proud to bring our experience creating machine-learned algorithms for aviation to such a prominent player in the world of aviation,” said Luuk van Dijk, CEO of Daedalean. “It shows there is growing interest in and understanding of the benefits machine learning can bring today to increase flight safety.”

Daedalean provides what it calls “situational intelligence,” or the ability for an aircraft to understand its environment and anticipate and react to potential threats. Its visual awareness system uses machine learning to quickly and effectively perform tasks the company said previously could only be done by humans.

The company’s PilotEye solution can identify aerial traffic—including ADS-B-equipped aircraft as well as “non-cooperative traffic” such as birds or drones—determine an aircraft’s location in GPS-denied environments, and even offer landing guidance.
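
As a rough sketch of what fusing those two categories of traffic might involve, the snippet below treats a camera detection as non-cooperative unless it lines up with an ADS-B report on roughly the same bearing. The field names, data shapes, and threshold are assumptions, not Daedalean’s API.

```python
from dataclasses import dataclass

@dataclass
class Track:
    bearing_deg: float
    distance_m: float
    cooperative: bool  # True if backed by an ADS-B report

def merge_traffic(adsb_reports, camera_detections, max_sep_deg=2.0):
    """Keep all ADS-B tracks; add a camera detection as non-cooperative
    traffic (a bird, drone, or non-transponding aircraft) only when no
    ADS-B track sits on roughly the same bearing."""
    tracks = [Track(r["bearing"], r["distance"], True) for r in adsb_reports]
    for det in camera_detections:
        matched = any(abs(t.bearing_deg - det["bearing"]) < max_sep_deg
                      for t in tracks)
        if not matched:
            tracks.append(Track(det["bearing"], det["distance"], False))
    return tracks

print(merge_traffic([{"bearing": 10, "distance": 1500}],
                    [{"bearing": 10.5, "distance": 1400},   # same target
                     {"bearing": 45, "distance": 300}]))    # non-cooperative
```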

PilotEye represents a joint project between Daedalean and Avidyne Corp., a provider of integrated avionics systems, flight displays, and safety systems for GA and business aircraft. The system works with Avidyne’s Skytrax Traffic Advisory System and integrates into the IFD5XX flight display series.

Daedalean is also collaborating with Xwing, a fellow developer of automated flight systems, to harmonize their approaches to certification and speed the approval of both companies’ technology. Further, it has conducted joint research with the FAA, EASA, and other regulators to demonstrate that its system can be certified under stringent safety standards.

“Daedalean published multiple studies with regulators to evidence the fact that our machine-learned algorithms are capable of providing functions meeting and exceeding human capabilities,” van Dijk said. “As aviators and passengers become more familiar with AI-enabled systems, a future with autonomous flight becomes more attractive for the higher safety, lower cost, and increased capacity it will deliver.”

Leonardo, meanwhile, said it is “well positioned” to possibly retrofit its line of aircraft with Daedalean tech and is eyeing integration on future models.

The manufacturer has a network of research and development laboratories called Leonardo Labs, which serve as technology hubs connecting university talent with company experts. The sites are intended to drive innovation, uncover practical applications, and research areas such as materials and quantum technologies, sustainability, and applied artificial intelligence.

Other Leonardo projects under development include fuel-reducing technology, a tiltrotor airframe design, and a search and rescue helicopter.

Autonomous Flight Leaders Join Forces in Bid to Speed Certification
November 29, 2023

Xwing and Daedalean—which both produce automated systems for the cockpit—will collaborate on the development of certification standards.

Two of the leading companies looking to bring autonomy to the cockpit are joining forces.

On Wednesday, San Francisco-based Xwing partnered with Swiss firm Daedalean in a bid to accelerate both companies’ path to market. The intelligent systems developers agreed to share data, knowledge, and processes around artificial intelligence and machine learning as a way to harmonize their approaches to certification.

Xwing makes modular systems designed to integrate with a wide variety of aircraft serving use cases from logistics to aerial firefighting. The firm works with aircraft operators, manufacturers, and government and defense customers to enable ground-supervised flights without a pilot onboard. In April, the company’s Superpilot unmanned aircraft system (UAS) became the first standard category large UAS to receive official FAA project designation.

“At Xwing, we balance our commitment to a strong safety culture with our push for technical innovation,” said Maxime Gariel, president and chief technology officer of Xwing. “Our collaboration with Daedalean underscores this philosophy and the importance we place on sharing data, knowledge, and processes to inform a credible path forward toward certification for the industry as a whole as we work closely with regulators.”

Daedalean, similarly, offers machine learning-based avionics systems for civil aircraft. Through a collaboration with Avidyne, it’s working to certify and bring to market the first such system for general aviation: PilotEye, a solution that visually detects non-cooperative traffic. It too has a relationship with the FAA, having published a joint report with the regulator in 2022.

“In this emerging industry, it’s as crucial to collaborate with fellow pioneers as it is to partner with regulators around the world,” said Luuk van Dijk, co-founder and CEO of Daedalean. “With this shared undertaking, we will be able to demonstrate that increasing safety is driving innovation and that a collaborative approach to harmonize regulations and standards ensures that best practices are universally adopted.”

Both companies are “working closely” with the FAA and European Union Aviation Safety Agency (EASA) to certify their machine learning-based safety-critical systems, a category of technology that so far has not appeared on any civil aircraft. The move to autonomy will require a shift in the way regulators certify hardware and software for the cockpit.

Accordingly, Xwing and Daedalean agreed that creating consensus on their design assurance approaches—via information sharing—is the best way to speed the development of certification guidelines. The partners also believe their collaboration will deliver safer standards than if they worked separately.

Each company has released a blueprint of its approach to certification, which it hopes will guide regulators as they work to establish an acceptable means of compliance.

Xwing’s Formal and Practical Elements for the Certification of Machine Learning Systems, for example, attempts to outline a model-agnostic, tool-independent framework that could apply to any use case.

Similarly, Daedalean’s Concepts of Design Assurance for Neural Networks, published jointly with EASA, looks to set industry-wide guidance on developing machine learning systems. Already, EASA has used its findings to draft the first usable guidance for Level 1 machine learning applications.

Although those two frameworks were developed independently, Xwing and Daedalean concurred that sharing their expertise will lead to higher levels of safety, sooner.

In addition to its relationship with the FAA, Xwing holds a contract from AFWERX, the innovation arm of the U.S. Air Force, to trial its Superpilot system aboard a crewed Cessna 208 Caravan. Pilots will offer feedback on its usability. The company was also contracted by NASA to build an autonomous flight safety management system.

Simultaneously, Daedalean is working to ensure its PilotEye technology complies with Aerospace Recommended Practice, DO-178C, and field-programmable gate array standards. Partner Avidyne, meanwhile, has applied for a supplemental type certificate for PilotEye with the FAA, with concurrent validation from EASA.

How Will Self-Flying Aircraft Make Ethical Choices?
February 28, 2022

Experts weigh in on how electric vertical takeoff and landing (eVTOL) air taxis will fly autonomously and how aircraft controlled by artificial intelligence/machine learning systems might respond during life-and-death situations.

It’s no secret that many companies developing electric vertical takeoff and landing (eVTOL) air taxis eventually plan to fly their fleets autonomously—with no pilot on board to guide aeronautical decision making (ADM). Experts believe the machines that will pilot these aircraft will eventually be capable of learning and making difficult ethical choices previously reserved for humans.

For example, let’s imagine the year is 2040. You’re a passenger in a small, autonomous, battery-powered air taxi with no pilot flying about 3,000 feet over Los Angeles at 125 mph. Air traffic is crowded with hundreds of other small, electric aircraft, flying and electronically coordinating with each other, allowing very little separation. Suddenly, an alarm goes off, warning about another passenger aircraft on a collision course. But amid heavy traffic, there are no safe options to change course and avoid the oncoming aircraft. In this scenario, how would a machine pilot know what action to take next? 

For engineers and ethicists, this scenario is commonly known as a “trolley problem,” an ethics thought experiment involving a fictional runaway trolley, with no brakes, heading toward a switch on the tracks. On one track, beyond the switch, are five people who will be killed unless the trolley changes tracks. On the switched track is one person who would be killed. You must decide whether to pull a lever and switch tracks. Should you stay the course and contribute to the deaths of five people or switch tracks, resulting in the death of just one? 

The analogy is often used in teaching ADM. Someday, aircraft controlled by artificial intelligence/machine learning (AI/ML) systems may face a similar quandary. 

“I don’t want to trivialize this problem away, but we should not have a system that has to choose, every time it lands, who to kill,” says Luuk van Dijk, founder and CEO of Daedalean AG, a Swiss company developing flight control systems for autonomous eVTOLs.

Daedalean AI is developing a sensor-based detect-and-avoid flight control system for self-flying eVTOLs. [Courtesy: Daedalean AI]

Van Dijk says even human pilots facing trolley problems rarely have the luxury of making a balanced choice between two bad options. 

“I think we can design systems that can deal with that kind of variation in the environment and figure out on the spot, even before the emergency happens,” says van Dijk, whose resume includes software engineering positions at Google and SpaceX. 

“This is another advantage of the machine. It can have a plan ready if something were to happen now, rather than figuring out that there is something wrong and then going through a checklist and coming up with options,” he says. “If you want [machines] to truly fly like humans fly, you have to make robots that can deal with uncertainty in the environment.”

Currently, software developers and aeronautical engineers have gotten very good at programming machines to pilot aircraft so they can safely and reliably complete repetitive tasks within predetermined parameters. 

But as the technology stands now, these machines still can’t be creative. They can’t come up with instant, creative solutions to unanticipated problems that may suddenly threaten aircraft and passengers. They can’t teach themselves, based on previous flight experience, how to escape extremely unfortunate and unpredictable situations.

That’s the promise of AI/ML—often called the holy grail of automated aviation.

Daedalean’s system is designed to execute six autonomous eVTOL capabilities during the three phases of flight. [Courtesy: Daedalean AI] 

Learning Machines in the Left Seat

So, what are AI/ML systems, exactly? According to the European Union Aviation Safety Agency’s (EASA) Artificial Intelligence Roadmap, they use “data to train algorithms to improve their performance.” Ideally, EASA says, they would bring a computer’s “learning capability a bit closer to the function of a human brain.” 

Van Dijk says the term artificial intelligence is “really very badly defined. It’s a marketing term. It’s everything we don’t quite know how to do yet. So by definition, the things we don’t quite know how to do yet are uncertifiable because we don’t even know what they are. When people talk about artificial intelligence, what they mostly mean is machine learning.”

“The systems that we do—and what you might call artificial intelligence—use a class of techniques that are based on statistics and try to handle this amount of uncertainty.”

Statistical AI is data-driven. In the case of aviation, the machine pilot would control the aircraft based on a constant stream of data, gathered by multiple inputs from onboard cameras, radar, Lidar, and other sensor equipment—combined with live, real-time data from a central air traffic control system. The machine pilot also might be supported by a massive GPS database of existing buildings and infrastructure on the ground, relevant to a predetermined flight path. 

The first iterations of these automated pilot systems would rely on sophisticated detect-and-avoid (DAA) systems. The machine pilot would monitor sensor data that detects potential obstacles and quickly command the aircraft’s flight control systems to change course when needed to avoid the obstacles. Despite award-winning innovations such as Garmin’s Autoland, the aviation industry is still a long way from truly automated aircraft that can anticipate and then react safely and responsibly to infinite numbers of possible combinations of unpredictable events. Remember, a human still has to turn Autoland on.
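
One cycle of such a detect-and-avoid loop can be sketched as below. The dictionary fields, 500-meter protection radius, and 30-degree turn are illustrative assumptions, far simpler than anything a certifiable DAA system would do.

```python
def detect_and_avoid_step(obstacles, heading_deg):
    """One DAA cycle: scan fused sensor tracks, and if anything is inside
    the protection volume, command a turn away from the nearest threat;
    otherwise hold the current course."""
    threats = [o for o in obstacles if o["distance_m"] < 500]
    if not threats:
        return heading_deg
    nearest = min(threats, key=lambda o: o["distance_m"])
    turn = -30 if nearest["bearing_deg"] > 0 else 30  # turn away from threat
    return (heading_deg + turn) % 360

# Obstacle 10 degrees right of the nose at 400 m: command a 30-degree left turn.
print(detect_and_avoid_step([{"distance_m": 400, "bearing_deg": 10}], 90))  # 60
```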

California-based Wisk Aero intends to manufacture and operate autonomous, two-passenger air taxis. [Courtesy: Wisk Aero]

‘Mysterious Neural Net’

Early automated eVTOLs will not include a “mysterious, neural net—a complex brain in a black box that tries to watch other aircraft and learn from them,” says Wisk Aero’s head of autonomy, Jonathan Lovegren. Wisk’s piloting system will be based on “a foundational, deterministic, rule-based approach to flight—akin to a Boeing 737 that’s on autopilot for most of the flight. It’s largely the same process and approach as developing and certifying an autopilot system.” Keep in mind, though, that a 737—or any commercial airliner with autopilot—has two human pilots onboard who can apply ADM to offset any unwanted actions by the autopilot.

Based in Mountain View, California, Wisk Aero was started by Google co-founder Larry Page. The privately held company has been flight testing eVTOLs for years and recently received a $450 million investment from Boeing. Its fifth-generation, automated, two-passenger air taxi named Cora has been flying since 2018. The company has been quiet about its expected timeframe for certification and entering service, saying it’s focusing on safety first. 

Lovegren and his colleagues are including FAA pilot training standards as guidelines for programming Wisk’s air taxis, “to make sure there’s no gap between all the functions performed by a pilot and in the system that we’re building. It’s actually much more foundational and rule based … fundamental aerospace engineering, using math.”

Although Wisk calls its aircraft design “autonomous,” it can’t be truly autonomous until it can apply, to each flight, past learning drawn from disparate sources outside that flight. That will be a big challenge to solve.

Nonetheless, it is “very highly automated,” Lovegren says. It has to make some decisions on its own, and a large engineering effort is required to determine what those responses are. “It’s not like it’s making decisions and nobody knows what it’s going to do.”

What about ethical piloting decisions? Wisk automated air taxis will always have “an operator who is going to be on the ground,” Lovegren says. The human component of Wisk air taxis will be “much more like an air traffic control type of interface,” he says. “The aircraft can make immediate decisions in the name of safety—dealing with failures and whatnot—but largely, there’s a person in the loop monitoring the flight and directing where it’s going.”

Wisk is applying a large foundation of existing DAA technology to its flight control system, Lovegren says, such as TCAS II and the upcoming ACAS. He sees both as stepping stones that are leading to automated responses to DAA problems.

‘Major Ethical Questions’

Aviation regulators have begun making plans to integrate AI/ML into commercial aviation in the next few decades, although that’s no guarantee it will become a reality. 

The FAA has completed an AI/ML research plan that will help the agency begin to map out the certification process. Similar efforts are underway in Europe, where EASA is projecting commercial air transport operations will be using AI/ML as soon as 2035. 

In its “Artificial Intelligence Roadmap,” EASA acknowledges that “more than any technological fundamental evolutions so far, AI raises major ethical questions.” Creating an approach to handling AI “is central to strengthening citizens’ trust.” 

EASA says AI will never be trustworthy unless it is “developed and used in a way that respects widely shared ethical values.” As a result, EASA says “ethical guidelines are needed for aviation to move forward with AI.” These guidelines, EASA says, should build on the existing aviation regulatory framework. 

Ethical Guidelines

But where are these “ethical guidelines” and who would write them? 

Well, it just so happens that an outfit calling itself the High-Level Expert Group on Artificial Intelligence wrote a report for the European Commission called “Ethics Guidelines for Trustworthy AI.”

The non-binding report says its guidelines “seek to make ethics a core pillar for developing a unique approach to AI” because “the use of AI systems in our society raises several ethical challenges” including “decision-making capabilities and safety.” 

It calls for “a high level of accuracy,” which is “especially crucial in situations where the AI system directly affects human lives.”

What if my AI pilot screws up? In that case, the report suggests systems “should have safeguards that enable a fallback plan,” including asking “for a human operator before continuing their action.” “It must be ensured that the system will do what it is supposed to do without harming living beings or the environment,” the report says.

Who should be responsible for AI systems that fail? According to the report, “companies are responsible for identifying the impact of their AI systems from the very start.” It suggests ways to ensure AI systems have been “tested and developed with security and safety considerations in mind.” The report also proposes the creation of ethics review boards inside companies that develop AI systems to discuss accountability and ethics practices.

The report also says AI systems should be “developed with a preventative approach to risks…minimizing unintentional and unexpected harm, and preventing unacceptable harm.”

EVTOLs outfitted with sophisticated external sensors would gather data on potential obstacles and send it to a visual cortex which interprets it based on image-recognizing computer algorithms and neural networks. [Courtesy: Daedalean AI]

Why Is the eVTOL Industry Focusing on Automation?

The short answer: affordability and safety. Proponents say autonomous flight will allow eVTOL airlines to scale up more quickly and effectively while driving down fares, ideally making eVTOL transportation available to more people.

Putting machines in the left seat, so to speak, will make eVTOL flights safer, according to Wisk and others, who say automated flight reduces the potential for human error.

Statistics blame most aviation accidents on pilot error. Wisk and other eVTOL developers say autonomous systems can dramatically reduce human errors by creating predictable, consistent outcomes with every flight. What remains to be seen is to what degree machine error would replace human error. Countless pilots have experienced automation that executes an unexpected command—or outright fails.

Self-Flying Aircraft vs. Self-Driving Cars

Engineers working on self-flying aircraft agree that their task would be even more challenging if they were instead trying to perfect self-driving cars.

“I do not envy the people who are working on self-driving cars,” Lovegren says. “It’s much easier in a sense to operate in the airspace than it is to operate on the ground when you could have a kid running out with a soccer ball in the middle of the street. How do you deal with that problem?” In the air, “you’re operating with professionals, largely. I think it makes the scope of the autonomy challenge much more manageable, certainly in aviation.”

Van Dijk agrees: “Driving is much harder.” It would be much more difficult to build a system that has to read road signs and be able to understand the difference between a rock and a dog and a pedestrian, he says. In a car, you have limited options for changing course. “The situation where you have to choose who to kill arises more naturally in a driving situation than in the air,” van Dijk says. “In the air, the system can be taught to avoid anything you can see, unless you’re really sure you want to land on it.”

‘Ethical Ordering’

Computer programming engineers often talk about how a “rational agent” must be part of an autonomous aircraft’s decision-making process. The aircraft must be smart enough to know when to turn off its autopilot and command the aircraft in the safest, most ethical way possible. 

To do that, a so-called “ethical ordering” of high-level priorities must be programmed into the system, according to University of Manchester computer science professor Michael Fisher. 

Fisher offers a very simple example involving a malfunctioning autonomous aircraft with limited flight controls. Ideally, the aircraft’s rational agent must have the wherewithal to realize it must disengage autopilot and make an immediate emergency landing. In Fisher’s example, the rational agent controlling the aircraft has three choices for landing locations: 

  • a parking lot next to a school full of children
  • a field full of animals
  • an empty road

This scenario would trigger the rational agent to refer to a pre-programmed ethical ordering of high-level priorities in this order:

  1. Save human life
  2. Save animal life
  3. Save property

As a result, the autonomous aircraft would attempt an emergency landing on the empty road. Of course, this is an overly simple scenario. Any real-world situation similar to it would require sophisticated ADM programming instructing the flight control system how to react to various iterations of the scenario, such as a school bus that might suddenly appear on the road.
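
Reduced to code, Fisher’s example is a lookup against the pre-programmed ordering. The sketch below renders just his simple scenario, with assumed names throughout; a real system would wrap this in the sophisticated ADM programming described above.

```python
# Lower rank = worse to harm, per the pre-programmed ethical ordering:
# 1. save human life, 2. save animal life, 3. save property.
HARM_RANK = {"human": 1, "animal": 2, "property": 3}

landing_options = [
    {"site": "parking lot next to a school", "at_risk": "human"},
    {"site": "field full of animals",        "at_risk": "animal"},
    {"site": "empty road",                   "at_risk": "property"},
]

def choose_landing_site(options):
    """Pick the option whose worst-case harm ranks lowest in the ethical
    ordering, i.e., the option with the highest HARM_RANK value."""
    return max(options, key=lambda o: HARM_RANK[o["at_risk"]])

print(choose_landing_site(landing_options)["site"])  # -> "empty road"
```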

With such extremely limited options, would an AI/ML system be smart enough to solve that problem? 

Scientists are studying ways to safely integrate autonomous aircraft into civil air traffic control systems. [Courtesy: EASA]

Merging AI/ML with Air Traffic Control

While the FAA is developing a roadmap for integrating certification requirements for AI/ML systems, NASA’s Ames Research Center in Mountain View, California, has been conducting long-term research into how AI/ML autonomous aircraft might eventually be integrated into the national airspace.

Dr. Parimal Kopardekar, director of NASA’s Aeronautics Research Institute (NARI) at Ames, believes digital twinning will play a key role. 

Here’s how it would work: Once the AI/ML systems are developed, engineers might create digital models that mimic the systems in a virtual airspace. This virtual airspace would mirror a real airspace, and show performance in real time. 

Engineers would use this digital twin experiment to gather cloud-based data on how the AI/ML aircraft perform in real-world scenarios, with no risk. Successful data collected over a long period of time could provide enough assurance that the systems are safe and reliable enough for deployment in the real world. 
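
A minimal sketch of that shadow-evaluation idea, with hypothetical class and method names (this is not NASA’s software): the twin replays live airspace states through a candidate agent and logs what the agent would have done, with no aircraft at risk.

```python
class DigitalTwin:
    """Virtual airspace that mirrors the real one in real time and logs a
    candidate agent's decisions for later safety analysis."""

    def __init__(self):
        self.log = []

    def shadow(self, live_state: dict, agent) -> str:
        decision = agent(live_state)          # what the AI *would* do
        self.log.append((live_state, decision))
        return decision

def cautious_agent(state: dict) -> str:
    return "hold" if state["nearby_traffic"] > 3 else "proceed"

twin = DigitalTwin()
twin.shadow({"nearby_traffic": 5}, cautious_agent)  # evaluated off-aircraft
print(twin.log)  # evidence accumulates toward a safety case over time
```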

“We want autonomous systems to be trustworthy, so people can feel that they can use it without hesitation,” Kopardekar says. “We want them to be fully safe.” 

Although the commercial aviation industry is currently experiencing the safest period in its history, no aviation system will ever be truly foolproof and immune to accidents. Still, engineers and programmers are painting an optimistic picture of how the challenges of developing successful AI/ML flight control systems, including the associated ethical questions, can be met.
