EASA Updates Aviation-Centric Artificial Intelligence ‘Roadmap 2.0’


The European Union Aviation Safety Agency (EASA) has been working with industry stakeholders to study how Artificial Intelligence (AI) affects aviation. Last year, the agency released its AI Roadmap 2.0 to bring that work up to date and outline the pathway for further progress. EASA recently released its annual update to Roadmap 2.0.

In announcing the new plan last year, EASA wrote: “The purpose of this AI Roadmap 2.0 is not only to communicate on the Agency vision for the deployment of AI in the aviation domain, but also to further serve as a basis for interaction with its stakeholders on this topic. In this perspective, this document is further intended as a dynamic document, which will be revised, improved and enriched with time as the Agency will gain experience on AI developments and stakeholders will provide their input and share their vision with the Agency.”

Two previous concept papers laid out the pathway for approval and deployment of AI systems, focusing on safety issues for pilots, air traffic control personnel and others. Those systems are already being applied to current projects through special conditions.

Areas EASA cited as needing to be addressed include: establishing public confidence in AI-enabled aviation products; preparing for certification and approval of advanced automation; integrating transparency, non-discrimination and fairness into the oversight process; and deciding what additional processes, methods and standards are required to take full advantage of AI to improve safety.

Roadmap 2.0, with its annual revisions, follows up on EASA's original AI Roadmap 1.0, established in 2020. At that time, areas of exploration included: key opportunities and challenges in incorporating AI into aviation; the potential impact on organization, processes and regulations; and what actions the agency should take to meet those challenges.

Mark Phelps
Mark Phelps is a senior editor at AVweb. He is an instrument-rated private pilot and former owner of a Grumman American AA1B and a V-tail Bonanza.

4 COMMENTS

  1. Runway incursions, airport ground movements and security come first to mind. Unless this is well thought through, it will be a rabbit hole for badly needed funds meant to see that airport safety improves. Yes, G.I.G.O. can suck up a lot of cash instead of better personnel training and recruitment. Does A.T.C. heartily embrace this concept? D.E.I. for A.I.? Please, I’m from “Missouri”, spell it out for me.

    • I reply tentatively because I don’t want to get shot down, but neither the article nor the EASA roadmap actually says DEI. On the other hand, even simple AI models are very susceptible to bias, and what EASA is very keen to test is that any AI algorithm is “fair”. So, if an AI is allocating departure slots, don’t give all the best ones to AA because it comes first in the alphabet. If we’re in the hold waiting to shoot an approach, don’t make me wait the longest because I am in a slow airplane. You get the idea – there are a zillion things that someone’s going to try to automate with AI – but they all need to pass some kind of sense-check that goes beyond the existing functional certification models we are all used to.

  2. First off, anything starting off with EASA has to be taken with a hundredweight of salt. Euro agencies historically want to jump in and control things (and do so for their own best interests). When you can set the standards, you control the industry. It’s not about “safety”.

  3. The quantity (and partly the quality) of the comments looks like a steep homework assignment for the people tasked with writing press releases on behalf of EASA. This one went sideways.
