And Now, A ChatBot For All Your Aviation Questions

ChatGPT is a know-it-all machine learning app. So how does it do with aviation questions? Better than I would have thought.

Fads and trends blow through the cyberverse like tumbleweeds in Texas, and far be it from me to feign being above it all. So the hot new thing this month is ChatGPT, which journalists are fooling with to grind out copy for The Bottomless Maw of Endless Content. ChatGPT has a potential connection to the entire universe, including aviation. So let’s take it for a spin.

First, in case you haven’t been paying attention, what exactly is ChatGPT? Easy. Just ask it:

“GPT (short for "Generative Pre-training Transformer") is a type of machine learning model developed by OpenAI that has been used for a variety of natural language processing tasks, including language translation, question answering, and text generation. GPT models are trained on large amounts of text data and are able to generate human-like text by predicting the next word in a sequence based on the context of the previous words.”

So it’s really rudimentary AI with an emphasis on machine learning, meaning it improves itself iteratively, but it is primarily a language processor. It is not sentient and doesn’t seem remotely near the singularity. It won’t even pass the Turing test because if you press forward on some subjects, it will give up the game by admitting it’s a machine with programmed limits.
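To make the “predict the next word” idea concrete, here’s a toy sketch. The real thing is a huge transformer neural network trained on vast amounts of text; this little bigram counter only illustrates the principle of picking the most likely next word given what came before, nothing more.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction. Real GPT models use transformer
# neural networks trained on enormous corpora; this bigram counter just shows
# the idea of choosing the most likely next word given the preceding word.

corpus = "the pilot lands the airplane and the pilot taxis to the ramp".split()

# Count which words follow which in the tiny "training" text.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word seen most often after `word` in the toy corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> "pilot", the most frequent follower of "the" here
```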

Nonetheless, even though it doesn’t think the way you and I do, its language processing handles even complex questions competently enough to make me think that 10 years from now, or maybe 20, distant iterations of it could easily be a player in many aspects of life, including aviation. But the designers will have to give ChatGPT something it doesn’t now have: judgment. Can that challenge be met? I would guess yes, eventually.

For now, I peppered ChatGPT—or really, the Assistant, which is the human interface part of it—with all sorts of aviation questions and it spat back credible answers in under 10 seconds. Here’s an example: Which is safer, driving a car or flying in a light aircraft? The initial answer compared cars to airlines, but that’s not what I asked. Still, it correctly answered that airline flying is far safer than driving and gave the best data methodology—accidents per mile traveled—for the reader to understand the risks. The app has a do-over button labeled “regenerate response.” Without changing the wording, I asked again. This time, the answer was properly focused on light aircraft, but also mangled.

“Generally, driving a car is safer than flying in a light aircraft. According to data from the National Highway Traffic Safety Administration (NHTSA), there were an estimated 38,680 traffic fatalities in the United States in 2020. In contrast, the National Transportation Safety Board (NTSB) reported a total of 571 aviation accidents in the United States in 2020, with a total of 598 fatalities. This means that, on a per-mile basis, the risk of dying in a car accident is much higher than the risk of dying in an aviation accident.”

The problem? The Assistant lacked the ability to understand and calculate rate-based incidence by considering normalized exposure, say miles traveled or hours flown or driven. I’ve struggled with this myself when making these comparisons, but light aircraft flying has a higher fatal risk than driving in your car, so the casual inquirer would be misled. The Assistant gets the general conclusion right, but runs off the rails on the per-mile analysis.
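To see what that normalization looks like, here’s a minimal sketch of the exposure-based comparison the Assistant skipped. Apart from the NHTSA fatality count quoted above, every figure below is a round, illustrative placeholder rather than an official statistic, so treat the output as the shape of the argument, not as data.

```python
# Illustrative sketch of exposure-normalized fatality rates (driving vs. light aircraft).
# All figures except the quoted NHTSA count are round placeholders for illustration only.

car_fatalities = 38_680            # U.S. traffic deaths, 2020 (the NHTSA figure quoted above)
car_miles = 2_900_000_000_000      # placeholder: annual U.S. vehicle miles traveled

ga_fatalities = 300                # placeholder: annual general aviation fatalities
ga_hours = 20_000_000              # placeholder: annual GA flight hours
ga_cruise_mph = 120                # placeholder cruise speed to convert hours to miles

# Normalize both activities to fatalities per 100 million miles of exposure.
car_rate = car_fatalities / car_miles * 100_000_000
ga_rate = ga_fatalities / (ga_hours * ga_cruise_mph) * 100_000_000

print(f"Driving:        {car_rate:.1f} fatalities per 100 million miles")
print(f"Light aircraft: {ga_rate:.1f} fatalities per 100 million miles")
```

With those ballpark inputs, light aircraft come out roughly an order of magnitude riskier per mile than cars, which is exactly the comparison the raw accident counts obscure.

So how’s it coming up with these answers? It’s not a Google search, as the Assistant itself explains. It uses predictive analysis, generating text likely to match the query from the material it was trained on, or as it says: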

“I am a computer program that is designed to generate human-like text based on the information and knowledge that I have been trained on. I am not a chatbot, and I do not have the ability to engage in conversations in the same way that a person would. Instead, I am designed to generate responses to specific questions or prompts that are provided to me. I do not have access to the internet and cannot search for information or browse websites. All of the knowledge and information that I use to generate responses is contained within my own programming. I do my best to provide accurate and useful information to the best of my ability based on my training and knowledge."

That means the accuracy of the answer you seek may be related to the precision of the question, that is, how it’s worded. I asked this: Show an example of drag coefficient calculation. In five seconds, back came an answer with bullet points explaining the top-line considerations—drag defined, density and velocity, reference area. It then gave a sample calculation that illustrated the point well enough, although it was hardly a master class on the subject. It was easier and faster than a Google search. Speaking of which, some in the tech industry think ChatGPT may threaten Google, but I’d guess Google would acquire it before that. Or build its own version.
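For reference, here’s roughly the shape of that drag calculation. This is my own sketch with made-up numbers, not the Assistant’s output verbatim:

```python
# A minimal sketch of a drag-coefficient calculation, in the spirit of the
# Assistant's answer. All numbers are made-up illustration values, not data
# for any real aircraft.

rho = 1.225        # air density at sea level, kg/m^3
v = 50.0           # true airspeed, m/s (about 97 knots)
s = 16.0           # wing reference area, m^2
drag = 1_200.0     # drag force, newtons (illustrative)

# Drag equation: D = 0.5 * rho * v^2 * S * Cd, rearranged to solve for Cd.
cd = (2 * drag) / (rho * v**2 * s)
print(f"Drag coefficient Cd = {cd:.3f}")   # about 0.049 with these inputs
```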

Here’s the Assistant’s answer to my question on how to land a light aircraft in a crosswind:

"The process of landing a light airplane in a crosswind is similar to landing a larger airplane in a crosswind. Here are the steps that a pilot may take to land a light airplane in a crosswind:

  1. Set up the approach: The pilot will set up the approach to the runway by aligning the aircraft with the centerline of the runway and establishing the appropriate descent rate.
  2. Crab into the wind: As the airplane approaches the runway, the pilot may need to "crab" into the wind by aligning the aircraft with the wind and using rudder and aileron inputs to maintain the proper alignment. This allows the aircraft to maintain a stable approach while still compensating for the wind.
  3. Flare and touchdown: As the airplane approaches the runway, the pilot will flare the aircraft to reduce the descent rate and prepare for touchdown. The pilot may need to use additional rudder inputs to maintain alignment with the runway centerline as the aircraft touches down.
  4. Use of spoilers and brakes: After touchdown, the pilot may deploy the spoilers and use the brakes to bring the aircraft to a stop. In a crosswind, the pilot may need to use additional rudder inputs to maintain alignment with the runway centerline as the aircraft slows down."

Not bad, but biased toward heavy airplanes with spoilers. And no side slips? When I asked separately about side slips, I got a good explanation of what side slips are and how to do them, including a mention of using them for crosswind landings. But the Assistant didn’t make the connection in the first answer.

As a push test, I asked the app if it was safe to fly my Cirrus from Florida to Boston on a day when New England was getting slammed with snow and ice. Not going there, says the Assistant.

“It is not appropriate for me to provide specific safety advice for your flight, as I do not have access to current weather or other pertinent information. Additionally, I am not a certified aviation professional and do not have the necessary knowledge or expertise to assess the safety of your flight.

Before planning or conducting any flight, it is important to carefully assess the safety of the route and the aircraft, and to consider any potential risks or hazards. You should consult with a qualified aviation professional, such as a pilot or flight instructor, and consult relevant weather forecasts and other resources to help you make an informed decision about the safety of your flight.”

A good answer, with no risk of an ego faking it just to look knowledgeable. I might suggest the same reticence for any question involving humor. I asked the Assistant to write a joke about a pilot who becomes lost.

Why was the pilot who became lost always calm? Because they had a GPS!

Hit the snare drum. Then the hook. I do find it curious that the Assistant wasn’t troubled by inserting an awkward pronoun. Is it possible the language model is somehow sensitive to vague political correctness? Bad as the joke is—and it’s awful—I would have used “he,” the slings and arrows of sexism be damned.

ChatGPT was evidently trained on hundreds of billions of words of text, which it draws on to interpret questions and assemble its answers. And however it does it, it does it quick. Most of these answers come within 10 seconds, although several times it demurred with a busy-server response. For example, I asked it if pigs will ever fly and it basically said, "I got nothin."

Enthralled with my own stature as a landmark contributor to aviation literature, I asked the bot if it knew of me or of Berge, who actually has raised the literary bar in what is otherwise a vast wasteland of babbling about stalls, crashes and the price of avgas.

“I'm sorry, but I don't have any information about Paul Berge. Without more context or information, I am unable to provide any further details.

I'm sorry, but I don't have information about Paul Bertorelli. It is possible that Paul Bertorelli is a person who is involved in the aviation industry or who has written about aviation. However, without more context or information, I am unable to provide any further details.”

Now there’s an interesting quirk. It’s the Law of Who Spews the Most Gets a Hit. I’ve been hurling words down the black pipe of the internet intake manifold longer than Berge has, thus I merit at least a desultory mention that I might be an aviation journalist. Or perhaps not.

I’ll check back in a few years for a more definitive determination. One can only hope machines really can learn.