
Study on fighter pilots and drone swarms sheds light on the dynamics of trust within human-machine teams

by Eric W. Dolan
February 25, 2024
in Artificial Intelligence, Aviation Psychology and Human Factors
(U.S. Air Force photo/R. Nial Bradshaw)

In a new study published in the Journal of Cognitive Engineering and Decision Making, researchers from the U.S. Air Force, Leidos, and Booz Allen Hamilton have taken a significant leap in understanding the dynamics of trust within human-machine teams, specifically in the context of military operations involving unmanned aerial vehicles (UAVs).

This research illuminates how fighter pilots’ trust in one component of a technological system can affect their trust in the system as a whole—a phenomenon known as the pull-down effect. Crucially, the study finds that experienced pilots can differentiate between reliable and unreliable UAVs, suggesting that the pull-down effect can be mitigated, thereby enhancing mission performance and reducing cognitive workload.

The trust between humans and machines is a pivotal factor in the successful deployment of autonomous systems, especially in high-stakes environments like military operations. Trust is considered a cornerstone of effective human-machine interaction, influencing operators’ reliance on technology.

Prior research has shown that trust in automation is directly linked to system reliability. However, when multiple autonomous systems are involved, as in the case of UAV swarms, judging reliability becomes significantly more complex. This complexity introduces the risk of the pull-down effect, where trust in all system components is reduced due to the unreliability of a single element.

“I was interested in conducting this research because the pull-down effect is a phenomenon based in heuristic responding that has relevance to the U.S. Air Force,” explained study author Joseph B. Lyons, the senior scientist for Human-Machine Teaming at the Air Force Research Laboratory and co-editor of Trust in Human-Robot Interaction.

“In the Air Force, we need to understand how humans respond to human-machine interactions and, in this case, if perceptions of one technology can propagate to others, that is something we need to account for when fielding novel technologies. Also, this is a topic that has only been done in laboratory settings, so it was not clear if the observed effects would translate into more Air Force relevant tasks with actual operators.”

To investigate this phenomenon, the researchers employed a highly immersive cockpit simulator to create a realistic operational environment for the participants. The study involved thirteen experienced fighter pilots, including both retired and currently active pilots, with extensive flying hours in 4th and 5th generation Air Force fighter platforms, such as the F-16 and F-35.

Participants were exposed to a series of six flight scenarios, of which four included an unreliable UAV exhibiting errors, while the other two scenarios featured perfectly reliable UAVs. Each pilot encountered 24 UAV observations in total, with a mix of reliable and unreliable UAVs designed to simulate real-world operational conditions closely.

The simulation environment was designed to reflect the complexities of managing UAV swarms, requiring pilots to monitor and control multiple UAVs (referred to as Collaborative Combat Aircraft or CCAs) simultaneously. The scenarios tasked pilots with monitoring four CCAs for errors, communicating any unusual behaviors, and selecting one CCA for a mission-critical strike on a ground target, all while managing the cognitive workload and maintaining situational awareness.

Contrary to what might have been expected based on previous research, the study revealed that the presence of an unreliable UAV did not significantly diminish the trust that experienced fighter pilots placed in other, reliable UAVs within the same system. This suggests that experienced operators, such as the fighter pilots participating in this study, are capable of nuanced trust evaluations, effectively distinguishing between the reliability of individual system components.

This finding challenges the assumption underpinning the pull-down effect — that the unreliability of one component can tarnish operators’ trust in the entire system. Instead, the pilots demonstrated what can be described as a component-specific trust strategy, suggesting that their expertise and familiarity with operational contexts enable them to make more discerning judgments about technology.

Moreover, the study found a significant increase in cognitive workload associated with the unreliable UAV compared to the reliable ones. This was an expected outcome, logically aligning with the notion that dealing with unreliable system components requires more mental effort and monitoring from human operators.

Yet, the researchers observed that higher trust in UAVs corresponded with lower reported cognitive workload, hinting at the potential for trust to mitigate the cognitive demands placed on operators by unreliable technology.

“After reading about this study, people should take away a couple things,” Lyons told PsyPost. “First, we found no evidence that negative experiences with one technology contaminate perceptions of other similar technologies in realistic scenarios with actual operators (in this case fighter pilots). While this is interesting, it also represents one study and thus requires replication in other settings. Second, people should take away the idea that theories and concepts should be tested in realistic domains with non-student samples wherever possible.”

Interestingly, despite introducing heterogeneity in the UAV systems to see if this could mitigate the pull-down effect — through different naming schemes and suggested capability differences — the study did not find significant evidence that such measures influenced the pilots’ trust evaluations. This result suggests that the pilots’ ability to maintain specific trust towards reliable components was not necessarily enhanced by these attempts at system differentiation.

“I was surprised that our manipulation of different asset types did not seem to have any bearing on the pilots’ attitudes or behaviors,” Lyons said. “However, and as noted in the manuscript, it is possible that the scenario did not pull out the need (or affordance) for this asset heterogeneity as much as it should have. I think this is an area that is ripe for additional research.”

However, the study is not without its limitations. The small sample size and the specific context of military aviation may limit the generalizability of the findings. Furthermore, the researchers acknowledge the need for future studies to explore the mitigating effects of factors such as system heterogeneity and operator training on the pull-down effect.

“There are always caveats with any scientific work,” Lyons said. “This was just one study, with a pretty small sample size. These findings need to be replicated across other samples and other domains of interest. Never put all of your eggs into the basket of just one study.”

This research opens up new avenues for enhancing the design and deployment of autonomous systems in military operations, ensuring that trust calibration is finely tuned to the demands of high-stakes environments.

“Within the Air Force, we seek to understand how to build effective human-machine teams,” Lyons told PsyPost. “A significant part of that challenge resides in understanding why, when, and how humans form, maintain, and repair trust perceptions with advanced technologies. The types of technologies we care about are diverse, and we care about the gamut of Airmen and Guardians across the Air and Space Force.”

“I feel that the heuristics we use in making trust-related judgements are underestimated and underrepresented in the literature,” he added. “This is a great topic for academia to advance our collective knowledge.”

The study, “Is the Pull-Down Effect Overstated? An Examination of Trust Propagation Among Fighter Pilots in a High-Fidelity Simulation,” was authored by Joseph B. Lyons, Janine D. Mator, Tony Orr, Gene M. Alarcon, and Kristen Barrera.
