Introduction: The Paradox of Trust in an Age of AI
We are constantly warned to be cautious of Artificial Intelligence (AI). The narrative of skepticism is pervasive, urging us to question its biases, its black-box decisions, and its potential to make human contribution obsolete. Yet despite these warnings, our relationship with AI is far more complex, and more trusting, than we would like to admit. This creates an interesting paradox. A few years ago, when we were stuck on a report or unsure how to respond to a difficult email, we would walk over to an experienced colleague and ask, “Hey, can I run something past you?” Today we run that question through AI instead, because it’s instant, convenient, and won’t make us feel like an idiot.
Our intuition about trust, built over centuries of human interaction, seems to fall short in the age of intelligent machines. This post distills five surprising and counter-intuitive findings from recent research that reveal the fascinating and complex psychology behind how we really trust AI.
——————————————————————————–
1. You Might Trust AI More Than Humans, and You’re Not Alone.
In a direct challenge to the narrative of widespread AI skepticism, a 2024 UK-based study revealed a striking preference for machine-based decision-making. A clear majority of participants (67%) reported that they trust AI more than they trust other humans.
The core reasons for this preference may be rooted in a belief that AI operates with a kind of purity that humans lack. AI is perceived as impartial, logical, and accurate, functioning without the personal biases and self-interests that can influence human behavior. Participants believed that machines can neutrally evaluate situations (61% of responses) and have no personal interests of their own to prioritise (41% of responses).
Additionally, the study highlighted a deep-seated distrust in fellow humans. Participants felt that people are often motivated by self-interest (49% of responses), lie frequently (37%), and are influenced by untrustworthy media narratives (52%). This preference for AI, therefore, isn’t a vote of confidence in machine perfection; it’s a stark reflection of our growing disillusionment with human institutions and a retreat to the perceived purity of code.
——————————————————————————–
2. Being a “Trusting Person” Doesn’t Mean You’ll Trust AI.
It seems logical to assume that people who are naturally trusting of others would extend that same attitude toward technology. However, research suggests this may not be the case: a person’s dispositional trust in other people appears to have no bearing on whether they trust AI.
A 2023 study found that self-reported trust in humans and trust in AI products were not significantly correlated. The researchers concluded that these two forms of trust are “dissociable constructs”: fundamentally different psychological phenomena that operate independently.
This conclusion was reinforced by a neuroscientific finding from the same study. While interpersonal trust was associated with the brain’s physical structure (specifically, gray matter density in striato-thalamic and prefrontal regions), researchers found no such link for trust in AI. This reveals that our trust in technology is not an evolutionary hand-me-down from our social instincts. Instead, it’s a calculated assessment based on utility, performance, and design—a cognitive ledger of reliability rather than a gut feeling of kinship.
——————————————————————————–
3. Your Trust in AI Is Really About the Humans Behind It.
When we decide to trust an AI system, we aren’t just evaluating an algorithm. According to a socio-technical perspective, our trust is heavily influenced by our confidence in the network of people associated with that system. Interviews with both AI practitioners and “decision subjects” (people affected by AI decisions) reveal that trust in the AI system is often secondary to trust in the human actors connected to it.
This plays out in several key ways:
• Trust in the AI Team: If users trust the people who designed, built, and regulate the system, that trust can be established “before the system exists.” Confidence in the creators is transferred directly to their creation.
• Trust in Other Users: The experiences of peers have a profound impact. For example, if 90% of 10,000 users give an AI positive feedback, new users will naturally tend to trust it more.
• Trust in the Human User: For a person affected by an AI’s output (like a patient receiving an AI-assisted diagnosis), their trust in the AI is inseparable from their trust in the human using it (the doctor). The human professional acts as the ultimate guarantor of the technology’s recommendation.
Therefore, when you decide to ‘trust’ an AI, you are not just evaluating a machine; you are placing a bet on the integrity of an entire human supply chain.
——————————————————————————–
4. Trust and Distrust Aren’t Opposites on a Sliding Scale.
We tend to think of trust and distrust as two ends of the same continuum: the more you have of one, the less you have of the other. However, psychometric research suggests this is a flawed model. Trust and distrust can be separate constructs that coexist.
Studies on trust scales show that a “two-factor model,” with one scale for trust and another for distrust, is more accurate than a single continuum. This model captures the ambivalent feelings many people have toward complex systems, including AI. An analogy from the research explains this psychological state perfectly:
“…smokers trying to quit smoking may have both positive and negative feelings towards cigarettes, suggesting that positive and negative attitudes can coexist simultaneously.”
Consider a social media algorithm. We can simultaneously trust its competence to curate an entertaining feed while profoundly distrusting its underlying intention to maximize our screen time for commercial gain. This is not a contradiction, and it is not confusion; it is the default psychological state for interacting with complex, modern AI, and a rational response to a powerful, multifaceted technology.
——————————————————————————–
5. We Can Trust Machines That Have “Intentions”, Just Not in the Way You Think.
A common philosophical argument is that AI is fundamentally untrustworthy because it cannot possess genuine motives or intentions. Since it lacks consciousness, goodwill, or remorse, applying the very notion of trust is a philosophical mistake. However, this argument overlooks the power of what can be called “technically mediated intentions.”
While an AI does not have its own motives, its design is saturated with the intentions of its human creators. Sociologist Bruno Latour used the analogy of the “Berlin key” to make this concept clear. This 20th-century key was designed to force tenants to re-lock a building’s main door. After unlocking the door from the outside, the key would remain trapped in the lock until the user went inside, pushed the key through to the other side, and used it to lock the door again.
The key itself has no “desire” for the door to be locked, but its physical design enforces the building owner’s intention. As Latour explains:
“From being a simple tool, the steel key assumes all the dignity of a mediator, a social actor, an agent, an active being.”
If we see AI in this way, not as a simple tool but as a powerful mediator executing a complex action programme embedded by its creators, then placing trust in it is no longer a philosophical mistake. We are simply trusting the human intentions encoded deep within its architecture.
——————————————————————————–
Conclusion: A New Blueprint for Trust
The common understanding of trust in AI is often simplistic, focused on technical benchmarks like accuracy and reliability. As these findings show, however, trust is not a technical problem to be solved with better algorithms alone. It is a deeply human puzzle: one of perception (why we might trust an algorithm over a neighbour), of psychology (how our brains forge entirely new pathways for this technological faith), and of social systems (the vast, invisible ecosystems of people who design, deploy, and regulate the code).
This leaves us with a critical question for the future. As AI becomes woven into the fabric of our society, on what basis will we grant our trust: a system’s flawless performance, or the demonstrated integrity of the people and institutions behind it? In the past, our trust in technology may have rested on the knowledge that a human stood at the other end of it. AI, however, simulates human agency itself. As it becomes prevalent in every aspect of our lives, how trust is built between humans and AI demands serious investigation.
