Author: Michael Hogan, Ph.D., is a lecturer in psychology at the National University of Ireland, Galway.
Imagine you’re working in a three-person team on a challenging bridge design project. Every decision about every truss is critical in the iterative design process. You have a good idea for the next design move to optimise the strength-to-weight ratio of a given truss. You state your case, but your two teammates disagree, so you’re outvoted. Now imagine that both of those teammates were artificial intelligence (AI) agents that overruled you. As increasingly autonomous AI systems enter collaborative team settings, they are starting to take on decision-making roles. How would you feel if your AI teammates contradicted you and outvoted you on your bridge design project? In human-only teams, research indicates that both task conflict and relationship conflict can lead to negative emotional reactions (Brett & Goldberg, 2017; Rispens & Demerouti, 2016; Tekleab et al., 2009). But what about human-AI teams? A fascinating experimental study by Hu and colleagues (2025) explored this question.
The study involved 175 undergraduate engineering students, each of whom worked across 30 trials to design a bridge structure in a virtual collaborative setting with two AI teammates named Alex and Taylor. Each team member had equal voting power when deciding on the next design move for the bridge structure. Each of the 30 trials presented a unique and challenging design problem, so there were no easy solutions. All three team members proposed their best design solution, each then independently cast a vote for the best of the options on offer, and the majority choice was implemented. The team was then shown whether the decision led to a good or bad outcome for the bridge.
Hu and colleagues experimentally manipulated both the voting scenario (i.e., the AI agents outvote the human vs. vote in agreement with the human) and the AI performance level (i.e., humans work with high-performing AIs that make correct decisions 80% of the time, or low-performing AIs that are correct only 20% of the time).
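To make the trial structure concrete, here is a minimal Python sketch of the majority-vote procedure as described above. It is not the authors’ experimental software: the 80%/20% AI accuracy levels and the 30-trial session length come from the study description, while the voting policy, the assumed 50% human accuracy, and all function names are simplifying assumptions for illustration only.

```python
import random

def run_trial(ai_accuracy: float, ais_outvote_human: bool,
              human_accuracy: float = 0.5) -> bool:
    """Run one bridge-design trial and return True if the outcome was good.

    Simplified voting rule (an assumption, not the study's actual policy):
    the human always votes for their own proposal, and the two AI teammates
    either both back an AI proposal (outvoting the human 2-to-1) or both
    vote in agreement with the human.
    """
    if ais_outvote_human:
        # An AI proposal wins the majority vote, so the outcome depends
        # on the AI performance level (80% or 20% correct).
        return random.random() < ai_accuracy
    # The AIs vote with the human, so the human's proposal is implemented.
    return random.random() < human_accuracy

def run_session(ai_accuracy: float, n_trials: int = 30,
                p_outvote: float = 0.5) -> float:
    """Simulate a 30-trial session and return the proportion of good outcomes."""
    good = sum(
        run_trial(ai_accuracy, ais_outvote_human=random.random() < p_outvote)
        for _ in range(n_trials)
    )
    return good / n_trials

if __name__ == "__main__":
    random.seed(1)
    print("High-performing AIs:", run_session(ai_accuracy=0.8))
    print("Low-performing AIs: ", run_session(ai_accuracy=0.2))
```

Running this sketch with the two AI accuracy levels illustrates the performance point the study makes below: a team whose AI members are right only 20% of the time produces far fewer good outcomes across a 30-trial session.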
The researchers focused on two key questions. First, how do people feel when they are outvoted by an AI teammate? Second, how does the performance of AI teammates impact human confidence and behaviour?
By analysing participants’ responses during and after the teamwork interactions, Hu and colleagues revealed several interesting findings. First, when emotional responses were surveyed, participants did not report strong negative emotional reactions when outvoted by AI teammates, especially if the final team-voted action was advantageous (i.e., emotional responses were similar regardless of whether the final team-voted action was an AI’s design action or the participant’s own). However, participants did report feeling more submissive and less in control when they were outvoted by the two AI teammates on their own design actions and subsequently received disadvantageous feedback. Examining confidence ratings across the 30 trials, Hu and colleagues found, curiously, that participants’ self-confidence increased when they were outvoted by the AIs and the outcome was disadvantageous for the team, despite their receiving no feedback about the soundness of their own design idea. Also, when participants did not vote for their own design action but the two AI teammates did (and the participants were therefore outvoted regarding their own idea), self-confidence decreased significantly, regardless of whether the outcome for the bridge was advantageous or disadvantageous. These findings point to complex psychological responses: the rise or fall of participants’ self-confidence when working with AI teammates did not track whether the team outcome was advantageous or disadvantageous.
Critically, the study found that even one low-performing AI teammate significantly hurt overall team performance. In addition, human confidence in the AI teammates declined rapidly across trials for poor performers but grew only slowly for good performers, revealing asymmetric trust development and calibration for low- and high-performing AI agents.
In general, rather than showing blind deference, participants actively learned to distinguish between high- and low-performing AI teammates and, overall, voted for their own proposals more often than for the AIs’. However, they showed remarkable emotional neutrality about being outvoted when AI decisions led to positive outcomes, a pattern quite different from human-human team conflict. In the context of human-AI teamwork, this suggests not passive acceptance but a pragmatic, outcome-focused approach that could be promising for collaboration yet concerning for maintaining critical oversight of AI decisions. This is particularly problematic in situations where the ‘pragmatic’ or ‘best’ outcome is uncertain. Notably, if AI systems can outvote or override human decisions with little resistance, we must have plans in place to prevent ourselves from trusting AI to the point where we stop thinking critically. As human-AI teamwork becomes more common in education and workplace settings, this requirement for human oversight and critical thinking is an increasing concern. What protocols are in place to monitor human confidence and trust dynamics in human-AI teams, and how do we avoid a tendency for humans to cognitively offload quality deliberation and trade their critical thinking for confidence in AI?
Ideally, AI systems would be designed to encourage healthy scepticism and active collaboration, actively prompting users to reflect on, question and even disagree with their decisions. In human teams, disagreement can trigger strong emotions regardless of outcomes; the neutral response to being overruled by AI teammates is therefore remarkable. It likely occurs because, unlike in human-only teams, AI disagreement doesn’t threaten relationships or trigger fears of personal rejection or reputational damage. Again, while this outcome-focused neutrality could benefit collaboration by reducing ego-driven conflict, it raises concerns about undermining collaborative oversight.
Another worrying finding from this study is that humans gained self-confidence when outvoted by AI that made poor decisions — a self-serving bias that could create dangerous accountability gaps in workplaces.
Overall, this study provides an informative look at the psychological dynamics of being overruled by AI and opens critical questions about the future of human-AI teamwork. It will be important to explore how these dynamics apply across different tasks and contexts. It will also be critically important to develop frameworks that preserve human agency while leveraging AI capabilities. The calm acceptance of AI disagreement could enable better collaboration or a dangerous abdication of human judgment; distinguishing between these outcomes is essential for responsible AI integration.
***
Connect with the authors: Anna King & Michael Hogan