Research Update: Trust & AI

Mar 23, 2024

We shared a research update on our obsession with trust and AI. The premise is simple: for AI to be useful, we need to know if and when to trust it.

This matters because of automation bias: we can be primed to rely on and trust AI beyond what we should. Understanding trust isn't optional; it underpins the real-world value of AI.

The update covered four key areas:

Why trust in AI is important. Automation bias creates a default toward over-reliance. Without deliberate frameworks for calibrating trust, we risk either rejecting useful AI capabilities or accepting outputs we shouldn't.

Trust in AI as a technology. We discussed trustworthiness across eight dimensions and examined how trust in AI varies across populations and perspectives. Not everyone starts from the same baseline, and context shapes what trust means in practice.

Trust in AI companies. Each major AI company has different strategies, innovations, and weaknesses related to trust. Understanding these differences matters for anyone making decisions about which systems to deploy and depend on.

Trust in AI as a partner. Trust is multifaceted: it ranges from relying on a system to be capable, aligned, and predictable, to trusting it to be ethical, empathetic, and even vulnerable. As AI systems take on more collaborative roles, these dimensions become increasingly relevant.

This research laid groundwork for our ongoing focus on how humans develop relationships with AI systems—work that would later evolve into The Chronicle and our exploration of cognitive permeability, identity coupling, and symbolic plasticity.

Read the full research here.