
Publication

Seeing Is Not Believing: Realistic AI Videos Disrupt Confidence in Authentic Videos and Perceived Reality

Eitan Wolf, Yasith Samaradivakara, Om Gokhale, Sally Ahmed, Yuhan Wang, Pat Pataranutaporn, and Pattie Maes 

Wolf, E., Samaradivakara, Y., Gokhale, O., Ahmed, S., Wang, Y., Pataranutaporn, P., & Maes, P. (2026). Seeing Is Not Believing: Realistic AI Videos Disrupt Confidence in Authentic Videos and Perceived Reality. In Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems (CHI EA '26). ACM. https://doi.org/10.1145/3772363.3798358

Abstract

Visual media has long served as a reference for what feels real, but generative video models now produce synthetic footage closely resembling real-world scenes. Such media increasingly appears in short-form social feeds, where AI-generated and human-generated videos are consumed in rapid succession. While prior work has focused on detecting synthetic content, less is known about how exposure to such content shapes perception of subsequent human-generated media. We conducted a two-phase study (N = 100), comparing participants first exposed to AI-generated videos to a control group viewing human-generated videos, before both groups transitioned to human-generated content. Despite clear disclosure, the AI-exposure group reported increased doubt about subsequent videos' authenticity, reduced judgment confidence, greater perceptual disruption, and lower social connectedness. These findings suggest perceptual and psychological effects occur even when synthetic content is disclosed, highlighting the need for approaches addressing experiential impacts beyond deception and design strategies prioritizing perceptual safety.
