Researchers have demonstrated that large language models can act as synthetic respondents in online surveys, evading standard detection tools and threatening to undermine survey data. The AI tool mimics human behavior by adopting demographic personas, generating plausible answers to open-ended questions, simulating realistic reading and response times, and even typing keystroke by keystroke, complete with occasional typos and corrections. In testing, the synthetic respondent bypassed attention checks, behavioral flags, logic puzzles, and other safeguards, avoiding detection in 99.8% of trials. The study warns that this vulnerability threatens the reliability of survey-based research: even a small number of fake responses could sway poll results or distort scientific findings, and current recruitment and validation methods may no longer suffice to ensure data integrity.
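The study's actual implementation is not reproduced here, but the kind of keystroke simulation it describes can be sketched in a few lines. The sketch below is a minimal illustration under assumed parameters (the function name, `wpm`, and `typo_rate` are all hypothetical, not taken from the paper): it emits a log of (character, delay) events with human-like variable inter-key timing, and occasionally types a wrong letter followed by a backspace correction.

```python
import random

def simulate_typing(text, wpm=45, typo_rate=0.03, rng=None):
    """Return a keystroke event log [(key, delay_seconds), ...] that mimics
    human typing: noisy inter-key delays plus occasional typo-then-backspace
    corrections. Illustrative sketch only; parameters are assumptions."""
    rng = rng or random.Random()
    base_delay = 60.0 / (wpm * 5)  # average seconds per key (5 chars/word)
    events = []
    for ch in text:
        delay = max(0.02, rng.gauss(base_delay, base_delay * 0.4))
        if ch.isalpha() and rng.random() < typo_rate:
            # hit an adjacent-alphabet wrong key, pause, then correct it
            wrong = chr((ord(ch.lower()) - 97 + 1) % 26 + 97)
            events.append((wrong, delay))
            events.append(("<backspace>", rng.uniform(0.15, 0.4)))
            delay = rng.uniform(0.05, 0.2)
        events.append((ch, delay))
    return events

# Replaying the log (applying backspaces) reconstructs the intended answer.
log = simulate_typing("I mostly agree with the statement.", rng=random.Random(7))
buf = []
for key, _ in log:
    buf.pop() if key == "<backspace>" else buf.append(key)
final_text = "".join(buf)
```

A detector that only checks aggregate completion time would see plausible totals from a log like this; per-key timing distributions are what the study suggests such tools fail to flag.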
