In 2025, cybercriminals increasingly used artificial intelligence to sharpen scams and social engineering, making attacks far more realistic, personalized, and harder to detect. AI boosted the scale, speed, and believability of phishing and other fraud through convincing text, deepfake voice impersonations—including of relatives and public officials—and autonomous AI agents that can research targets and craft tailored lures. Scammers also combined public social media data with stolen information to support romance scams, sextortion, and fake products, while techniques like prompt injection showed how AI platform vulnerabilities themselves could be misused. As AI-generated content becomes more lifelike, experts warn verifying identities and communications will be crucial to resisting these evolving threats

Recent news