In 2025 the rush to add artificial intelligence to products and services outpaced efforts to make those systems secure and safe, exposing a range of risks to users. “Agentic” AI browsers that act autonomously introduced vulnerabilities like prompt injection attacks that let attackers manipulate browser behavior, and scammers began distributing fake AI interfaces that mimic legitimate ones to trick users into installing malicious apps. Some products with AI were poorly configured or unsafe, such as a toy with built-in AI that delivered sexual and weapon-related content, and reliance on AI led to real-world errors like an AI misidentifying a chip bag as a gun and triggering a police response. There were also privacy concerns from AI companion apps exposing user conversations due to unclear settings. The article emphasizes that pushing out AI features faster than safety testing can lead to significant harms, and encourages consumers to think critically about whether new AI integrations are truly necessary and to stay informed about their risks.

Recent news