The piece reports that AI-driven speech-to-text systems struggle with accents and non-standard dialects, producing significantly higher error rates for Black speakers than for white speakers, and that these errors are surfacing in high-stakes settings such as job interviews, education, and healthcare. It notes that tools used by major companies, including automated interview platforms, transcribe and score responses in ways that could disadvantage certain groups. Developers acknowledge the problem and say they are improving by diversifying training datasets (for example, in models like OpenAI's Whisper), but the article warns that this "listening gap" could become a new front of bias if left unaddressed.