Chinese AI startup DeepSeek kicked off 2026 by publishing a research paper that outlines a new training method called Manifold-Constrained Hyper-Connections (mHC). The technique aims to improve how large language models scale by enabling richer information sharing between a model's internal layers while keeping training stable and efficient, a longstanding technical challenge.

Analysts such as Wei Sun at Counterpoint Research called the technique a “striking breakthrough” that could boost performance without dramatically raising computational cost and could influence the design of future foundation models. The paper, co-authored by DeepSeek founder Liang Wenfeng, signals the company’s growing engineering maturity after the success of its R1 reasoning model in 2025, though some experts remain cautious about DeepSeek’s commercial reach, especially in Western markets. The research could lay the groundwork for the company’s next generation of models, with ripple effects across the industry as rivals explore similar approaches.
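The article does not describe mHC's internals, but hyper-connection-style designs are generally described as replacing a transformer's single residual stream with several parallel streams combined through small learnable mixing weights. The sketch below is a minimal, hypothetical illustration of that general pattern in PyTorch, not DeepSeek's actual method: the class name HyperConnectionBlock, the tensor layout, and the use of a row-wise softmax as a stand-in for whatever constraint the paper imposes are all assumptions made for illustration.

```python
# Hypothetical sketch of a hyper-connection-style residual block.
# Not DeepSeek's mHC: names, shapes, and the softmax "constraint" are
# illustrative assumptions, not taken from the paper described above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HyperConnectionBlock(nn.Module):
    """Keeps n parallel residual streams instead of one and mixes them
    with a learnable, row-normalized matrix before each sublayer."""

    def __init__(self, dim: int, n_streams: int = 4):
        super().__init__()
        self.n_streams = n_streams
        # Learnable mixing logits; softmax keeps each row on the probability
        # simplex, a stand-in for a stability-preserving constraint.
        self.mix_logits = nn.Parameter(torch.zeros(n_streams, n_streams))
        # Weights deciding how the sublayer output is written back to streams.
        self.write_logits = nn.Parameter(torch.zeros(n_streams))
        self.norm = nn.LayerNorm(dim)
        self.sublayer = nn.Sequential(  # placeholder for attention / MLP
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, streams: torch.Tensor) -> torch.Tensor:
        # streams: (batch, n_streams, seq, dim)
        mix = F.softmax(self.mix_logits, dim=-1)           # (n, n), rows sum to 1
        mixed = torch.einsum("ij,bjtd->bitd", mix, streams)
        # Run the sublayer on an aggregated (mean) view of the mixed streams.
        out = self.sublayer(self.norm(mixed.mean(dim=1)))  # (batch, seq, dim)
        write = F.softmax(self.write_logits, dim=0)        # (n,), sums to 1
        # Residual update: distribute the sublayer output across the streams.
        return mixed + write.view(1, -1, 1, 1) * out.unsqueeze(1)


if __name__ == "__main__":
    block = HyperConnectionBlock(dim=64, n_streams=4)
    x = torch.randn(2, 4, 16, 64)   # (batch, streams, seq, dim)
    print(block(x).shape)           # torch.Size([2, 4, 16, 64])
```

The normalized mixing weights here simply keep the per-stream update bounded; how the actual mHC formulation constrains its connections is not specified in the article.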
