
DeepSeek Enhances Competitive Edge for Chinese Chipmakers in AI Sector

Published February 13, 2025

DeepSeek's advancement in artificial intelligence (AI) models is giving Chinese chipmakers, including Huawei, a better chance to compete within the domestic market against more powerful U.S. processors.

For years, Huawei and its Chinese peers have struggled to produce high-end chips capable of rivaling Nvidia's offerings for training AI models. Training involves feeding data into algorithms so that they learn to make accurate decisions.

DeepSeek's models focus on "inference", the stage at which an AI model draws conclusions, and prioritize computational efficiency over raw processing power. Analysts say this approach should help narrow the performance gap between Chinese-made AI processors and their more advanced U.S. counterparts.

Huawei, along with other Chinese AI chipmakers such as Hygon, Tencent-backed EnFlame, Tsingmicro, and Moore Threads, has recently announced support for DeepSeek's models, although specific product details remain scarce.

While Huawei declined to comment, companies like Moore Threads, Hygon, EnFlame, and Tsingmicro did not respond to requests for more information.

Industry experts predict that DeepSeek's open-source nature, coupled with its low fees, could encourage broader AI adoption and spur real-world applications. This could also help Chinese firms cope with U.S. export restrictions that bar them from the most powerful American chips.

Before DeepSeek drew attention this year, products such as Huawei's Ascend 910B were seen by clients including ByteDance as better suited to less computationally demanding inference tasks. Inference follows training and involves a trained model making predictions or performing tasks, such as powering chatbots.

Several companies in China, ranging from automakers to telecommunications providers, have announced initiatives to incorporate DeepSeek's models into their products and operations.

Lian Jye Su, a chief analyst at the tech research firm Omdia, commented, "This development aligns well with the capabilities of Chinese AI chipset vendors. While Chinese AI chipsets find it difficult to compete with Nvidia’s GPU in AI training, inference workloads are more accommodating and depend heavily on localized and industry-specific knowledge."

However, Bernstein analyst Lin Qingyuan pointed out that while Chinese AI chips may be cost-effective for inference, their market is primarily limited to China, as Nvidia chips still outperform them, even in inference tasks.

While U.S. export restrictions bar Nvidia's most advanced AI training chips from being sold in China, the company can still sell less powerful chips that Chinese customers can use for inference tasks.

In a recent blog post, Nvidia described increasing inference-time compute as a new scaling law and argued that its chips will be critical to making DeepSeek and other "reasoning" models more useful.

Beyond raw computing power, Nvidia's market dominance also rests on CUDA, a parallel computing platform that lets developers use Nvidia GPUs not only for AI but also for general-purpose computing.

Chinese AI chipmakers have so far avoided challenging Nvidia head-on by asking users to abandon CUDA, instead claiming that their chips are compatible with the platform.

Huawei is actively trying to reduce reliance on Nvidia by introducing a CUDA equivalent known as Compute Architecture for Neural Networks (CANN). However, experts assert that Huawei faces significant hurdles in convincing developers to shift away from CUDA.

Omdia’s Su added, “The software performance of Chinese AI chip companies is also currently inadequate. CUDA offers a comprehensive library and diverse software functionalities which require extensive investment over time to replicate.”
