While international hostilities continue to be a hot topic, Chinese chip makers and cloud service providers appear to be making concerted efforts to support DeepSeek and other locally made AI models.
DeepSeek is receiving special attention from its home country, which also has some of the biggest tech companies in the world.
Case in point: on Saturday, Huawei Technologies announced that it is collaborating with AI startup SiliconFlow to offer DeepSeek’s models to customers through its Ascend cloud service.
A letter from @deepseek_ai acknowledging @Huawei‘s support. pic.twitter.com/cRgGCS1iZ3
— Living In Harmony (@LivingInHarmony) February 5, 2025
Since Huawei also produces its own AI chips, the collaboration could supply the startup with domestic hardware as well.
On Monday, Moore Threads and Hygon Information Technology, which develop AI processors, announced that their computing clusters and accelerators will support DeepSeek’s R1 and V3 models. Both companies are working to compete with Nvidia.
Moore Threads said, “We pay tribute to DeepSeek,” adding that progress by DeepSeek’s models running on domestically made graphics processing units (GPUs) “could set on fire China’s AI industry.”
Meanwhile, Gitee AI, a Shenzhen-based one-stop platform for AI developers, said it was offering four DeepSeek-R1-based models, available through servers powered by GPUs from Shanghai-based chip designer MetaX.
DeepSeek still has global supporters
Like the US AI chatbots, DeepSeek provides a free AI assistant. However, the company says its assistant uses less data and runs at a fraction of the cost of existing services, a strategy that clearly stands out.
Some companies have embraced the Chinese AI startup’s models despite the controversies. In fact, the app overtook US rival ChatGPT in downloads on Apple’s App Store, triggering a global selloff in tech shares.
DeepSeek has delivered model after model in an unusually short span. It released DeepSeek-V3 in December 2024, followed by DeepSeek-R1, DeepSeek-R1-Zero, and DeepSeek-R1-Distill on January 20, 2025. On January 27, 2025, the company added Janus-Pro-7B, a vision-focused model.
The DeepSeek-R1-Zero model has 671 billion parameters, while the DeepSeek-R1-Distill series includes models with between 1.5 billion and 70 billion parameters.
Now, Amazon Web Services (AWS), Microsoft, and Google Cloud all offer DeepSeek’s models to their customers. But as of now, they have not adopted the per-token pricing that other AI models, like Meta’s Llama 3, use.
In addition, on Monday, Alibaba Group’s cloud services offered DeepSeek’s AI models on its platform. Baidu and Tencent’s cloud services have also announced that they are offering DeepSeek’s models to their users.
Bernstein analysts said, “DeepSeek demonstrates that competitive large language models (LLM) can be deployed on China’s ‘good enough’ chips, easing reliance on cutting-edge U.S. hardware.”
However, countries like Italy and the Netherlands have blocked the service and are investigating DeepSeek’s AI app due to privacy issues.
DeepSeek AI shakes up pricing models
Market analysts insist cloud providers will profit more from infrastructure rentals than direct model usage fees.
Renting cloud servers for AI workloads is typically more expensive than accessing models through APIs. AWS charges up to $124 an hour for an AI-optimized cloud server, which works out to about $90,000 a month for 24/7 usage.
Microsoft Azure users don’t have to rent special servers for DeepSeek. However, they pay for the computing power they use, so the cost varies based on how efficiently they run the model.
By contrast, groups using Meta’s Llama 3.1 through AWS pay $3 per million tokens. Tokens are chunks of text; 1,000 tokens correspond to roughly 750 words.
Smaller cloud companies like Together AI and Fireworks AI have adopted a simpler pricing system, charging a fixed per-token rate for DeepSeek-R1.
Another cheaper option for DeepSeek-R1 is its parent company’s own API at $2.19 per million tokens. This is three to four times cheaper than some Western cloud providers.
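A back-of-the-envelope calculation makes these pricing models easier to compare. The sketch below uses only the figures quoted in this article (the $124/hour AWS server, the $3 and $2.19 per-million-token rates, and the 1,000-tokens-to-750-words rule of thumb); the 30-day month and the 10-million-token workload are illustrative assumptions.

```python
# Rough comparison of the two AI pricing models discussed above:
# renting a dedicated server by the hour vs. paying per token.
# All rates come from the article; the 30-day month is an assumption.

HOURS_PER_MONTH = 24 * 30  # assumed 30-day month, running 24/7


def monthly_server_cost(hourly_rate: float) -> float:
    """Cost of renting an AI-optimized cloud server around the clock."""
    return hourly_rate * HOURS_PER_MONTH


def token_cost(tokens: int, price_per_million: float) -> float:
    """Cost of processing a given number of tokens at a per-million rate."""
    return tokens / 1_000_000 * price_per_million


def tokens_to_words(tokens: int) -> float:
    """Rough conversion: 1,000 tokens is about 750 words."""
    return tokens * 750 / 1_000


# AWS AI-optimized server at $124/hour, used 24/7:
print(monthly_server_cost(124))      # about $89,280 a month

# The same hypothetical 10M-token workload under per-token pricing:
print(token_cost(10_000_000, 3.00))  # Llama 3.1 via AWS: $30
print(token_cost(10_000_000, 2.19))  # DeepSeek's own API: ~$21.90

# How much text 10M tokens represents, using the rule of thumb:
print(tokens_to_words(10_000_000))   # roughly 7.5 million words
```

The gap explains the analysts’ point: unless a workload keeps a rented server busy with an enormous token volume, per-token pricing is far cheaper for the end user, which is why providers expect to earn more from infrastructure rentals than from model usage fees.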
Source: https://www.cryptopolitan.com/chinese-chip-makers-cloud-embrace-deepseek/