The AI chip race is heating up, and Microsoft and OpenAI each look set to enter the competition on their own

Microsoft and OpenAI, the technology giants that built a close partnership around ChatGPT, have recently been the subject of repeated reports of behind-the-scenes rivalry. Now these two outwardly close partners may drift further apart, and could even end up competing on the artificial intelligence (AI) chip track.

According to Reuters on October 6th local time, OpenAI, the developer of ChatGPT, is exploring building its own AI chips and has gone as far as evaluating a potential acquisition target. Meanwhile, the technology outlet The Information reported that Microsoft plans to unveil its first AI chip at its annual developer conference next month.

OpenAI plans to develop its own AI chips and is evaluating acquisition targets

Since the launch of ChatGPT last year, demand for specialized AI chips has grown rapidly across the AI field. These chips, also known as AI accelerators, are essential for training and running the latest generative AI technology, and NVIDIA controls over 80% of the global AI chip market.

OpenAI CEO Sam Altman has made acquiring more AI chips a top priority for the company, publicly complaining about the shortage of graphics processing units (GPUs) and Nvidia's dominance of the market. In his view, OpenAI lacks the advanced processors needed to power its software, and the hardware costs of running and supporting its work and products are very high.

Running ChatGPT is indeed very expensive for OpenAI. Bernstein analyst Stacy Rasgon has estimated that each ChatGPT query costs roughly 4 cents. If query volume grew to one tenth the scale of Google Search, OpenAI would initially need to deploy roughly $48.1 billion worth of AI chips for computing, and would need an additional $16 billion worth of chips each year to keep the service running.
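To make the scale of that arithmetic concrete, here is a minimal back-of-envelope sketch in Python. It assumes an illustrative Google Search volume of about 9 billion queries per day (an assumption for this sketch, not a figure from the article) together with the 4-cents-per-query estimate quoted above.

    # Back-of-envelope sketch of the inference bill implied by the figures above.
    # The Google Search query volume is an illustrative assumption, not a
    # figure from the article.
    COST_PER_QUERY_USD = 0.04               # ~4 cents per ChatGPT query (Rasgon estimate)
    ASSUMED_GOOGLE_QUERIES_PER_DAY = 9e9    # assumed, for illustration only
    SHARE_OF_GOOGLE = 0.1                   # scenario: one tenth of Google's query volume

    queries_per_day = ASSUMED_GOOGLE_QUERIES_PER_DAY * SHARE_OF_GOOGLE
    annual_inference_cost = queries_per_day * 365 * COST_PER_QUERY_USD

    print(f"Queries per day in this scenario: {queries_per_day:,.0f}")
    print(f"Implied annual inference spend:  ${annual_inference_cost / 1e9:,.1f}B")
    # With these assumptions the total lands around $13B a year, the same order
    # of magnitude as the ~$16B of additional chips per year cited above.

Under these assumptions the running cost alone reaches tens of billions of dollars within a few years, which is why chip supply and cost sit at the center of OpenAI's planning.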

Developing its own AI chips would also place OpenAI among a small group of large technology companies, such as Google and Amazon, that are trying to control the design of the chips critical to their business. Meanwhile, Andrej Karpathy, a former director of AI at Tesla who has returned to OpenAI, has predicted that OpenAI's next-generation large language model, GPT-5, may require 30,000 to 50,000 Nvidia H100 chips. A chip shortage could therefore seriously limit OpenAI's development.
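To put Karpathy's chip count into dollar terms, the short sketch below applies an assumed H100 price of roughly $25,000 to $40,000 per unit (an assumption based on commonly cited street prices, not a figure from the article) to the 30,000-50,000 range.

    # Rough hardware outlay implied by the 30,000-50,000 H100 estimate above.
    # The per-unit price range is an assumption based on commonly cited street
    # prices, not a figure from the article.
    H100_UNIT_PRICE_USD = (25_000, 40_000)   # assumed low/high price per H100
    CHIP_COUNT = (30_000, 50_000)            # range quoted by Karpathy

    low = CHIP_COUNT[0] * H100_UNIT_PRICE_USD[0]
    high = CHIP_COUNT[1] * H100_UNIT_PRICE_USD[1]
    print(f"Implied accelerator spend: ${low / 1e9:.2f}B to ${high / 1e9:.1f}B")
    # Roughly $0.75B to $2.0B for the accelerators alone, before networking,
    # power, and data-center costs.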

According to Reuters, citing recent internal discussions, OpenAI has not yet decided whether to go ahead with this AI chip development plan. However, people familiar with the matter say that since at least last year the company has been discussing ways to address the AI chip shortage, including building its own AI chips, working more closely with chipmakers such as Nvidia, and diversifying its suppliers beyond Nvidia.

As of now, OpenAI has not directly responded to questions about its AI chip plan.

If OpenAI presses ahead with a plan to build its own chips, it would be a major strategic initiative and a heavy investment; industry insiders put the annual cost at potentially hundreds of millions of dollars. Even with that commitment of resources, success is not guaranteed. Several large technology companies have been building their own processors for years with only limited results. Meta's in-house chip effort, for instance, has suffered repeated setbacks: the company once outsourced part of its chip design work to Samsung, and has since turned to Qualcomm and MediaTek to buy finished chips, with plans to use Qualcomm parts in place of its own designs.

Recent job postings on OpenAI's website suggest the company is looking to hire people to evaluate and co-develop AI hardware.

Acquiring a chip company could speed up OpenAI's effort to build its own chips, much as the 2015 acquisition of Annapurna Labs did for Amazon. People familiar with the plan say OpenAI has gone as far as conducting due diligence on a potential acquisition target, though the identity of the company under review is not yet clear.

Even if OpenAI presses ahead with its plan to develop chips in-house, including through acquisitions, the effort is likely to take several years, and in the meantime the company will remain dependent on chip suppliers such as NVIDIA and AMD.

Microsoft plans to launch its first artificial intelligence chip next month

According to The Information, Microsoft plans to launch its first AI chip, codenamed "Athena", next month. Development of the chip began in 2019, and it may be unveiled at the Microsoft Ignite conference in Seattle on November 14th. The chip is said to have already been secretly tested by small teams at Microsoft and OpenAI.

Investment bank UBS said in August that Microsoft faces the risk of a GPU supply shortage, which could affect its ability to generate AI revenue next year. If Microsoft could manufacture its own GPUs, it could reduce that risk.

As early as 2020, OpenAI was developing its generative AI technology on a large Microsoft supercomputer that used 10,000 Nvidia GPUs. To support OpenAI's product and research needs, Microsoft has ordered at least 100,000 Nvidia chips. With demand surging, these GPUs have remained in short supply.

Performance details for Athena are not yet clear, but Microsoft hopes it can rival the highly sought-after Nvidia H100. Although many companies tout the hardware and cost advantages of their chips, NVIDIA's GPUs remain the preferred choice of AI developers because of the CUDA platform (NVIDIA's general-purpose parallel computing architecture). Attracting users to new hardware and software will be crucial for Microsoft.
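To see why that software moat matters in practice, here is a minimal PyTorch sketch of the device-selection pattern much AI code follows today: it probes for a CUDA device first and falls back to the CPU otherwise, so a rival accelerator only becomes a realistic target once the frameworks ship a well-supported backend for it.

    import torch

    # Typical device-selection pattern in today's AI code: CUDA is the default
    # fast path, and hardware without a dedicated framework backend falls back
    # to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = torch.nn.Linear(1024, 1024).to(device)   # toy model for illustration
    batch = torch.randn(32, 1024, device=device)     # toy input batch
    output = model(batch)

    print(f"Ran a forward pass on: {device}")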

Microsoft is also continuing to work closely with AMD on AMD's upcoming AI chip, the MI300X. This multi-partner approach can help Microsoft cope with rapidly growing AI workloads.

Currently, Microsoft's Azure cloud also relies on NVIDIA GPUs to serve its cloud customers. With Athena, Microsoft could follow its competitors Amazon and Google in offering cloud customers chips of its own design. Amazon and Google have both made their AI chips part of their cloud sales pitch: Amazon has provided financial backing to OpenAI rival Anthropic and stipulated that Anthropic use Amazon's AI chips, called Trainium and Inferentia, while Google Cloud says that AI companies such as Midjourney and Character.AI are using Google's self-developed Tensor Processing Units (TPUs).

