Apple and OpenAI to become TSMC's first A16 customers

OpenAI is getting serious about developing its own AI chips. The company is said to be one of the first customers of TSMC's upcoming production technology.


(Image: HomeArt/Shutterstock.com)

This article was originally published in German and has been automatically translated.

OpenAI reportedly wants to design competitive chips for training AI models in two steps, both to satisfy its hunger for computing power and to free itself from its dependence on Nvidia. In a first step, 3-nanometer accelerators are to roll off the line at contract chip manufacturer TSMC. In a second step, OpenAI is said to be moving to TSMC's A16 process (formerly known as 1.6 nm), two generations further on.

The Taiwanese newspaper United Daily News (UDN) reports this, citing sources close to the company. According to the report, OpenAI has already secured the corresponding production capacity at TSMC, making the company behind ChatGPT, alongside Apple, one of the first customers for A16 wafers.

The A16 process combines the nanosheet transistors of the N2 generation with a new kind of backside power delivery, which is intended to significantly improve the chips' electrical characteristics. TSMC plans to mass-produce A16 chips from the second half of 2026, by which point OpenAI would have state-of-the-art AI accelerators.

OpenAI CEO Sam Altman is said to have backed away from the plan to build dedicated semiconductor fabs for AI chips together with TSMC. Instead, the company will likely use the existing production capacity of the world's largest contract chip manufacturer.

Meanwhile, the two US companies Broadcom and Marvell are said to be helping with the chip design. Both have extensive development experience and established relationships with TSMC. They also run custom-silicon programs in which they design chips together with partners to their requirements. OpenAI has reportedly been planning to break away from Nvidia since 2022.

Broadcom in particular touts its memory-integration and networking capabilities, both important for AI accelerators that train large AI models. Marvell highlights, among other things, its multi-chip expertise for building particularly fast accelerators out of multiple chiplets. The arithmetic units of AI chips are structured far more simply than those of modern CPUs or GPUs: they multiply vast numbers of matrices and sum the results (multiply-accumulate, MAC). What matters above all is the periphery, such as caches and the interconnects between the compute units.
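As a minimal sketch of what these arithmetic units compute (the function and dimension names below are illustrative, not taken from any vendor's design), a matrix multiplication reduces to nothing but nested multiply-accumulate steps:

```c
#include <stddef.h>

/* Illustrative sketch: C = A * B expressed as multiply-accumulate (MAC) steps.
   A is M x K, B is K x N, C is M x N, all stored row-major. */
void matmul_mac(const float *A, const float *B, float *C,
                size_t M, size_t K, size_t N)
{
    for (size_t i = 0; i < M; i++) {
        for (size_t j = 0; j < N; j++) {
            float acc = 0.0f;                        /* running accumulator */
            for (size_t k = 0; k < K; k++)
                acc += A[i * K + k] * B[k * N + j];  /* one MAC per step */
            C[i * N + j] = acc;
        }
    }
}
```

An AI accelerator replicates thousands of such MAC units in hardware and runs them in parallel; the hard engineering problem, as noted above, is the periphery that keeps them supplied with data.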

(mma)