Amazon unveils new AI accelerator, plans to adopt Nvidia technology
AWS introduces Trainium3, a chip that is four times faster and significantly more energy-efficient than its predecessor. Starting with the next chip generation, Nvidia's NVLink Fusion will be used.
Amazon's subsidiary Amazon Web Services (AWS) on Tuesday unveiled Trainium3, a new version of its chip for training artificial intelligence (AI) models, at its cloud computing conference AWS re:Invent 2025 in Las Vegas – just over a year after the previous model was introduced. The state-of-the-art 3-nanometer chip has recently been installed in some data centers and is now available to customers, Dave Brown, Vice President for AWS Compute and Machine Learning Services, told the news agency Bloomberg. "At the beginning of next year, we will start scaling very, very quickly," he said.
Chip development is considered a key factor in Amazon's AI strategy. The cloud division AWS is one of the largest providers of rented computing power and data storage and drives Amazon's growth. In the development of AI tools, however, the US group stands in the shadow of Google and Microsoft, the latter of which is closely tied to ChatGPT maker OpenAI.
More computing power, less energy consumption
With its Trainium chips, Amazon hopes to offer a cost-effective alternative to the competition, Brown told the news agency Reuters. "We have to prove to them that we have a product that gives them the performance they need and achieves a reasonable price so that they get that price-performance advantage," said Brown. According to the company, the Trainium processors are capable of performing the intensive calculations behind AI models more cost-effectively and efficiently than, for example, the graphics processors from market leader Nvidia.
In the fierce AI competition, AWS is trying to develop more energy-efficient systems. According to Brown, the new Trainium3 UltraServer systems each contain 144 Trainium3 chips and deliver more than four times the computing power of AWS's previous AI generation, along with four times the memory. Perhaps more importantly, given the immense energy demand of AI data centers: according to AWS, the chips and systems consume 40 percent less electricity than the previous generation.
Nvidia technology in new chip generation
In an effort to attract large AI customers to its services, AWS also announced on Tuesday that starting with the fourth generation of its Trainium AI accelerator, which is already under development, it will use Nvidia's "NVLink Fusion" technology. NVLink creates fast connections between different types of chips and is considered one of Nvidia's "crown jewels," according to Reuters. Chip manufacturers such as Intel and Qualcomm already use Nvidia's high-speed interconnect technology.
NVLink is expected to give the Trainium4 chip another major performance leap and help AWS build larger AI servers whose chips can detect and communicate with one another faster. This is a crucial factor when training large AI models, where thousands of units need to be connected. According to the tech portal TechCrunch, systems powered by Trainium4 could make it easier to draw large AI applications designed for Nvidia graphics processors onto Amazon's cloud.
In addition, Amazon on Tuesday introduced new versions of its AI model Nova, which so far holds only a small market share compared to competitors like OpenAI's ChatGPT, Anthropic's Claude, or Google's Gemini. This is expected to change with the new versions. Among the new Nova 2 products is a variant called Omni, which can respond to text, image, voice, or video inputs with both text and images. Another model, called Sonic, can answer voice commands with speech output in a way that AWS CEO Matt Garman described as "human-like," according to Reuters.
(akn)