Meta invests 14.3 billion US dollars in Scale AI for a 49-percent stake

Scale AI CEO Alexandr Wang will head a new AI team at Meta, reporting directly to Mark Zuckerberg. The move comes at a cost.

Meta sign at a property entrance (Image: Michael Vi/Shutterstock.com)


Now it's confirmed: Meta is spending 14.3 billion US dollars to remain at the forefront of the AI race. The money is going to Scale AI, a company that, among other things, processes and labels data for AI training. Founder and CEO Alexandr Wang is said to report directly to Mark Zuckerberg while remaining on Scale AI's board.

There had previously been rumors of a takeover. Now both sides have confirmed the deal. Wang will lead a new AI team at Meta with the goal of creating a superintelligence; Jason Droege will initially take over Wang's role as Scale AI's CEO. Scale AI already works with most major AI providers as well as numerous governments and agencies. For US security agencies, for example, the company offers Defense Llama, which is based on Meta's Llama AI model.


Meta plans to share further details on the collaboration in the coming weeks. By acquiring only 49 percent of Scale AI, Meta secures the company's AI expertise while avoiding merger-control requirements. Nevertheless, the US competition authority may still take a closer look at the deal. Meta is already in court with the regulator, retroactively defending its takeovers of Instagram and WhatsApp. Meta paid around 19 billion US dollars for WhatsApp in 2014; Instagram cost only around one billion US dollars in 2012.

Zuckerberg himself is said to have taken charge of recruiting the new AI team. He has reportedly invited AI experts to his home, and there is said to be a WhatsApp group called “Recruiting Party”. According to The Verge, Zuckerberg has contacted potential candidates directly by email or WhatsApp message.

Meta is apparently pursuing a two-pronged AI strategy. While the new superintelligence team develops generative AI and large language models, another AI team in Paris is working on AI models that function differently. V-Jepa, only just presented in its second version, learns from unannotated data. Yann LeCun and his team see this as a major advantage: the AI is meant to learn the way a child internalizes the physical world. Large language models, by contrast, learn from text and annotated data, such as that supplied by Scale AI.

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.