Microsoft subsidiary: LinkedIn relies on user content for AI training
With a revised privacy policy, LinkedIn reserves the right to "develop and train" AI models with user data. Exception: the EU.
On Friday, LinkedIn informed its European members about a change to its privacy policy, which came into force on September 18. The revision stems from the job-networking platform's push into generative artificial intelligence (AI). The email to customers states: "We have added an explanation of how we use the information you provide to develop the products and services of LinkedIn and its affiliates. This includes the training of AI models used for content creation ('generative AI'), as well as safeguards and security measures."
The Microsoft subsidiary's recently revised global privacy policy now states the following on this point: "We may use your personal data to improve, develop and provide products and services, develop and train artificial intelligence (AI) models, and develop, provide and personalize our services." At the same time, the network operator reserves the right to "use AI, automated systems and inference to gain insights to make our services more relevant and useful to you and others".
To this end, LinkedIn refers to its own principles for "responsible AI" from the beginning of 2023. Similar to its parent company Microsoft, the social media group assures that it wants to use AI above all to increase the success and productivity of its members, as well as to ensure "fairness and inclusion" and transparency. The company is relying on internal regulation of AI "that includes the assessment and resolution of potential harm and appropriateness, as well as ensuring human oversight and accountability".
The EU and the UK are being left out for now
"Currently, we do not enable training of generative AI based on member data from the European Economic Area" (EEA), "Switzerland and the United Kingdom", LinkedIn explains further. In addition to the 27 EU member states, the EEA also includes Iceland, Liechtenstein and Norway. In these countries, the General Data Protection Regulation (GDPR) and related laws containing special rules for AI apply. Outside these regions, members who do not want their information used for AI training are offered a "proactive" opt-out option.
Meta recently admitted to using publicly available posts, images and other data from Australian users on Facebook and Instagram to train its own AI models. Users down under are not offered an opt-out option, unlike in Europe and the USA, for example. In June, Meta also paused AI training on data covered by the GDPR. Australia does not yet have an equivalent to the EU regulations. The US company has generally excluded the accounts of underage Australian users from AI training, but not photos of children posted by their parents.
LinkedIn is also tweaking the user agreement
In the AI sector, Microsoft relies on a close partnership with OpenAI, among others. The Redmond-based company has invested billions in the ChatGPT operator. At the same time, competition between the two companies is increasing. According to a report, the LinkedIn parent company is working on a large new in-house language model called MAI-1, which is said to be powerful enough to compete with those of Google and OpenAI. Previously, Microsoft had only trained smaller open-source models for generative AI.
LinkedIn has also announced an update to its user agreement, due to come into force on November 20. It includes further details on content recommendations and moderation practices, as well as new provisions relating to "the generative AI features we offer". Members who create content will also be given more tools to "promote their brand beyond LinkedIn".
(nie)