Leaked: How LinkedIn Is Allegedly Using Your Data to Train AI Without Consent

The war between AI innovation and data privacy has been going on for quite some time now. This year, LinkedIn is taking the brunt of it.

A recent class-action lawsuit accuses the professional networking platform of secretly using its users' data, including private messages, to train AI models. Naturally, this has landed the company in the crosshairs of debates over consent, transparency, and corporate ethics.

The LinkedIn Lawsuit

On January 21, 2025, a group of LinkedIn Premium users filed a lawsuit against the platform on the grounds that their private data had been shared with third parties without explicit consent. According to the plaintiffs, LinkedIn quietly implemented a privacy policy update in September 2024 that allowed user data to be used for AI training. The users claim this update was buried in the fine print, making it easy for unsuspecting users to overlook.

The lawsuit accuses LinkedIn of violating user trust, breaching contractual agreements, and potentially running afoul of U.S. data protection laws. If you've been on the internet for a while, you'll know this isn't the only company that has mined user data to enhance AI capabilities. The AI companies themselves, Meta, OpenAI, and Microsoft to name a few, have been in continuous legal battles over how they source their AI training data.

How LinkedIn Is Justifying Its Actions

LinkedIn has stressed that it gives users the option to opt out of data mining via the "Data Privacy" setting introduced in August 2024. However, users and critics argue that the company defaulted users into data sharing without clear, upfront disclosure, which they find highly misleading. In fact, many users only realized their data was being used for AI training after the lawsuit made headlines.

Ironically, this isn't an entirely new issue. As the BBC reported in September 2024 in "LinkedIn suspends AI training using UK user data," LinkedIn suspended the use of UK user data for AI training following concerns raised by the Information Commissioner's Office (ICO). The ICO emphasized the necessity of transparency and user control over personal data usage.

Similarly, the South African Artificial Intelligence Association (SAAIA) filed a complaint against LinkedIn for using citizens' data to train AI models without consent, highlighting potential violations of local data protection laws, as reported by Techpression in "South African group files complaint against LinkedIn for using citizens' data to train AI models."

But then again, should we really have to actively opt out of data-sharing practices that potentially compromise our safety? Or should the companies that demand we trust them with our data be required to obtain explicit, opt-in consent before using it? 

While it's true that AI development requires real-world data, it's also true that personal data should not be harvested without direct user approval; otherwise, it defeats the purpose of calling it "personal data." Consumers are actively demanding more transparency and control over their digital footprint.

For those concerned about LinkedIn, or any other platform for that matter, using their data for AI or third-party sharing, here are a few tricks to try. 

  • Review the terms and conditions of your platforms. As we share in What Companies Don't Want You To Know About Terms and Conditions, you can never really tell what power a company has over your data if you don't read through before hitting the "I agree" button.
  • If you're a LinkedIn user, navigate to the platform's "Settings & Privacy" section and disable the "Data for Generative AI Improvement" option.
  • Stay Informed. Keep an eye on updates to privacy policies, as companies often change them quietly… sneakily, if you ask me. One simple way to catch silent edits is to save a copy of a policy and compare it against a later version (see the sketch after this list).
  • Advocate for Stricter Policies. Support regulatory measures that push for better consumer data protection and AI ethics.
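For the curious, here is a minimal Python sketch of that "stay informed" idea: it compares two saved plain-text snapshots of a privacy policy and prints any wording changes. The file names are hypothetical placeholders, and this is just one low-tech approach; the point is simply that an ordinary text diff can surface quiet policy edits.

```python
# diff_policy.py - compare two saved snapshots of a privacy policy
# to spot quiet wording changes. Illustrative sketch only; the
# file paths passed in are hypothetical placeholders.
import difflib
import sys


def diff_policies(old_path: str, new_path: str) -> None:
    """Print a unified diff of two plain-text policy snapshots."""
    with open(old_path, encoding="utf-8") as f:
        old_lines = f.readlines()
    with open(new_path, encoding="utf-8") as f:
        new_lines = f.readlines()

    # unified_diff yields only the changed regions plus context,
    # which makes buried edits easy to spot.
    diff = list(difflib.unified_diff(
        old_lines, new_lines,
        fromfile=old_path, tofile=new_path,
    ))
    if diff:
        sys.stdout.writelines(diff)
    else:
        print("No changes detected between the two snapshots.")


if __name__ == "__main__":
    # Usage: python diff_policy.py policy_old.txt policy_new.txt
    diff_policies(sys.argv[1], sys.argv[2])
```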

As artificial intelligence continues to grow, it's clear that we are going to see many more cases like this. The hard question is whether companies should prioritize AI development at the cost of user privacy, or whether there is a way to achieve both.
