LinkedIn’s New Generative AI Feature: What You Need to Know
Ali Hussein <[email protected]> wrote:
> The double-edged sword here is this: most data used to train AI models is
> Western data, which creates the biases we see in AI. So, do we continue to
> be excluded from AI models?
>
Another reality is that AI platforms are a multi-billion-dollar industry. Do
they continue to profit from our data for free? Is there a win-win
situation where it’s not just extraction but shared prosperity?
>
> Regards
>
> *Ali Hussein*
>
> Fintech | Digital Transformation
>
>
> Tel: +254 713 601113
>
> Twitter: @AliHKassim
>
> LinkedIn: Ali’s Profile <ke.linkedin.com/in/alihkassim>
>
>
> Any information of a personal nature expressed in this email is purely my
> own and does not necessarily reflect the official positions of the
> organizations that I work with.
>
>
> On Thu, Sep 19, 2024 at 2:48 PM Jacinta Wothaya via KICTANet <
> [email protected]> wrote:
>
>> Dear listers,
>>
>> LinkedIn has stirred up controversy by introducing a feature allowing
>> the platform and its affiliates to use personal data and user-generated
>> content to train generative AI models
>> <www.linkedin.com/help/linkedin/answer/a6278444>. While this
>> move reflects the growing trend of data commodification in the age of
>> artificial intelligence, it raises serious concerns regarding user consent
>> and privacy. The new feature allows LinkedIn to leverage the vast amount of
>> data generated by its users to enhance its AI capabilities. This decision
>> is not unexpected; as AI technology becomes more sophisticated, data is
>> increasingly recognized as a valuable asset. However, LinkedIn’s
>> implementation has come under fire for its lack of transparency. *Many
>> users were automatically opted in to this feature without prior
>> notification*, igniting fears over data misuse. The company has just
>> updated the privacy policy on its website
>> <www.linkedin.com/legal/privacy-policy#use> to reflect the new
>> changes, effective September 18, 2024.
>>
>> According to LinkedIn’s FAQs
>> <www.linkedin.com/help/linkedin/answer/a5538339>, opting out
>> means that the platform and its affiliates won’t use your personal data or
>> content to train models going forward. However, this does not affect any
>> training that has already taken place. Furthermore, opting out does not
>> prevent LinkedIn from using your personal data for training
>> non-content-generating generative AI models. Users must object to this
>> latter use by filling out a separate opt-out form provided by LinkedIn
>> <www.linkedin.com/help/linkedin/ask/TS-DPRO>.
>>
>> The move appears to contravene several important regulations designed to
>> protect user privacy. Under the General Data Protection Regulation (GDPR)
>> in the EU, Article 6 stipulates that personal data must be processed
>> lawfully, fairly, and transparently. LinkedIn’s failure to notify users may
>> violate these principles, particularly the requirement for informed
>> consent. Furthermore, Article 7 mandates that consent must be freely given
>> and can be withdrawn at any time. LinkedIn’s FAQ for its AI training claims
>> that it uses “privacy-enhancing technologies to redact or remove personal
>> data” from its training sets. Notably, the platform states it does not
>> train its models on data from users located in the EU, EEA, or Switzerland,
>> which may provide some level of assurance for users in those regions.
>> Similarly, the Kenya Data Protection Act (2019) emphasizes the importance
>> of consent. Section 26 of this act requires data controllers to obtain
>> explicit consent from users before processing their personal data. By
>> automatically opting users in, LinkedIn could be infringing upon these
>> legal protections, raising significant questions about its compliance with
>> data protection laws.
>>
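An aside on the “privacy-enhancing technologies” LinkedIn cites: the company does not document how its redaction actually works, but the general idea can be sketched in a few lines. The following is a deliberately naive illustration; the regex patterns and placeholder tokens are my own assumptions, not LinkedIn’s pipeline:

```python
import re

# Toy redaction pass -- an illustration of the concept only, NOT
# LinkedIn's actual (undocumented) privacy-enhancing technology.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")   # crude email matcher
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")        # crude phone matcher

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact("Reach me at jane@example.com or +254 713 601113."))
# Reach me at [EMAIL] or [PHONE].
```

Even this sketch shows why redaction alone is weak protection: pattern matching misses names, employers, locations, and the other contextual identifiers that LinkedIn profiles are full of, which is one reason the opt-out rights discussed above still matter.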
>> Notably, LinkedIn’s recent move isn’t an isolated case but is part of a
>> broader trend where tech giants exploit user data to fuel AI advancements.
>> Only recently, Meta reportedly acknowledged using all public posts and
>> photos from adult Facebook and Instagram users since 2007 to train its AI
>> models
>> <www.theverge.com/2024/9/12/24242789/meta-training-ai-models-facebook-instagram-photo-post-data>.
>>
>> Such practices raise important questions about user rights, data
>> ownership, and ethical considerations in AI development. While the
>> potential for innovation is significant, the risks associated with
>> unauthorized data use cannot be overlooked. Tech giants will continue to
>> push the boundaries of data utilization, and we are likely to see
>> increasing scrutiny from governments and regulatory bodies worldwide.
>> Nonetheless, existing laws may not be sufficient to address the
>> complexities introduced by AI and big data, and the need for robust
>> legislation to increase transparency, consent, and accountability in data
>> usage has never been more pressing. For now, it is each user’s
>> responsibility to stay informed and proactive about their data privacy, but
>> we look forward to a time when all tech companies innovate with user
>> protection as the priority.
>>
>> *How to Opt Out of Your Data Being Used to Train Generative AI*
>>
>> 1. While logged into your LinkedIn account, go to *Settings & Privacy*.
>> 2. Click on *Data Privacy*.
>> 3. Select *Data for Generative AI Improvement* and turn off the feature.
>> 4. To stop your data from being used for non-content-generating AI
>> models, complete the opt-out form provided by LinkedIn
>> <www.linkedin.com/help/linkedin/ask/TS-DPRO>.
>>
>>
>> Best,
>>
>> *Jacinta Wothaya,*
>> *Digital Resilience Fellow @KICTANet* <www.kictanet.or.ke/>, *@tatua*
>> <tatua.digital/>
>> LinkedIn: *Jacinta Wothaya*
>> <www.linkedin.com/in/jacinta-wothaya-510a8b153>
>>