Critics question the use of Facebook and Instagram posts for AI training


Meta’s plans to use public posts and images from Facebook and Instagram to train its artificial intelligence (AI) tools have come under fire from digital rights groups. These organizations argue that processing user content for AI purposes constitutes an abuse of personal data. In response, the European campaign group Noyb has filed complaints with 11 data protection authorities in Europe, urging them to take immediate action to halt Meta’s plans.

Meta, formerly known as Facebook, has recently informed UK and European users of its platforms that their information may be used to develop and improve its AI products. This includes public posts, images, image captions, comments, and Stories that users over the age of 18 have shared on Facebook and Instagram. However, private messages are excluded from this data collection.

In response to the complaints, Meta has stated that its approach is compliant with relevant privacy laws and similar to how other large tech firms use data for AI development in Europe. The company argues that the use of European user information will contribute to a wider rollout of generative AI experiences by providing more diverse and relevant training data. Tech firms have been actively seeking fresh, multiformat data to enhance their AI models, powering chatbots, image generators, and other AI products.

Meta’s unique access to vast amounts of publicly shared images, videos, and text posts positions the company as a key player in the AI landscape. CEO Mark Zuckerberg has expressed the importance of this “unique data” as a key part of Meta’s AI strategy going forward. Additionally, Meta’s chief product officer, Chris Cox, has confirmed the use of public Facebook and Instagram user data for generative AI products offered worldwide.

Despite Meta’s reassurances, concerns have been raised about how the company has informed users of the change in data usage. Facebook and Instagram users in the UK and Europe have received notifications or emails detailing how their information will be used for AI. Meta relies on legitimate interests as the legal basis for processing user data, meaning users must actively opt out to prevent their data from being used for AI. This process has been criticized as unclear and cumbersome, potentially deterring users from objecting.

Furthermore, Noyb and other digital rights advocates argue that Meta should obtain explicit consent from users and implement an opt-in system instead of relying on an opt-out model. They assert that by requiring users to actively object to the use of their data, Meta is shifting the responsibility to the user, which is deemed unacceptable.

While Meta claims that its process is legally compliant and used by competitors, critics argue that the company should prioritize user consent and transparency. Meta’s privacy policy states that objections will be upheld and information will cease to be used unless there are compelling grounds that outweigh user rights or interests. However, even users without Meta accounts or those who successfully object may still have some of their information used for AI purposes if they appear in publicly shared images on Facebook or Instagram.

The Irish Data Protection Commission, responsible for ensuring Meta’s compliance with EU data law due to the company’s Dublin headquarters, has confirmed that it is investigating the complaints filed by Noyb. This further highlights the significance of this issue and the potential implications for Meta’s data practices.

In conclusion, the controversy surrounding Meta’s plans to use public posts and images for AI training highlights the ongoing debate over the balance between data privacy and AI development. Digital rights groups argue that Meta’s approach constitutes an abuse of personal data, while the company maintains that it is compliant with privacy laws and industry standards. The outcome of the investigations by data protection authorities will provide insight into the future of data usage for AI purposes and the responsibilities of tech giants in protecting user privacy.

