Facebook is using private photos from your phone gallery to train Meta AI models

Ahmad Junaid · Blog · June 30, 2025


Meta is quietly testing a controversial new feature on Facebook that scans users’ camera rolls, even for photos and videos they haven’t shared, raising fresh concerns about data privacy and transparency.

Initially reported by TechCrunch, the feature appears as a pop-up when some Facebook users attempt to upload a Story. It invites them to enable “cloud processing,” which allows Meta to automatically access and upload images from their phone’s gallery to its cloud servers on a regular basis. In return, the company promises personalised content, such as photo collages, themed recaps, and AI-generated filters for events like birthdays and graduations.

At first glance, the feature appears designed to offer creative tools and convenience. However, tapping “Allow” grants Meta permission to scan all photos and videos on the device, including those never posted online. The company’s AI can then analyse metadata (such as date and location), facial features, and objects within the images to generate suggestions and improve its capabilities.

What’s troubling privacy advocates is not just the extent of access, but the lack of transparency. Meta has not issued a formal announcement or blog post about the rollout, aside from a low-profile help page for Android and iOS users. The feature’s sudden appearance and vague description mean many users may be consenting without fully understanding the implications. Once enabled, the uploads continue quietly in the background, turning personal, unpublished media into potential training material for Meta’s AI systems.

Although Meta states that this is an optional feature and users can disable it at any time, questions remain. For instance, while the company claims these images are not currently being used to train its generative AI models, it has not ruled out doing so in future. Nor has it clearly explained what rights it retains over user content uploaded via cloud processing.

Meta has previously admitted to scraping publicly shared content from Facebook and Instagram to train its AI models. However, its definitions of what counts as “public” content, and of which users qualify as “adults” in those datasets, remain unclear. The ambiguity only deepens with this new tool, especially as the updated AI terms of service, which came into effect on 23 June 2024, make no mention of whether unpublished photos gathered through cloud processing are exempt from AI training.

There is a way to opt out: users can disable the cloud processing feature in their Facebook settings. Meta says that once the feature is turned off, it will begin deleting any unpublished images from its cloud servers within 30 days.

Still, the larger issue remains: this shift towards automatic media scanning marks a growing trend where big tech companies collect increasingly private user data under the guise of helpful AI features. In regions like India, where phones often store sensitive materials such as ID documents, family pictures, and personal screenshots, this kind of data access could have serious implications, particularly since the feature is not explained clearly in regional languages.

While Meta is currently testing the feature in the US and Canada, its global rollout could reignite debates over digital consent, algorithmic transparency, and the ethical boundaries of AI.
