Meta recently tested a new Facebook feature with a small group of users that allows its Meta AI system to automatically pull photos from users' phone camera rolls, including photos never posted to Facebook, and generate creative content such as collages, recap videos, and stylized edits. Although Meta emphasizes that the feature is "opt-in" and that the photos will not be used for advertising or AI model training, it has still raised concerns among many users about the security of their personal information and privacy.
According to a report on the TechCrunch website, the feature is currently being tested with a very small number of Facebook users, and is primarily triggered by a prompt when a user creates a Stories post. The pop-up states that Facebook will continuously select media files from the phone's camera roll and upload them to Meta's cloud service based on information such as time, location, and subject, in order to generate AI-powered sharing suggestions.
However, the prompt has also raised questions about whether Meta is using the feature to obtain personal photos from users' phones that were never made public and feed them into AI model training. Meta public affairs manager Maria Cubeta was quoted as saying that Meta did not, and will not, use these photos to train its AI models.
Cubeta explained that the feature is currently a test project meant to "help users more easily create and share content." The resulting suggestions are entirely optional, all photo processing happens only with user consent, and the setting can be turned off at any time under Facebook's "Preferences > Camera Roll Sharing Suggestions."
According to some user reports, the feature was actually being tested as early as the start of 2025, and Meta has quietly published a corresponding help document with detailed instructions for turning it on and off.
Although Meta has repeatedly emphasized that the feature does not involve model training or commercial use, the data being processed includes private photos that users have not uploaded, and the descriptions of "continuous uploading" and "cloud processing" leave many people worried about the leakage or misuse of personal data. With the line between artificial intelligence and personal data growing increasingly blurred in recent years, any change to the permissions associated with mobile photo albums is bound to be a highly sensitive issue for users.
As generative artificial intelligence continues to permeate everyday life, how transparently technology platforms handle user data will inevitably become a crucial measure of corporate responsibility and user trust. Meta's test may pave the way for future social experiences, but striking a balance between innovative features and privacy protection remains a significant challenge.



