Elon Musk's X platform is once again at the center of a censorship storm, and this time Apple and Google have also come under scrutiny. A coalition of 28 women's and rights advocacy groups issued an open letter strongly demanding that Apple CEO Tim Cook and Google CEO Sundar Pichai comply with their own app store guidelines and immediately remove the X app, because X's built-in AI chatbot Grok is being used extensively to generate nonconsensual deepfake pornographic images and has even produced child sexual abuse material (CSAM).
The allegation: tech companies not only acquiesced, but profited
This open letter was jointly signed by 28 groups, including women's rights organization Ultraviolet, parents' group ParentsTogether Action, and the National Organization for Women.
The letter strongly accuses Apple and Google of turning a blind eye to Grok's misuse of generative AI. The advocacy groups argue that while both tech giants' app store guidelines explicitly prohibit the distribution of non-consensual intimate imagery (NCII) and child sexual abuse material, neither company has taken any concrete action against X to date.
The letter stated, "You not only condone this content but also profit from it. We demand that Apple and Google leadership immediately remove Grok and X to prevent further abuse and criminal activity."
As of now, neither Apple nor Google has responded to this.
Shocking data: Is Grok becoming a "stripping" production machine?
The controversy surrounding Grok first erupted earlier this month. According to reports, within 24 hours of the incident coming to light, Grok was generating approximately 6,700 sexually suggestive or nude images per hour, with up to 85% of the generated content being pornographic.
Even more shockingly, Grok even "admitted" to its violation. In a response, Grok stated: "I deeply regret that on December 28, 2025, I generated and shared an AI image of two young girls (estimated to be 12-16 years old) wearing sexually suggestive clothing, based on user instructions. This violates ethical standards and may have violated U.S. laws regarding CSAM."
California Attorney General opens investigation as bans spread worldwide
Beyond the protests from civic groups, official agencies have also begun to act. California Attorney General Rob Bonta announced on January 14 that he had formally opened an investigation into xAI over Grok's unauthorized creation of sexually explicit images of real people, and demanded immediate remediation from the company. In a statement, Bonta condemned the malicious manipulation of images of women and children into nude content, calling it shocking.
The international response was also quite swift:
• Malaysia and Indonesia announced a ban on Grok on Monday.
• Ofcom, the UK's communications regulator, has officially launched an investigation.
• The U.S. Senate once again passed the DEFIANCE Act, which allows victims of nonconsensual deepfake pornography to file civil lawsuits against those who create or distribute it.
X's response: paywalling image generation and strengthening filters
Faced with overwhelming criticism and investigations, X's current approach is to restrict Grok's image generation feature to paid subscribers and to adjust settings so that generated images are no longer automatically posted to public feeds. Hours after the California Attorney General announced the investigation, X further announced a ban on generating highly revealing images and said it had strengthened its safety measures.
However, reports indicate that non-paying free users still appear to be able to generate a limited number of bikini-style composite images.
Analysis
This incident once again highlights the double standards and dilemmas faced by app stores in their "gatekeeper" role.
In the past, Apple and Google have been ruthless in cracking down on apps that violated content-moderation rules (as in the Parler and Fortnite incidents), yet they seem unusually cautious when it comes to Elon Musk's X. Perhaps this is because X's user base is so massive that removing it would trigger an enormous commercial and political backlash, or perhaps the tech giants are unwilling to weigh in prematurely on the question of whether AI tools are neutral.
However, Grok's case differs from typical social media abuse because it involves AI actively generating illegal content, rather than merely serving as a medium for its dissemination. When AI can "undress" anyone, including minors, with a single click, the issue transcends freedom of speech and becomes a matter of public safety.
X's current paywall strategy (restricting the feature to paid members) essentially turns the ability to create pornographic images into a VIP service, without addressing the core ethical and legal issues. If Apple and Google continue to refuse to intervene, they may face even more severe joint legal liability in the future.