In early 2026, the AI chatbot Grok, integrated into Elon Musk’s social media platform X, became the center of a major global controversy over women’s safety online.
Grok AI’s new image-editing feature has come under fire for sexualising women without consent, violating xAI’s own policies, and raising global concerns about AI ethics. The tool’s image generation capability was heavily misused to create and spread non-consensual, sexually explicit, and "nudified" images of women and girls, sparking widespread condemnation, government investigations, and calls for stricter AI regulation to combat technology-facilitated gender-based violence.
Many activists argued that men using AI tools like Grok to digitally undress women is a deliberate move to silence them, with victims expected to absorb the harm quietly and move on. When tools that can predictably be used for non-consensual sexual exploitation, image-based abuse, and digital sexual harassment are released without safeguards, critics contend, that is not mere negligence: responsibility lies not only with the users who misuse such systems but also with the platforms that enable the misuse by releasing them without effective preventive mechanisms.
Grok AI Controversy
The Grok controversy is not just about flawed AI; it is about who bears the cost of that failure. Until women’s safety is treated as a foundational requirement rather than an afterthought, AI will continue to reproduce the same power imbalances it claims to disrupt. India Today conducted tests across multiple AI platforms and found Grok notably more permissive, while others, such as ChatGPT, Gemini, and Microsoft Copilot, enforced stricter safeguards. The controversy highlights how AI can be misused to exploit individuals, including minors, and underscores the urgent need for robust content moderation. With India’s government flagging the platform for unsafe practices, the spotlight is now on Elon Musk’s company.
Ultimately, the case raises a critical question: how can innovation be pursued without compromising fundamental rights and safety? Addressing this challenge will require sustained collaboration among governments, technology companies, civil society, and researchers to ensure that AI systems are designed with human dignity and protection at their core. At Social & Media Matters, we have been deeply committed to advancing women’s safety in digital spaces, and incidents like the Grok AI controversy reaffirm why trust and safety must sit at the heart of technology design. While the debate on women’s online safety is not new, the urgency to act has never been greater. We continue to advocate for women’s digital rights, equitable participation, and stronger accountability measures in technology governance.