X, the social media platform owned by Elon Musk, is tightening its rules on AI-generated content. The company has confirmed that its AI assistant, Grok, will no longer be allowed to generate sexualised images of real people. The move is part of a broader effort to address ethical concerns and curb the misuse of generative AI on the platform.
Stronger AI Content Controls on X
X is updating Grok’s internal safeguards to ensure the tool cannot create explicit or sexualised visuals that resemble real individuals, whether celebrities, other public figures, or private persons. These changes aim to reduce harm, protect personal dignity, and limit the spread of non-consensual or misleading AI-generated imagery on the platform.
According to the platform, the restriction applies specifically to realistic depictions of real people, not fictional characters or clearly labelled artistic concepts. This distinction is meant to balance creative freedom with responsible AI use.
Why X Is Taking This Step
The decision reflects growing global concern around AI-generated images and deepfakes. Sexualised AI content depicting real people has raised serious ethical, legal, and privacy issues. By blocking such outputs, X is taking a more proactive stance on AI risks, especially as generative tools become more powerful and accessible.
The update also aligns with increasing pressure on tech companies to show accountability in how AI systems are trained, deployed, and moderated.
What This Means for Users and Creators
For everyday users, the change means stricter limits on what Grok can generate visually. For content creators and businesses, it signals that X is prioritising safer AI experiences and clearer boundaries. The company has indicated that it will continue refining its AI policies as new challenges emerge.