Areas of agreement
European Union regulators have opened a formal investigation into Elon Musk’s platform X over its AI chatbot Grok, focusing on the generation and circulation of non-consensual sexually explicit and sexualized deepfake images within the EU. Both liberal and conservative outlets agree that the probe is being conducted by the European Commission under the Digital Services Act, and that it expands an ongoing investigation into X’s broader systems and compliance practices. Coverage from both sides notes that regulators are particularly concerned that Grok may have been used to produce manipulated sexual images of real individuals, potentially including child sexual abuse material, and that the investigation will scrutinize whether X has appropriate safeguards, moderation tools, and risk-mitigation measures in place. Reports also converge on the timeline: the new inquiry follows earlier enforcement steps and fines against X related to transparency and systemic-risk obligations, and it looks specifically at how Grok operates for users in the EU.
Liberal and conservative sources also share the basic institutional and legal context in which this investigation unfolds, describing the Digital Services Act as the EU’s flagship content-regulation framework for large online platforms. Both note that the DSA requires companies like X to assess and mitigate systemic risks, including the spread of illegal content and harm from advanced AI tools, and that the Commission has the authority to demand internal documents, conduct audits, and ultimately impose substantial fines or remedial measures. Coverage across the spectrum presents Grok as part of a broader wave of generative AI features being integrated into social networks, highlighting concerns over the misuse of such technology to produce deepfakes and other abusive content. There is cross-ideological acknowledgment that this case may serve as an early test of how aggressively EU regulators will enforce AI-related safety, transparency, and child-protection obligations on major tech platforms.
Areas of disagreement
Framing of the EU’s role. Liberal-aligned outlets tend to frame the EU investigation as a necessary and expected use of regulatory power to protect users from illegal sexual content and to enforce the Digital Services Act on a high-risk platform. Conservative outlets are more likely to suggest that the EU may be overreaching, portraying the probe as part of a broader pattern of European regulators targeting Musk and X. While liberals emphasize the DSA as a vital tool for accountability in the age of AI, conservatives often stress concerns about bureaucratic overreach and potential politicization of tech regulation.
Portrayal of X and Musk. Liberal coverage generally casts X and Musk as lagging or negligent in moderating AI-generated sexual content, presenting the case as another example of Musk’s cost-cutting and deregulation undermining safety. Conservative coverage more often underscores the scale and novelty of AI threats, arguing that any platform could struggle and that Musk is being singled out because of his high profile and disputes with European officials. Liberals highlight a pattern of prior violations and fines against X under the DSA, whereas conservatives tend to contextualize those actions as part of an ongoing clash between Musk’s free-speech rhetoric and European regulatory norms.
Emphasis on victims and harms. Liberal-aligned outlets put stronger emphasis on the victims of sexual deepfakes and the potential presence of child sexual abuse material, stressing the human-rights, dignity, and child-protection dimensions of the case. Conservative outlets acknowledge these harms but often devote more space to the implications for innovation, AI development, and the risk that strict liability could chill technological progress. Liberals link this case to broader feminist and child-safety advocacy efforts around non-consensual imagery, while conservatives more frequently frame it within debates over censorship, liability shields, and the burden on tech firms.
Implications for AI and platform governance. Liberal coverage commonly presents the investigation as a precedent that could push platforms to build safer AI systems, more robust risk assessments, and stricter guardrails before launching features like Grok. Conservative coverage tends to focus on the possibility that stringent EU enforcement will create heavy compliance costs, slow down AI deployment, and encourage companies to limit certain capabilities in Europe. While liberals see this as a test of whether law can keep pace with AI-generated abuse, conservatives are more inclined to treat it as a bellwether for whether regulators will micromanage product design and content moderation.
In summary, liberal coverage tends to depict the EU’s Grok probe as a justified and overdue intervention to curb sexual deepfakes, protect victims, and force X and Musk to meet their legal obligations, while conservative coverage tends to view it more skeptically, as aggressive European regulation aimed at a controversial tech figure that could end up stifling innovation and free expression.