Elon Musk’s X faces EU probe over alleged GDPR violations in AI training
Elon Musk’s X is facing a regulatory probe in Europe over its alleged use of public posts from EU users to train its Grok AI chatbot – an investigation that could set a precedent for how companies use publicly available data under the bloc’s privacy laws.
The Irish Data Protection Commission, in a statement, said it is examining whether X Internet Unlimited Company (XIUC), the platform’s newly renamed Irish entity, has complied with key provisions of the GDPR.
At the heart of the probe is X’s practice of sharing publicly available user data – such as posts, profiles, and interactions – with its affiliate xAI, which uses the content to train the Grok chatbot.
This data-sharing arrangement has drawn concern from regulators and privacy advocates, especially given the lack of explicit user consent.
Adding to those concerns, rival Meta announced this week that it would also begin using public posts, comments, and user interactions with its AI tools to train its models in the EU – signaling a broader industry trend that may invite further scrutiny.
Ongoing regulatory scrutiny
Ireland’s probe into X’s use of personal data marks the latest step in the EU’s broader push to hold AI vendors accountable.
Many leading AI companies have adopted a “build first, ask later” strategy, often deploying models before fully addressing regulatory compliance.
“However, the EU does not look kindly on the approach of opting users into sharing data by default,” said Hyoun Park, CEO and chief analyst at Amalgam Insights. “Data scraping is especially a problem in the EU because of the establishment of GDPR back in 2018. At this point, GDPR is an established law, with over 1 billion euros in annual fines consistently being handed out year over year.”
The DPC’s investigation into X could also become a regulatory inflection point for the AI industry.
Until now, many AI models have operated in a legal gray area when it comes to scraping publicly available personal data, according to Abhivyakti Sengar, practice director at Everest Group.
“If regulators conclude that such data still requires consent under GDPR, it could force a rethink of how models are trained, not just in Europe, but globally,” Sengar said.
More pressure on enterprise adoption
The probe is likely to further dampen enterprise adoption of AI models trained on publicly available personal data, as businesses weigh legal and reputational risks.
“There’s a noticeable chill sweeping across enterprise boardrooms,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “With Ireland’s data watchdog now formally probing X over its AI training practices, the lines between ‘publicly available’ and ‘publicly usable’ data are no longer theoretical.”
Eighty-two percent of technology leaders in the EU now scrutinize AI model lineage before approving deployment, according to Greyhound Research.
In one case, a Nordic bank paused a generative AI pilot mid-rollout after its legal team raised concerns about the source of the model’s training data, Gogia said.
“The vendor failed to confirm whether European citizen data had been involved,” Gogia said. “Compliance overruled product leads and the program was ultimately restructured around a Europe-based model with fully disclosed inputs. This decision was driven by regulatory risk, not model performance.”
The world is watching
Ireland’s move could shape how regulators in other parts of the world rethink consent in the age of AI.
“This probe could do for AI what Schrems II did for data transfers: set the tone for global scrutiny,” Gogia said. “It’s not simply about X or one case – it’s about the nature of ‘consent’ and whether it survives machine-scale scraping. Regions like Germany and the Netherlands are unlikely to sit idle, and even outside the EU, countries like Singapore and Canada are known to mirror such precedents. The narrative is shifting from enforcement to example-setting.”
Park suggested that enterprise customers should seek indemnity clauses from AI vendors to protect against data compliance risks. These clauses hold vendors legally accountable for regulatory compliance, governance, and intellectual property issues linked to the AI models they provide. “Although most technology companies try to avoid indemnity clauses in most cases because they are so wide-ranging in nature, AI is an exception because AI clients require that level of protection against potential data and intellectual property issues,” Park added.