A federal government employee has allegedly leaked a sensitive API key for Elon Musk's xAI platform – a mistake that may have serious implications for both national security and the future of AI development.
According to a TechRadar report, Marko Elez, a 25-year-old software developer with the Department of Government Efficiency (DOGE), accidentally uploaded the key to GitHub while working on a script titled agent.py.
The key provided access to at least 52 private large language models from xAI, including the latest version of Grok (grok-4-0709), a GPT-4-class model that powers some of Musk's most advanced AI services.
The exposed credentials remained active for an extended period, raising major questions about access control, data security, and the growing use of AI in US government systems.
Why it matters

Elez reportedly held high-level clearance and had access to sensitive databases used by agencies such as the Department of Justice, Homeland Security, and the Social Security Administration.
If the xAI credentials were abused before being revoked, they could have opened the door to misuse of the powerful language models, from scraping proprietary data to abusing internal tooling.
The incident follows a string of security lapses tied to DOGE, and adds to a growing chorus of criticism over how the agency – formed under Elon Musk's influence to improve government efficiency – manages its own internal security.
What was leaked

The leaked key was embedded in a GitHub repository owned by Elez and was publicly exposed.
It provided backend access to xAI's model suite, including Grok-4, without any clear usage restrictions.
Researchers who discovered the leak were able to confirm the key's validity before the repository was taken down – but not before it could have been scraped by others.
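To illustrate what "confirming validity" typically involves: researchers usually make a harmless, read-only request with the leaked credential and check whether the server accepts it. The sketch below only builds such a request – it does not send it, since probing an API with a key you do not own may be unlawful. The endpoint URL and Bearer-token header follow common OpenAI-style API conventions and are assumptions here, not confirmed details of xAI's API.

```python
import urllib.request

# Hypothetical validity probe: a read-only "list models" request.
# The URL and auth scheme are assumptions based on common API conventions.
def build_validity_probe(api_key: str) -> urllib.request.Request:
    """Build (but do not send) a harmless request that would reveal
    whether the given API key is still active."""
    return urllib.request.Request(
        "https://api.x.ai/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
        method="GET",
    )

req = build_validity_probe("xai-example-key")
print(req.full_url)
print(req.get_header("Authorization"))
```

A 200 response to such a request would mean the key is live; a 401 would mean it has been revoked.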
The most recent Grok model is used not only in public-facing services such as X (formerly Twitter), but also within Musk's federal contracts.
This means the API leak may have inadvertently created an attack surface spanning both commercial and government systems.
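Leaks like this are usually caught by secret scanners that look for key-shaped strings in public code. A minimal sketch of that idea is below; since xAI's real key format is not public, the pattern (an "xai-" prefix followed by a long alphanumeric token) is an assumption for illustration only.

```python
import re

# Assumed key shape: "xai-" plus 32+ alphanumeric characters.
# Real secret scanners use many such patterns plus entropy checks.
KEY_PATTERN = re.compile(r"\bxai-[A-Za-z0-9]{32,}\b")

def find_hardcoded_keys(source: str) -> list[str]:
    """Return any substrings of the source text that look like API keys."""
    return KEY_PATTERN.findall(source)

# Hypothetical reconstruction of the mistake: a key committed in plain text.
script = 'API_KEY = "xai-0123456789abcdef0123456789abcdef"  # visible to anyone'
print(find_hardcoded_keys(script))
```

Anyone who clones or scrapes a public repository can run exactly this kind of scan, which is why a committed key should be treated as compromised the moment it is pushed.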
Larger than just one key

This is a warning sign that high-powered AI tools are being handled carelessly, even by government insiders.
"If a developer can't keep an API key private, it raises questions about how they're handling far more sensitive government information behind closed doors," cybersecurity firm CTO Philippe Caturegli told TechRadar.
Elez has been involved in previous DOGE controversies, including inappropriate social media behavior and an apparent disregard for cybersecurity protocols.
Takeaway
According to the report, xAI has not issued a statement, and the leaked API key has not been officially revoked. That means the key remains an ongoing security concern.
Meanwhile, government officials and watchdogs are calling for stricter credential-management policies and better oversight of high-security AI infrastructure.
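The standard credential-management fix for this class of mistake is simple: keep the key out of the code entirely and read it from the environment (or a secrets manager) at runtime. A minimal sketch follows; the variable name XAI_API_KEY is hypothetical, not a documented xAI convention.

```python
import os

def load_api_key() -> str:
    """Read the API key from the environment instead of hardcoding it,
    so it never ends up committed to a repository."""
    key = os.environ.get("XAI_API_KEY")  # hypothetical variable name
    if not key:
        raise RuntimeError(
            "XAI_API_KEY is not set; export it in your shell or a secrets "
            "manager rather than writing it into the source file."
        )
    return key
```

Combined with a .gitignore entry for any local .env file, this ensures a `git push` can never ship the credential.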
Although this breach may not immediately affect the average user, it highlights a broader issue: the lines between public and private AI development are blurring fast, and the need for transparency, accountability, and better data hygiene in both spheres is very real.
For now, the big takeaway is this: as AI systems become more powerful, the humans behind them need to be more careful. As we are already seeing, a single careless upload can unlock a world of risk.
Follow Tom's Guide on Google News to get our up-to-date news, how-tos, and reviews in your feed.

