An employee at Elon Musk's artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs), which appear to have been custom-made for working with internal data from Musk's companies SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.
Image: Shutterstock, @sdx15.
Philippe Caturegli, "chief hacking officer" at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.
Caturegli's post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian's systems continuously scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
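At its core, this kind of scanning works by matching committed text against credential patterns. The sketch below is a deliberately simplified illustration of the idea, not GitGuardian's actual detection engine; real scanners combine hundreds of vendor-specific rules with entropy analysis and validity checks. The rule names and the "xai-" token format here are assumptions for illustration only.

```python
import re

# Hypothetical detection rules. Real secret scanners ship large curated
# rule sets; these two patterns only illustrate the mechanism.
SECRET_PATTERNS = {
    # A quoted value at least 20 characters long assigned to something
    # named "api key" in source code or config.
    "generic-api-key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"]([A-Za-z0-9_\-]{20,})['\"]"
    ),
    # An assumed vendor-prefixed token format (the prefix is made up).
    "xai-style-token": re.compile(r"\bxai-[A-Za-z0-9]{32,}\b"),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_secret) pairs found in the given text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            # Use the capturing group when the rule has one, else the
            # whole match.
            secret = match.group(1) if pattern.groups else match.group(0)
            findings.append((name, secret))
    return findings
```

A service running rules like these against every pushed commit can alert the repository owner within minutes of a key landing in a public repo, which is what makes the two-month exposure window described above notable.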
GitGuardian's Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.
"The credentials can be used to access the x.ai API with the identity of the user," GitGuardian wrote in an email explaining its findings to xAI. "The associated account not only has access to public Grok models (grok-2-1212, etc.), but also to what appear to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04)."
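In concrete terms, "access with the identity of the user" means anyone holding the key could make authenticated API calls as that employee. A minimal sketch of such a probe, assuming an OpenAI-compatible REST API with bearer-token authentication (the base URL, endpoint path, and header scheme are assumptions based on that common convention; the key value is obviously fake):

```python
from urllib.request import Request

def build_models_request(api_key: str,
                         base_url: str = "https://api.x.ai/v1") -> Request:
    """Build a GET request for the model-listing endpoint.

    Enumerating the models a credential can reach is the kind of probe
    that reveals fine-tuned and private models tied to the account.
    """
    return Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
        method="GET",
    )

# The request is built but deliberately not sent here; sending it with
# a leaked key would return every model the account is entitled to use.
req = build_models_request("xai-FAKE-KEY-FOR-ILLUSTRATION")
```

Nothing about bearer-token authentication distinguishes the legitimate owner from whoever finds the key in a public repository, which is why a still-valid leaked key is equivalent to full account access.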
Fourrier found that GitGuardian had alerted the xAI employee about the exposed key on March 2, nearly two months earlier. But as of April 30, when GitGuardian alerted xAI's security team to the exposure directly, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.
"It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data," Fourrier said. "I definitely don't think a Grok model that's fine-tuned on SpaceX data is intended to be exposed publicly."
xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.
Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.
"If you're an attacker and you have direct access to the model and the back-end interface for things like Grok, it's definitely something you can use for further attacking," she said. "An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain."
The inadvertent exposure of internal LLMs at xAI comes as Musk's so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported that DOGE officials were feeding data from across the Education Department into AI tools to probe the agency's programs and spending.
The Post said DOGE planned to replicate the process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.
"Feeding sensitive data into AI software puts it into the possession of a system's operator, increasing the chances it will be leaked or swept up in cyberattacks," Post reporters wrote.
Wired reported in March that DOGE had deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.
A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE was using AI to surveil at least one federal agency's communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team had heavily deployed Musk's Grok AI chatbot as part of its work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.
Caturegli said that while there is no indication that federal government or user data could be accessed through the x.ai API key, these private models are likely trained on proprietary data and could unintentionally expose details related to internal development efforts at xAI, Twitter or SpaceX.
"The fact that this key was publicly exposed for two months and granted access to internal models is concerning," Caturegli said. "This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, and raises questions about safeguards around developer access and broader operational security."