
Five weeks before President Donald Trump is set to announce his administration's AI policy, major AI companies are continuing to strengthen their relationships with the government, especially national security customers.
On Thursday, Anthropic announced Claude Gov, a family of models built specifically for US national security customers. According to the release, Claude Gov is intended for everything from operations to intelligence and threat analysis, and is designed to interpret classified documents and defense contexts. That includes improved language and dialect proficiency as well as enhanced interpretation of cybersecurity data.
Also: Anthropic's popular Claude Code AI tool is now included in its $20/month Pro plan.
The company said the models are already deployed by agencies at the highest level of US national security, and that access to them is limited to those who operate in such classified environments.
Developed based on feedback from Anthropic's government users, Claude Gov underwent the same safety testing the company applies to all of its Claude models, Anthropic said. In the release, the company reiterated its commitment to safe, responsible AI, assuring that Claude Gov is no different.
The announcement comes several months after OpenAI released ChatGPT Gov in January of this year, signaling a broader shift among major AI labs toward products explicitly tailored to government use cases. The National Guard already uses Google AI to improve its disaster response.
(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Prior to the Trump administration, military and other defense-related contracts between AI companies and the US government went largely unpublicized, especially given shifting usage guidelines at companies such as OpenAI, which originally vowed not to engage in weapons development. Google has recently made similar revisions to its guidelines, despite maintaining its claims of responsible AI.
Also: Anthropic mapped Claude's morality. Here's what the chatbot values (and doesn't)
The growing ties between the government and AI companies are unfolding in the larger context of the Trump administration's AI Action Plan, slated for July 19. Since Trump took office, AI companies have adjusted the Biden-era responsibility commitments originally built with the US AI Safety Institute, as Trump rolled back Biden's regulations. OpenAI is advocating for lighter regulation in exchange for government access to its models. Along with Anthropic, it has also engaged the government in new ways, including scientific partnerships and the $500 billion Stargate initiative, even as the Trump administration has cut funding within the US government for AI staffing and other science-related initiatives.

