
Follow ZDNET: Add us as a favorite source On Google.
ZDNET Highlights
- Windows 11 is adding AI agents that can take actions on your behalf.
- Copilot agents represent potential security and privacy risks.
- Expect testing and more security controls before the feature opens to the public.
Every computer security decision ultimately boils down to a question of trust. Should you install this program that you are going to download from an unfamiliar website? Are you sure your email messages are going straight to the recipient without any interruption? Is it safe to provide your credit card details to that merchant?
Soon, owners of PCs running Windows 11 will have another question to add to that list: Should you trust this Copilot agent to poke around in your files and interact with apps on your behalf?
Also: OpenAI’s own support bot doesn’t know how ChatGPT works
Here’s how Microsoft describes the CoPilot Actions feature, which is being released for testing by members of the Windows Insider Program:
copilot actions is one AI Agent That completes your tasks by interacting with your apps and files, using vision and advanced logic to click, type, and scroll like a human.
This transforms agents from passive assistants to active digital collaborators that can perform complex tasks for you to increase efficiency and productivity – like updating documents, organizing files, booking tickets or sending emails. After you grant the agent access, when integrated with Windows, the agent can take advantage of what’s already on your PC, like your apps and data, to accomplish its tasks.
These are decisions of great confidence. Allowing an agent to interact with your personal files requires a leap of faith. The idea of letting an agent act on your behalf in apps is similar – where, presumably, you sign in using some kind of secure credentials.
learning from the past
The last time Microsoft introduced a major AI feature with this level of access to your personal data, it… didn’t work well. The Windows recall feature was criticized by security researchers, delayed for months, and eventually relaunched with major privacy and security changes. Ultimately, it took almost a year to get the feature into public build.
This time, Microsoft doesn’t want to take any such risks. In a pair of on-the-record briefings ahead of the public debut of the CoPilot Actions feature, company executives went to great lengths to emphasize its commitment to privacy and security controls.
Also: How to get free Windows 10 security updates until October 2026
For starters, the feature is being released as a preview, in “experimental mode,” exclusively to customers who have opted into the Windows Insider program for pre-release builds of Windows.
This feature is disabled by default and is only enabled when the user flips the “Experimental Agentic Features” switch in Windows Settings > System > AI Components > Agent Tools.
Agents that integrate with Windows must be digitally signed by a trusted source, as are executable apps. That precaution will make it possible to cancel and block malicious agents.
The agents will run under a separate standard account that is provisioned only when the user enables the feature. For now, at least, the agent account will have access to a limited set of so-called known folders in the logged-on user’s profile – including Documents, Downloads, Desktop, and Pictures. The user needs to be explicitly given permission to access files in other locations.
Also: Microsoft Copilot AI can now pull information directly from Outlook, Gmail, and other apps
All those actions will take place in a contained environment called agent workspaceWith its own desktop and only limited access to the user’s desktop. In principle, this type of runtime isolation and granular control over permissions is similar to existing features like Windows Sandbox.
one in blog post Highlighting these security features, Dana Huang, corporate vice president of Windows Security, said, “(The) agent will start with limited permissions and will only have access to resources that you have explicitly allowed, such as your local files. There is a well-defined limit to the agent’s actions, and it has no ability to make changes to your device without your intervention. This access can be revoked at any time. Could.”
The security risks for this type of facility are high. As Huang said, “(a)Genetic AI applications introduce new security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions such as data exfiltration or malware installation.” And, of course, there is always the risk that an AI-powered agent will confidently commit the wrong thing.
Also: This new CoPilot trick will save you a lot of time in Windows 11 – here’s how
In an interview, Microsoft’s Peter Waxman confirmed that the company’s security researchers are actively “red-teaming” the Copilot actions feature, though he declined to discuss any specific scenarios they tested.
Microsoft said the feature will continue to evolve during the experimental preview period, with “more detailed security and privacy controls” coming in place before releasing the features to the public.
Will those warnings and disclaimers be enough to satisfy the notoriously skeptical community of security researchers? We’re about to find out.
Do you want to follow my work? Add ZDNET as a trusted source on Google,

