Microsoft is rolling out an experimental agentic AI feature in the latest developer preview version of Windows 11, allowing users to automate everyday tasks, but warns that improper security controls may create bigger risks than gains.
The experimental feature, called ‘agent workspace’, essentially creates a separate space on Windows where users grant AI agents access to their applications and data for background task completion.
Agents operate using their own accounts, separate from the user’s account, for scoped authorization and runtime isolation, and have restricted access to folders, unless the user grants each of them more permissions.
The agent workspace, Microsoft says, runs in a separate Windows session, in parallel with the user’s session, to ensure security isolation and user control, and is only enabled when the user toggles on the experimental agentic feature setting.
While the feature is off by default, the company warns that enabling it creates risks and that only users who understand the security implications should enable it.
“This setting can only be enabled by an administrator user of the device and once enabled, it’s enabled for all users on the device including other administrators and standard users,” it notes.
Once enabled, the feature leads to the creation of agent accounts and of the agent workspace, and allows agentic applications, such as Copilot, to request access to users’ folders.
Overall, enabling agentic AI would turn the OS into a personal assistant, but it would also expose the system to risks such as hallucinations and to malicious actions triggered by crafted prompts, Microsoft warns.
“Agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation,” the company notes.
Agents, it says, are susceptible to attacks just as any user or software, and their actions should be containable. The user should always monitor these actions, and Windows should be able to verify them with a tamper-evident audit log.
According to Microsoft, agents should always operate under the principles of least privilege, should not have permissions higher than those of the initiating user, and should not be accessible by other entities on the system, other than their owner.
On the other hand, the company says it has implemented guardrails to ensure the security and privacy of users, and will gradually roll out agentic capabilities across Windows 11, including an Ask Copilot feature in the taskbar, Copilot in File Explorer, AI-generated summaries in Outlook, and others.
“Addressing the security challenges of AI agents requires adherence to a strong set of security principles to ensure agents act in alignment with user intent and safeguard their sensitive information. We’re establishing a set of durable security and privacy principles that you must meet to make use of new agentic capabilities in Windows,” Microsoft says.
Related: GitHub Copilot Chat Flaw Leaked Data From Private Repositories
Related: Microsoft Adds AI Agents to Security Copilot
Related: Microsoft Unveils Copilot Vision AI Tool, but Highlights Security After Recall Debacle
Related: Why Using Microsoft Copilot Could Amplify Existing Data Quality and Privacy Issues

