Introducing Workrooms: Secure AI Collaboration
Enterprise leaders have spent the last two years asking how to make AI useful. Increasingly, the better question is how to make AI usable by more than one person, in more than one function, without weakening the security and governance standards that matter most.
In regulated environments, work is inherently cross-functional—involving finance, legal, security, and operations—making collaboration the operating model, not a mere feature. Most existing AI tools fail this model because they assume a single user, a single thread, and simple permission boundaries. This approach is inadequate when work becomes shared and sensitive. As with zero trust security, collaborative AI requires knowing what each person is entitled to see, what the agent is authorized to do on their behalf, and what auditable record remains when the work is complete.
Now available in Kamiwaza 1.0, Workrooms is a secure AI workspace built for teams, not just individuals. It gives organizations a governed place where multiple participants, including Kamiwaza’s flagship agent Kaizen, can collaborate, maintain shared context across a project, and work in parallel without flattening security boundaries. Instead of forcing teams to choose between useful collaboration and careful access control, Workrooms is designed to support both at once.
That distinction matters because the risks around shared AI work are becoming easier to see. OWASP now identifies excessive agency as a leading risk in LLM-based systems, especially when systems have too much permission, too much autonomy, or too much ability to act across connected tools and data sources [1]. In practice, that means the danger often does not begin with a malicious actor. It begins with an AI system that has been given an overly broad operating context and too little discipline around what it may access, infer, or share.
For senior technology leaders, this creates a familiar but more dynamic version of an old problem. Teams need to move faster. They also need to preserve stewardship. If collaboration with AI depends on copying sensitive material into flat chat threads, or if every participant in a shared session inherits more visibility than they should have, the workflow becomes difficult to defend operationally and legally. The result is predictable: teams either avoid the tool, or they use it in informal ways that create new governance gaps.
Workrooms takes a different approach. Within a Workroom, teams collaborate in a governed space where access is tied to entitlement and context. Participants can work together, but they do not all see the same thing simply because they are in the same room. Instead, Workrooms applies relationship-based authorization so access reflects the current relationship among the user, the task, the data, and the organization. A security lead, a finance reviewer, and an AI agent may all contribute to the same outcome, but each does so within the boundaries appropriate to that role and moment.
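To make the idea of relationship-based authorization concrete, here is a minimal sketch in Python. The tuple store, role names, and policy table are illustrative assumptions, not Kamiwaza's actual API or schema; the point is only that access follows from a subject's current relationship to a resource, and that an agent can never see more than the person it acts for.

```python
# Hypothetical relationship tuples: (subject, relation, resource).
# These names are illustrative, not Kamiwaza's actual data model.
RELATIONSHIPS = {
    ("alice", "security_lead", "incident-42"),
    ("bob", "finance_reviewer", "incident-42"),
    ("kaizen", "agent_for", "alice"),  # the agent acts on alice's behalf
}

# Which relations entitle a subject to which actions (assumed policy).
POLICY = {
    "security_lead": {"read_controls", "read_incident_notes"},
    "finance_reviewer": {"read_costs"},
}

def allowed(subject: str, action: str, resource: str) -> bool:
    """Grant access only if a current relationship entitles the action."""
    for s, relation, r in RELATIONSHIPS:
        if s == subject and r == resource and action in POLICY.get(relation, set()):
            return True
    # An agent inherits no more than its principal is entitled to.
    for s, relation, principal in RELATIONSHIPS:
        if s == subject and relation == "agent_for":
            return allowed(principal, action, resource)
    return False
```

In this sketch, the security lead can read incident notes, the finance reviewer cannot, and the agent's access is derived from, and bounded by, its principal's entitlements at that moment.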
This becomes especially useful when a workflow needs more than one line of inquiry at the same time. In a typical high-stakes review, the team does not have one question; it has several. One participant may need Kaizen to summarize policy obligations. Another may need a separate chat to compare recent operational evidence against those obligations. A third may need to prepare a response draft for leadership while preserving citations and decision context. Workrooms supports multiple chats with Kaizen inside a shared governed environment, which means the team can pursue parallel threads of analysis without losing the common operating picture. The collaboration remains coordinated, but the access remains controlled.
Consider a concrete example. A healthcare organization is assessing a third-party software provider after a security incident. The CIO wants a rapid recommendation on operational exposure. The CISO’s team needs to review security controls, prior exceptions, and related incident notes. Legal needs to examine contract terms and notification obligations. Procurement needs to understand whether substitute vendors exist and what transition costs might follow. In a conventional AI setup, that process tends to fracture. Teams shuttle excerpts through email, copy material into disconnected chats, and manually reconcile conclusions across tools and meetings.
In Workrooms, the organization can create a shared workspace for the incident review. The security team can use Kaizen in one chat to analyze internal control mappings and prior incident history. Legal can run a separate chat focused on contract language and obligations. Procurement can assess alternatives in another thread. Leaders can see the evolving state of the work, while each participant remains bound to the data, documents, and tools they are entitled to access. The room preserves context across the collaboration, yet it does not erase the distinctions that protect sensitive information. Just as important, the work leaves an auditable record of who asked what, what information informed the answer, and how the team arrived at a recommendation.
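An auditable record of this kind can be pictured as an append-only log of structured entries. The sketch below shows one plausible shape for such an entry in Python; the field names are assumptions for illustration, not Kamiwaza's actual audit schema.

```python
import json
from datetime import datetime, timezone

def audit_entry(user: str, role: str, question: str,
                sources: list[str], answer_summary: str) -> str:
    """Build one append-only audit record for a governed AI interaction.

    Captures who asked what, what information informed the answer, and
    how the conclusion was summarized. Field names are illustrative.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "question": question,
        "sources_consulted": sources,
        "answer_summary": answer_summary,
    }
    # Serialize deterministically so entries can be hashed or diffed later.
    return json.dumps(record, sort_keys=True)
```

Because each entry names its sources, a reviewer can later reconstruct not just the recommendation but the evidence trail behind it.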
That blend of collaboration and control is not a cosmetic improvement. It speaks directly to the way leading organizations are beginning to scale AI. McKinsey’s 2025 State of AI research found that high performers are nearly three times as likely as others to fundamentally redesign workflows in their deployment of AI [2]. In other words, durable value does not come from sprinkling AI onto existing tasks. It comes from rethinking how work actually moves. For collaborative enterprise decisions, that means designing workflows where humans and AI can work together without turning governance into an afterthought.
The same pattern appears in McKinsey’s recent work on agentic IT infrastructure, where more complex issues are escalated to humans under clearly defined governance and agent execution proceeds within predefined boundaries [3]. That is the right mental model for Workrooms: it is not collaboration without control, nor control that prevents collaboration, but a workspace for situations in which teams need both.
As part of Kamiwaza 1.0, Workrooms extends the platform’s broader operating principle into shared, real-world enterprise work. AI should help teams act where data already lives, within the controls that already matter, and in a way that leaves the organization more accountable rather than less. That is particularly relevant for regulated industries, where the most valuable workflows are often the least suited to casual experimentation. The issue is not whether people can ask an AI a clever question. It is whether multiple stakeholders can pursue a meaningful outcome together, using AI as part of the process, without compromising access discipline, oversight, or trust.
For technology and security leaders, the priority is clear: if AI is going to move beyond individual tasks into team-wide projects, organizations need a secure environment built for collaboration rather than just another chat tool or complex exception process. Workrooms provides this governed workspace, allowing people and AI to work toward the same goal simultaneously while ensuring that access is always managed and secure.
To see how Workrooms can transform your governed AI projects, watch a demo today.
Citations:
1. OWASP, LLM08: Excessive Agency
2. McKinsey & Company, The State of AI in 2025: Agents, Innovation, and Transformation
3. McKinsey & Company, Reimagining Tech Infrastructure for Agentic AI