Suneth Mendis

A personal blog by a technologist and a leader contextualising thoughts…


Your next direct report might be an AI agent. Are you ready to manage it?

Mar 27, 2026

—

Artificial Intelligence, Leadership

Something shifted quietly in the past few months, and most leaders have not yet caught up with what it means.

AI agents are no longer a concept on a roadmap. They are being deployed into real workflows right now — triaging requests, updating records, drafting proposals, routing decisions, executing multi-step processes with minimal human oversight. In many organisations, they are already doing work that sits alongside the work of your team.

Harvard Business School’s Tsedal Neeley describes it precisely: agentic AI systems can plan, reason, and act to complete entire workflows. They do not wait for a prompt. They pursue goals.

That changes something fundamental about what leadership means.

We have spent decades developing frameworks for leading people. How to set direction. How to delegate. How to build accountability. How to have the difficult conversation. These are the craft skills of management, and they matter enormously.

But none of them were designed for a team that includes autonomous systems.

A recent MIT Sloan and BCG study found that 76 percent of executives now view agentic AI as more like a coworker than a tool. That framing matters. A tool is something you operate. A coworker is something you manage. And managing requires a completely different set of questions.

Who is accountable when an AI agent makes a poor decision? What does oversight look like when the agent works faster than any human can review? Where does the boundary sit between what the system decides and what a person must decide?

These are not hypothetical questions. They are operational realities facing leaders right now, in technology, finance, healthcare, and increasingly in every industry that touches complex information flows.

I work in cybersecurity, and this tension is acute. An AI agent operating in a security workflow — analysing alerts, triaging incidents, initiating responses — has significant reach. The efficiency case is compelling. The governance case is equally important. You cannot deploy capability without deploying accountability alongside it.

What is emerging is a new kind of leadership role that nobody has a clean job description for yet. Call it the agentic manager. It blends technical literacy with human judgment. It involves designing the conditions in which AI systems operate — setting the guardrails, defining the escalation paths, specifying what requires human review and what does not. It is less about managing tasks and more about managing the environment in which tasks happen.
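As a concrete illustration of what "designing the conditions" can mean in practice, the guardrail-and-escalation idea above can be sketched as a small routing policy. This is a minimal, hypothetical example: the names (`Action`, `GuardrailPolicy`), the confidence threshold, and the always-review list are my own illustrative assumptions, not any specific product or framework.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    confidence: float   # agent's self-reported confidence, 0.0 to 1.0
    reversible: bool    # can a human undo this action after the fact?

class GuardrailPolicy:
    """Hypothetical policy deciding when an AI agent may act alone."""

    def __init__(self, min_confidence=0.9,
                 human_review=("close_incident", "block_account")):
        self.min_confidence = min_confidence
        self.human_review = set(human_review)  # actions a person must approve

    def route(self, action: Action) -> str:
        if action.name in self.human_review:
            return "escalate"   # explicitly reserved for human decision
        if not action.reversible:
            return "escalate"   # irreversible actions always get a human
        if action.confidence < self.min_confidence:
            return "escalate"   # low confidence goes to a person
        return "auto"           # within guardrails: the agent proceeds

policy = GuardrailPolicy()
print(policy.route(Action("triage_alert", 0.95, reversible=True)))   # auto
print(policy.route(Action("block_account", 0.99, reversible=True)))  # escalate
```

The point is not the code itself but the shape of the decision: the escalation rules are explicit, versioned, and owned by a person, rather than implicit in whatever the agent happens to do.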

Deloitte’s research on this is worth sitting with. For organisations deploying AI at depth, roughly 70 percent of the difficulty lies in change management. Only 10 percent sits in the technology itself. The hard part is not building the agent. The hard part is redesigning how your organisation works around it, and who is responsible for what when it does not behave as expected.

This is not a technology problem. It is a leadership problem wearing a technology costume.

I think back to the antifragile team framework. One of the principles is that antifragile systems decentralise decision-making — small, empowered units that can experiment, adapt, and fail cheaply. That is a good instinct. But it requires clarity about where the humans sit in the loop. Decentralisation without governance is not agility. It is exposure.

The same principle applies to agentic AI. Autonomous systems extend your team’s reach. They do not replace your team’s judgment. The leaders who understand that distinction — who invest as seriously in governing AI as they do in deploying it — are the ones who will build teams that are genuinely more capable, not just more efficient.

There is a question I think every leader should be asking right now, before the agents are running.

If one of your AI systems made a significant error today — acted on incomplete data, escalated the wrong thing, made a call it was not authorised to make — would you know? Would your team know? Does a named person own that outcome?

If the answer is uncertain, that is where the work begins.

The organisations that will thrive in an agentic world are not those that deploy the most agents. They are those that build the clearest accountability around them.

That is what it means to lead in this moment. Not to control every step. But to design the system well enough that when things go wrong — and they will — you know exactly what to do next.



© 2026 Suneth Mendis. All Rights Reserved.
