How do you lead a team with both people and AI agents in the best way?
By Matilda Rydow
Categories: Competence & Team, AI Agents
Leadership must combine a clear process with a clear quality bar, while avoiding process as an end in itself. If you only manage process, you risk an organisation that is efficient on paper but loses tempo, creativity, and motivation. If you only run, you get fast output but low quality, unclear ownership, and weak learning.
1) Design the work as a system, not a Slack channel
Agents need defined tasks, constraints, logging, and follow‑up. People need clear goals, shared measurement, and reduced friction between functions. Otherwise everyone runs fast in different directions, with different dashboards and different truths.
At the same time the process must be minimally sufficient. It should create traceability, decision power, and quality, not fill calendars. A good rule of thumb is that every new process step should pay for itself in at least one of three currencies: lower risk, more learning, or higher quality.
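To make "defined tasks, constraints, logging, and follow‑up" concrete, here is a minimal sketch of what an agent task definition could look like. All names (`AgentTask`, the fields, the example values) are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one possible shape for a defined agent task
# with constraints, a human owner, a review cadence, and a log.
@dataclass
class AgentTask:
    goal: str                        # what the agent should achieve
    constraints: list[str]           # guardrails the agent must respect
    owner: str                       # human accountable for the outcome
    review_cadence_days: int         # how often samples are reviewed
    log: list[str] = field(default_factory=list)  # traceability trail

    def record(self, event: str) -> None:
        """Append an event so decisions stay traceable."""
        self.log.append(event)

# Example usage (values are made up for illustration)
task = AgentTask(
    goal="Draft weekly campaign copy",
    constraints=["no pricing claims", "follow brand tone guide"],
    owner="team-lead",
    review_cadence_days=7,
)
task.record("draft v1 generated")
```

The point is not this exact structure but that every field answers a follow‑up question: who owns it, what bounds it, and where the trail lives.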
2) Optimize for learning, not just output
When AI does more of the repetitive work, it is easy to delegate away learning and get fast but shallow. The team then loses the ability to review, debug, and understand why something works. That becomes dangerous as workflows grow more hybrid and agent‑supported, and cause and effect become harder to read.
Practically, build learning into how you work:
- Require a short "why" and the alternatives considered, not just delivery
- Regular debug sessions where you troubleshoot manually
- Post‑mortems that actually update guardrails and improve prompts and processes
- A shared experiment engine where insights flow back into the system, not just into one person’s head
3) Set a quality bar that cannot be automated away
Quality can be handled at three levels, paired with a clear quality‑owner mindset:
- Where experts are needed: high risk, big money, brand, policy and claims, complex strategy. Humans should own the decision and the bar.
- Where oversight is required: the agent runs, but someone reviews samples, monitors deviations, and updates rules.
- Where control can be released: low risk, easy to roll back, clear guardrails.
What determines whether AI delivers is often everything around the work: ownership, decision criteria, review, logging, and feeding learning back into the process.
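The three levels above can be sketched as a simple routing rule. The tier names and risk thresholds below are hypothetical assumptions chosen for illustration, not values from the article:

```python
# Hypothetical sketch of the three quality levels as a routing rule.
# Thresholds (0.7, 0.3) and tier names are made-up assumptions.
def quality_tier(risk: float, reversible: bool) -> str:
    """Map a task to one of the three oversight levels."""
    if risk >= 0.7:
        return "expert-owned"   # humans own the decision and the bar
    if risk >= 0.3 or not reversible:
        return "oversight"      # agent runs, human samples and monitors
    return "released"           # low risk, easy rollback, clear guardrails

# Example: a high-risk task routes to expert ownership
print(quality_tier(0.9, True))   # "expert-owned"
```

A rule like this only works if someone owns it: the thresholds themselves are a quality decision that should be reviewed as the system and its failure modes become better understood.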
4) Protect motivation, do not turn people into QA machines
A common trap is that AI does all the fun work and people are left to approve, correct, and chase deviations. It gets boring fast and selects for a narrow profile: people who like control, routine, and administration.
To avoid that, design roles so people get:
- Clear ownership of outcomes, not just review
- Room for creativity, prototyping, and problem‑solving
- Opportunities to build competence, not just operate
In practice a strong setup is often a mix of a prototyper (shaping new paths) and an operational generalist (driving things through and building endurance), with the agent as an accelerator, not a replacement for meaningful work.
5) Be transparent about job risk and make a plan
There is a real risk that some tasks, and sometimes whole roles, shrink as agent flows improve. Pretending it does not exist creates uncertainty and passivity.
What works better is:
- Clarity on what will be automated and why
- A plan for upskilling and role shifts, for example into process design, measurement, platform competence, creative systems, and agent ops
- Clear expectations for using AI to get better, not just faster
Summary
The best way to lead a hybrid team is to do three things at once: build a minimally sufficient process that creates traceability and decision power, build in learning so competence does not erode, and design the work so people are not reduced to QA but own outcomes, creativity, and growth.