Q&A on AI in Marketing
It's an exciting time right now for most companies and functions. But marketing, CRM, and e-com/web have a very particular challenge ahead. I've spent the last ten years deeply involved in digital marketing, especially the nerdy part: how companies build competitive advantage through smart use of data. It has been a long journey for many, and the work is still very much ongoing.

Now AI enters and makes everything… both easier and harder. From one direction, AI creates entirely new possibilities in daily work: more automation, faster production, new ways to analyse, and eventually AI agents that can take over parts of the job. But that development has to live side by side with everything else already in motion: data projects, new CDP/warehouse initiatives, measurement improvements, attribution/MMM, new processes, new tools, new agency setups. AI is not a "reset", it's a new layer that has to fit into reality.

From the other direction comes AI search, and not least agentic commerce. It's a phenomenon that doesn't just affect channel mix or creative formats, but can change how companies operate at the core, across product, revenue, and marketing. When discovery, research, and sometimes purchase move to new interfaces and new decision flows, it also affects the marketing operating model.

That's why the CMO role becomes so central in this shift. The change is driven from two directions at once: internally (how AI changes ways of working, teams, cost structure, and quality) and externally (how AI search and agentic commerce move customer behavior and the rules of the game).

Here I share ongoing questions & answers, reflections, and observations on the topic, as transparently as I can. This is too important to stay behind closed doors.
By Matilda Rydow, AI Advisor & Consultant
Categories
- Organisation & Operating Model
- AI Agents
- Agentic Commerce
- Tech Stack & Data
- Agencies & Partners
- Measurement, Attribution & MMM
- Competence & Team
- Implementation & Security
- Automation & Quality
How should a marketing organisation be structured for 2026–2027 and beyond?
Collaboration, transparency, courage, and creativity are principles I believe will matter. I have also become obsessed with silos: more specifically, why they have grown over the last few years and why they risk getting worse as AI agents take more space. Before I go into the risks of AI agents and silos, I need a short recap.

Many in-house teams are very competent, but working in the wrong silos is often a blocker. My view is that siloing has increased over the last ten years, for several reasons. One is that in-house teams have grown: more agency work was brought in-house, and the fragmented digital landscape increased the need for specialists. Teams grow and competence increases, but old structures remain, so many functions (brand, creative, SEO, revenue, CRM, channel owners, e-com, sales, CX) still have too little interaction and insight into each other's work.

The paradox is that as the in-house team grows, agency collaborations often grow too. Why? Because each in-house function wants its mirror image on the agency side. That is how the structure has looked, and still looks. The consequence is that marketing, sales, and CRM departments, including sub-teams, not only work in silos; they also create their own silo with the agencies they work with. That adds another layer of complexity, and a disproportionate share of time goes to communication.

AI agents can create more and deeper silos

AI agents risk creating even more silos if the organizational perspective is not brought in early and you are not proactive. Today there is almost always a lack of transparency around which agents exist, how they work together, and how collaboration between humans and agents actually looks. AI implementations often happen at the individual level, or by buying a full system for a specific function, for example SEO AI Employees, with no connection to the whole.
It can also be a more enterprise-adapted implementation in proof-of-concept mode that tries to solve an isolated problem.

So how should you organize for 2026+? First, there are differences between industries and verticals, just like today. My view is that verticalization will increase somewhat. That applies both to the specific competence companies benefit from and to how you should organize for maximum effect. Some principles will repeat across verticals. The drivers come from two directions. One is the opportunities AI creates to make marketing work more efficient. The other is agentic commerce, which will put pressure on companies, and CMOs in particular, to rethink their operating model more drastically.

1) Start from the maturity of your tech stack and data quality

Organizing ways of working must start from the current state of the stack and data quality. If maturity is low and data quality is weak, you must implement AI agents and organizational change differently, and vice versa. You need two tracks running at once: improving the stack and data quality while implementing AI agents and organizational change. The critical part is that those two tracks stay very tightly aligned.

2) Remember agentic commerce affects all companies, not just e-commerce

Agentic commerce means AI agents research, compare, and sometimes purchase on behalf of users and companies. Discovery, research, and sometimes checkout move somewhere else. I believe this drives a few important organizational shifts:

- Trust and brand ownership across channels. A cross-channel function is needed to own brand, PR, reviews, policies, customer promises, product claims, and proof. That means teams that historically worked in silos (PR, brand, social, product, insight) must come together. In a world where small companies can compete with big ones and customers are less loyal, this becomes one of the most important functions to drive revenue, both short and long term.
- CRO and the funnel change. If choices, or at least recommendations, are increasingly made by an AI agent rather than on site or at checkout, then CRO, e-com, and web teams must own what actually drives a recommendation. If you take it further, you might ask whether the team and skill mix as we know it should be rebuilt from scratch.
- CRM and acquisition: break the walls. What is a new customer and what is a loyal customer when the agent does the research? It is no longer black and white. What historically sat under separate departmental ownership must be seen as one system, not two machines. A new customer may build preference long before the first purchase appears in your data, and a loyal customer can return without ever visiting your site, because an agent keeps choosing you based on trust, delivery, price logic, return terms, and product data. That means acquisition (find new) and CRM (nurture existing) can no longer be optimized separately without losing effect. The walls need to come down for real.

3) Agency and consulting partnerships: the operating model and collaboration need to be challenged

Agencies will also use AI. That does not mean you must use agencies less, but it does mean the collaboration must be redesigned. Otherwise you end up with two parallel systems: you optimize human-agent flows internally, and the agency optimizes its human-agent flows externally, without the systems talking to each other. That is a big risk: duplicated definitions, weaker traceability, more friction, and eventually lower speed. Two things to be clear about when you set up the collaboration:

- Transparency and compatibility. Same data sources, same tracking, same definitions, same QA, and a way to connect the agent layer so you do not build two separate systems.
- The right value model. If you get exactly the same delivery and pay exactly the same when production is made more efficient by AI, the deal is skewed.

At the same time, an enormous amount of time goes to communication today, and if the operating model is not right, that time will not shrink; it will grow. And finally, some functions are more business-critical than others. In some cases outsourcing a whole function can work; in others it must stay internal. What can be highly relevant to buy in: advanced analysis, measurement, automation, creative edge, deep channel expertise, and the ability to actually see through platforms (hello, Google).

4) Measurement, attribution, and MMM become more important than ever

Yes, you have heard it before. But it is true, for several reasons. AI makes differences between companies smaller, so it becomes harder to create sustainable competitive advantage. A tight grip on holistic evaluation, made part of the entire operating model, is a competitive advantage that is very hard to build and copy. Another reason is further fragmentation of the buying journey, where an even larger share of research will happen off site, by agents.

5) Do not automate everything, optimize for quality

This is not just about content; it is about almost everything. Automation is great, especially as methods like MMM become more accessible, but when everyone can do everything, quality drops fast if no one owns the craft, the bar, and the checkpoints. Think in three levels:

- Real experts. Hard decisions, big money, high risk.
- Oversight required. Automation can run, but someone must review and steer.
- Release control. Low risk, easy to roll back, clear guardrails.
What is agentic commerce and why does it affect SaaS, B2B, and apps too?
Agentic commerce means AI agents help users, consumers and companies, research, compare, and sometimes complete purchases. It affects all verticals: e-com, B2B, and beyond. Parts of discovery, research, and decision-making move away from traditional surfaces into agent interfaces and recommendation layers. For SaaS and B2B this often means shortlisting and risk assessment become more agent-assisted, for example comparisons, proof, policies, and pricing logic. For apps it can affect how users find and choose alternatives. In all cases, being chosen depends more on trust signals and clarity, and less on owning the entire journey in your own channels.

Agentic commerce forces organisational change. Going forward you need to optimise for both humans and agents. Some buying journeys will remain fully human, others almost 100 percent agentic, but most will be hybrids where the agent does research, comparisons, and shortlisting and the human makes the final call, or the other way around. That means brand and trust must be owned and built more systematically, because recommendations increasingly depend on signals like proof, reviews, policies, price logic, delivery, and product data, not on owning the entire journey in your own channels.

At the same time, the line between new and existing customers blurs. A new customer can build preference through repeated agent recommendations long before the first conversion shows up, and an existing customer can return without visiting your surfaces at all. To avoid optimizing the wrong parts of the system, product data and policies, brand and trust work, CRM, and measurement need to connect much more tightly. Otherwise you get local wins, more output, more initiatives, more dashboards, but a weaker whole: the agent makes decisions based on a different logic than the teams measure, and the organisation runs faster in the wrong direction.
What does a modern journey look like when the line between new and loyal fades?
When agent interfaces and external surfaces take more space, the journey becomes less linear. A new customer may build preference through comparisons, reviews, and proof long before the first purchase appears in your analytics. A loyal customer can repurchase without visiting your site or opening email because the agent keeps choosing you based on trust, delivery, price logic, and return terms. That means CRM and acquisition can no longer be optimized separately without losing effect. You need shared goals, a shared customer model, prospect and customer in the same view, and an experiment engine that spans the whole chain rather than being owned by one channel.
How does CRO change when research and recommendations move away from the storefront?
CRO shifts from pixel‑perfect checkout to reducing friction, increasing clarity, and building trust across the system. When more decisions are made before a user reaches the storefront, it becomes more important to win on policies, product data, proof, reviews, and consistent promises. The storefront is still important, but it is no longer the only persuasion surface. That means CRO must work more closely with brand and trust, CX, and ops. Otherwise the default fix is to buy more traffic when conversion falls instead of fixing why the choice happens elsewhere.
Which trust signals matter more when an agent compares options for the customer?
Agents need signals that are clear and comparable. Examples include:

- Delivery promises and delivery times
- Return terms and customer promises
- Reviews, ratings, and proof
- Inventory status and variant logic
- Price logic and offers
- Product claims and how well they are backed up

The problem is that these signals are often owned by different functions: brand, PR, CX, e-com, ops, legal. That is why trust is not a channel issue but a cross-functional construct. Without ownership and coordination, the result is more internal communication and more ad hoc decisions while external trust erodes.
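For an agent to compare options at all, these signals eventually have to exist as structured, machine-readable fields rather than scattered copy. As a minimal illustration, here is a sketch in Python of what a consolidated trust-signal record could look like, with hypothetical field names, plus a naive completeness check. It is not a standard or a real feed format, just a way to make the cross-functional ownership problem concrete:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class TrustSignals:
    """Hypothetical consolidated record of agent-comparable trust signals.
    Each field is typically owned by a different function (ops, CX, legal, e-com)."""
    delivery_days_max: Optional[int] = None      # delivery promise (ops)
    return_window_days: Optional[int] = None     # return terms (CX/legal)
    avg_rating: Optional[float] = None           # reviews, ratings, proof
    in_stock: Optional[bool] = None              # inventory status (e-com)
    price_currency: Optional[str] = None         # price logic and offers
    claims_substantiated: Optional[bool] = None  # product claims backed up

def completeness(record: TrustSignals) -> float:
    """Share of signal fields that are actually populated."""
    fs = fields(record)
    filled = sum(1 for f in fs if getattr(record, f.name) is not None)
    return filled / len(fs)

record = TrustSignals(delivery_days_max=3, return_window_days=30, avg_rating=4.6)
print(f"{completeness(record):.2f}")  # 3 of 6 fields populated
```

A low completeness score here is usually not a data problem but an ownership problem: nobody is accountable for the missing fields.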
How do you notice you have silo problems in the company?
Silos are rarely about people being specialists and “doing their own thing.” They happen when accountability and visibility break between things that actually belong together. An early sign is high activity but a sense of being out of sync, many parallel initiatives, lots of pinging, many meetings, yet a constant feeling of firefighting. Another sign is that the same questions repeat in multiple rooms and decisions are made in semi‑closed contexts, for example an agency and one internal function, without other stakeholders understanding why. The organization slows down while communication explodes. With the arrival of AI agents, an additional layer of silos can easily be built. Companies need an overview of which agents exist, what they do, and how they affect productivity and quality. It is early days, which is exactly why the overview must be built before it becomes permanent.
Which silos are most problematic within marketing, e‑com, and CRM?
The most destructive silos are where one team affects another team's results every day without sharing goals, measurement points, and the same "truth" about the customer. You optimize locally and pay the price centrally. Three recurring examples:

- CRM ↔ acquisition. When acquisition hunts new and CRM nurtures existing as two separate machines, you lose impact. In an agentified journey the line between new and loyal blurs, so optimization must happen as one system.
- Brand and creative ↔ performance. Measurement often becomes wrong because of the silo. When brand, creative, and performance run as separate machines, they end up with different definitions of what counts as impact. The result is that performance gets the upper hand, because what happens close to clicks and conversion is easier to attribute, while brand and creative end up in a separate world with softer metrics, longer cycles, and weaker feedback loops.
- E-com, ops, and policies ↔ marketing. When delivery promises, returns, inventory, price logic, and customer terms do not match what marketing communicates, you get a mismatch that directly eats conversion. People often try to solve it with more check-ins instead of fixing the root cause: that the customer promise is not a shared system.
How do you reduce handoffs and internal friction without making everyone a generalist?
The goal is not for everyone to know everything. The goal is for fewer things to bounce between functions. That requires three building blocks:

- Clear ownership of outcomes. Someone owns not just a channel but an effect, for example activation, winback, or reviews-to-conversion.
- Shared measurement logic. The same numbers and the same evaluation method. Otherwise every meeting becomes a debate about which dashboard is right.
- Decision forums with discipline. Where decisions are made, who must be present, and what gets documented. Open Slack pipes without an operating model often create more noise, not more transparency.

With AI agents, process design becomes even more important. If the agent does its thing but ownership, checkpoints, and measurement are missing, output may be faster but quality often drops and coordination increases.
How do you set and break down shared goals?
Shared goals must be tied to business outcomes and built on a shared measurement principle. A workable setup separates:

- North Star. For example profitable growth.
- Steering metrics. CAC to LTV, payback, retention by cohort, and margin thinking.
- Team-level metrics. Channels, CRM, and e-com optimize within the framework.

The key is to avoid each function choosing its own favorite metrics. If brand, performance, and CRM look at different truths, the organisation will talk past itself. You also need to define how evaluation happens, incrementality, MMM, or testing, so optimisation does not get stuck in simple attribution when decision paths are more diffuse.
How do you avoid marketing AI initiatives becoming too fragmented?
Fragmentation happens when each initiative has its own data, tools, and KPIs, and no shared follow-up. It looks like innovation, but it does not scale. Three minimum requirements make a big difference:

- Agent inventory. Which agents exist, who owns them, and which systems do they touch?
- Process map. Where does the agent sit in the flow, what is human work versus agent work, and where does review happen?
- Measurement. What does "better" mean, time, quality, or business effect, and how is it tracked?

This matters especially when AI employee systems are bought per function from different providers, without connection to each other or the rest of the team. Then AI becomes a new silo layer.
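An agent inventory does not need special tooling to start; even a shared structured list beats nothing. A minimal sketch, with entirely hypothetical agent names and fields, of what the three minimum requirements could look like in practice:

```python
# Minimal sketch of an agent inventory. Names and fields are hypothetical;
# the point is that every agent answers the same questions: who owns it,
# what does it touch, where is the human step, and how is "better" measured.
AGENT_INVENTORY = [
    {
        "name": "seo-draft-agent",
        "owner": "content-lead",
        "systems": ["cms", "analytics"],          # systems it touches
        "human_step": "editor review before publish",
        "metric": "time-to-draft and quality score",
    },
    {
        "name": "crm-segment-agent",
        "owner": "crm-lead",
        "systems": ["cdp", "esp"],
        "human_step": "segment sign-off",
        "metric": "incremental revenue per send",
    },
]

def missing_ownership(inventory: list[dict]) -> list[str]:
    """Flag agents that lack an owner or a defined human checkpoint."""
    return [a["name"] for a in inventory
            if not a.get("owner") or not a.get("human_step")]

print(missing_ownership(AGENT_INVENTORY))  # empty when the minimum bar is met
```

Running a check like `missing_ownership` on every new agent is one cheap way to stop the inventory from silently decaying into a new silo layer.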
How do you build agent governance at the company and in marketing without bureaucracy?
Agent governance can be lightweight, but it must exist. Think traffic rules, not a policy bible. A minimal governance includes:

- A register of agents and their purpose
- Access rules with least privilege, especially around CRM and PII
- Quality requirements and checkpoints
- Logging and traceability: what was done and why

Without governance it is hard to make the strategic decisions that often justify AI: reduce costs, enable in-housing, and raise quality. That requires insight into which agents exist and how they affect productivity and quality.
How do you map human tasks versus AI tasks?
Mapping tasks is about protecting quality and reducing risk while scaling repetitive work. A practical model has three levels:

- Expert mode. Hard decisions, big money, high risk, for example budget allocation, claims, brand risk, and pricing.
- Oversight. The agent can run but must be reviewed, for example campaign iterations or certain analysis steps.
- Release control. Low risk, easy to roll back, clear guardrails, for example summaries and first drafts.

Many forget that agentification is process design. If you just drop an agent into a step without changing ownership, checkpoints, and measurement, output gets faster but not necessarily better.
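The three levels can be made explicit as a routing rule rather than left as tribal knowledge. Here is a toy sketch: the thresholds, category names, and inputs are illustrative assumptions, and any real version would be tuned to the company's own risk appetite:

```python
def task_level(risk: str, reversible: bool, budget_impact: float) -> str:
    """Toy routing rule for the three levels above.
    Thresholds and categories are illustrative, not a standard."""
    if risk == "high" or budget_impact >= 100_000:
        return "expert"          # humans own the decision and the bar
    if risk == "medium" or not reversible:
        return "oversight"       # agent runs, human reviews and steers
    return "release-control"     # low risk, easy rollback, guardrails

print(task_level("high", True, 5_000))    # e.g. budget allocation, claims
print(task_level("medium", True, 1_000))  # e.g. campaign iterations
print(task_level("low", True, 100))       # e.g. summaries, first drafts
```

Writing the rule down, even this crudely, forces the process-design conversation the paragraph above describes: who owns each level, and what the checkpoints are.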
What must be in place in tech stack and data before AI agents create real impact?
Agents are nothing without a stable foundation. That means tracking and instrumentation, product data, CRM structure, BI and reporting, consent and PII, and integration capability. Without this, agents optimize on the wrong signals and the organisation cannot judge whether productivity and quality actually improve. This is where many get stuck. They implement agents but lose focus on improving the foundation. Then agentification becomes a new layer of complexity rather than an enabler. A good rule of thumb: if the team cannot agree on which numbers are correct, or if data quality varies across functions, it is too early to scale agent work broadly.
How does AI change the marketing tech stack, and why does it get messy?
AI lets more people produce and analyse more, faster. But it also increases the risk of a patchwork of tools, prompts, and workflows, especially when adoption happens at the individual level or via POCs, and different functions buy different AI employee systems. The result is often messy data discipline and parallel truths. The fix is standardization, approved tools, shared data flows, clear ways of working, and traceability so agent work can be reviewed. The core point is simple. Agents are not the stack. They need a stack that works.
How do you build an experiment engine that is not owned by a channel?
A channel-owned experiment engine tends to optimize for the channel's view of reality. A channel-neutral engine starts from business questions: does this drive incremental effect, and what happens to retention and payback? Building blocks:

- A shared backlog of hypotheses across the chain, from messaging to onboarding to reactivation
- Standardized test formats: geo tests, holdouts, incrementality setups
- Clear rules for what can change during test periods
- An owner with the mandate to say "now we test" and keep other changes out

When this exists, the need for constant ad hoc communication falls, because answers are produced with a method everyone accepts.
Who should own MMM and incrementality, and why is “someone in analytics” not enough?
MMM and incrementality require mandate. They affect budget, priorities, and which truths the organisation steers by. That is why it cannot be a side task picked up when things burn. The owner needs to be able to:

- Set method and standards
- Educate stakeholders on what matters
- Hold the source of truth together
- Drive insights into decisions, not just reports

In a world where more people can run an MMM in a sheet, ownership becomes even more important; otherwise quality drops and the organisation makes decisions on flawed models.
How do you design geo tests that work despite many parallel initiatives?
The biggest hurdle is rarely statistics. It is that the organisation changes too many things at the same time. Geo tests require discipline: a clear intervention, stability in other efforts, and a comparable control group. To make it work you need:

- A decision forum that can freeze certain changes during the test
- Clear rules for agencies and internal teams on what can be optimized
- Consistent data collection and shared definitions

As attribution weakens, geo tests become a robust way to build decision confidence, but only if the organisation respects the test.
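To make the "respect the test" point concrete, here is a minimal sketch of the arithmetic behind a simple difference-in-differences style geo-test readout, with made-up numbers. Real geo tests use more robust estimators and uncertainty intervals; this only shows why a stable control group matters:

```python
def did_lift(test_pre: float, test_post: float,
             control_pre: float, control_post: float) -> float:
    """Difference-in-differences style estimate of relative incremental lift.
    Assumes the control region is comparable and nothing else changed during
    the test window, which is exactly what the discipline above protects."""
    # Counterfactual: test region scaled by the control region's growth
    expected_post = test_pre * (control_post / control_pre)
    return (test_post - expected_post) / expected_post

# Made-up weekly revenue: control grows 5 percent, test grows 15.5 percent
lift = did_lift(test_pre=200.0, test_post=231.0,
                control_pre=400.0, control_post=420.0)
print(f"{lift:.3f}")
```

If an agency or another team changes bids, creative, or promotions in the control regions mid-test, the `expected_post` counterfactual is wrong and the lift estimate is wrong with it, which is the whole argument for the freeze forum above.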
What should you consider in agency collaborations going forward?
Communication is critical, but the problem is when it replaces structure. In recent years many in-house teams have grown and specialized, and the agency landscape has grown in the same direction. The result is often a mirror model where each internal function gets its own agency counterpart. It feels safe, but it tends to create duplicate work, more handoffs, and a huge flow of questions. When the Slack pipes are open and decisions happen in informal rooms, you get more noise than transparency, and silos deepen both internally and between you and the agency.

Going forward it becomes even more important that the collaboration is designed, not just active. AI and more agent-based ways of working increase output and tempo, but without an operating model you easily end up with two parallel systems: you optimize your human-agent flows internally and the agency optimizes theirs externally, with different definitions, different dashboards, and different truths. That is when friction explodes.

A core principle is that you must be extremely clear on why you are buying agency and what you are actually buying. It is not wrong to outsource a channel as a whole, but it requires certain things to be in place. Otherwise you are effectively buying activity and communication. In practice two models tend to work, in different ways:

- Buy expertise, not volume. Often the best deal is bringing in an agency for what is hard to build quickly in-house: deep platform expertise, advanced analysis, experiment design, setup and structure, creative edge, or the ability to challenge a vendor's best practice and find unconventional paths. Then a smaller internal team can run the day-to-day while the agency becomes your edge and quality boost.
- Outsource a full function, but build minimal in-house first. Outsourcing a full channel can work very well if you have goals, measurement logic, decision paths, and data in place, and if the agency is efficient enough to run without everything becoming a question back to you. A good way to avoid a bad buy is to first build a minimal internal base: someone who can set direction, understand trade-offs, review output, and hold the whole together. Without that minimal internal engine, the agency easily becomes an external silo that produces a lot but where learning never lands with you.

Regardless of model, these are the things that usually reduce noise and increase impact:

- Set goals tied to the business, with a breakdown everyone accepts. The agency should not deliver activity but contribute to outcomes. Translate business goals into a few steering metrics per area and be clear on what is secondary.
- Agree on a measurement method and a source of truth. You need to align on what applies, MMM, incrementality or testing, or a combination, and which numbers count. Otherwise every meeting becomes a debate about which dashboard is right.
- Build decision forums with discipline. Where decisions are made, who must be present, what gets documented, and what is the default when evidence is missing. Open Slack access without a decision chain often creates more rework, not more speed.
- Ensure platform expertise and the right incentives. One of the main reasons to use a strong agency is their ability to go beyond vendor recommendations. That competence, understanding platforms deeply, seeing shortcuts, challenging best practice, and building an optimization engine that benefits you, becomes even more important when everything is packaged as "click here" or AI employees.
- Design the ordering organization. Avoid the one-to-one mirror where each internal specialist owns one to three channels and has agencies for everything else. It burns communication hours, creates handoffs, and shifts energy from impact to sync. You get better leverage when the buyer role has broader scope, owns outcomes across multiple channels, and you combine in-house and agency with clear interfaces.
- Set guardrails and quality controls in the flow. With more automation you need clear constraints: budget rules, brand guidelines, approvals, QA, and stop conditions. Otherwise you get faster output but higher brand risk and more coordination when things need to be rolled back.

Dashboards can help, but they do not solve the root problem if you and the agency still look at different metrics and hold different definitions. Structure shifts communication from a constant stream of questions to fewer, sharper discussions and decisions, and makes the agency the leverage it is meant to be.
When does it make sense to outsource channel operations to an agency, and when does it not work?
It works best when a company has set goals tightly linked to the business and has a relatively high maturity in breaking those goals down into metrics each function should work against, but lacks deep expertise in one or more channels or disciplines.

A second requirement is understanding what competence you are actually buying. In digital marketing, a critical trait of both strong agencies and strong in-house teams has always been deep platform expertise and analytical depth: the ability to choose unconventional paths, challenge a vendor's best practice, and build a setup and optimization engine that is 100 percent aligned with the company and 0 percent with the vendor. If you only follow platform recommendations, you often end up closer to 50/50. This matters even more now, when much is packaged as "just click here" or when you buy AI employees built by people without deep product expertise in ad platforms, or with incentives that are not clean. A trusted agency is, in practice, a way to buy that hard-to-replace competence without having to build and maintain it internally while you have a thousand other things to manage.

But even with the right agency, it fails if the ordering organization is designed wrong. A 1-to-1 mirror where each internal specialist owns one to three channels and has agencies for the rest is, in practice, a no-go. It burns expensive communication hours, creates handoffs, and shifts energy from impact to sync. Either the buyer role needs broader scope, owning outcomes and the whole, not just a channel, or you organize so the team runs some channels fully in-house and outsources others, for example the same person or team runs two channels and outsources two others, or the buying sits higher up, for example under a growth or revenue owner, with clear guardrails and shared measurement logic.
Which agencies become most important to work with in 2026 and beyond?
The most important agencies are those that bring method, transparency, and quality discipline, not just output. When everyone can produce more with AI, what matters is the surrounding craft, experience, critical ability toward platforms, and the ability to work against shared goals and shared measurement. AI‑first agencies may have speed advantages, but speed without structure risks messy data discipline and more silos. Good agencies help the organisation gain overview, reduce noise, and make better decisions.
Which external profiles and/or agencies should you look for in 2026 and beyond?
The value rarely lies in someone who knows a single channel. The value lies in capabilities that create leverage and order:

- Measurement, attribution, MMM, and experiment design
- Instrumentation and data platform understanding
- Creative systems: how creative work is produced at scale with quality
- Agent ops: how agent flows are run, quality-assured, and logged

What matters is that external resources can work inside the organisation's real processes, not just deliver output from the side. Otherwise you build an external silo and communication grows.
Which roles grow in importance as AI takes over more repetitive work?
As AI takes over more repetitive work, it does not mean fewer roles are needed. Different roles become more decisive to deliver quality, learning, and control in the system, not just more output. - The right kind of generalist and project lead. Hands‑on, structured, and business‑savvy people who can coordinate across in‑house teams, agencies, agents, and tech vendors. The need for process design grows as the agent landscape expands, ownership, decision chains, QA gates, budget rules, documentation, and traceability. Without that you often get faster delivery but more friction, rework, and mistakes. - The creative analytical prototyper. A growth profile with creative range and technical and analytical ability, ideally with measurement insight and platform understanding. This person can rapidly test hypotheses, build prototypes, set up experiments, and create learning loops while keeping creative quality. Creative edge becomes strategically more important when many can produce okay at scale. The advantage comes from ideas, craft, and a clear bar. - The platform skeptic with deep channel expertise. The role many underestimate, someone who truly understands channels and platforms and actively challenges vendor best practice, understands incentives behind recommendations, and builds an optimization engine that benefits the company, not the platform. As more people run on autopilot or buy AI employees, this competence becomes a differentiator. - The measurement and experiment lead. Someone who can establish an evaluation method you can actually steer by, incrementality, experiment design, MMM thinking, method discipline, and source of truth. This becomes critical as journeys grow more hybrid and agent‑supported and signals from more surfaces must connect to business outcomes. - Enablement, training, manual craft, and debugging. This is the point many miss. When AI helps a lot it is easy to delegate away learning. 
That is exactly the risk highlighted in [the paper you linked](https://arxiv.org/pdf/2601.20245): using AI as a shortcut can make you faster at output but weaker at building conceptual understanding, at reading and critically reviewing, and especially at debugging when something goes wrong. Translated to marketing, growth, and ops: if AI does everything, you eventually lose the ability to judge whether a suggestion is reasonable, find the root cause of a data or performance anomaly, understand what actually caused an effect, and build robust guardrails and QA checks. That is why training and ways of working are central. You need a culture where people practice manually at times, debug without autopilot, and use AI in a way that keeps humans cognitively engaged: for example, asking AI to explain tradeoffs, justify decisions, show alternatives, and force reasoning, not just deliver an answer.

Why the interplay between the generalist and the prototyper is decisive: the prototyper tests and shapes quickly; the project lead drives it through, builds endurance, and ensures it gets done with the right process, QA, and measurement. Without one you get ideas without adoption. Without the other you get production without edge, and often without learning.
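The "force reasoning, not just an answer" idea can be made concrete with a simple prompt template. A minimal sketch in Python; the function name and wording are my own illustration, not a specific tool or the paper's method:

```python
def engaged_prompt(task: str) -> str:
    """Wrap a task so the AI must show tradeoffs and alternatives,
    keeping the human reviewer cognitively engaged (illustrative template)."""
    return (
        f"Task: {task}\n\n"
        "Before giving a recommendation:\n"
        "1. List at least two alternative approaches.\n"
        "2. Explain the tradeoffs of each.\n"
        "3. State which you recommend and why.\n"
        "4. Name one assumption that, if wrong, would change your answer."
    )

# Example: the reviewer now sees reasoning to check, not just a deliverable.
print(engaged_prompt("Propose a subject line test for the spring CRM campaign"))
```

The point of the template is not the exact wording but the habit: every AI request ships with a forced "why" that someone can review and push back on.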
How do you lead a team with both people and AI agents in the best way?
Leadership must combine a clear process with a clear quality bar, while avoiding making process an end in itself. If you only build process, you risk an organisation that is efficient on paper but loses tempo, creativity, and motivation. If you only run fast, you get output but low quality, unclear ownership, and weak learning.

1) Design the work as a system, not a Slack channel. Agents need defined tasks, constraints, logging, and follow‑up. People need clear goals, shared measurement, and reduced friction between functions. Otherwise everyone runs fast in different directions, with different dashboards and different truths. At the same time, the process must be minimally sufficient: it should create traceability, decision power, and quality, not fill calendars. A good rule of thumb is that every new process step should pay for itself in at least one of three currencies: lower risk, more learning, or higher quality.

2) Optimize for learning, not just output. When AI does more of the repetitive work, it is easy to delegate away learning and get fast but shallow. The team then loses the ability to review, debug, and understand why something works. That becomes dangerous as journeys get more hybrid and agent‑supported and cause and effect become harder to read. Practically, build learning into how you work:
- Require a short "why" and alternatives, not just delivery
- Regular debug sessions where you troubleshoot manually
- Post‑mortems that actually update guardrails and improve prompts and processes
- A shared experiment engine where insights flow back into the system, not just into one person's head

3) Set a quality bar that cannot be automated away. Quality can be handled at three levels, paired with a clear quality‑owner mindset:
- Where experts are needed: high risk, big money, brand, policy and claims, complex strategy. Humans should own the decision and the bar.
- Where oversight is required: the agent runs, but someone reviews samples, monitors deviations, and updates rules.
- Where control can be released: low risk, easy to roll back, clear guardrails. What determines whether AI delivers is often everything around the work: ownership, decision criteria, review, logging, and feeding learning back into the process.

4) Protect motivation; do not turn people into QA machines. A common trap is that AI does all the fun work and people are left to approve, correct, and chase deviations. That gets boring fast and selects for a narrow profile: people who like control, routine, and administration. To avoid it, design roles so people get:
- Clear ownership of outcomes, not just review
- Room for creativity, prototyping, and problem‑solving
- Opportunities to build competence, not just operate
In practice a strong setup is often a mix of prototyper (shaping new paths) and operational generalist (driving through and building endurance), with the agent as an accelerator, not a replacement for meaningful work.

5) Be transparent about job risk and make a plan. There is a real risk that some tasks, and sometimes whole roles, shrink as agent flows improve. Pretending it does not exist creates uncertainty and passivity. What works better is:
- Clarity on what will be automated and why
- A plan for upskilling and role shifts, for example into process design, measurement, platform competence, creative systems, and agent ops
- Clear expectations for using AI to get better, not just faster

Summary: the best way to lead a hybrid team is to do three things at once. Build a minimally sufficient process that creates traceability and decision power, build in learning so competence does not erode, and design the work so people are not reduced to QA but own outcomes, creativity, and growth.
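The three quality levels (expert‑owned, oversight, released) can be sketched as a simple routing rule for agent tasks. This is a Python illustration under my own assumptions; the field names and the thresholds (e.g. the 10,000 money cutoff) are placeholders each organisation would set for itself:

```python
from dataclasses import dataclass

@dataclass
class AgentTask:
    description: str
    money_at_stake: float        # e.g. media spend affected, in your currency
    easy_to_roll_back: bool
    touches_brand_or_claims: bool

def oversight_level(task: AgentTask) -> str:
    """Map a task to one of the three quality levels described above.
    Thresholds here are illustrative, not recommendations."""
    if task.touches_brand_or_claims or task.money_at_stake > 10_000:
        return "expert-owned"    # humans own the decision and the bar
    if not task.easy_to_roll_back:
        return "oversight"       # agent runs; humans sample, monitor, update rules
    return "released"            # low risk, reversible, clear guardrails

# A small, reversible optimization with no brand exposure can be released.
print(oversight_level(AgentTask("Pause an underperforming ad set", 500, True, False)))
```

The value of writing the rule down, even this crudely, is that the decision criteria become reviewable and loggable instead of living in individual judgment calls.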
How do you handle PII, GDPR, and security when AI agents connect to marketing and CRM?
1) Inventory. Which agents and tools are used, including individual ones; which systems they access; and whether they handle PII.
2) Access control. Least privilege, clear rules for what can be sent to external services, and checkpoints for sensitive decisions, for example segmentation, export of customer data, and claims.
3) Traceability. Log agent actions and decisions so you can review what happened, why it happened, and what result it produced. Without traceability it is hard to ensure compliance, and hard to know whether agentification actually improves productivity and quality.
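The traceability step can be as simple as writing one structured record per agent decision. A minimal sketch in Python using only the standard library; the schema, file name, and example values are my own assumptions, not a standard:

```python
import json
import datetime

def log_agent_action(agent: str, action: str, inputs: dict,
                     decision: str, result: str) -> str:
    """Append one reviewable record of an agent decision (illustrative schema).
    Keep raw PII out of the log: reference IDs and rule names, not customer data."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "inputs": inputs,      # what the agent saw (IDs and counts, not PII)
        "decision": decision,  # why: the rule or reasoning that was applied
        "result": result,      # what actually happened
    }
    line = json.dumps(entry)
    with open("agent_audit.log", "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return line

log_agent_action(
    "crm-segmenter", "create_segment",
    {"segment_rule": "lapsed_90d", "row_count": 1204},
    "matched playbook rule R12", "segment created",
)
```

One JSON line per decision is enough to answer the three review questions above (what, why, result), and it doubles as the raw material for judging whether the agents are actually helping.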