The measurement foundation that enterprise marketers relied on for two decades is cracking. Attribution models built on device-level tracking and cross-site cookies are producing numbers that overestimate channel contributions by 30% or more. Apple's App Tracking Transparency prompted 80-85% of iOS users to opt out of tracking, gutting the data pipeline that powered attribution. And while Google ultimately kept cookies in Chrome, 38% of consumers accept cookies less often than they did three years ago, eroding cookie-based measurement even without a forced deprecation.
For CMOs managing $10M+ ad budgets, the math no longer works. You can't justify that spend to a CFO using metrics that might be 30% wrong. That's why the sharpest marketing organizations are moving to incrementality testing: a measurement approach that proves what your marketing actually caused.
The Attribution Crisis Nobody Talks About
Attribution models assign credit to touchpoints along a customer journey. A user clicks a Facebook ad, later searches your brand on Google, and converts. Last-click attribution gives all the credit to Google. First-click gives it all to Facebook. First-click models overestimate the value of acquisition channels by 18-25%, while last-click models undervalue nurturing efforts by 30%.
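The difference between these models comes down to how credit is allocated across the same journey. Here is a minimal toy sketch of that allocation logic (illustrative only, not any vendor's actual attribution engine; the channel names are hypothetical):

```python
# Toy illustration of rule-based attribution: the same two-touch journey
# gets opposite answers depending on which model you pick.

def last_click(journey):
    """All credit to the final touchpoint before conversion."""
    return {ch: (1.0 if i == len(journey) - 1 else 0.0)
            for i, ch in enumerate(journey)}

def first_click(journey):
    """All credit to the first touchpoint."""
    return {ch: (1.0 if i == 0 else 0.0)
            for i, ch in enumerate(journey)}

journey = ["facebook_ad", "google_brand_search"]
print(last_click(journey))   # {'facebook_ad': 0.0, 'google_brand_search': 1.0}
print(first_click(journey))  # {'facebook_ad': 1.0, 'google_brand_search': 0.0}
```

Both models are internally consistent and mutually contradictory, which is the point: neither answers whether the conversion would have happened without either touchpoint.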
The deeper problem is structural. Attribution measures correlation, not causation. It can tell you which channels were present before a conversion, but it can't tell you whether the conversion would have happened anyway. A 2024 academic study on causal inference in marketing found a budget discrepancy of up to 30% in traditional attribution models that overestimate direct-response channels. Another research analysis found that observational attribution methods produced conversion lift estimates nearly six times higher than those from randomized controlled trials.
Privacy changes have made this worse. Apple's iOS 14.5+ update forced apps to ask for permission to track, and the vast majority of users said no. Meta's internal reporting showed a sharp drop in tracked conversions and ROAS on iOS devices, even though actual consumer behavior hadn't changed. The platform lost visibility, not effectiveness. And Apple's iOS 26 extends these restrictions to the browser and open web, further eroding attribution models that rely on identifiers and click-level data across sites.
According to Forrester's 2024 Marketing Survey, 64% of B2B marketing leaders say their organization doesn't trust measurement for decision-making. When the people responsible for marketing strategy don't trust their own numbers, every budget conversation becomes a negotiation based on gut feel rather than evidence.
What Incrementality Testing Actually Measures (And Why It Matters)
Attribution tells you what happened. Incrementality proves what caused it. Incrementality testing answers one question: How many conversions happened because of this campaign versus how many would have happened anyway?
The methodology is straightforward. You split your audience (or geography) into two groups. The treatment group sees your ads. The control group does not. The difference in outcomes between these groups is your incremental lift.
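The arithmetic behind that comparison can be sketched in a few lines. This is a minimal example using made-up numbers, not real campaign data:

```python
# Minimal sketch of the incremental-lift calculation: compare the
# treatment (exposed) group's conversion rate to the control (holdout)
# group's rate, and scale the difference back to the treated audience.

def incremental_lift(treat_conv, treat_n, ctrl_conv, ctrl_n):
    """Return (incremental conversions, relative lift) from a holdout test."""
    treat_rate = treat_conv / treat_n
    ctrl_rate = ctrl_conv / ctrl_n
    # Conversions the campaign caused, beyond the control-group baseline
    incremental = (treat_rate - ctrl_rate) * treat_n
    lift = (treat_rate - ctrl_rate) / ctrl_rate
    return incremental, lift

inc, lift = incremental_lift(treat_conv=1200, treat_n=50_000,
                             ctrl_conv=1000, ctrl_n=50_000)
# 2.4% vs 2.0% -> 200 incremental conversions, 20% relative lift
```

The control group's 1,000 conversions are the ones attribution would have happily claimed for the campaign; incrementality credits only the 200 above baseline.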
There are several common approaches to test for incrementality:
Holdout testing
Withhold ads from a randomly selected control group and compare conversion rates against the exposed group.
Geo-based experiments
Divide markets into matched geographic regions. Run ads in some regions, suppress them in others, and measure the difference in business outcomes.
PSA (public service announcement) testing
Show the control group a neutral ad (like a PSA) instead of your campaign creative, keeping the ad experience identical while removing your brand's message.
Conversion lift studies
Platform-native experiments (Meta Conversion Lift, Google Conversion Lift) that use randomized controlled trials within the platform's ecosystem.
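Whichever design you choose, the readout step is the same: decide whether the treatment/control gap is real or noise. One standard way to check that in a holdout or lift study is a two-proportion z-test; the sketch below uses illustrative numbers, not results from any real test:

```python
import math

# Two-proportion z-test: is the conversion-rate gap between the exposed
# and holdout groups larger than chance would explain?

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se  # z-statistic

z = two_proportion_z(1200, 50_000, 1000, 50_000)
# |z| > 1.96 corresponds to p < 0.05 (two-sided)
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

In practice, group sizes should be planned in advance (a power calculation) so the test can actually detect the lift you care about; an underpowered holdout wastes the suppressed spend.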
The critical difference from attribution is that incrementality captures the contribution of both impressions and clicks, giving a complete picture of media performance. Attribution models typically measure only clicks, missing the significant impact of ad exposure alone, especially in brand advertising.
Since incrementality tests compare groups rather than tracking individual users, they don't rely on cookies or device-level identifiers. That makes them resilient to privacy changes.
Why the Smart Money Uses Both (The Triangulation Approach)
Incrementality testing isn't a replacement for all other measurement. It's the anchor that makes everything else trustworthy.
The most effective measurement frameworks in 2026 combine three methods, each solving a different problem:
Multi-touch attribution (MTA) for day-to-day optimization
MTA provides always-on, granular data about click paths and digital touchpoints. It's useful for intra-channel decisions, such as which creative is outperforming, which audience segment is responding, and which landing page converts better. Its limitation is that it measures correlation rather than causation and captures only click-based activity.
Marketing mix modeling (MMM) for long-horizon budget allocation
MMM uses historical data to estimate the impact of advertising, promotions, pricing, and external factors on business outcomes. It's powerful for portfolio-level planning across channels and geographies. Its weakness is that an MMM can easily confuse cause and effect, especially for fast-growing businesses with volatile data.
Incrementality testing for causal proof
When the stakes justify it, incrementality tests provide the actual causal impact of a channel, campaign, or tactic. Over half (52%) of US brand and agency marketers are already using incrementality testing, and 36.2% plan to invest more in it over the next year, per EMARKETER and TransUnion.
Harvard Business Review called this combination of MMM calibrated by incrementality experiments "the gold standard for ad measurement" in a post-iOS 14.5 world. Measured recommends a triangulated approach combining incrementality, advanced MMM, and selective platform attribution.
The Brand Advertising Measurement Gap
Incrementality testing is especially critical for brand advertising, the marketing category most poorly served by attribution.
Les Binet and Peter Field's landmark research, analyzing thousands of effectiveness case studies from the IPA Databank, established that the optimal marketing mix for most brands is roughly 60% brand-building and 40% sales activation. Brand campaigns build emotional associations and mental availability over months and years, while activation captures short-term demand. Both matter, but brand building drives long-term volume and pricing power that activation alone cannot.
The problem: most attribution models are designed to capture short-term, click-based conversions. They systematically miss the long-term effects of brand advertising, which is exactly where most of the value lives. A consumer sees your brand campaign today and searches for your product six weeks later. Attribution gives zero credit to the campaign that created the demand.
Incrementality testing solves this by measuring the actual difference in business outcomes between exposed and unexposed groups over longer time horizons. It captures the full buyer journey from awareness through conversion, proving the causal impact of brand investment in a way that attribution never could.
For enterprise CMOs defending brand budgets to CFOs and boards, that proof is the difference between funding and cuts.
From Correlation to Certainty
The shift from attribution to incrementality is about confidence. CMOs need to prove to CFOs that marketing creates new buyers, not just captures existing demand. That requires precision targeting (knowing exactly who your buyers are), precision measurement (proving incremental impact, not correlation), and precision optimization (treating media as a unified system you can control and improve).
If you're managing significant ad budgets and still relying on attribution alone, here's where to start:
Test your largest spend channels first. The higher the spend, the bigger the impact if your attribution numbers are wrong.
Prioritize brand campaigns. Brand advertising is the hardest to attribute and the most likely to be undervalued.
Plan for adequate test windows. Budget for 2-4 weeks minimum to account for delayed conversions and purchase cycles.
Agility's Precision Brand Advertising platform combines persona strategy, cross-channel optimization, and investment-grade measurement with built-in incrementality testing. The brands that prove incremental impact grow with certainty.
Want to learn how Agility helps enterprise brands prove the incremental impact of their advertising? Talk to our team
FAQs
Why are marketing leaders shifting from attribution to incrementality testing?
Marketing leaders are moving away from traditional attribution because device-level tracking and cookie-based models now routinely misstate channel impact by 30% or more, especially as privacy changes like iOS 14.5+ and iOS 26 have gutted the underlying data. Incrementality testing, by contrast, uses controlled experiments (holdouts, geo tests, PSAs, lift studies) to measure causal lift, giving CMOs defensible numbers they can take to CFOs when justifying budgets.