Most media teams can fill a slide deck with impressions. Very few can explain, in a room full of skeptics, why those impressions produced a result.
That gap is exactly what gets people fired in a QBR.
The questions coming from the business side are no longer soft. Which placements actually moved the number? Why did this format outperform that one? What, specifically, did this budget earn beyond delivery? These are accountability questions, and viewability alone has never been built to answer them.
That is why attention measurement has moved from a nice-to-have into a buying and planning standard. The IAB and Media Rating Council (MRC) Attention Measurement Guidelines (Version 1.0), published in November 2025, give the industry what it has been missing: a shared framework for measuring attention across environments, with defined expectations for methodology, transparency, and validation.
For advertisers, this creates a more defensible way to evaluate exposure quality and engagement depth beyond delivery metrics. For publishers and supply partners, it sets a clearer standard for packaging and proving attention-ready inventory. And for performance marketers operating on the open web, it finally separates two things that have been conflated in campaign reporting for too long: ad opportunity and ad impact.
Why the IAB and MRC Attention Measurement Guidelines Matter Right Now
The IAB and MRC are direct about the stakes. Consistent and accurate measurement is central to building trust and increasing investment in attention-informed strategies.
They are equally direct about what attention is not. Attention alone should not be treated as an outcome measure for evaluating campaign performance. It is a data point that helps explain exposure and engagement in relation to outcomes.
That distinction keeps attention in the right lane for performance marketers:
The CIMM and IAB Attention Measurement Playbook for Marketers describes attention as a dynamic lens that complements existing metrics, and notes that adoption works best when attention metrics are tied to business objectives and treated as probabilistic signals, not a binary pass or fail score.
The guidelines were developed by the IAB Attention Task Force, a cross-industry group of more than 200 experts from brands, agencies, publishers, and measurement companies including Amazon Ads, Google, Meta, Nielsen, and The Trade Desk.
As The Current has reported, the core question the industry has been wrestling with is not whether attention metrics are valuable, but whether buyers will consistently define and deploy them the same way: "The biggest hurdle that needs to be overcome is defining what they are." The November 2025 IAB/MRC guidelines are the industry's direct answer to that problem.
Attention vs. Viewability: The Difference That Changes How You Buy
If you have ever optimized hard for viewability and still struggled to move performance, you already understand the lesson. The IAB and MRC define viewability as a measurement of an ad condition that represents the opportunity to see or hear an ad, regardless of whether a person actually saw or heard it. Attention measures whether the ad was seen or heard by a person, and to what depth.
The MRC's own position on this is instructive. Ron Pinelli, MRC's SVP of Digital Research and Standards, was explicit about what the industry got wrong with viewability: treating it as an outcome rather than an opportunity signal.
The attention framework is specifically designed to avoid repeating that mistake. Attention is a more nuanced, continuous spectrum, which is exactly why it cannot simply become another checkbox.
Research from Lumen Research indicates that only around 30% of viewable digital ads are actually looked at. That means the remaining 70% technically renders but captures zero real attention. Viewability removes obvious waste. Attention identifies which viewable impressions were more likely to create real impact.
This distinction is a critical input for media mix modeling. The Trade Desk's own research on improving MMM accuracy calls out engagement and attention metrics, including viewability, completion rates, and TVQI, as required inputs to help models cut through impressions that never had a chance to influence outcomes. Without those signals, a model treats CTV, display, and online video as interchangeable, which produces planning decisions that do not reflect how media actually performs.
For buyers making format and placement decisions, that distinction is not theoretical. It is a direct input to where budget should go. As Next Millennium has outlined in its media buying strategy guidance for 2026, high-impact formats are built to command attention and convert visibility into measurable action.
What the IAB and MRC Framework Standardizes
Before the November 2025 guidelines were finalized, "attention" meant something different depending on which vendor was presenting it. The framework addresses that directly.
The framework also emphasizes three core attention dimensions: content, placement, and creative. In practice, this means attention measurement is a diagnostic tool, not a media quality score. It should help you identify what is actually driving performance: the environment, the ad location, or the message itself.
The guidelines strongly encourage reporting attentive time relative to viewable time across all measurement approaches. That ratio normalizes for opportunity and reveals where viewable inventory is actually earning engagement, which is the context raw attentive seconds cannot provide on their own.
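To make that normalization concrete, here is a minimal sketch of the calculation. The function name and inputs are illustrative, not terms from the guidelines; the point is only that the ratio scores engagement against opportunity.

```python
def attentive_ratio(attentive_seconds: float, viewable_seconds: float) -> float:
    """Attentive time as a share of viewable time.

    Normalizes for opportunity: a placement with long viewable time but
    little attentive time scores low, even when its raw attentive seconds
    look respectable. Returns 0.0 when there was no viewable time to earn.
    """
    if viewable_seconds <= 0:
        return 0.0
    return attentive_seconds / viewable_seconds


# Two placements with identical raw attentive seconds but very different
# opportunity: the ratio separates them, raw seconds alone would not.
print(attentive_ratio(3.0, 6.0))   # earned half of its viewable time
print(attentive_ratio(3.0, 30.0))  # same 3 seconds, a tenth of the opportunity
```

The same three attentive seconds read very differently once opportunity is in the denominator, which is exactly the context the guidelines say raw attentive time cannot provide.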
The Four Attention Measurement Approaches, Explained
Stop asking which attention vendor is best. Start asking which measurement approach fits the question you are trying to answer, and what trade-offs you are accepting.
1. Data Signal-Based Measurement
This approach quantifies exposure and engagement using signals from devices, ad placements, ad servers, and publisher metadata. Time in view, scroll depth, audibility, click interactions, and screen orientation changes all feed the model.
The IAB's position here is worth internalizing: attention is complex, and individual signals only provide a partial view. Combining multiple signals and understanding how they interact is more informative than relying on any one in isolation.
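As a hypothetical illustration of what "combining multiple signals" can look like in practice, here is a minimal weighted-score sketch. The signal names, the 10-second time cap, and every weight are assumptions for illustration only; a production model would be calibrated against tracking or panel data, not hand-set.

```python
# Illustrative weights only; real models are calibrated empirically.
SIGNAL_WEIGHTS = {
    "time_in_view_s": 0.10,
    "scroll_depth":   0.20,
    "audibility":     0.25,
    "clicked":        0.45,
}

def composite_attention_score(signals: dict) -> float:
    """Weighted sum of normalized signals; each component and the result lie in 0..1."""
    # Normalize time in view against an assumed 10-second cap.
    time_norm = min(signals.get("time_in_view_s", 0.0), 10.0) / 10.0
    parts = {
        "time_in_view_s": time_norm,
        "scroll_depth": signals.get("scroll_depth", 0.0),   # 0..1 share in view
        "audibility": signals.get("audibility", 0.0),       # 1.0 if audible
        "clicked": 1.0 if signals.get("clicked") else 0.0,
    }
    return sum(SIGNAL_WEIGHTS[k] * v for k, v in parts.items())


# An impression that was half in view, fully scrolled, and audible, with
# no click, scores mid-range: no single signal decides the outcome.
score = composite_attention_score(
    {"time_in_view_s": 5.0, "scroll_depth": 1.0, "audibility": 1.0, "clicked": False}
)
print(round(score, 2))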
For performance buyers, data signal measurement is the most practical path to comparing the attention profile of different placements, formats, and supply paths without lab-grade instrumentation.
2. Visual and Audio Tracking
Visual tracking includes tools such as gaze tracking or facial detection. Audio tracking becomes relevant in environments where audibility is central to ad impact.
The guidelines establish minimum requirements around viewability and user presence for visual tracking methods. Any attention metrics that do not meet minimum viewability or audibility thresholds should be separately identified as diagnostic, with full disclosure.
This approach is particularly well-suited to benchmarking high-impact formats. Mediahub demonstrated exactly this on The Trade Desk's platform, where an attention optimization algorithm built on eye-tracking data from Lumen Research delivered 75% lower cost per point of brand lift versus optimizing on viewability alone. That finding connects directly to a pattern Next Millennium has documented in its analysis of MFA inventory and the quality tax it imposes on programmatic buyers: inventory that scores well on viewability and poorly on attention is exactly where spend gets buried.
3. Physiological and Neurological Observation
This approach examines how the body and brain respond during ad exposure. The guidelines reference physiological signals such as arousal indicators, and neurological data such as EEG, which can assess cognitive load, emotional engagement, and memory-related processing.
Most performance teams will not deploy this method on every campaign. Where it earns its place is in major creative or format decisions, where modest improvements in attention quality compound across large spend over time.
4. Panel or Survey-Based Methods
Panel and survey-based approaches combine passively or actively measured data with self-reported inputs to assess usage, exposure, attention, and behaviors, including brand recall and emotional responses. The guidelines cite brand health studies, focus groups, and ad effectiveness surveys as examples.
The marketer playbook is consistent with what practitioners have found in the field: hybrid methods combining visual tracking or physiological data with signal-based models produce more actionable insight than any single approach in isolation. Each approach calibrates the others.
Read More: Understanding Website Income from Digital Ads
An Attention-First Campaign Checklist
The most effective way to operationalise the IAB and MRC framework is to treat attention as a decision-quality multiplier, not a standalone number to chase.
For Buyers
Publisher Checklist
For Publishers
If you sell media, the IAB framework raises the bar on the story you need to tell buyers. Three deliverables make attention commercially useful and defensible in deal conversations.
What Your QBR Should Include: The Attention-First Reporting Template
If you want attention metrics to survive QBR scrutiny, your reporting needs to answer three questions without hedging.
What did we buy?
Spend, impressions, frequency, viewability, brand safety status, and fraud filtration confirmation.
What attention did we earn?
Attention metrics alongside viewability, including attentive time relative to viewable time where available. Break results out by format and placement so performance patterns are visible, not averaged away.
What business result did it drive, and what did attention explain?
This is where the real value surfaces. Attention measurement justifies budget shifts toward placements and formats that create stronger conversion opportunities, not just lower CPMs. The IAB and MRC are clear that attention is not a replacement for outcome measurement. It is what you use alongside outcomes to understand why certain environments performed and others did not.
Common Mistakes That Make Attention Metrics Unusable
The guidelines exist partly because attention measurement has been inconsistent. These are the errors that undermine it fastest.
Frequently Asked Questions About Attention Measurement Guidelines
Next Millennium's Attention-First Path to Better Performance
In 2026, the teams winning are not the ones with the lowest CPM. They are the teams who can prove why their media mix produced results, then scale it with confidence. Next Millennium is built for that standard: premium environments, high-impact formats designed to earn attention, and a performance approach that connects on-page behaviour to business outcomes.