Predictive Modeling At Scale For The Next Era of Addressability

If you scan the ad tech headlines, you’d assume artificial intelligence (AI) offers the solution to every challenge facing today’s brands and agencies. Even if marketers understand how the hype machine works, many are still willing to buy into the smoke and mirrors, especially when it comes to the power of AI to cure what ails them (e.g., the deprecation of third-party cookies and the loss of other previously relied-upon industry identifiers).

Here’s the thing: There are already bona fide addressability solutions out there. But marketers shouldn’t just trust that every “AI-driven” offering is what it claims to be. In fact, the term “AI” itself is thrown around far too casually.

When it comes to the solutions that move the needle for marketers today, what we’re talking about in most cases is predictive modeling and machine learning. Modern predictive capabilities may offer a solution to the loss of addressability currently sweeping the digital media landscape – but not all predictive modeling is created equal. It’s important for marketers to be able to pressure-test promises being made by potential partners.

There are key criteria to consider when vetting predictive modeling or AI claims. Here’s what marketers should be aware of when shoring up their audience targeting strategies.


Seed data and scale considerations

As marketers are well aware, scale matters. A lot of addressability solutions on the market promise to help marketers target audiences at scale, despite the loss of third-party cookies. But the real question is this: What data is fueling the ability to find those audiences? This is where seed data becomes so important. What seed data is powering any given solution? Is the seed data itself already modeled? Or is it deterministic? And how reliable and privacy-safe is the seed data?

The output of any data model is only as good as its input. To find and understand audiences at scale, a model needs to be fed a data set that’s large enough and truly representative of the overall population (e.g., the U.S.). Only then can a model extrapolate that data to reliably predict who else might fall into a given cohort or persona. Unfortunately, not many of today’s predictive audience models are being fed by sufficient, privacy-safe seed data to yield accurate addressability at scale.
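
To make that concrete, here is a minimal, hypothetical sketch of seed-based extrapolation, sometimes called lookalike or propensity modeling. It uses Python with pandas and scikit-learn; the file names, features and score threshold are placeholders rather than any vendor's actual pipeline:

```python
# Hypothetical sketch: extrapolating an audience from a consented seed set.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Seed set: users with a known, deterministic cohort label, plus a sample of
# the broader population labeled 0. File and column names are placeholders.
seed = pd.read_parquet("seed_users.parquet")   # columns: feature_1..feature_n, in_cohort
X = seed.drop(columns=["in_cohort"])
y = seed["in_cohort"]

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Validate before extrapolating: an undersized or unrepresentative seed
# tends to show up here first.
print("Validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))

# Score the full addressable population and keep the highest-propensity users.
population = pd.read_parquet("population_features.parquet")
population["cohort_score"] = model.predict_proba(population[X.columns])[:, 1]
predicted_audience = population[population["cohort_score"] > 0.8]
```

The validation step is where seed quality reveals itself: a model can score the whole population regardless, but if the seed is too small or skewed, those scores won't hold up in-market.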

It’s also important for marketers to ask about and understand what data types a model relies on and how that data is sourced. With the introduction of consumer-forward regulatory policies around the world and across the US, it’s important to ensure the right consent is obtained and the right provisions are made. A stroke of the pen by lawmakers could wipe out entire data assets or prevent whole classes of data from being used. Additionally, how tenuous or risky a solution is will often depend on which data signals the model requires.

Signal fidelity and walled garden limitations

Beyond understanding the seed data behind a solution, marketers must also evaluate audience targeting solutions against a few other common limitations that affect addressability.

They need to think about signal fidelity and signal loss in bid requests. In other words, how far away is a given partner from the actual bid request, and how much important information is being lost along the way? And how reliant is a model on identifiers, given that Google recently reiterated its plans to deprecate the third-party cookie by the end of 2024?

It can be hard for advertisers to know exactly how much information is making it to the actual point of ad decision-making, but this loss directly affects predictive models. If a partner is many steps removed from the source, the advertiser simply isn’t going to get the level of signal fidelity and visibility that’s possible when a partner feeds its model with signals directly from the supply source.

When it comes to identifiers, most predictive models still qualify users based on addressable identifiers that they can match against in the bid requests. That’s great for training and benchmarking today, but those addressable identifiers are set to disappear in the next year or so.

Of course, one way of bypassing this signal loss is to work directly with walled gardens to target their logged-in users. But here again, marketers need to think about the scale and the type of data that’s available within a given partner’s walls. Very few industry players have access to well-rounded user profiles that tie demographic and psychographic data to real-world behaviors. For example, many retail media players are working with narrow insights, such as purchase data, that can’t reveal who an individual is beyond what they’re buying from specific online retailers.

The benefits of carrier-level data

At present, carrier-level data is one of the few sources of insight that can overcome the above addressability issues of scale, fidelity and audience insights in a privacy-compliant way. That’s because carrier-level data offers rich insight into consumer interests and intent based on app ownership and usage, and it forms a large, robust and representative panel that’s virtually impossible for non-carriers to replicate.

Based on the apps people use, how often, how long and when they use them, predictive modeling capabilities can understand which users are more likely to fall into certain cohorts – say, business travelers or fitness enthusiasts. More importantly, we’re able to understand these personas at scale and overlay those signals against bid opportunities to enable marketers to engage those audience personas across multiple channels and screens via predictive models – without the need for any addressable identifiers.
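
As an illustration only, here is a hypothetical sketch of how app-usage signals (which apps, how often, how long and when) might be turned into cohort features and scored. The column names, app categories and `cohort_model` are stand-ins for whatever a real pipeline would use, such as the seed-trained model sketched earlier:

```python
# Hypothetical sketch: app-usage signals -> cohort propensity scores.
import pandas as pd

def score_cohort(usage: pd.DataFrame, cohort_model) -> pd.DataFrame:
    """usage columns (placeholders): user_id, app_category, session_minutes, hour_of_day."""
    # Which apps, how often and how long: per-user minutes by app category.
    features = usage.pivot_table(
        index="user_id", columns="app_category",
        values="session_minutes", aggfunc="sum", fill_value=0,
    )
    # And when: share of usage that happens in the evening.
    evening = (usage["hour_of_day"].between(18, 23)
               .groupby(usage["user_id"]).mean().rename("evening_share"))
    features = features.join(evening)

    # Score users against a cohort (say, business travelers) with a previously
    # trained classifier, then activate on the cohort score itself rather than
    # on any addressable identifier.
    features["cohort_score"] = cohort_model.predict_proba(features)[:, 1]
    return features
```

The point of the design is in the last step: what travels toward the bid decision is a cohort-level score, not a user-level identifier.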

We’re not talking about some vague application of AI here. We’re talking about predictive audience modeling, at scale, built on a strong, consented deterministic seed data set. That’s what the future of addressability should look like: robust, scalable and privacy-compliant. It’s the standard to which advertisers need to be holding their partners, and it’s the standard we hold ourselves to. These models will evolve over time to be truly AI-driven, but these fundamental considerations will still hold true and carry the keys to the efficacy and fidelity of the AI.
