The strategy has shifted, and most advertisers haven't caught up
Not that long ago, the standard approach to YouTube advertising was placement whitelisting. You'd manually research channels, build a curated list of places you actually wanted your ads to appear, and upload it into Google Ads. It was slow, painful, and required constant maintenance, but it gave you genuine control over where your money went.
That approach has largely gone out the window.
Over the past few years, Google has been steering advertisers hard towards audience-based targeting. In-market audiences, customer match, custom intent segments, affinity audiences. The idea is straightforward: stop trying to predict which channels your audience watches, and let Google's machine learning find them wherever they are.
For most campaigns, this is genuinely the right call. Audience targeting scales better, adapts faster, and sidesteps the impossible task of manually curating a whitelist across a platform with hundreds of millions of videos. It works well.
But it creates a problem that a lot of advertisers are still not dealing with.
Audience targeting does not care where your ad ends up
When you hand targeting decisions over to Google, the algorithm finds your audience based on who they are, not where they are. It will reach them on gaming channels, kids' content, foreign-language videos, low-quality reaction compilations, political commentary and plenty of other places you would probably never choose to advertise.
This is not a targeting failure. Your audience really does exist on those channels. The algorithm is doing what it is supposed to do. But Google is not optimising for brand safety or placement quality. It is optimising for conversions within your bid constraints, and those are different things.
The result is a steady drain. You are reaching the right people, but in contexts that dilute your brand, waste impressions on low-attention environments, and burn budget on placements that would never convert at a reasonable rate anyway.
The practical reality: without a robust placement exclusion list, a meaningful chunk of your YouTube budget is being spent in environments you would never consciously choose. The bigger your spend, the bigger the leak.
And here is the part that used to make this problem almost unfixable: building exclusion lists manually, at any real scale, is a complete nightmare.
Why manual exclusion is such a pain
YouTube's inventory is enormous. You can exclude by content category, but the categories are blunt instruments. You can pull placement reports after campaigns have run and exclude reactively, but by then you have already paid for those impressions. You can spend hours manually researching and adding channel URLs one by one, but the coverage you get is a drop in the bucket compared to what is actually out there.
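To make the reactive approach concrete, here is a minimal sketch of filtering a placement report for channels that ate impressions without converting. The column names and thresholds are illustrative assumptions, not Google Ads' actual export headers; adapt them to whatever your report export looks like.

```python
import csv
import io

# Hypothetical placement-report export. These column names are
# assumptions for illustration, not the real Google Ads headers.
REPORT = """placement_url,impressions,conversions
youtube.com/channel/UCaaa111,12000,0
youtube.com/channel/UCbbb222,300,4
youtube.com/channel/UCccc333,45000,0
example.com/some-site,9000,0
"""

def channels_to_exclude(report_csv, min_impressions=1000):
    """Return YouTube channel placements that burned impressions without converting."""
    exclude = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        if "youtube.com/channel/" not in row["placement_url"]:
            continue  # only channel-level YouTube placements
        if int(row["conversions"]) == 0 and int(row["impressions"]) >= min_impressions:
            exclude.append(row["placement_url"])
    return exclude

print(channels_to_exclude(REPORT))
# The two zero-conversion channels above the impression threshold
```

The limitation the paragraph describes is visible here: by the time a channel shows up in this filter, you have already paid for those 12,000 and 45,000 impressions.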
To do placement exclusions properly, you need volume. Tens of thousands of channel-level exclusions, not a few hundred. And generating that by hand is just not realistic.
That is the problem I set out to solve.
Building 93,000 exclusions with AI
I used Claude to systematically work through the exclusion list build. The process was straightforward: describe the campaign, the audience, the brand context, and the types of placements to avoid, then iterate through the output category by category until the lists were comprehensive and properly formatted for import into Google Ads.
The numbers came out like this:
| List | Type | Count |
|---|---|---|
| YouTube Channel Exclusions (Part 1) | Channel-level | 65,000 |
| YouTube Channel Exclusions (Part 2) | Channel-level | 27,967 |
| Total Channels Excluded | | 92,967 |
On top of that, the process produced a curated keep list of 3,780 relevant English-language channels. That is the positive counterpart: placements actually worth targeting if the algorithm serves impressions there, all verified as high-quality and brand-appropriate.
The whole thing took an afternoon. A manual approach to the same scope would have taken weeks, and realistically would never have got close to that kind of coverage.
What makes the difference
The key is not just volume. It is how the lists are structured. A useful exclusion list is not a random dump of channel URLs. It needs to be built with logic behind it: clear categories, the reasoning for each, and enough coverage within each category that it actually moves the needle.
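One way to picture "built with logic behind it" is a structure that keeps each category's reasoning next to its channels, then flattens to a plain list only at upload time. Everything here is a hypothetical sketch; the category names, reasons, and channel URLs are placeholders, not output from the actual build.

```python
# Hypothetical categorised exclusion structure; all values are illustrative.
EXCLUSIONS = {
    "kids_content": {
        "reason": "Wrong audience; brand-safety risk for most advertisers",
        "channels": ["youtube.com/channel/UCkid001", "youtube.com/channel/UCkid002"],
    },
    "reaction_compilations": {
        "reason": "Low-attention environment; poor brand adjacency",
        "channels": ["youtube.com/channel/UCreact001"],
    },
}

def flatten(exclusions):
    """Collapse the categorised structure into a flat list for upload."""
    return [url for cat in exclusions.values() for url in cat["channels"]]

print(len(flatten(EXCLUSIONS)))
```

Keeping the reasoning attached to each category makes the list reviewable: you can audit a category's logic once instead of second-guessing tens of thousands of individual URLs.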
The prompting approach I used broke the work into distinct phases. First, establish the campaign context so the AI understands what you are trying to protect against. Second, work through exclusion categories one at a time so you can review and refine the output. Third, format everything correctly for upload. Fourth, build the keep list so you are not just working in the negative.
Each step is fast. The iteration between steps is where the quality comes from.
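The "format everything correctly for upload" phase can be sketched as a small dedupe-and-split step. The 65,000 / 27,967 split in the table above suggests a per-list size cap around 65,000 entries; treat that exact number as an assumption and check the current Google Ads limit before relying on it.

```python
def build_exclusion_lists(channels, max_per_list=65000):
    """Dedupe (order-preserving) and split channel URLs into upload-sized lists.

    Note: YouTube channel IDs are case-sensitive, so we strip whitespace
    but deliberately do not normalise case.
    """
    seen = set()
    unique = []
    for url in channels:
        u = url.strip()
        if u and u not in seen:
            seen.add(u)
            unique.append(u)
    # Split into chunks that each fit one exclusion list
    return [unique[i:i + max_per_list] for i in range(0, len(unique), max_per_list)]

# Toy example with a tiny cap to show the chunking behaviour
demo = [f"youtube.com/channel/UC{i:03d}" for i in range(7)] + ["youtube.com/channel/UC001"]
lists = build_exclusion_lists(demo, max_per_list=3)
print([len(chunk) for chunk in lists])  # duplicate removed, then chunks of up to 3
```

With the real cap, 92,967 unique channels would come out as one full list of 65,000 plus a second list of 27,967, which matches the split in the table.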
The prompts I used
Below are the four prompts that drove this build. Copy them directly into Claude (or any capable AI), swap out the placeholders for your actual campaign details, and work through them in order.