Bot Traffic Is Eating Your Budget and You Probably Don't Know It
I want to tell you about the week I realized I had been paying robots to look at advertisements. Not metaphorical robots. Actual automated scripts pretending to be humans visiting websites, loading my popunder ads, registering as impressions, and costing me real money. For about a month I had been happily watching my impression counts go up without asking the obvious question of why none of those impressions were turning into conversions.
When I finally pulled the fraud analysis on month one, somewhere between 15 and 20 percent of my impressions were flagged. Let that sink in for a second. Roughly one out of every six "visitors" seeing my ads was not a person. It was some script running on an AWS server or a datacenter proxy somewhere, clicking through the motions of being human without any of the inconvenient parts like having a credit card or wanting to install my app.
The sneaky part that nobody talks about
The obvious problem with bots is that you pay for impressions that don't convert. That's money straight into the garbage. But there's a second problem that's arguably worse, and I didn't realize it until I'd already made the mistake: bots corrupt your data.
Think about it. If 18% of your impressions on a particular placement are fake, your conversion rate for that placement looks way worse than it actually is. I had one placement I was about to blacklist because the numbers looked terrible — low CTR, almost no conversions, seemed like garbage traffic. Turned out the placement was fine; there was just a bunch of bot traffic mixed in, dragging the averages down. I almost cut a profitable traffic source because my data was polluted. Honestly, that scared me more than the wasted spend, because at least wasted spend is visible. Bad optimization decisions based on bad data can compound for months before you notice.
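To make the distortion concrete, here's the arithmetic with made-up illustrative numbers (not from any real campaign):

```python
# How bot impressions distort an observed conversion rate.
# All numbers are illustrative.

total_impressions = 10_000
bot_share = 0.18           # 18% of impressions are bots
conversions = 50           # bots never convert, so these are all human

observed_cvr = conversions / total_impressions
human_impressions = total_impressions * (1 - bot_share)
true_cvr = conversions / human_impressions

print(f"observed CVR: {observed_cvr:.3%}")   # 0.500%
print(f"true CVR:     {true_cvr:.3%}")       # 0.610%
```

A placement that actually converts at 0.61% shows up in your dashboard as 0.50%, and that gap is exactly how a good source gets wrongly cut.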
What the antifraud system actually looks at
PopLayer checks every single ad request server-side. And I mean every one, not a statistical sample like some networks do. The checks are roughly: is this IP from a datacenter like AWS or Google Cloud or DigitalOcean (real humans browse from residential ISPs, not server farms); does the browser fingerprint look consistent, or does it have the telltale signs of a headless browser or Selenium script; is this IP hammering the site hundreds of times per hour (humans don't do that); does the referrer make sense; and does the claimed device actually match the user agent string.
When you create a campaign, there's a threshold slider that controls how aggressive all of this is.
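PopLayer doesn't publish its internals, so take this as a toy sketch of how a score-versus-threshold filter could plausibly work. Every field name and weight here is invented; the point is just that each check contributes evidence, and the slider decides how much evidence gets a request dropped:

```python
# Toy fraud scorer: each failed check adds weight; requests whose score
# clears the campaign's threshold get dropped. Weights are invented.

DATACENTER_ASNS = {"AS16509", "AS15169", "AS14061"}   # AWS, Google, DigitalOcean
HEADLESS_MARKERS = ("HeadlessChrome", "PhantomJS", "Selenium")

def fraud_score(req: dict) -> int:
    score = 0
    if req["asn"] in DATACENTER_ASNS:                  # server farm, not a home ISP
        score += 40
    if any(m in req["user_agent"] for m in HEADLESS_MARKERS):
        score += 40
    if req["requests_last_hour"] > 200:                # one IP hammering the site
        score += 30
    if not req["referrer"]:                            # request arrived from nowhere
        score += 10
    if req["claimed_device"] == "mobile" and "Mobi" not in req["user_agent"]:
        score += 20                                    # device claim vs UA mismatch
    return score

def allow(req: dict, threshold: int) -> bool:
    """Lower threshold = stricter (matches the 20-30 = strict idea below)."""
    return fraud_score(req) < threshold

req = {
    "asn": "AS16509",
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) HeadlessChrome/120.0",
    "requests_last_hour": 400,
    "referrer": "",
    "claimed_device": "mobile",
}
print(fraud_score(req))           # 140 -> fails every check
print(allow(req, threshold=70))   # False
```

Under this model a strict setting (low threshold) drops a request over any single tripped check, while a loose setting only drops requests that fail several at once.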

What I wish someone had told me about setting it
Everyone says "it depends on your use case" and technically they're right, but that's useless advice when you're staring at a slider and don't know what to pick. So here's my actual take.
Performance campaigns where you need real humans to convert — set it strict, maybe 20-30 on the slider. Yes, you'll lose some legitimate traffic that's borderline; probably around 5% of real impressions get filtered out. But you're also blocking something like 15% fake impressions, and on a performance campaign every fake impression is pure loss. The math is obvious.
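Obvious, but worth spelling out. Per 10,000 impressions, using my rough percentages (estimates, not official numbers) and assuming you only pay for impressions the filter actually serves:

```python
# Why strict wins on performance campaigns: you stop paying for fakes.
# 15% fraud and 5% false positives are my rough estimates from above.

cpm = 1.00                                     # illustrative CPM in dollars
impressions = 10_000

# Medium: you pay for everything, ~15% of it fake.
real_medium = impressions * (1 - 0.15)         # 8,500 real
cost_medium = impressions / 1000 * cpm         # $10.00

# Strict: fakes blocked, but ~5% of real traffic gets filtered too.
real_strict = real_medium * (1 - 0.05)         # 8,075 real
cost_strict = real_strict / 1000 * cpm         # $8.08

print(f"medium: ${cost_medium / real_medium * 1000:.2f} per 1k real impressions")  # $1.18
print(f"strict: ${cost_strict / real_strict * 1000:.2f} per 1k real impressions")  # $1.00
```

You give up about 5% of real reach and pay roughly 15% less per real impression.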
If you're doing some kind of volume or branding play where you mostly just need reach, medium is fine. You'll get more impressions, some of them will be sketchy, but if you're paying $0.20 CPM for tier-3 awareness traffic, the fraud tax is less painful. I ran a test once on the same campaign — strict versus medium — and strict gave me about 12% better conversion rate but 20% less volume. Depends what you're optimizing for.
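Working out what that test implies for cost per conversion, back-of-envelope, assuming CPM pricing so spend scales with volume:

```python
# My strict-vs-medium split test, with medium as the baseline (1.0).
# Assumes CPM pricing, so spend scales with impression volume.

volume_ratio = 0.80      # strict delivered ~20% less volume
cvr_ratio = 1.12         # ...at ~12% better conversion rate

conversions_ratio = volume_ratio * cvr_ratio    # total conversions vs medium
cpa_ratio = volume_ratio / conversions_ratio    # spend / conversions vs medium

print(f"conversions vs medium: {conversions_ratio:.2f}x")  # 0.90x
print(f"cost per conversion:   {cpa_ratio:.2f}x")          # 0.89x, ~11% cheaper
```

So on that test, strict was roughly 11% cheaper per conversion but delivered about 10% fewer total conversions. That's the reach-versus-efficiency tradeoff in one line.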
Most of my campaigns sit at medium-strict. I've landed there through trial and error over several months, and it seems to be the sweet spot for my particular mix of offers and geos. Your mileage will absolutely vary, though.

What to look for after your campaign has been running a few days
Patterns I've learned to watch for. A source that sends you 5,000 impressions and literally zero clicks is almost certainly not real traffic — humans click on things sometimes, even accidentally, and zero across thousands is not a statistical anomaly, it's a red flag. Traffic that spikes at 3 AM local time in the source's timezone — people sleep, bots don't. One specific geo/device combination with a CTR that's 100x lower than everything else — I had one source showing 0.001% CTR on Android from Brazil while everything else was around 0.5%, which turned out to be a bot farm. And brand-new sources that immediately start sending huge volume on day one — legitimate websites build traffic over time; they don't go from zero to 50k impressions overnight.
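If you'd rather scan for these patterns than eyeball a stats export, something like the sketch below works. The column names are placeholders for whatever your network's CSV actually exports, and the thresholds are starting points, not gospel:

```python
# Flag suspicious sources in a per-source stats export.
# Field names and thresholds are placeholders, not PopLayer's schema.
import csv

def red_flags(row: dict) -> list[str]:
    imps = int(row["impressions"])
    clicks = int(row["clicks"])
    flags = []
    if imps >= 5_000 and clicks == 0:
        flags.append("zero clicks at volume")
    if float(row["night_traffic_share"]) > 0.40:   # share served 12am-5am local
        flags.append("nocturnal traffic spike")
    if imps > 0 and clicks / imps < 0.00005:       # CTR under 0.005%
        flags.append("CTR far below baseline")
    if int(row["days_active"]) <= 1 and imps > 20_000:
        flags.append("huge volume from a brand-new source")
    return flags

with open("sources.csv") as f:
    for row in csv.DictReader(f):
        if flags := red_flags(row):
            print(row["source_id"], "->", ", ".join(flags))
```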
After I tightened the antifraud settings and cut two sources that looked off, the fraud rate went from that 15-20% range down to about 3-4%, and my cost per conversion dropped roughly 25%. Same campaign, same offer, same budget, same bid. The only thing that changed was less fake traffic eating the budget. Twenty-five percent improvement from what was basically ten minutes of work in the antifraud settings. Kinda makes you wonder why I waited a month to do it. (Because I was lazy. That's why.)
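Does 25% even check out? A back-of-envelope, assuming conversions scale with the share of real impressions and everything else stays fixed:

```python
# Sanity check on the 25% CPA drop. Assumes conversions scale with
# the share of real impressions; same budget and bid throughout.

real_before = 1 - 0.18     # ~18% fraud in month one
real_after = 1 - 0.035     # ~3.5% after tightening

cpa_ratio = real_before / real_after
print(f"CPA from fraud reduction alone: {cpa_ratio:.2f}x")  # 0.85x, ~15% cheaper
```

The fraud reduction alone explains maybe 15 points of the 25; the rest presumably came from the two cut sources converting badly even on their real traffic. Either way, the direction and rough size of the improvement aren't mysterious.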