AI’s Adtech Moment

Sponsored by edgeful

Happy Monday!

Happy New Year, all! Let's look at how AI adoption is starting to resemble early adtech, and why the real winners might be the “boring” measurement layer, not the flashiest model.

In the late 90s, the internet was obviously the future, but nobody could answer the most important question with confidence: did the ad actually work? Attribution was messy, incentives were messy, and everyone had a story. What fixed it over time wasn't one magical banner format; it was infrastructure: measurement, planning tools, governance, standards, and eventually a shared language for what “working” even meant.

Sponsored

Sometimes a setup looks great, but you want to know if the odds are actually on your side. Edgeful helps you check the history behind the move without overcomplicating your process.

You can quickly see how similar price patterns played out in the past, how often breakouts held, or whether volume and trend behavior support the idea. It works across stocks, futures, forex, and crypto.

It is not about guessing the future. It is about using simple stats to decide if a trade makes sense or if patience is better.

Heads up for my readers: Edgeful doesn't usually offer free trials, but they've hooked us up with a 7-day one. If it sounds useful, check it out via the link below—no strings attached, and sometimes passing is the best call.

Enterprise AI is sitting in that same awkward phase right now. Companies are buying a pile of tools because leadership feels the clock ticking. “We have 18 months to lead or fall behind” is basically the emotion driving budgets. But the scoreboard is broken. A lot of teams can only report, “here’s what we bought,” not “here’s what changed.”

And that’s where most AI ROI conversations go off the rails. The default corporate move is to survey employees: “Do you feel more productive?” That’s not measurement, that’s vibes with a spreadsheet. People answer how they think they’re supposed to answer, and half the time they aren’t even using the tool consistently. Worse, once you pick a metric and start rewarding it, you trigger Goodhart’s Law: people optimize the number, not the outcome. If you track “tokens used,” you’ll get tokens. If you track “emails sent,” you’ll get emails. If you track “lines of code,” you’ll get code. None of that guarantees value.

The baseline problem is the core issue. A lawyer can cut an 8-hour task to 4 hours, but if the company still gets the same number of contracts out the door and the extra time turns into golf, the employee got the win and the company didn’t. So the company needs a way to measure impact that does not rely on self-reporting and does not get instantly gamed.

A more honest approach starts with three layers:

First, you need discovery: what tools are being used, by which functions, and how often. Most enterprises are shocked by how much “shadow AI” exists; not always malicious, just unmanaged.

Second, you need adoption that feels safe. A huge chunk of friction is not technical, it’s social. People don’t want to look dumb, and they definitely don’t want to get fired for pasting the wrong thing into the wrong box. If you want usage, you create a safe lane with clear rules and guardrails, and you make the best internal users visible so their workflows spread. One motivated person quietly doing “8 hours of work in 1 minute” is nice. That person teaching the whole department is the actual unlock.

Third, you need impact metrics that map to real outputs, not activity. In a lot of cost-center functions, the cleanest proxy is responsiveness and cycle time. Are other departments comfortable routing more work to legal because turnaround is faster? Are requests getting resolved in fewer days? Is the backlog shrinking without quality dropping? Those are harder to fake than “I used the tool a lot this week,” and they connect to how big companies actually win or lose: coordination.
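To make that third layer concrete, here's a minimal sketch in Python of how those two proxies might be computed. Everything in it is an assumption for illustration: the field names (“opened”, “closed”) and the sample records are hypothetical, and in practice the hard part is getting a clean export out of the intake system, not the arithmetic.

```python
# Minimal sketch (assumptions: you can export request records with
# "opened" and "closed" timestamps from whatever intake system a
# team uses; these field names and sample rows are hypothetical).
# It computes the two proxies from above: cycle time and backlog.
from datetime import date
from statistics import median
from collections import defaultdict

# Hypothetical export: one record per request routed to, say, legal.
requests = [
    {"opened": date(2024, 1, 3), "closed": date(2024, 1, 10)},
    {"opened": date(2024, 1, 15), "closed": date(2024, 2, 2)},
    {"opened": date(2024, 2, 5), "closed": None},  # still open
]

def month_key(d: date) -> str:
    return f"{d.year}-{d.month:02d}"

# Cycle time: median days from opened to closed, grouped by the
# month the request was closed. A shrinking median means faster
# turnaround, which is hard to fake with tool-usage activity.
cycle_times = defaultdict(list)
for r in requests:
    if r["closed"] is not None:
        days = (r["closed"] - r["opened"]).days
        cycle_times[month_key(r["closed"])].append(days)

for month in sorted(cycle_times):
    print(month, "median cycle time:", median(cycle_times[month]), "days")

# Backlog: requests opened but not yet closed as of a given date.
# Track this over time; it should shrink without quality dropping.
def backlog(as_of: date) -> int:
    return sum(
        1 for r in requests
        if r["opened"] <= as_of and (r["closed"] is None or r["closed"] > as_of)
    )

print("backlog as of 2024-02-28:", backlog(date(2024, 2, 28)))
```

None of this requires surveying anyone, which is the point: the numbers come from work that either shipped or didn't.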