AI has rapidly become one of the first tools marketers reach for. From composing ad copy to measuring campaign performance, artificial intelligence is transforming the way we tackle pay-per-click (PPC) advertising. And with the speed of this adoption comes an urgent question:
Can you really trust AI with your PPC strategy?
Recently, digital marketer Susie Marino put this question to the test in a controlled experiment. She evaluated five major AI platforms — ChatGPT, Gemini, Meta AI, Perplexity, and Google’s AI Overviews — by asking them 45 PPC-specific questions. The goal? To see how accurate, insightful, and actionable their responses were when it came to paid advertising.
The results revealed a truth many marketers have quietly suspected: AI is powerful, but far from perfect.
The Accuracy Problem: One in Five Responses Were Wrong
Let’s start with the big headline: 20% of the answers across the five AI tools were incorrect. That’s one out of every five pieces of advice — wrong. Not vague, not debatable. Just factually incorrect.
For businesses spending thousands (or millions) on ads, a 20% misinformation rate isn’t a small glitch — it’s a budget risk. From bidding strategies to keyword match types, one wrong move can tank performance, waste spend, or steer campaigns off course entirely.
This is especially concerning given how many marketers are leaning on AI to make faster decisions. The idea that you can simply “ask the AI” and run with the answer might be a comforting shortcut — but it’s also a dangerous one.
Some Tools Outperformed Others
Among the five tools tested, Google’s Gemini came out on top, offering the most accurate and useful responses. With only three incorrect answers out of 45 (roughly a 7% error rate), it stood out as a promising assistant — especially since it’s being integrated into Google’s new Marketing Advisor inside Google Ads itself.
At the other end of the spectrum, Google’s own AI Overviews (the summaries now embedded in search results) gave the most wrong answers: 12 out of 45, more than one in four. These are the AI blurbs many people now see in response to PPC-related searches — a reminder that just because it’s on Google doesn’t mean it’s gospel.
ChatGPT landed somewhere in the middle. It made fewer outright errors than AI Overviews, but it had a softer problem: tone. When given poor campaign performance data, ChatGPT sugar-coated its responses, avoiding direct criticism and missing the urgency needed to fix underperforming ads. That may be polite, but it’s not helpful in a performance-driven space like PPC.
AI and Keyword Suggestions: Still a Long Way to Go
One of the most practical (and commonly used) features of AI tools is keyword research. Yet in this test, most tools fell short — badly.
ChatGPT, Perplexity, and Meta AI all produced keyword lists filled with overly broad, generic, or high-competition terms. Worse, some even suggested outdated options like enhanced CPC (ECPC, a bid setting rather than an ad format) and expanded text ads, both of which have already been deprecated or are no longer effective.
Only Gemini produced keyword ideas that could reasonably serve as a starting point. Even then, they still needed human refinement, market context, and competitive analysis.
In short, AI isn’t yet a reliable shortcut to high-performing keyword strategy — especially for platforms like Amazon or Google, where search intent and match type precision are critical to ROI.
Platform-Specific Performance
One clear takeaway from the test is that AI tools tend to perform best on the platforms they’re closest to.
- Meta AI excelled at Facebook-specific data, offering accurate insights and guidance for Meta’s own ad ecosystem.
- Gemini was strongest on Google Ads, unsurprisingly, given its origin.
- ChatGPT showed more generalist strength: helpful for generating content and explanations, but not for strategic PPC advice.
This suggests a practical approach: match your AI assistant to the platform you’re working on — and always stay critical, no matter which tool you choose.
Why This Matters for Amazon Sellers and DTC Brands
If you’re running Amazon PPC campaigns or overseeing direct-to-consumer (DTC) paid strategies, the implications here are serious.
Relying on AI to guide bids, suggest keywords, or evaluate performance can feel like a time-saver. But if one in five answers is wrong, you’re not just risking inefficiency; you’re risking profitability.
For Amazon sellers in particular, where margins can be razor-thin and ad budgets must be tightly controlled, following bad advice can have long-term consequences: missed ranking opportunities, higher ACOS, wasted ad spend, and lost Buy Box visibility.
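For context on that ACOS figure: advertising cost of sales is simply ad spend divided by the sales those ads generate. As a rough, illustrative example, $500 of spend that drives $2,000 in attributed sales works out to a 25% ACOS, and every click wasted on a bad keyword or mismatched match type pushes that percentage higher without adding a single sale.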
Final Thoughts: Proceed With Smart Caution
AI is here to stay, and when used properly, it can be a big help. But marketers and sellers should stop thinking of AI as some kind of all-knowing expert and instead treat it as a smart helper that still needs guidance.

So before you go ahead with the next AI-driven PPC change or keyword suggestion, take a moment. Look at the data, check whether the advice makes sense, and remember: it’s your money that’s at risk, not the machine’s.