The New Marketer Said to the Veteran Marketer, “You’re Basing Your Big Decision on THAT?”
Posted by Robert Rosenthal | Marketing Testing

Recent graduates don’t get it.
As marketing students, they’re taught to sample in representative ways – and build adequate sample sizes. They learn the hazards of receiving misleading feedback – and relying on misinterpreted responses.
Then they get entry-level marketing jobs and are invited to sit behind a one-way mirror and observe a famously unreliable (but properly catered) event that many consider a staple of marketing research.
The focus group. Where a collection of strangers join a moderator and become – for one magical moment – armchair critics of advertising concepts or even completed advertisements.
Need C-level executives with authority to approve an enterprise security solution with a six-figure price tag? No problem, according to focus group facilitators.
Focus groups offer qualitative advantages, but the problems with these artificial arrangements include sample sizes too small to yield projectable results; atypical prospects recruited through cash payments; misleading responses resulting from the presence and behavior of peers; and reports for marketing agencies and clients based on misinterpreted feedback.
Amazingly, marketers often make decisions involving millions of dollars based on these flawed exercises with a tiny number of consumers or business prospects.
Very often, a better alternative is live testing of completed advertising.
In the real world, we’ve quickly tested as many as three new campaign directions against an existing approach, using a call to action. Rather than rely on what people say they’ll do, we make decisions based on how they actually behave in the marketplace. We’re talking about controlled experiments with sample sizes large enough to project rollout results with a high degree of confidence.
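For readers who want to see what “a high degree of confidence” looks like in practice, here is a minimal sketch of the arithmetic behind comparing a control against a challenger. The response counts, audience sizes, and significance threshold below are illustrative assumptions, not figures from any campaign described in this post.

```python
# A hedged sketch: judging a split-run result with a two-proportion z-test.
# All numbers below are hypothetical.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for control vs. challenger."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled response rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control reaches 20,000 prospects and draws 240 responses (1.2%);
# the challenger reaches 20,000 and draws 300 responses (1.5%).
z, p = two_proportion_z(240, 20_000, 300, 20_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests real lift, not noise
```

With sample sizes in that range, a half-point swing in response rate clears the usual 95% confidence bar; with a focus group of a dozen people, no swing ever could.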
Testing this way helps ensure that superior campaigns get out there. Those campaigns often account for the difference between red ink and black.
Online testing is fairly new (Google launched AdWords in 2000), but split-run testing is, well, time-tested. Claude Hopkins, reportedly America’s highest-paid copywriter in the early 1900s, discussed split-run testing using a call to action in his 1923 marketing bestseller, Scientific Advertising. The technique was embraced by Claude’s contemporaries, including John Caples, author of Tested Advertising Methods and BBDO’s direct response genius-in-residence for decades. David Ogilvy said Scientific Advertising “changed the course of my life.”
I’ve yet to figure out why the technique isn’t more widely used. Live, controlled, low-cost testing options are available across online and offline media. All it takes is marketers willing to think like recent graduates. For perennial students of the craft, there’s no better way of keeping score. And continuously improving.