Imagine you work at an advertising agency. One of the agency’s clients is a lawnmower brand, Nevergrass. Your colleagues from the creative department have come up with six ideas for a TV ad. The team gets together for a creative review, and a vigorous debate ensues: which one of the six ads should the client run? All heads in the room turn in your direction. You confidently say, “There’s only one way to find out.”

Because there isn’t a lot of time (and because you believe that more data doesn’t always mean better insights), the online survey you design to test the ads is a short one:

  1. After collecting respondents’ demographic info, you ask them to rate Nevergrass and five other lawnmower brands on a 7-star scale.
  2. Then you randomly assign each respondent to one of seven groups. They will see either one of the six new ads, or the ad for Nevergrass that is currently running on TV (aka the Old Ad). It’s a monadic design, so everyone sees only one ad.
  3. After they finish watching, you ask them to rate the ad they just saw on a 7-point scale that ranges from “Hated It” to “Loved It”.
  4. Finally, you ask the respondents to rate Nevergrass and its competitors on a 7-star scale one more time.

When you return to the office after a night of restful sleep, the data is ready.


Download the data (.xlsx) and conduct an analysis. Based on the results of the test, what is your recommendation? Should the team go back to the drawing board because none of the new ads beat the one that’s on TV right now? Is there a clear winner among the new ads? How would you design the test differently?
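One way to start the analysis could look like the sketch below. It runs on simulated stand-in data rather than the real spreadsheet, and the column names `group` and `ad_rating` are assumptions, not taken from the actual file. The idea: compare mean ad ratings across the seven cells (the Old Ad is the benchmark) and run a one-way ANOVA to check whether the groups differ at all.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Simulated stand-in for the survey responses; in practice you would
# load the real file with pd.read_excel(...). Column names are assumed.
rng = np.random.default_rng(0)
groups = ["Old Ad"] + [f"New Ad {i}" for i in range(1, 7)]
df = pd.DataFrame({"group": rng.choice(groups, size=700)})
df["ad_rating"] = rng.integers(1, 8, size=len(df))  # 1-7 scale

# Mean ad rating per cell, with the Old Ad as the benchmark.
means = df.groupby("group")["ad_rating"].mean()
print(means.sort_values(ascending=False))

# One-way ANOVA: do the seven cells differ at all?
samples = [g["ad_rating"].to_numpy() for _, g in df.groupby("group")]
f_stat, p_value = stats.f_oneway(*samples)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

A fuller analysis would also use the pre/post brand ratings: compute each respondent’s change in their Nevergrass rating and compare that lift across cells, since an ad people “love” is not necessarily the one that moves the brand.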

The brand is imaginary, but the data is not: it comes from a real test. That test included other questions, which are not relevant to this assignment and have been removed. How you present your recommendations is up to you; send us a file or a link.