Polling In The Dark

How do you survey people in a dark movie theater?

CinemaScore conducts exit polls in theaters by asking moviegoers to pull back tabs on a ballot whose design has remained mostly the same over the past 35 years.

CinemaScore Ballot. Source: Las Vegas Weekly

CinemaScore tabulates the results and reports each movie's letter grade. Only 19 movies in the company's history have received an F. The score is not a simple average:

“CinemaScore has an algorithm,” [founder and president Ed] Mintz explains. “A long time ago, we tweaked and analyzed until we came up with what we thought to be the absolute right system. Obviously I can’t share that. That’s the McDonald’s secret sauce,” he laughs. “But if you have 100 ballots, even if you divided it evenly, and had 20 As, 20 Bs, 20 Cs, 20 Ds, 20 Fs — in school, that’s a C. In our curve, it’s a lot worse; a B in school is more equivalent to a C in our terms. When you start getting Bs with CinemaScore, it affects the algorithm and curve a lot harder than it does in school. If you have 20 percent Cs, 20 percent Ds, 20 percent Fs — imagine how bad that is.” (Vulture.com)
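For a sense of what Mintz is contrasting against, here is a quick sketch of the naive, school-style average for that evenly split ballot box. The letter-to-point mapping is the standard 4.0 scale, my assumption; CinemaScore's actual algorithm is secret.

```python
# School-style grade averaging, the baseline Mintz contrasts with.
# The 4.0-scale mapping is an assumption; CinemaScore's real curve is secret.
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def naive_average(ballots):
    """Average the grade points and map back to the nearest letter."""
    mean = sum(GRADE_POINTS[b] for b in ballots) / len(ballots)
    letter = min(GRADE_POINTS, key=lambda g: abs(GRADE_POINTS[g] - mean))
    return letter, mean

# 100 ballots split evenly across the five grades, as in the quote.
ballots = ["A"] * 20 + ["B"] * 20 + ["C"] * 20 + ["D"] * 20 + ["F"] * 20
print(naive_average(ballots))  # prints: ('C', 2.0)
```

On a school curve, that ballot box is a dead-average C; Mintz's point is that CinemaScore's curve would grade the same distribution much lower.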

The results are used to estimate word of mouth and multiples, the overall gross in relation to the opening weekend.


Can Surveys Predict Behavior?


It has been a recurrent sentiment in my corner of Twitter that surveys are useless for predicting what people will do — or why — because one can't trust people to tell the truth. Jason Oke, then a planner at Leo Burnett, blogged about it in 2007. Faris Yakob threw a bomb of a blog post in 2010. Richard Shotton wrote about it in 2017.

I happily collect my data using observations and experiments. I love digging through search queries (come back another time for a story about that).  I've tried just about every implicit method there is.  But often, just asking people questions works pretty well, too.

To be sure, there are plenty of reasons to be skeptical about survey results.  I wouldn't base my life's decisions on the results from a survey that asks questions this way:

A question from the Media Accountability Survey

I'd also disregard the data from any survey that looked like this to its respondents:

A screenshot of a grid question that was too large to fit on my iPhone screen.

But let's not throw away a perfectly sharp tool just because other people keep grabbing it by the wrong end.

The concerns about respondents' biases are as legitimate as they are well known. How to ask questions in a way that produces reliable data is literally a science. It has its own experiments, discoveries, and textbooks. (I can recommend three: The Psychology of Survey Response and Asking Questions are more theoretical, and The Complete Guide to Writing Questionnaires is my desk reference.) 

Look at the description of Asking Questions:

Drawing on classic and modern research from cognitive psychology, social psychology, and survey methodology, this book examines the psychological roots of survey data, how survey responses are formulated, and how seemingly unimportant features of the survey can affect the answers obtained. Topics include the comprehension of survey questions, the recall of relevant facts and beliefs, estimation and inferential processes people use to answer survey questions, the sources of the apparent instability of public opinion, the difficulties in getting responses into the required format, and distortions introduced into surveys by deliberate misreporting.


Philip Graves, "a consumer behaviour consultant, author and speaker,"  takes a dim view of market research surveys in his 2013 book Consumerology. (Faris in 2010 and Richard in 2017 both mention this book in their posts.) Among other things, Graves writes that "attempts to use market research as a forecasting tool are notoriously unreliable, and yet the practice continues."

He then uses political polling as an example of an unreliable forecasting tool. He does not elaborate beyond this one paragraph (p.178).

I'm glad he wrote this.

First, horse race polls ask exactly the forward-looking "what will you do" kind of question that people, presumably, should not be able to answer in any meaningful way.  Here's how these questions usually look:

If the presidential election were being held TODAY, would you vote for
- the Republican ticket of Mitt Romney and Paul Ryan
- the Democratic ticket of Barack Obama and Joe Biden
- the Libertarian Party ticket headed by Gary Johnson
- the Green Party ticket headed by Jill Stein
- other candidate
- don’t know
- refused

(Source: Pew Research's 2012 questionnaire pdf, methodology page)


Second, in election polling, there's nowhere to hide. The data and the forecasts are out there, and so, eventually, are the actual results.

And so, every two and four years, we all get a chance to gauge how good surveys are at forecasting people's future decisions.


Here's a track record of polls in the US presidential elections between 1968 and 2012. FiveThirtyEight explains: "On average, the polls have been off by 2 percentage points, whether because the race moved in the final days or because the polls were simply wrong."

Overall, 81% of polls picked the winner correctly.
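Those two numbers are compatible, and a toy Monte Carlo shows why. Everything below is hypothetical: the list of final margins, the Gaussian error model, and its spread are my assumptions for illustration, not FiveThirtyEight's data or method. A poll "picks the winner" when its margin has the same sign as the true margin.

```python
import random

random.seed(0)  # reproducible toy run

def pick_rate(true_margins, error_sd=2.5, trials=10_000):
    """Fraction of simulated polls whose margin has the same sign
    as the true margin, i.e. polls that pick the winner."""
    hits = 0
    for _ in range(trials):
        margin = random.choice(true_margins)       # true final margin
        poll = margin + random.gauss(0, error_sd)  # poll = truth + noise
        hits += (poll > 0) == (margin > 0)
    return hits / trials

# Hypothetical final margins (percentage points) for a mix of
# close and lopsided races -- not real election data.
margins = [0.5, 1, 2, 3, 4, 7, 10, 18, 23]
print(pick_rate(margins))
```

With a couple of points of noise, most polls still land on the right side of any non-trivial margin, which is how a 2-point average error and an 80-odd percent hit rate can coexist.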

The closer to election day polls are conducted, the more accurate they are.

"The chart shows how much the polling average at each point of the election cycle has differed from the final result. Each gray line represents a presidential election since 1980. The bright green line represents the average difference." (NYTimes, June 2016)

Source: The New York Times, June 2016


What about the 2016 polls?  The final national polls were not far from the actual vote shares. 

"Given the sample sizes and underlying margins of error in these polls, most of these polls were not that far from the actual result. In only two cases was any bias in the poll statistically significant. The Los Angeles Times/USC poll, which had Trump with a national lead throughout the campaign, and the NBC News/Survey Monkey poll, which overestimated Clinton’s share of the vote." (The Washington Post, December 2016)

Source: The Washington Post, December 2016


So why, then, was Trump's win such a surprise to everyone?

"There is a fast-building meme that Donald Trump’s surprising win on Tuesday reflected a failure of the polls. This is wrong. The story of 2016 is not one of poll failure.  It is a story of interpretive failure and a media environment that made it almost taboo to even suggest that Donald Trump had a real chance to win the election." (RealClearPolitics, November 2016)

In an experiment conducted by The Upshot, four teams of analysts looked at the same polling data from Florida. 

"The pollsters made different decisions in adjusting the sample and identifying likely voters. The result was four different electorates, and four different results." In other words, a failure to interpret the data correctly.

Source: The Upshot

(Here's a primer on how pollsters select likely voters.)

Nate Silver's list of what went wrong:
- a pervasive groupthink among media elites
- an unhealthy obsession with the insider’s view of politics
- a lack of analytical rigor
- a failure to appreciate uncertainty
- a sluggishness to self-correct when new evidence contradicts pre-existing beliefs
- a narrow viewpoint that lacks perspective from the longer arc of American history.

 In other words, when surveys don't work, you must be holding it wrong.



The Effect of Incomplete Logos on Brand Perception


A study has found that companies with typographic logos that are intentionally missing or blanked out parts of the characters (think IBM) are perceived as less trustworthy but more innovative.  "The former influence is tied to the logo's perceived clarity, while the latter influence is tied to its perceived interestingness."

Consumers with a prevention focus (?) have an overall unfavorable attitude towards the firms with incomplete logos.

Reference: Henrik Hagtvedt (2011) The Impact of Incomplete Typeface Logos on Perceptions of the Firm. Journal of Marketing: July 2011, Vol. 75, No. 4, pp. 86-93.

What Does Your Typeface Taste Like?

In "The Taste of Typeface" paper: 

"Participants matched rounder typefaces with the word “sweet,” while matching more angular typefaces with the taste words “bitter,” “salty,” and “sour.”

"Why would people match tastes and typefaces varying in their roundness and angularity? The more that an individual likes a taste, the more they will choose a round shape to match it to, and the less they like it, the more they will tend to associate the taste with an angular shape instead."

Logos of Powerful Brands Should Be Placed Higher

Consumers prefer brands with a high standing and influence in the marketplace more when the logo is featured high on the packaging rather than low.

They prefer less powerful brands more when the brand logo is featured low rather than high.  "The underlying mechanism for this shift in preference is a fluency effect (?) derived from consumers intuitively linking the concept of power with height."

What about "fake it till you make it"?

"There is the possibility that managers may choose to place their logo high to signal power even when such a strategy does not match their brand’s true category standing. Although valid, the current research suggests that when category standing is known by the consumer, this strategy may not work."

Reference: Aparna Sundar and Theodore J. Noseworthy (2014) Place the Logo High or Low? Using Conceptual Metaphors of Power in Packaging Design. Journal of Marketing: September 2014, Vol. 78, No. 5, pp. 138-151.


How Smartphones Shape The Way We Shop


People who are actively looking for a solution are the clearest indication of a design opportunity.

The ways people use smartphones while shopping tell us there are many gaps in the experience that people are trying to bridge with their phones. Here are 31 of them.



1. Find a store that carries the merchandise they need using search and navigation apps
2. Go to the store to check out the merchandise they are planning to buy online (aka showrooming)
3. Post their impressions, pictures and videos of the merchandise while shopping
4. Alert others about the rare or discounted merchandise they found
5. Bring in detailed product information to the store instead of having their shopping process guided by the sales staff (in auto dealerships, for example)


6. Find their way to the store using a GPS app
7. Take a picture of the car to remember later where it’s parked
8. Use the phone as a shopping list
9. Receive updates for their shopping list from someone else (a friend, a spouse)
10. Arrange their shopping list in a way that minimizes their shopping time (and any accidental exposure to unplanned merchandise)
11. Use their smartphone to entertain young children to shop in peace
12. Use the smartphone to kill time while waiting for someone else to finish shopping
13. Receive offers from competing businesses while shopping at a store or navigating a mall
14. "Bookmark" merchandise by taking pictures for future reference
15. Monitor things back home through a video feed and shop in peace
16. Check for important work emails, which before would have required being physically present 


17. Use their smartphone’s front camera as a mirror (when trying on glasses, for example)
18. Send a picture of the item to someone else for confirmation of correct selection (are these the tampons you need, honey?), or for approval (you look great in those)
19. Look up ratings and reviews for an item they are considering
20. Look up dimensions and other specifications for an item
21. Look up post-purchase information:  additional costs (insurance or add-ons, for example), as well as usage, care, or assembly instructions
22. Take and send pictures of the merchandise  they are considering to friends and receive their friends’ feedback
23. Look at pictures taken earlier for reference (pictures of a room, for example, when selecting a carpet)
24. Compare the item's price against the store's own prices online, and against other online and brick-and-mortar stores
25. Look up price change history for the item
26. Look for online coupons for the store they are already in
27. Like something, then look up its availability at competitors or at more convenient locations


28. Carry loyalty cards with them at all times on their phone
29. Look at the phone during the check-out process and not at the shelves, driving down sales of impulse-purchase merchandise
30. Abandon the purchase and buy the same item online to save money, avoid standing in line, or avoid having to carry it
31. Pay with the phone


New smartphone-enabled behaviors such as comparison shopping present obvious challenges. Many others, though, are good for the stores. Some reduce distractions: using phones as pacifiers for impatient children. Some provide information that reduces anxiety about the purchase.

Now that we know how things have developed over the past ten years, we could try imagining the trajectory for the next ten. Will phones detect our subconscious reactions to the merchandise and make suggestions accordingly? Will voice assistants remember what we paid for the same item last time? Will they prevent us from overindulging and buying two tubs of ice cream instead of one? Will we see floating AR-enabled arrows pointing towards the shelves that hold the items from our shopping list?

How Likely Is Probably?

When you are asking research participants to estimate how likely something is to happen,  there are several pitfalls to watch out for.  One of them is the lack of consensus about the meaning of words.


From Psychology of Intelligence Analysis published by the CIA:

The table [above] shows the results of an experiment with 23 NATO military officers accustomed to reading intelligence reports. They were given a number of sentences such as: “It is highly unlikely that...”

All the sentences were the same except that the verbal expressions of probability changed. The officers were asked what percentage probability they would attribute to each statement if they read it in an intelligence report. Each dot in the table represents one officer’s probability assignment. While there was broad consensus about the meaning of “better than even,” there was a wide disparity in interpretation of other probability expressions.



Inspired by Sherman Kent's experiment, Reddit user u/zonination polled fellow redditors with a similar questionnaire, asking them to assign numerical probabilities to different words, and mapped the results.

When you ask someone how likely they are to purchase your product and they answer "unlikely", it could mean a less-than-5% chance, or a greater-than-35% chance. For someone else, "probably" could mean less than even odds.
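To make that spread concrete, here is a small sketch in the spirit of the Kent and Reddit exercises. The response numbers below are invented for illustration, not Kent's or u/zonination's data: each list holds the percentage different respondents said a word means to them.

```python
import statistics

# Hypothetical respondent answers (0-100%) for "what probability does
# this word imply?" -- made-up numbers echoing the real experiments.
responses = {
    "highly unlikely": [2, 5, 5, 10, 15, 20],
    "unlikely":        [5, 10, 20, 25, 30, 40],
    "probably":        [45, 60, 70, 75, 80, 85],
    "highly likely":   [80, 85, 90, 90, 95, 99],
}

for word, vals in responses.items():
    print(f"{word:>15}: min={min(vals):>2} "
          f"median={statistics.median(vals):>5} max={max(vals):>2}")
```

Even in this tidy fake data, "unlikely" spans a 5% chance for one respondent and a 40% chance for another, which is the whole problem with treating the words as numbers.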

If you are asking about purchase intent in a survey, be careful how you present answer options, since the probability implied in their order could conflict with what people usually understand each word to mean.  If "probably", on average, means a higher probability than "likely", it could be confusing for respondents to see the options in this order:

  • Definitely
  • Likely
  • Probably
  • About even
  • ...


Sherman Kent, the original experiment's author, talks about what sparked his interest in how intelligence professionals communicate different probabilities:

A few days after the estimate [of Soviet actions in Yugoslavia] appeared, I was in informal conversation with the Policy Planning Staff's chairman. We spoke of Yugoslavia and the estimate. Suddenly he said, "By the way, what did you people mean by the expression `serious possibility'? What kind of odds did you have in mind?" I told him that my personal estimate was on the dark side, namely, that the odds were around 65 to 35 in favor of an attack. He was somewhat jolted by this; he and his colleagues had read "serious possibility" to mean odds very considerably lower.

Can You Spot Subliminal Advertising?

We attempted to reconstruct a famous 1950s experiment with subliminal advertising by inserting very brief flashes of certain words in this video clip. Can you see what the words say? (The full report from the experiment is coming soon.)


Hello, World!

Skating away from the puck.

Every Kiss Begins with Kay, and every blog begins with "Hello, World!"  Nobody ever reads it (present company excluded, obviously), but it feels weird to just start jamming without tipping your hat first.  

So: hello, world!

Between 2004 and 2011, I wrote the Advertising Lab blog about technology and the future of the industry. I got a few things right: I wrote about the iPhone and the dangers of showrooming back in 2008, for example. Over time, the blog grew to 50,000 subscribers.

Eventually, I came to realize that the most interesting thing about technology is people and everything that makes them human. The future has lost its appeal, too. It began to feel as if talking about tomorrow was a way to avoid having to deal with today. If everyone skates to where the puck is going, who will be left minding the puck now?

Advertising Lab came to a stop, and I spent the next few years getting an insights department off the ground and doing experiments.

But I have always wanted to come back. Blogging is contemplative. In a way, it is more social than, say, Twitter; I still feel a sense of connection to people I internet-met a decade ago on their blogs and mine. 

Plus, blogging is like vinyl — it just sounds better.