Microsoft has now responded, with a blog post and a letter, to my post about an experimental study that I coauthored with Yale Law School students Emad Atiq, Sheng Li, Michelle Lu, Christine Tsang, and Tom Maher. Our paper calls into question the validity of claims that people prefer Bing nearly two to one.
In response to several commenters: I do not work for and do not have any consulting relationship with Google.
Microsoft claims that our study is flawed because it relied on their own blind-comparison website. They now say that “Bing It On” is meant to be a “lightweight way to challenge people’s assumptions about which search engine actually provides the best results.” To be sure, companies often use fanciful or humorous scenarios in their advertising. But Microsoft’s television commercials present the site as a credible way for people to learn whether they prefer Google or Bing: they show people discovering that they really prefer Bing. Microsoft can’t have it both ways. Either the challenge site it created is sufficient to provide insight into consumer preferences or it isn’t. If it is a sufficient tool to “challenge people’s assumptions,” then it is sufficient to provide some evidence about whether the assumed preference for Google is accurate.
What’s more, the ads have conveyed the clear impression that a substantial majority of challenge takers prefer Bing. The “nearly 2-to-1” claim, combined with videotaped examples in their TV commercials where virtually 100 percent of the challenge-takers learn that they prefer Bing, suggests that the experience of Bing It On users confirms the results of Microsoft’s 1,000-person study.
After spending years and (presumably) millions of dollars trying to convince consumers that the Bing It On challenge de-biases the “Google Habit,” it seems incongruous for Microsoft to turn around and claim the whole exercise was “lightweight” and did not warrant tracking and analysis.
This brings us to the second point, which concerns Microsoft’s (literally) bold declaration that “we don’t track the results from the Bing It On challenge” because doing so would be an unethical invasion of privacy.
This dog won’t hunt.
First off, there must be some tracking for Microsoft to count, and then advertise, that 5 million people have taken the challenge (the number is over 25 million as of May 2013). Second, Microsoft still has not explained how it came up with its list of suggested search terms. Our study suggests that the list was systematically chosen to favor terms that are more likely to produce a Bing preference. How exactly did Microsoft learn that these terms were Bing-friendly if it hadn’t been tracking? We’re still waiting for Microsoft to explain this anomaly.
More important, tracking search results is an essential part of Bing’s business model. All search engine companies operate by analyzing search data to improve user experiences. The Bing It On website differs from a search engine only in that it asks users which set of search results they prefer. It is unclear why anonymous, aggregated side-by-side search preferences trigger greater privacy concerns than information (also aggregated and anonymous) on what terms users search for, which results they click on, and the myriad other pieces of user data that feed Bing’s algorithm.
One way to think of our study is that we simply started to track the results of the challenge that Microsoft itself has chosen not to track. Our subjects went to Microsoft’s own site and told us what happened. Microsoft now has good reason to believe that most people who take the challenge do not prefer Bing. Given this knowledge, it would be misleading to place a banner asking consumers to “join the 5 million people who’ve visited the challenge” next to advertised claims of Bing’s superiority over Google in “blind comparison tests,” when the “challenge” and the “blind comparison tests” are in fact different creatures. We take it as progress that Microsoft has become more circumspect in its claims. The Bing It On website, for example, no longer includes the “5 million visitors” figure (as it previously did).
What we are concerned about is companies playing fast and loose with numbers to create misleading advertisements. Advertisers are typically (and rightly) given a lot of flexibility to be lightweight with the truth, but when statistics and studies are introduced in a scientific manner to support a claim, that flexibility should end. Microsoft should ensure that the results of the millions of Bing It On challenge-takers are not inconsistent with its advertised claims.