As a research and analytics-based communications consultancy, we need to have a data-driven POV on the relationship between spokesperson characteristics and perceptions of credibility. The challenge is that people are loath to admit unflattering things about themselves, especially when it comes to racism, sexism, ageism, or even less pernicious forms of snobbery, all of which are very likely to influence perceived credibility.
The survey researcher’s go-to strategy for encouraging honesty has traditionally been anonymity. Respondents are assured that no one will be able to associate their name or other identifying information with their responses. There are two flaws in this approach. First, in a world in which everything and everyone is tracked on the Internet, how many people fully believe that their online survey responses are not traceable back to them? Second, many forms of bias are things people do not want to admit even to themselves, and thus promises of confidentiality do not pierce the wall of self-delusion that people build to protect their ability to think of themselves as a “good” person.
One way to get around politically correct responding is to give people the opportunity to express certain unflattering things about themselves without having to own or admit them. The Thought Leadership team did just that in the 2017 Trust research with the use of an avatar methodology.
We wanted to understand which demographic characteristics increased the perceived credibility of a spokesperson in five markets – Brazil, Germany, India, the UK, and the US. The four characteristics we focused on were age, gender, race, and formality of attire/professionalism.
Rather than directly asking people whether a man or a woman, or an older or a younger person, is more believable, we asked them to choose, from an array of 16 avatars, the person someone like themselves would find to be the most credible spokesperson to deliver a corporate message. These avatars were systematically varied by age, gender, race, and formality, but were otherwise exactly the same. By looking at choice patterns, we could determine which characteristics people found to be the most credible without having to ask them to make explicit, value-laden judgments.
The idea here is that with all 16 options displayed together in random order, it was not obvious what we were examining. And while this method cannot be used to label any particular person a racist or sexist (which is part of why it works so well), looking at patterns of choices across markets or segments reveals bias at the group level. The logic is that if there were no bias at all, then each of the 16 avatars would be selected 1/16th of the time, or 6.25%. Any statistically significant skew from that baseline indicates a trait preference.
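The group-level test described above can be sketched in a few lines of code. The counts below are hypothetical, invented purely for illustration (not the actual study data); the logic is a standard Pearson chi-square goodness-of-fit test against a uniform expectation of 1/16 per avatar.

```python
# Hypothetical choice counts for the 16 avatars (NOT the actual study data):
# avatar 0 heavily over-selected, avatar 15 under-selected.
observed = [320, 120, 110, 105, 100, 100, 95, 95,
            90, 90, 85, 85, 80, 75, 38, 32]

n = sum(observed)
expected = n / len(observed)  # uniform expectation: each avatar chosen 1/16 of the time

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected.
chi2 = sum((o - expected) ** 2 / expected for o in observed)

# Critical value for df = 15 at alpha = 0.05, from standard chi-square tables.
CRITICAL_05_DF15 = 24.996

print(f"chi2 = {chi2:.1f} (critical value at p = 0.05: {CRITICAL_05_DF15})")
if chi2 > CRITICAL_05_DF15:
    print("Choice pattern deviates significantly from uniform random selection.")
```

The same test can be run separately within each market or respondent segment to see where the skew is strongest.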
What we found was quite clear: the number one most chosen avatar across all countries and all demographic groups was this guy:
The young, white, formal male was selected 20% of the time, roughly three times what random chance would predict. Women picked him more often than they did any of the female avatars, Indians picked him more often than they did the darker-skinned avatars, and mature respondents picked him more often than they did any of the older avatars. Overall, a male avatar was selected 62% of the time and a Caucasian avatar 68% of the time.
The least selected choice across all groups was this individual:
The older, darker-skinned, informal female was selected just 2% of the time, roughly a third of what chance would predict.
The good news is that we have an interesting technique at our disposal for measuring hard-to-reach attitudes and values, such as gender, age, and racial bias. The bad news is that, apparently, it’s still 1950 when it comes to our credibility judgments.
David M. Bersoff, Ph.D. is Head of Thought Leadership Research, based in our New York office.