OPINION POLLING CREDIBILITY

Shielding the public from ‘funny’ polls: Whose responsibility?

It would be useful if research associations established a ‘rapid response team’

In Summary

• There should be an additional ‘safety’ in evaluating the integrity and accuracy of the results of any particular poll.

• Each media house should also undertake its own internal scrutiny before reporting.

Research Analyst Tom Wolf making a presentation on National Survey findings at the Ipsos office in Nairobi on August 2, 2016.
Image: FILE

On Wednesday, July 22, I became aware that a local research firm previously unknown to me, Intel Research Solutions (IRS), had released the results of a survey that morning at a Nairobi hotel.

According to its website, this firm has been in existence since 2011. On that basis, I found it strange I had never encountered their work before. (Nor does its website – ‘ResearchIntel Africa’ – lead to any surveys.)

I obtained a copy of the document that was distributed to the media, and found that the results cover a number of topical issues, including the forthcoming BBI proposals and the 2022 presidential contest. Notwithstanding the apparent widespread interest (including my own) in such issues, after going through the document, I was grateful that it received minimal media attention.

But even before looking at the data, I was concerned that a household/face-to-face survey was conducted during the current Covid-19 pandemic, for two reasons.

First, regarding public health, sending out interviewers (from wherever their homes are) to households all over the country could be dangerous for them, as well as to those they engaged for interviews.

Indeed, the Market Survey Research Association of Kenya had issued a notice to its members in March that this survey method should be avoided for the time being (though this restriction ended on April 30, subsequently replaced by advice that any such household surveys should “adhere to guidelines” issued by the government). I believe that such public health/ethical considerations should apply even to research firms that are not members of this association, a category that reportedly includes IRS.

Second, obtaining a representative sample in these circumstances poses an additional challenge. How often were interviewers not allowed to even begin the respondent-selection process due to such health concerns by one or more household members, and what sort of bias might this have generated in the sample achieved?

Leaving aside the Covid-19 issue, the brief duration of the survey – just six days – also raises concerns. In my experience, with an average interview lasting around 45 minutes and a team of about 60 interviewers, nearly 10 days are required for a sample of just 2,000, since much of the fieldwork time is spent traveling to particular sampling points (ie, ‘enumeration areas’). So in this case, how many interviewers were deployed, and how many interviews did each one complete each day?
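The arithmetic behind this concern can be made explicit. The sketch below uses only the illustrative figures given above (a 2,000-respondent sample, roughly 60 interviewers, and a 10-day norm); IRS has not published its actual field-team size, so the six-day comparison is a hypothetical, not a reconstruction of its operation.

```python
# Back-of-envelope fieldwork arithmetic, using the benchmark figures cited
# in the text (2,000 interviews, ~60 interviewers, ~10 days). These are
# illustrative assumptions, not IRS's published numbers.

def interviews_per_interviewer_per_day(sample_size, interviewers, field_days):
    """Average completed interviews each interviewer must achieve per day."""
    return sample_size / (interviewers * field_days)

# The norm described in the text: 2,000 interviews over 10 days.
benchmark = interviews_per_interviewer_per_day(2000, 60, 10)

# The same sample compressed into six days (assuming the same team size)
# would require each interviewer to work about two-thirds faster.
compressed = interviews_per_interviewer_per_day(2000, 60, 6)

print(round(benchmark, 2))   # ~3.33 interviews per interviewer per day
print(round(compressed, 2))  # ~5.56
```

At 45 minutes per interview plus travel between enumeration areas, the compressed pace leaves little slack, which is why the number of interviewers actually deployed matters.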

My worries about the integrity of the survey increased further when I looked at its (very scant) demographic profile. Regarding education levels, I found no figure for respondents lacking any formal schooling, yet every national (household) survey I have been associated with yielded a figure of around five per cent for this category.

At the other extreme, I have never seen a nationally random sample with more than around six per cent with university education, yet in this sample the figure is 13 per cent!


The absence of other (usually reported) demographic details was also glaring. These include employment status, religious affiliation, marital status, position within the household, and average total household monthly income, among others, all of which (whatever their inherent interest) help to demonstrate how representative any achieved sample is of the nation’s total adult population.

MEDIA REPORTAGE

Turning to the substantive findings, the few media outlets that reported the survey (as well as social media commentators) gave most attention to expressed support/voting intentions regarding the 2022 presidential contest.

Deputy President William Ruto received 30 per cent and former Prime Minister Raila Odinga 17 per cent, followed by Machakos Governor Alfred Mutua and former Vice President Musalia Mudavadi with eight and seven per cent, respectively.

On the face of it, such findings are not particularly counter-intuitive, though several deviate considerably from those reported in a (mobile phone, and rather smaller) survey released by Kantar last November. That survey, for example, gave Ruto 40 per cent, Mudavadi three per cent, and Mutua less than one per cent. Raila's figure there of 16 per cent is statistically identical to the 17 per cent reported here.
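The claim that 16 and 17 per cent are "statistically identical" can be checked with the standard margin-of-error formula. This is a sketch assuming simple random sampling and a sample of about 2,000; the surveys' actual designs (and any design effects) are not published, so the figure is indicative only.

```python
# 95% margin of error for an estimated proportion p from n respondents,
# under the simple-random-sampling approximation (an assumption here).
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the 95% confidence interval for proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

# An estimate near 17 per cent from ~2,000 respondents carries roughly
# a +/-1.6 percentage-point margin, so a 16 vs 17 per cent gap is well
# within sampling noise.
moe = margin_of_error(0.17, 2000)
print(round(100 * moe, 1))  # ~1.6
```

By the same yardstick, gaps of several percentage points or more (such as 40 vs 30 per cent for Ruto) cannot be explained by sampling variation alone.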

Perhaps even more puzzling in light of Kantar’s survey are the figures regarding expressed political party support: 25 per cent for ODM, and 13 per cent for 'none' (no party whatsoever), whereas according to Kantar, these two responses received 16 and 26 per cent, respectively. The figure for Jubilee in both surveys, however, is exactly the same: 42 per cent.

Although there has been considerable political turmoil, including individual and party realignments over the last few months, such contrasts stretch one’s credulity.

Aside from politics, an even more doubtful finding is found in the distribution of responses to the question about the country’s 'three most pressing problems', with five per cent of respondents mentioning 'drought'. Yet given the exceedingly heavy rains since late last year, I am unaware of this 'problem' in any county.

On the contrary, flooding/exceedingly high water levels continue to be reported in several areas (Tana River, Baringo and Samburu counties and around Lake Victoria), yet this 'pressing problem' receives no mentions at all.

(I should note that I sent two e-mail messages to IRS asking for clarification regarding several of the above issues but have yet to receive any response.)

Leaving aside my own concerns, I have no idea how each media house that decided not to report this survey came to its decision. But given the stakes involved in Kenyan elections and the generally high level of confidence (as revealed in many past surveys) that the media collectively enjoy among the public, I consider it laudable that in this case most of them ignored it, thereby denying it much credibility.

Of course, the fact that other (more familiar) survey firms have recently been silent with regard to most of the topics it covered makes it difficult to assess its ‘accuracy’. I hope this situation will soon change, given the many pressing (and controversial) issues with which the country as a whole and its leaders are grappling, and the general utility of knowing the public’s level of awareness of and opinions about them.

Moreover, I suggest that there is additional ‘safety’ in evaluating the integrity and accuracy of the results of any particular poll when more rather than fewer firms are conducting and releasing them, thus exposing any ‘outliers’.

How firms can cover the costs of such polls is another matter. But if they can, such public benefits will accrue only if the information released is accurate, within the normal boundaries of statistical variation, of course.

In this regard, it would be useful if the MSRA established a (small) ‘rapid response team’, so that, based on the guidelines to which its own members should adhere (and which could also be made more widely known), any media house that has doubts about the integrity of any firm’s work can quickly obtain professional input, pending its decision as to whether to publish.

Just making such a ‘friendly’ arrangement known may deter those inclined to release ‘fake’ polls from doing so, or at least impose upon them some reputational cost, if they do. In the meantime, let each media house undertake its own internal scrutiny, as apparently – and thankfully – was largely the case here.

Dr. Wolf is an independent research analyst based in Nairobi