Sovereign Risk Rating: A Key Guide for Investors
It’s been East Africa PR week: First the ‘Spain is not Uganda’ outrage, then Korean Air managed a great little blooper with Kenya’s ‘primitive energy’, and then there was also the Foreign Policy magazine ranking of failed states that placed Kenya at rank 16 – no. 1 is still reliably in the grasp of good old Somalia, vintage failed state, despite the recent flurry of beach-party photographs. Korean Air was amusing: I very much suspect that their unfortunate ad for Kenya as a travel destination was lost in translation – ‘primitive’ was probably meant to be ‘vibrant’ or ‘genuine’ or some such. But the bigger irritation was of course that they didn’t seem to have cared enough to get some solid English copy-editing done. If you want to be taken seriously in Kenya, you’d better take Kenya seriously, too.
Foreign Policy’s Failed States Index was a slightly different story. When I first saw it, I flipped through the accompanying ‘Postcard from Hell’ summaries, wondered about Kenya at no. 16, and gave up halfway through because I got bored and went back to chasing some deadline on this failed state. But the KICTanet ICT list server picked it up, and list members – people in an industry that’s made quite a few positive headlines in Kenya – were annoyed with the ranking. As usual, conspiracy theories were quickly thrown around, too: that the index was published by people trying to bring Kenya down to benefit from such a downfall, that it was an imperialistic tool or, in a somewhat more moderate version, that the index would affect Kenya’s sovereign risk rating, make international borrowing a lot more expensive, and should therefore be a concern.
Not quite. I thought two points were worth flagging: that it made little sense to speculate wildly about the evil purposes behind the ranking without taking a look at the methodology, and that it’s useful to consider the FP ranking in the bigger picture of country risk assessments in general. A sovereign risk rating is essentially an assessment of the creditworthiness of a country – both its capacity and its willingness to repay debt. Such exercises are a lot more complex than just flipping through a magazine and going ‘ooooh, Kenya’s bad!’.
There are some people who love number crunching, but I’m not one of them, and I remember with horror the days when my former employer decided to add sovereign risk ratings for all countries to our services offer: the ratings involved a snooze-inducing amount of balance-of-payments data, looking up and consolidating figures from a variety of sources, making intelligent guesstimates for data that were missing (often the case with African countries), and making forecasts. The methodology and template actually leave very little space for the usual bad-headline stuff – what goes into the (relatively small) political risk section is condensed from the company’s previous years of analysis.
And that’s an important point: any country risk firm worth its salt has tools and systems. Some sort of methodology. A structure for country reports that makes them comparable. Criteria for ratings. Quality controls, from editing to peer feedback. Often years or even decades of analysis that tracks underlying dynamics rather than just obsessing over headlines, and they can track the accuracy of their forecasts (as will their clients). So yes, analysts are human and can make mistakes, I tried to explain to a friend, but a country’s risk profile (which investors and lenders will use) is never just made up on the fly by an individual. Things may still go wrong – see Enron, or Greece – sadly, we still can’t predict the future perfectly. But essentially, while analysts will read pretty much anything they can lay their hands on, a key skill in this work is the ability to discern what is good information and what is nonsense.
I also went back to the Foreign Policy website and had a look at the methodology for the Failed States Index, which is compiled not by the magazine itself, but by an organisation called Fund for Peace (FFP). It’s perhaps not surprising that a Fund for Peace uses rating criteria that give less attention to a country’s economic structure and strength than to more human-rights-focused areas. They monitor factors such as demographic pressures, massive refugee/IDP movements, vengeance-seeking group grievance, and human rights – plus a few factors that touch on economics, like uneven economic development, poverty, or sharp economic decline. But in the rankings and assessments typically used by investors and lenders, these weights are shifted, and you’ll find a lot more focus on trade data, economic diversification, capital inflows etc. – where Kenya of course comes out much better.
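To make the point about weights concrete, here’s a toy sketch of how a composite index gets assembled. The indicator names, scores and weights below are entirely invented for illustration – they are not FFP’s or any rating agency’s actual numbers:

```python
# Hypothetical indicator scores (0 = best, 10 = worst) for one country.
# All names, values and weights are made up for illustration only.

def composite_score(scores, weights):
    """Weighted average of indicator scores."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

indicators = {
    "group_grievance": 7.5,
    "refugees_idps": 7.0,
    "uneven_development": 6.5,
    "economic_decline": 4.0,
    "external_debt_burden": 3.5,
}

# A rights-focused rater might weight social pressures heavily...
social_weights = {"group_grievance": 3, "refugees_idps": 3,
                  "uneven_development": 2, "economic_decline": 1,
                  "external_debt_burden": 1}

# ...while a lender shifts the weight onto the economic indicators.
lender_weights = {"group_grievance": 1, "refugees_idps": 1,
                  "uneven_development": 1, "economic_decline": 3,
                  "external_debt_burden": 4}

print(round(composite_score(indicators, social_weights), 2))  # 6.4
print(round(composite_score(indicators, lender_weights), 2))  # 4.7
```

The same underlying scores produce a noticeably worse composite under the rights-focused weighting than under the lender’s – which is roughly why a country can look grim on a failed-states index and still come out fine where it matters for borrowing costs.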
What I also found interesting was that they collect ‘thousands of reports and information from around the world, detailing the existing social, economic and political pressures’, and use software as a first step to analyse this information load, with quantitative analysis layered on afterwards. I’d be keen to see how software deals with, say, inaccurate or overly sensationalist headlines, and whether an automated system is able to sort out hype from substance. So the Foreign Policy Failed States Index wasn’t really great PR for Kenya – but then it also won’t matter much, is my educated analyst guess.
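At its crudest, automated content analysis is just keyword counting. A toy sketch – the terms and scores are mine, invented for illustration, not anything FFP actually uses – shows why I’m sceptical that a first pass like this can tell hype from substance:

```python
# Toy keyword-based 'pressure' scorer: count weighted indicator terms in a
# headline. Terms and weights are invented for illustration only.

PRESSURE_TERMS = {"riot": 2, "displaced": 2, "strike": 1, "protest": 1}

def pressure_score(headline):
    words = headline.lower().split()
    return sum(PRESSURE_TERMS.get(w, 0) for w in words)

# A factual report and an entirely irrelevant, breathless one score the same:
print(pressure_score("thousands displaced by flooding"))            # 2
print(pressure_score("soap opera star displaced from top ratings spot"))  # 2
```

Both headlines register identical ‘pressure’, which is exactly the kind of noise that a human analyst filters out – and why the quantitative layer on top still needs people who can discern good information from nonsense.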