ROGUE APPLICATIONS

WANGARI: Governing AI for humanity in workforce revolution

AI has potential for both good and ill, and it needs regulation that promotes, rather than stifles, innovation.

In Summary
  • A rigid regulatory approach amplifies the compliance burden and regulatory ambiguity that hamper AI implementation.
  • In addition, regulatory fragmentation will impose serious costs not only on businesses but also on society.
WORKFORCE REVOLUTION: A robot uses a computer
Image: PIXABAY

This year, the World Economic Forum chose an intriguing theme: “Rebuilding Trust”. The WEF’s website says it’s all about revitalising trust in our future, creating harmony within societies, and strengthening the bond among nations. Beyond the disruptive technological upheavals of 2023, a dose of trust-building feels apt.

As artificial intelligence significantly impacts our business and daily lives, improving health, learning and prosperity, supporting peacebuilding and helping fight climate change globally, countries are asking common questions. They concern AI's unpredictable nature and the difficulty of explaining its decisions. The way AI systems reflect or amplify biases in their data raises concerns about privacy, security, fairness, and even democracy.

How do we leverage these emerging technologies to build safe, secure and transparent AI systems that support a sustainable, peaceful, prosperous, just and equitable society? How do we identify, prevent or mitigate the new set of challenges and threats created by artificial intelligence systems? How do we control a technology that is becoming so powerful?

In many ways, we have experienced the transformative potential of artificial intelligence for the social and economic good of humanity through chatbots, voice cloning, image generators, medical science, video apps and more. However, rogue and unethical AI applications are rife, producing poor or inadequate decisions in financial services and creditworthiness, hiring, criminal justice, healthcare, education and other settings. These prevent social and economic mobility, stifle inclusion and undercut democratic values such as equity and fairness. These and other risks must be addressed with legal frameworks that bring equity into the design, execution and auditing of machine learning models, to thwart historical and present-day discrimination, bias and other predatory outcomes.
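One concrete form such auditing can take is a disparate-impact check on a model's decisions. The following is a minimal sketch in Python; the dataset, group labels and the four-fifths threshold are illustrative assumptions, not a method endorsed by any of the frameworks discussed here.

    # Hypothetical audit: compare approval rates across applicant groups.
    from collections import defaultdict

    # Illustrative model decisions: (applicant_group, loan_approved)
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_a", True), ("group_b", True), ("group_b", False),
        ("group_b", False), ("group_b", False), ("group_b", True),
    ]

    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)

    rates = {g: approved[g] / total[g] for g in total}
    baseline = max(rates.values())

    for group, rate in sorted(rates.items()):
        ratio = rate / baseline
        # The "four-fifths rule" is a common screening heuristic, not law.
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: approval rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")

A flagged group would then trigger human review of the model's training data and features, which is where diversified teams and consumer input become practical rather than rhetorical.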

As this agenda advances globally, how do we govern AI appropriately, what are the threats, and how do we contain them while supporting development? Globally, there are now more than 800 AI policy initiatives from governments of at least 60 countries, though most have made little progress. A recent report indicates that most organisations are now well positioned on the path to implementing AI, with 76 percent on the AI adoption curve.

In 2021, the European Union set the precedent for global AI governance efforts with its proposed Artificial Intelligence Act. Locally, in 2024, the Media Council of Kenya unveiled a Media Handbook with media practice guidelines on AI application, data, reporting and social media in Kenya.

Last year, US President Joe Biden signed an executive order that established standards for safe, secure and trustworthy AI. Prior to this development, UN Secretary-General António Guterres had announced his 39-member high-level advisory body to provide recommendations on the international governance of artificial intelligence.

How do enterprises handle the data that powers their machine learning and artificial intelligence? How will they protect their customers' privacy? With new rules and regulations on the horizon, what will their data and AI practices need to comply with? How will they ensure that they are obtaining the most value from their data and AI models?

Consumers, public services and businesses want to be able to trust the insights generated by AI systems without suffering harm in the process. By building this trust, we'll be able to accelerate the adoption of AI globally and maximise the economic and social benefits the technology can deliver, while attracting investment and stimulating the creation of high-skilled AI jobs.

A heavy-handed and rigid regulatory approach amplifies the compliance burden and regulatory ambiguity, choking innovation and hampering AI implementation. In addition, regulatory fragmentation will impose serious costs not only on businesses but also on society. How to address AI's risks while accelerating beneficial innovation and adoption is one of the most difficult challenges for policymakers, including Group of Seven (G7) leaders.

As we navigate this new era of maturing standards and greater customer sophistication, we're on a collective journey to forge a balanced, pro-innovation, AI-powered future, with values of transparency, accountability and responsibility embedded in our practices.

Locally, there is a significant need to update existing civil rights protections and the Data Protection Act, 2019, to clarify how they apply to AI governance, promoting equitable access to housing, loans, jobs and more.

As the dialogue continues, we'll need to identify specific use cases, classify them by degree of risk, and recommend more stringent oversight and appropriate regulatory guidelines where warranted, including in financial services, healthcare, employment and criminal justice. The Media Council of Kenya has set the precedent in the journalism industry.
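To make risk classification concrete, a regulator's use-case registry might look like the following sketch; the tiers, use cases and oversight rules are illustrative assumptions, loosely modelled on the tiered approach of the EU Artificial Intelligence Act rather than any adopted Kenyan scheme.

    # Hypothetical registry mapping AI use cases to risk tiers and oversight.
    RISK_TIERS = {
        "credit_scoring": "high",
        "hiring_screening": "high",
        "medical_diagnosis": "high",
        "criminal_risk_assessment": "high",
        "spam_filtering": "minimal",
        "product_recommendation": "minimal",
    }

    OVERSIGHT = {
        "high": "pre-deployment audit, human review, ongoing bias monitoring",
        "minimal": "baseline transparency and data-protection compliance",
    }

    def recommend(use_case: str) -> str:
        """Return an oversight recommendation for a named AI use case."""
        tier = RISK_TIERS.get(use_case, "unclassified")
        action = OVERSIGHT.get(tier, "needs individual assessment")
        return f"{use_case}: {tier} risk -> {action}"

    for case in ("credit_scoring", "spam_filtering", "chatbot_support"):
        print(recommend(case))

Unregistered use cases fall through to individual assessment, which mirrors how new applications would surface for classification as the dialogue matures.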

Diversifying software engineering teams in enterprises, critically scrutinising data biases and engaging consumers widely in the design process will level the playing field. Further, policymakers need to design and develop an anti-racist framework that upholds the integrity of algorithmic design and execution processes and minimises the explicitly discriminatory and predatory practices that are institutionally entrenched.

To achieve true equity, we'll need to learn from each other, and no matter how good we may think something is today, we will all need to keep getting better. As the pace of technological disruption accelerates, the work to govern AI responsibly and make it trustworthy must keep pace. With the right commitments, environments and investments, we believe it can.

Postgraduate student, University of Edinburgh  ([email protected] @kennedykwangari)
