Artificial Intelligence (‘AI’) is in the news almost every day, often featuring scare stories perpetuated by the likes of Elon Musk in his interview with Rishi Sunak during the Global AI Safety Summit in London in November 2023. Musk said that AI will be “the most disruptive force in history… AI can do everything. I don’t know if that makes people comfortable or uncomfortable. It’s both good and bad”. Our suggestion in this blog is that Risk Managers should be starting to feel uncomfortable about the challenge of managing this emerging, serious, and complex new risk that is heading their way.
The FCA Handbook requires the risk management function to ensure that all material risks are identified, measured, and properly reported. It must also be actively involved in elaborating the firm’s risk strategy and in all material risk management decisions, and it must be able to deliver a complete view of the full range of risks facing the firm. In our view, this places AI risk management and compliance squarely on the desk of the Risk Director or Chief Risk Officer.
So, what should Risk Managers be doing now to ensure that the deployment of AI in their firms does not create reputational risk or deliver poor outcomes for consumers? We consider the European AI Act, the first formalised AI regulation to be approved, as well as the recent UK government guidance on AI Assurance. We then suggest some proactive steps that Risk Managers can take now to stay ahead of the game.