AI in the Boardroom: Risk and Innovation
“Supposing a tree fell down, Pooh, when we were underneath it?” “Supposing it didn’t,” said Pooh after careful thought. Piglet was comforted by this. – Winnie-the-Pooh, by A.A. Milne
It’s all about thinking through the risk of anything, and then proceeding with caution, right? When it comes to AI, doing nothing is not an option. Your company could ultimately go out of business if you ignore either side of AI: the risk or the opportunity. Your competitors are moving fast. So…AI conversations in the boardroom, anyone?
There are hundreds of use cases for AI bubbling up in most companies. (P.S. If they’re not bubbling up in yours, you should be worried.) The board needs to ask about the ROI, and to understand how to assess AI risk. The EU AI Act shares seven critical “common-sense” guardrails by which to assess risk (relevant to all, not just EU companies): bias, accuracy, privacy, intellectual property, cyber, health & safety, and antitrust. The company needs a framework to assess the risk-reward balance and decide where to focus.
Some important questions for the board to ask about AI initiatives, as shared by Dominique Shelton Leipzig in a great session on “AI for the Boardroom: opportunity and risk”, hosted by BDO:
- “Do we have a cross-disciplinary AI team? And is Legal involved?” The company needs to work as a whole, across functions, to understand the impact and the risk. A few examples: HR needs to watch for bias, marketing for intellectual property, IT for cyber, and engineering for privacy.
- “Are we risk-ranking our AI? And do we have ‘prohibited’ or ‘high-risk’ use cases?” Prohibited use cases are those that pose the highest risk to humans, whether physical or mental. For any company doing business in the EU (which includes many US companies; and again, this is relevant for all to consider), these must cease by the end of 2024. And ‘high-risk’ use cases, of which 145 are listed, must be closely monitored.
- “Do we continuously test, monitor, and audit against the seven guardrails? Who/what are we using to help us?” GenAI models will experience “drift”, i.e., “the decay of models’ predictive power as a result of changes in real-world environments”. This leads to unintended consequences that can cause huge brand damage (e.g. the chatbot that started to swear at customers) or lawsuits (e.g. due to bias in decision-making), hence the need to monitor closely. (A sketch of one such automated check follows this list.)
- “What is our human oversight plan?” Humans need to be in the loop. Period. While it is true that AI models can be used as guardrails to test and monitor other AI models, at the end of the day, the accountability and the ownership lie with us. The executive team. The board.
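To make “continuously test and monitor” concrete, here is a minimal sketch of one common statistical drift check, the population stability index (PSI), written in Python with NumPy. The thresholds (0.1 and 0.25) are industry rules of thumb, not values mandated by the AI Act, and the score data below is simulated purely for illustration; a real setup would run a check like this on live model inputs and outputs and feed alerts into the audit trail the board asks about.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Measure how far the live ("actual") distribution of a model's
    scores has drifted from the baseline ("expected") distribution it
    was validated on. Rule of thumb: < 0.1 stable, 0.1-0.25 moderate
    drift, > 0.25 significant drift worth escalating."""
    # Bin edges come from quantiles of the baseline distribution
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip live values into the baseline range so outliers still count
    actual = np.clip(actual, edges[0], edges[-1])

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor the proportions to avoid division by zero / log(0)
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Simulated data: validation-time scores vs. this week's (shifted) scores
rng = np.random.default_rng(42)
baseline_scores = rng.normal(0.50, 0.10, 10_000)
live_scores = rng.normal(0.58, 0.12, 10_000)

psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.25:
    print(f"PSI = {psi:.3f}: significant drift, escalate for review")
elif psi > 0.10:
    print(f"PSI = {psi:.3f}: moderate drift, keep watching")
else:
    print(f"PSI = {psi:.3f}: stable")
```

A check this simple is obviously not a full monitoring program, but it shows the shape of the answer the board should expect: a defined baseline, a scheduled comparison, and pre-agreed thresholds that trigger human review.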
As boards of directors, we need to recognize that, in Winnie-the-Pooh parlance, it is on us to make sure the tree doesn’t fall on the company while we pick the fruit.
Originally published on LinkedIn