Beena Ammanath
Contributor
Beena Ammanath is the Global and U.S. Technology Trust Ethics Leader at the Deloitte AI Institute.
AI is on the minds of nearly every enterprise and startup leader today, challenging human decision-makers with a constant stream of “what if” scenarios for how we will work and live in the future. Generative AI, especially, is redefining what business can do with artificial intelligence — and presenting thorny questions about what business should do.
Managing risks and ensuring effective oversight of AI will need to become a central focus of boards, yet many organizations struggle to help their top leaders become more intelligent about artificial intelligence.
The urgency to educate board members is growing. Over the last decade, the use cases for machine learning and other types of AI have multiplied. So have the risks. For boards, the AI era has exposed new challenges when it comes to governance and risk management. A recent Deloitte survey found that most boards (72%) have at least one committee responsible for risk oversight, and more than 80% have at least one risk management expert. For all the attention and investment in managing other kinds of business risk, AI demands the same treatment.
AI risks abound. AI security risks, for example, can compromise sensitive data. Biased outputs can raise compliance problems. Irresponsible deployment of AI systems can have significant ramifications for the enterprise, consumers and society at large. All of these potential impacts should cause concern for board members — and prompt them to play a greater role in helping their organizations address AI risks.
A growing sense of urgency
The rise of generative AI makes the AI-risk challenge even more complex and urgent. Its capabilities have stunned users and opened the door to transformative use cases. Generative AI, including large language models (LLMs), image and audio generators and code-writing assistants, is giving more users tools that can boost productivity, generate previously overlooked insights and create opportunities to increase revenue. And almost anyone can use these tools. You do not need to have a PhD in data science to use an LLM-powered chatbot trained on enterprise data. And because the barriers to AI usage are quickly crumbling at the same time AI capabilities are rapidly growing, there’s a tremendous amount of work to be done when it comes to risk management.
Not only does generative AI amplify the risks associated with AI, but it also shortens the timeline for developing strategies that support AI risk mitigation. Today’s risks are real, and they will only grow as generative AI matures and its adoption grows. Boards have no time to spare in getting more savvy about generative AI and how it will influence risk management. The following five steps can help board members prepare their organizations for a future that will be shaped by generative AI.
1. Build the board’s AI literacy
Establishing a solid understanding of AI is essential. If board members are to become advocates and guides for AI risk management, they will have to know how to ask the right questions. That means they will need a certain level of AI literacy — beyond what they already know about AI. With generative AI, the need for AI literacy is even more crucial, given the new types of risk the technology presents. Board members will need to understand new terminology (such as "hallucinations," outputs that sound plausible but are factually false), as well as how generative AI magnifies existing risks through sheer scale. A GenAI-enabled call center, for example, could deliver biased responses to far more customers than any individual human agent could.
To build a stronger foundation in generative AI risk management, board members can increase their AI literacy through traditional methods, such as bringing in speakers and subject matter experts and pursuing independent learning through classes, lectures and reading. But generative AI itself could also help. For example, with a few simple prompts, an LLM can summarize and explain, in natural language, how generative AI works, along with its capabilities and limits.
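As a purely illustrative sketch (assuming access to OpenAI's Python SDK; the model name and prompt are hypothetical choices, not recommendations), even a short script can turn a plain-language question into a board-ready explainer:

```python
# A minimal sketch: asking an LLM to explain generative AI risk in plain language.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Explain to a corporate board member, in plain language, what a large "
    "language model is, what 'hallucinated' outputs are, and the top three "
    "risks of deploying generative AI in a customer-facing call center."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name, not a recommendation
    messages=[{"role": "user", "content": prompt}],
)

# Print the model's plain-language explanation for the board member to read.
print(response.choices[0].message.content)
```

The point is less the tooling than the habit: board members can interrogate the technology directly, in their own words, and get explanations pitched at their own level.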
2. Promote AI fluency in the C-suite
Boards and C-suites should be on the same page when it comes to generative AI and risk management. Having a common language, understanding and set of goals is essential. And while generative AI literacy in the boardroom is important, fluency in the C-suite will be even more so. Board members should use their position to urge executives to build generative AI fluency around not only the value and opportunities, but also the risks.
The power and allure of generative AI will continue to grow, along with the use cases. Business leaders will need knowledge and familiarity with the technology so they can responsibly shape AI programs. There are big decisions to be made around AI ethics, safety, security and accountability. All the factors that influence trust in AI flow from a baseline understanding of what generative AI is and what it can do. Board members have a responsibility to drive that understanding within the enterprise, encouraging others to build AI fluency and making it clear why it's important.
3. Consider recruiting board members with AI experience
In many organizations, board members come from fields that are focused on finance and business management. That background allows them to be informed leaders on fiscal and competitiveness issues. But given that AI is a technical and complex field with its own unique collection of challenges and risks, boards should expand their in-house subject matter expertise. One way they can do that is by recruiting an AI professional to the board. Such a person should bring experience as an operational AI leader, with a track record of implementing successful AI projects in similar organizations.
Keep in mind that generative AI is a relatively new area. Some of the earliest use cases are only now being deployed. Adding board expertise sooner rather than later can help your organization get ahead of the game, and a professional with operational AI experience can provide essential insights boards will need for oversight and governance.
4. Orient the board for the future
Governance is a continuous need, not a one-time exercise. Boards will have to implement controls to guide the ethical and trustworthy use of generative AI. Many already stand up subcommittees to oversee vital enterprise activities, such as audits, succession planning and risk management related to finance and operations, and they should support generative AI governance with a similar approach.
The future of generative AI is still in flux. The capabilities, risks, trajectory and even the lexicon for generative AI are all evolving as the technology matures. With a subcommittee or dedicated group for AI, a board can remain highly focused and informed on this complex, fast-changing technology. Another way boards can rise to the challenge is by extending the mandate for existing subcommittees to include generative AI components. For example, an audit committee’s mandate might include planning for algorithmic auditing.
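To make that idea concrete, here is a minimal, hypothetical illustration of one check an algorithmic audit might include: comparing a model's approval rates across demographic groups. The data, group labels and decisions below are invented for illustration only.

```python
# A hypothetical algorithmic-audit check: demographic parity of model decisions.
# Compares positive-outcome (approval) rates across groups from an audit log.
from collections import defaultdict

# Hypothetical audit log: (group, model_decision) pairs, where 1 = approved.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

# Approval rate per group and the gap between the best- and worst-treated groups.
rates = {group: approvals[group] / totals[group] for group in totals}
gap = max(rates.values()) - min(rates.values())

print(f"Approval rates by group: {rates}")
print(f"Demographic parity gap: {gap:.2f}")  # a large gap flags the model for review
```

A real audit would go much further, but even a simple report like this gives an audit committee something concrete to review and question.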
5. Guide the organization as generative AI matures
Board members are important stakeholders with essential responsibilities, even though they may not work directly with generative AI. As enterprise leadership and lines of business explore how generative AI can enhance productivity and drive innovation, the board can take a higher-level, big-picture view of AI programs. It can focus on guiding the enterprise in the ethical and trustworthy deployment of generative AI. One way to do that is by leveraging a framework for assessing risk and trust, and understanding how those areas affect compliance and governance.
Deloitte’s Trustworthy AI™ framework is just one example, providing a way to help organizations assess risk and trust in any AI deployment. By deploying such a framework, organizations can help their board members make clear-eyed evaluations and guide the business toward the most valuable use of generative AI.
Entering new territory
The generative AI landscape is still new and exciting. And it will likely continue to be exciting, even though its future remains unwritten. No organization has been here before. All organizations are experiencing the early days of a new technology that will have a profound impact on business and society.
While these five important steps can help businesses prepare for the future, there’s even more that board members can do to position their organizations for the new era of generative AI. There’s no shortage of advisers that boards can turn to for assistance and guidance. Such advisers are already helping develop essential tactics and standards for generative AI governance and oversight, and they can provide critical insight that educates and informs boards.
Risk management will always be a moving target, but with greater literacy, focus, professional experience and a vision for the future, boards can guide their organizations through the uncertainty ahead and position their businesses to thrive in this new era of AI.