Establishing a human governance framework for artificial intelligence

AI’s rapid adoption and elevated risks require a thorough governance approach

July 28, 2023

Key takeaways

All organizations must establish and operate an overarching enterprise risk management program.

As AI evolves, it requires a more expansive governance framework to enable successful usage.  

Effective governance requires organizational, educational and cultural changes.  


The rapid proliferation of complex digital systems demands that all enterprises establish and operate an overarching enterprise risk management (ERM) program, including digital risk frameworks that address cybersecurity and artificial intelligence (AI). This is not a static “set it and forget it” process. Instead, the dynamic nature of complex digital tools, particularly AI, requires frequent review and modification of policies and procedures. Augmenting and reorganizing your management team may be necessary to keep up.

AI tools are powerful but also potentially dangerous. Their rapid adoption, frequently without control policies and procedures, is alarming. Boards and leadership must take immediate action to control AI and other digital tools in a manner that enables the successful exploitation of their profound potential.

Codifying and following policies and procedures will also assist enterprises in satisfying pending SEC risk disclosure rules. Diligently following these processes is important, as the failure to do so may be perceived as digital “risk-washing,” a superficial approach to risk management. The emergence of AI requires establishing a new governance framework—one much more expansive than existing digital risk frameworks, such as those focused on cybersecurity.

Establishing ERM

Risk governance begins with an overarching ERM framework designed to identify and evaluate enterprise threats and opportunities and manage these risks according to your organization’s risk tolerance. One size does not fit all. A good starting point is the principles published by the Committee of Sponsoring Organizations of the Treadway Commission (COSO), whose five components of risk management include:

  1. Governance and culture
  2. Strategy and objective-setting
  3. Performance
  4. Review and revision
  5. Information, communication and reporting


Leveraging a cybersecurity governance framework

A subset of ERM is the familiar cybersecurity governance framework. Again, one size does not fit all. However, a good starting point is the National Institute of Standards and Technology (NIST) framework, which includes the following elements:

Identify: Develop an organizational understanding to manage cybersecurity risks to systems, assets, data and capabilities.

Protect: Develop and implement the appropriate safeguards to ensure the delivery of services.

Detect: Develop and implement the appropriate activities to identify the occurrence of a cybersecurity event.

Respond: Develop and implement the appropriate activities to take action regarding a detected cybersecurity event.

Recover: Develop and implement the appropriate activities to maintain resilience plans and restore any capabilities or services impaired due to a cybersecurity event.

Detailed policies and procedures underlie each element, as the brief sketch below illustrates. The NIST cybersecurity framework is currently under review to add a governance element.
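For illustration only, the following minimal sketch shows one way an organization might catalog its policies and procedures against the five NIST functions and flag any function that lacks a supporting control. The control names and the mapping structure are hypothetical assumptions for this example, not part of the NIST framework itself.

```python
# Hypothetical illustration: a minimal mapping of internal controls to NIST CSF functions.
# Control names are invented for the example and are not an official NIST artifact.
from collections import defaultdict

NIST_CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

# Each policy or procedure in the catalog is tagged with the CSF function it supports.
control_catalog = [
    {"control": "Asset inventory procedure", "function": "Identify"},
    {"control": "Multifactor authentication policy", "function": "Protect"},
    {"control": "Security log monitoring standard", "function": "Detect"},
    {"control": "Incident response playbook", "function": "Respond"},
    {"control": "Backup and restore procedure", "function": "Recover"},
]

def coverage_by_function(catalog):
    """Group controls under each CSF function; empty lists reveal coverage gaps."""
    grouped = defaultdict(list)
    for item in catalog:
        grouped[item["function"]].append(item["control"])
    return {fn: grouped.get(fn, []) for fn in NIST_CSF_FUNCTIONS}

if __name__ == "__main__":
    for fn, controls in coverage_by_function(control_catalog).items():
        status = ", ".join(controls) if controls else "NO CONTROLS MAPPED - review gap"
        print(f"{fn}: {status}")
```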

Implementing an AI governance framework

AI is a powerful data processing tool that businesses must now use to stay competitive. Use cases include:

  • Business process automation and efficiency
  • Data intelligence
  • Supply chain optimization
  • Predictive maintenance
  • Customer service and support
  • Fraud detection and risk management

However, extraordinary enterprise risks are the price of using this powerful digital tool, and their scope extends well beyond cybersecurity alone. Examples include:

  • Maintaining human control
  • Overdependence and reliance on outcomes
  • Lack of accountability and transparency
  • Data management, privacy and security challenges
  • Ethics concerns and workforce disruption
  • Regulatory compliance demands
  • Vulnerability to cyberthreats, internal misuse and external attacks

As society grapples with AI, we can expect rapidly changing regulations and governmental requirements. To keep pace, your organization must take immediate action to establish frameworks, policies and procedures that control AI's use, and it must anticipate and monitor for changes as the technology evolves.

Human control of these policies and procedures is key to optimizing the benefits of AI while minimizing its risks. Include the following categories in your AI framework:

Governance and ethics: Require human oversight and usage authorization on a “need to use” basis only. Incorporate your organization’s values and ethics into the design and implementation of AI tools. Require any external AI tools to meet these requirements.

Privacy: Incorporate data privacy rules and regulations into the design and implementation of AI tools.

Bias control: Strive to ensure that data leveraged for AI tools and the tools themselves, both internal and external, are as unbiased as possible.

Consistent output: Train AI tools to produce consistent results.

Explainable: Require users of AI tools to understand the inputs and to be able to explain the outputs.

Accountable: Clearly define ownership of and access to each AI tool and its inputs and outputs. Develop an inventory of approved AI tools and capture, maintain and retain an audit trail of key inputs and updates (see the illustrative sketch following these categories).

Secure: Protect the enterprise and AI tools from cyber and physical attacks.

Regulatory compliance: Factor applicable regulations into the design and implementation of all AI tools. Routinely monitor for and review regulatory changes.

Education: Develop an enterprise-wide education program to instill awareness of the benefits and risks associated with AI. Openly reward positive usage behavior and create a culture of shared responsibility throughout the enterprise.
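As referenced under the Accountable category above, the following minimal sketch shows one way to structure an inventory of approved AI tools with a named owner, a “need to use” access list and an append-only audit trail of key inputs and updates. The field names, tool name and logging approach are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical sketch of an AI tool inventory with a simple append-only audit trail.
# Field names and example values are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIToolRecord:
    name: str                 # approved tool, e.g., an internal summarization model
    owner: str                # accountable business owner
    approved_users: set[str]  # "need to use" access list
    audit_trail: list[dict] = field(default_factory=list)

    def log_event(self, user: str, action: str, detail: str) -> None:
        """Append a timestamped entry recording key inputs, updates and usage."""
        self.audit_trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "action": action,
            "detail": detail,
        })

# Example usage: register a tool, then capture an update and a usage event.
inventory: dict[str, AIToolRecord] = {}
inventory["contract-summarizer"] = AIToolRecord(
    name="contract-summarizer",
    owner="Legal Operations",
    approved_users={"a.analyst", "b.counsel"},
)
inventory["contract-summarizer"].log_event("b.counsel", "model_update", "Prompt template revised")
inventory["contract-summarizer"].log_event("a.analyst", "inference", "Summarized vendor agreement")
```

In practice, such a registry would live in a governed system of record rather than application code; the point of the sketch is the accountability data an inventory and audit trail should capture.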

Management implications

Successfully implementing and adhering to these new AI governance frameworks, policies and procedures will require establishing cross-functional management teams and committees empowered with clear authority and responsibilities to deal with AI digital risk. It will likely require augmenting your current management teams with AI experts and legal, compliance, and ethics professionals. Also, given the expanding scope of digital risk, consider adding digital systems knowledge to your board and establishing a chartered risk committee.

Getting started on governance

With the emergence of AI, especially generative AI, and the various strategies companies can use to leverage the technology, there are no check-the-box solutions for digital risk governance. But to build an effective framework, boards must focus on organizational changes necessary to manage and control digital risk, educational changes to develop a common understanding of that risk among board members and risk experts, and cultural changes to impress upon the organization the importance and shared responsibility of controlling digital risk. Given the rapid advances in AI technology, companies can no longer afford to be reactive without risking lost opportunities and potentially harmful consequences.

RSM contributors

  • Rod Hackman
    Executive Advisor, Board Excellence
