Boards fear falling behind competitors in AI use more than they worry about security or other AI risks.
Strong governance controls and frameworks are crucial to protect data, prevent bias and ensure accuracy.
Use of external AI services requires governance policies that limit exposure and monitor usage.
Video: Directors & Boards governance conversation – AI and the board
Host: David Shaw, Directors & Boards
Featured guest: Matt Franko, RSM US LLP
Note: This interview has been edited for length and clarity.
David Shaw: Every board is thinking about how AI can affect the company’s products and services in the future. But there’s more to consider from a board’s perspective when it comes to AI. Today I’m discussing these additional factors with Matt Franko, a principal in the risk consulting practice at RSM US LLP.
Matt, first, from a board’s perspective, what’s the risk of not using AI effectively?
Matt Franko: I actually met with a client this morning over coffee and asked him about the biggest risks his board has been focused on. This is a Fortune 100 company, and he mentioned that the board is not asking much about controls, security or even ethical considerations. Instead, their biggest concern is the risk of falling behind competitors by not using AI effectively.
AI is a marketplace differentiator. Using AI effectively allows companies to stand out, drive profitability and shift the workforce toward more meaningful tasks rather than repetitive, day-to-day operations. Boards are looking at AI’s potential to enhance efficiency, improve decision making and ultimately make organizations more competitive.
DS: And yet, some of the things you mentioned—like cybersecurity, controls and ethical considerations—seem critical to ensuring AI is implemented effectively. Let’s start with cybersecurity. What are the implications of AI for a company’s cybersecurity strategy?
MF: AI relies on vast amounts of data, and for companies to trust AI-generated results, they need to ensure that the data is properly secured. Cybersecurity is essential when it comes to AI because protecting data—knowing where it resides, how it flows and who has access to it—is at the core of AI risk management.
AI security starts with a strong data protection program. This includes monitoring access, tracking for data exfiltration or manipulation, and ensuring proper safeguards are in place.
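As a rough illustration, a data protection program of this kind often includes automated checks for unusual outbound transfers. The sketch below is a minimal example of that idea; the log fields (user, bytes_out) and the three-times-baseline threshold are illustrative assumptions, not a prescribed control.

```python
# Minimal sketch: flag unusual outbound data transfers from transfer logs.
# Field names (user, bytes_out) and the threshold are illustrative assumptions,
# not a specific product's schema.
from collections import defaultdict

BASELINE_MULTIPLIER = 3  # flag users sending 3x their historical daily average


def flag_exfiltration(history, today):
    """history/today: lists of {"user": str, "bytes_out": int} records."""
    baseline = defaultdict(list)
    for rec in history:
        baseline[rec["user"]].append(rec["bytes_out"])

    totals_today = defaultdict(int)
    for rec in today:
        totals_today[rec["user"]] += rec["bytes_out"]

    alerts = []
    for user, total in totals_today.items():
        past = baseline.get(user)
        avg = sum(past) / len(past) if past else 0
        if avg and total > BASELINE_MULTIPLIER * avg:
            alerts.append((user, total, avg))
    return alerts


if __name__ == "__main__":
    history = [{"user": "analyst1", "bytes_out": 5_000_000}] * 10
    today = [{"user": "analyst1", "bytes_out": 40_000_000}]
    for user, total, avg in flag_exfiltration(history, today):
        print(f"ALERT: {user} sent {total} bytes today vs ~{avg:.0f} baseline")
```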
The National Institute of Standards and Technology (NIST) provides risk management frameworks, including special publications on AI. The International Organization for Standardization (ISO) also has frameworks addressing AI governance. We typically recommend that companies align their AI security programs with these standardized frameworks to ensure robust data protection.
DS: Another area of risk is working with third-party AI providers, such as OpenAI. What risks do third-party AI solutions pose, and how can a board manage them?
MF: From a board’s perspective, it’s important to ask about governance processes around AI usage. Sensitive corporate data—including customer information, health care data and proprietary intellectual property—needs to be protected.
Boards should ensure their organizations have clear policies on using AI software and third-party applications. A key strategy is limiting data flow outside the organization. We typically recommend companies establish private AI instances within their own cloud environments rather than using public AI platforms. This ensures data remains under corporate protection and reduces exposure to external risks.
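One way to enforce that kind of limit in practice is a policy gate that only lets prompts reach a company-hosted AI endpoint. The sketch below illustrates the idea; the internal hostname, blocked domains, and routing logic are hypothetical and would depend on the organization’s own environment.

```python
# Minimal sketch of a policy gate that keeps prompts inside a private AI
# instance. The endpoint URL, hostname check, and blocked-domain list are
# illustrative assumptions, not a specific vendor's API.
from urllib.parse import urlparse

PRIVATE_ENDPOINT = "https://ai.internal.example.com/v1/chat"  # company-hosted instance
BLOCKED_HOSTS = {"api.openai.com", "chat.openai.com"}         # public services to avoid


def route_prompt(prompt: str, endpoint: str = PRIVATE_ENDPOINT) -> dict:
    host = urlparse(endpoint).hostname or ""
    if host in BLOCKED_HOSTS or not host.endswith(".internal.example.com"):
        raise PermissionError(f"Refusing to send corporate data to {host}")
    # In a real deployment, this is where the request would be sent to the
    # private instance (e.g., through an authenticated internal gateway).
    return {"endpoint": endpoint, "prompt_chars": len(prompt), "status": "allowed"}


if __name__ == "__main__":
    print(route_prompt("Summarize Q3 revenue by region."))
    try:
        route_prompt("Same question.", endpoint="https://api.openai.com/v1/chat/completions")
    except PermissionError as err:
        print(err)
```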
Boards should also press leadership to confirm that security teams are monitoring AI usage and preventing unauthorized access to public AI tools that might pose data leakage risks.
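A simple form of that monitoring is scanning web-proxy logs for traffic to well-known public AI tools. The sketch below shows the concept; the log format and domain list are assumptions, and in practice this check would typically run inside a secure web gateway or SIEM rather than a standalone script.

```python
# Minimal sketch: scan web-proxy logs for traffic to public AI tools.
# The log line format and the domain list are illustrative assumptions.
PUBLIC_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}


def unauthorized_ai_usage(proxy_log_lines):
    """Each line is assumed to look like: '<timestamp> <user> <domain> <bytes>'."""
    hits = []
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) < 4:
            continue
        timestamp, user, domain, sent_bytes = parts[:4]
        if domain in PUBLIC_AI_DOMAINS:
            hits.append({"when": timestamp, "user": user,
                         "domain": domain, "bytes": int(sent_bytes)})
    return hits


if __name__ == "__main__":
    sample = [
        "2024-05-01T09:14:02 jsmith chat.openai.com 18250",
        "2024-05-01T09:15:40 jsmith intranet.example.com 900",
    ]
    for hit in unauthorized_ai_usage(sample):
        print(f"{hit['user']} reached {hit['domain']} ({hit['bytes']} bytes sent)")
```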
DS: AI has been known to create ethical challenges, such as bias in decision making. How should boards approach this?
MF: Any comprehensive AI framework includes not just cybersecurity measures but also ethical and bias considerations. Boards can ask management the right questions and rely on their responses, but given AI’s complexity and the reputational risks involved, it’s often valuable to bring in a third-party expert.
An unbiased third party can assess the company’s AI models, whether they are internally developed or third-party solutions, to evaluate fairness, bias and accuracy. Many companies integrate AI through SaaS-based applications, meaning they are leveraging external AI capabilities. In these cases, an independent assessment can ensure alignment with established ethical frameworks, verify that outputs are reliable and confirm that AI-generated results are accurate.
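One concrete check such an assessment might include is the disparate impact ratio, which compares each group’s selection rate with that of the most favored group, often against the "four-fifths" 0.8 threshold. The sketch below illustrates the calculation on made-up decisions; a real assessment would use the company’s actual model outputs and many more fairness and accuracy metrics.

```python
# Minimal sketch of one common fairness check: the disparate impact ratio
# (each group's selection rate divided by the most favored group's rate),
# compared against the "four-fifths" 0.8 threshold. Data is illustrative.
from collections import defaultdict


def disparate_impact(records):
    """records: list of {"group": str, "selected": bool} model decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for r in records:
        counts[r["group"]][0] += int(r["selected"])
        counts[r["group"]][1] += 1
    rates = {g: sel / tot for g, (sel, tot) in counts.items() if tot}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}


if __name__ == "__main__":
    decisions = (
        [{"group": "A", "selected": True}] * 60 + [{"group": "A", "selected": False}] * 40 +
        [{"group": "B", "selected": True}] * 35 + [{"group": "B", "selected": False}] * 65
    )
    for group, ratio in disparate_impact(decisions).items():
        flag = "OK" if ratio >= 0.8 else "REVIEW"
        print(f"Group {group}: impact ratio {ratio:.2f} [{flag}]")
```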
At the end of the day, if AI isn’t delivering accurate and unbiased results, it’s failing to serve its intended purpose. Boards need confidence that their companies are using AI not only effectively but also responsibly.