
As AI plans advance, governance and risk awareness are critical to protect data

AI is transforming business operations, but potential risks must be addressed

November 03, 2025

Key takeaways

AI continues to evolve, with governance risks and potential for more complex attacks.

Monitoring and auditing AI system performance and outcomes is the leading AI governance measure. 

34% of smaller middle market companies indicated that AI governance steps are not yet in place. 


Artificial intelligence has already revolutionized many key processes for middle market organizations, delivering increased efficiency, insight and productivity. But like other new technology implementations, AI does come with significant risks. Middle market companies must be careful to avoid introducing new vulnerabilities to critical data when integrating AI solutions and expanding their use.

While AI certainly carries risks, its implementation has rapidly become an imperative for ongoing business success.

“In many ways, AI is still emerging,” says Mark Antalik, managing director at RSM US LLP. “Some organizations are further along than others, and some are just at the tip of the iceberg of experimenting with it. But the ones that are scared are missing the boat. While companies that are diving in are bringing on additional risk, if they do things thoughtfully, AI can provide significant benefits across the business.”

To successfully deploy AI technology while mitigating potential risk exposure, organizations must implement an effective governance framework. Several frameworks are currently available, including offerings from the National Institute of Standards and Technology (NIST), Google and Microsoft, as well as guidelines from several countries and industry organizations.


However, data is at the heart of any AI deployment, so data governance—understanding what data assets a company has and where that data is stored, processed, transmitted and accessed—and core AI governance go hand in hand.

“I often talk to clients about how AI governance is really data governance with a few different components added on, and if you struggle with data governance, then you’re going to have struggles with AI governance,” says Franko. “A vast majority of an AI governance framework is derived from data governance and data protection frameworks. Know your data, protect your data, govern your data. Yes, you must worry about bias and how it makes decisions and whether you are getting the right answers, but you can’t get there unless you have those core principles solved.”

Regarding leading AI governance practices, MMBI survey respondents identified monitoring and auditing AI system performance and outcomes as the most widely implemented control (39%). Close behind were defined roles and responsibilities for enterprise AI decision making (37%), staff training on responsible AI usage and development (36%), and AI-focused risk assessments of products (35%).

Of note, 34% of smaller middle market companies indicated that AI governance steps are not yet in place. This means that more than a third of these companies either are not yet using AI or, if they are, face elevated risk to their data.

The Canadian perspective: Fewer Canadian firms than U.S. respondents report having no AI governance in place (5% versus 20%), likely reflecting Canada’s efforts to regulate AI at the federal level.

Beyond AI governance processes, organizations have a growing list of regulatory guidelines to consider when deploying AI strategies. As with data security and privacy, the U.S. has no federal AI regulatory standard, but several states are introducing and passing new AI laws. Specific industries are also rolling out AI standards to promote safe and secure AI usage.

For companies that operate overseas, the European Union has become a pacesetter for AI standards since adopting the first comprehensive set of rules by a major regulator in 2024. Its Artificial Intelligence Act establishes obligations for providers and users depending on the level of risk presented by specific AI tools and applications. Much like the General Data Protection Regulation (GDPR), the EU’s data privacy law that took effect in 2018, the AI Act could serve as a global blueprint for AI regulatory actions.

Of course, middle market organizations also must be aware of an entirely new level of threats as criminals harness the power of AI to launch sophisticated attacks. For example, AI is making social engineering attacks feel more realistic by providing attackers with more details about an organization and enabling mimicry of company representatives and leadership with vishing (voice phishing) campaigns and deepfake-enabled impersonations. These attacks are focused squarely on the weakest link in security: people.

“At the end of the day, training your users to understand AI risks is essential,” says Franko. “Your people are your first line of defense, and providing them with the right knowledge is critical. At the same time, the technical and procedural controls that support your users must be strengthened, including your organization’s capability to monitor and swiftly respond when an incident does ultimately occur.”

While companies need to adjust their strategies and increase awareness to account for more complex AI-supported attacks, the underlying protective measures remain largely the same as they have always been.

“It’s all the same blocking and tackling that has been used for many years,” says Franko. “It’s making sure you’re strengthening your protection mechanisms, monitoring so you can identify when an attack has been successful, getting the bad guys out, learning from it and getting better. AI risks are no different. Your tactics may change, but your principles don’t.”

Expanding AI risks can understandably be daunting for middle market organizations, but Franko highlights a bright spot.

“The good guys are also armed with AI because it is built into many tools,” he says. “So, your defense capabilities are getting better.” 

How effective is your cybersecurity program?
Every organization is facing an elevated level of cybersecurity risks, with threats evolving on a frequent basis. Do you know where you stand? RSM’s cybersecurity Rapid Assessment® can provide the insight and detail that you need.