
Successfully managing artificial intelligence security and privacy risks

Many companies remain concerned about risks as AI usage increases

May 30, 2024

Artificial intelligence (AI) is a hot topic in every segment of the economy as companies evaluate how to capture potential gains in efficiency and productivity and make faster, better-informed decisions. Even so, organizations cannot lose sight of the potential risks related to AI.

Not surprisingly, AI use is rapidly expanding in the middle market as more solutions become accessible and companies discover new use cases. In an upcoming RSM US LLP report that details AI use in the middle market, more than three-quarters of survey respondents (78%) say their organizations use AI, either formally or informally, in their business practices. Seventy-seven percent report using generative AI.

Generative AI is gaining significant momentum within midsize companies: 74% of middle market executives report having a dedicated generative AI budget, and 85% say the technology has had a more positive influence on their organization than expected.

Despite that positive impact, companies understand the potential risks of the emerging technology. Survey data suggests that some companies that have adopted generative AI harbor data security and privacy concerns about it. Among companies not currently planning to use generative AI, 46% cite data security and privacy as the leading reason.

Many of the perceived risks of implementing generative AI are rooted in common misconceptions about the technology. Chief among them is the belief that every piece of data sent to a large language model can be viewed by another party and lead to a vulnerability or exposure.

RSM Director Dave Mahoney detailed some potential scenarios. “If a user uploads a document to a public, large language model chatbot like ChatGPT, Gemini or other popular options, then yes, the company has lost control of that document and it is subject to the AI provider’s privacy conditions,” he says.

“The same scenario can occur with prompts,” he continues. “If someone takes a spreadsheet that has sensitive information in it and sends it within a prompt, that data could be at risk.”

If information is meant to be kept private, companies can use application programming interfaces (APIs) that do not use submitted data to train models. But if a user shares sensitive data through a public application, the company gives up control.
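
As a minimal illustration, the sketch below (assuming the OpenAI Python client; other providers offer similar interfaces) shows a prompt being sent through an API rather than pasted into a consumer chatbot. Whether API traffic is excluded from model training is ultimately a policy and contract question, so the provider's current data-usage terms still need to be verified.

    # Minimal sketch: route a prompt through a provider API instead of a consumer chatbot.
    # Assumes the OpenAI Python client; verify the provider's data-usage terms separately.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize the document for an internal audience."},
            {"role": "user", "content": "Document text that has already been cleared for this use."},
        ],
    )

    print(response.choices[0].message.content)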

Essentially, companies need effective controls over sensitive information to ensure it is not shared with people who should not have access to it. This is not a new concept; it is just a new application.
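
As a simple, hypothetical illustration of such a control, the sketch below screens prompts for obvious sensitive patterns before they leave the organization. The patterns and the redact function are illustrative only; a production deployment would rely on a dedicated data loss prevention tool.

    import re

    # Hypothetical pre-prompt control: mask obvious sensitive patterns before text
    # is sent to an external model. The patterns are deliberately simple examples.
    REDACTIONS = {
        r"\b\d{3}-\d{2}-\d{4}\b": "[REDACTED-SSN]",      # US Social Security numbers
        r"\b\d{13,16}\b": "[REDACTED-CARD]",             # likely payment card numbers
        r"[\w.+-]+@[\w-]+\.[\w.]+": "[REDACTED-EMAIL]",  # email addresses
    }

    def redact(text: str) -> str:
        for pattern, replacement in REDACTIONS.items():
            text = re.sub(pattern, replacement, text)
        return text

    prompt = "Customer 123-45-6789 (jane.doe@example.com) disputed a charge."
    print(redact(prompt))  # Customer [REDACTED-SSN] ([REDACTED-EMAIL]) disputed a charge.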

“So, it’s not just a simple ‘Hey, if you use AI, your data is being stolen,’” says Mahoney. “It’s the same as a user emailing sensitive data outside of the organization or accidentally sharing it with a client that shouldn’t have access. That’s not an AI problem. That’s a data loss problem that already existed.”

In addition to those risks, AI tools and applications can present other unique threats. For example, if a middle market company wants to build a fully enabled large language model that requires expert-level information and feedback, that model will likely be derived from a similar model someone else has built.
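
A minimal sketch of that starting point, assuming the Hugging Face transformers library and an example base checkpoint, shows why provenance matters: anything already baked into the published weights carries over into the derived model.

    # Minimal sketch (assuming Hugging Face transformers): an "internal" model
    # usually begins from pretrained weights published by someone else, so their
    # provenance and license deserve the same scrutiny as any third-party component.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base_model_id = "mistralai/Mistral-7B-v0.1"  # example base checkpoint

    tokenizer = AutoTokenizer.from_pretrained(base_model_id)
    model = AutoModelForCausalLM.from_pretrained(base_model_id)

    # Fine-tuning on proprietary data would proceed from this starting point;
    # everything already encoded in the base weights comes along with it.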

Whether the model is hosted by a provider or built internally, it is subject to access risks. Companies don’t want information to be exposed, especially if they operate in a field that routinely handles sensitive data, such as the financial, legal or health care sectors.

However, practices and techniques for building and maintaining models safely are available. Companies must think about where the model lives, what information it can access, and whether anything in the model's development or operation could introduce additional risk. If the company does not own and control the model, a provider could increase the level of risk.

“I do agree that is a concern,” says Mahoney. “But that just requires doing your homework and having an understanding of how the technology works, what you're buying and how it operates at a fundamental level.”

In these scenarios, the risks are manageable and largely related to internal controls and education. Regardless, these challenges should not be barriers to evaluating and implementing generative AI solutions. Apprehension about new technology is normal, but companies cannot hesitate for too long.

“In my opinion, if you plan to be an effective leader in almost any business, you have to get your hands around AI and figure out how you can start leveraging it to drive efficiency and scale operations,” says Mahoney. “Otherwise, you will be left behind, both professionally and in terms of the resiliency of your business.”
