Audio

Modernizing your internal audit with artificial intelligence

Join our team for Episode 3 of Material Observations: Insights on Internal Audit

November 03, 2023
Risk consulting | Internal audit | Artificial intelligence

There is no bigger buzzword right now than AI. From search engines to ChatGPT, artificial intelligence is driving organizations to rethink their systems and look for ways to leverage AI’s opportunities. But as a technology driven by data, AI affects everything from cybersecurity to enterprise risk management. And that affects internal audit.

In the latest episode of our new podcast, “Material Observations: Insights on Internal Audit,” Katie Landy, RSM risk consulting principal and host, sits down with Dave Mahoney, RSM director of security, privacy, and risk, to explore the AI phenomenon, discuss how it’s affecting internal audit and look at ways companies are leveraging AI to modernize their internal audit program.

Join Katie and Dave for a half-hour conversation about artificial intelligence and internal audit, as well as all of the opportunity and accountability that come when the two meet. Listen to Episode 3 of our internal audit podcast and get a sneak peek at the future of machine learning, business automation, risk analytics, internal audit and much more.

Edited transcript

Katie Landy: Hello and welcome to episode three of RSM's Material Observations: Insights on Internal Audit, where we explore what's happening in internal audit today. I'm your host, Katie Landy, risk consulting principal at RSM. Today I'm joined by Dave Mahoney, director of security, privacy and risk at RSM. We're talking about modernizing internal audit with artificial intelligence and making everyday audit more compelling. Let's get started.

Dave, thanks for joining us today. Obviously artificial intelligence, AI, ChatGPT, just to name a few, are big buzzwords in every topic of conversation, business and non-business. I would love to hear from you, given your experience and your background, how we're seeing artificial intelligence and its use come to fruition through the channel of internal audit.

DM: Well, Katie, first of all, thanks for having me. I really appreciate the opportunity to do this, and there is, as you say, quite a bit of buzz around artificial intelligence and machine learning, but really it's about what we can do with it. That's really the new thing. The technologies themselves are very long-lived. These are techniques and procedures and systems that have been around since the sixties, actually. Maybe even a little before, depending on how you want to run some definitions. But it's really the way that we're using them that gives you the perception of novel content, because the computer now has the ability to interact with you through a common medium, which is language. And that facade of being able to use language effectively to communicate with a machine is what gives us that contextual feeling of artificial intelligence, specifically within the realm of internal audit.

Because think about what we do in internal audit. We're there to ensure the efficacy, and monitor the efficacy, of enterprise risk management and governance controls. Well, there's a lot of data that goes into doing that. Anytime you have large, multivariable data sets, different variables within your domain of knowledge affect other variables differently. And there are different risk rankings and scoring mechanisms and things like that. There's no unified measure of risk across all the things we do in internal audit, or in risk in general.

And because of that, the power comes from being able to manipulate and compare that data using statistical analysis and probability determination, and then layering on the ability to extract information from language, which is where most of our business intelligence lives. It's in documents. It's not always structured information that sits in a table or some constrained format. It's in the written word and the prose. And artificial intelligence, with natural language processing and the machine learning capabilities to teach the system to do what we need it to do within that domain of knowledge, is what's making it really powerful.

KL: No doubt. And I feel like every conversation we're having with clients now is, what should we be thinking about? What should we be doing as it relates to this topic? And when we think about the lifecycle of an internal audit, planning, fieldwork and reporting, I would love to get your perspective on how you see organizations adopting AI to facilitate audits through that lifecycle, from the initiation and planning phase all the way through reporting. Any insights you can share there?

DM: Yeah. So moving past the highbrow theory that we just talked about of why it's beneficial within the domain, practical use cases. How does this affect me? I think that's what we're driving at. Is that about right?

KL: Absolutely, that's right.

DM: All right. Well, I mean, I'll tell you what, as opposed to answering a question, let's play a context game. This is what AI is all about. Are you willing to answer a question for me?

KL: Absolutely.

DM: Great. So think about something that you do in your job that is arduous but highly repeatable. And what would that be?

KL: Pulling data from Power BI.

DM: Pulling data from Power BI. Could you just for fun, why is that an arduous task?

KL: Because we're embracing technology to house the data, but I manually have to go retrieve it.

DM: And you have to build a query to do that. You have to click on different things and make selections that the system lets you make. You're not interacting with it necessarily through the spoken or written word.

KL: Exactly.

We can automate business processes. And while that's very true, automating a business process still doesn't allow you to interact with the machine and with the system and with the business process in your most comfortable medium, which is language.
Dave Mahoney, security, privacy and risk director, RSM US LLP

DM: Okay. So now let me take all of that work that you do in Power BI and let me infinitely expand the use cases of your data and your data overlay because I can now layer in new pieces of information to draw correlation to. And I can give you the ability to basically talk to the machine and say, I need to pull all of the information in categories 1, 2, 3, 4, and I want you to present it back to me in a specific way. And by the way, because somebody was really smart and we decided to feed a lot of business persona-related information, perhaps, I also want you to give this information back to me so that it would be most relevant to this person or this type of client or a client in this industry. I think that's a very powerful set of use cases. And we didn't even do anything specific. All we did was define a need that you have and then we layered in some systems and technology on top of it and talked about some data that we could use to do that.

So what I'm saying is that the world is your oyster. What is it that we need to automate? Robotic process automation, RPA, was supposed to be, I guess, the enterprise as a computing system. We can automate business processes. And while that's very true, automating a business process still doesn't allow you to interact with the machine and with the system and with the business process in your most comfortable medium, which is language. So again, everything's going to come back to this large language model, because that's the generative AI part. It can be used to do other things as well, like create pictures. But think about how you're asking it to create those pictures: usually people are typing in, I want you to show me a picture that has these characteristics, and then the machine gives it to you. So our interaction is still very much language-driven.

So in terms of very specific internal audit use cases, take insights and reporting. I want to get the data from my internal audit review that had X number of findings and remediation steps and planning activities, and I want you to give me a summary that would be most important to the CFO. I want you to give me a summary that's going to be most important to the CEO. Very simple example: I don't know how many times in my consulting years I've approached a staff member on one of my projects, or even a peer, and said, "Hey, this is a great report and you did a phenomenal job of covering the bases." But I don't think the person you're presenting it to is going to be enthralled, or going to care, because you just missed their context.

And now you can play contextual research games. If it has enough information about... So if I go through the RSM portfolio of documents that have been presented to chief financial officers or that mention chief financial officers, I could actually teach the machine to build patterns of what would be most important to a chief financial officer by giving it certain context of what those things in that chain are. And now you as the consumer can play that contextual role-based learning with your data in a safe space without having to feel like you might be foolish by putting out the wrong information to the wrong person or not considering their persona. So again, I know I'm being a little highbrow in there, I could be more tactical, but these are the things that I think are the most transformational as we have this discussion.

KL: Absolutely. And probably something that we'll just continue to see organizations adopt more quickly as folks become more familiar with artificial intelligence and machine learning. As you mentioned earlier, it's not new, but I feel as though it's gained a lot of attention more recently. I want to switch gears for just a minute and talk about the use of artificial intelligence and machine learning as a risk to organizations. There's this fear, unfortunately, that it's going to replace jobs, it's going to put people out of business, and I think we can both agree that that's not necessarily the case. So maybe you could share with us what types of conversations you're having with clients and with organizations in terms of how they embrace AI, what types of policies they're adopting, and what stance organizations are taking on this topic from a governance standpoint?

DM: Yeah. That's a great question. It's hard to be involved in a conversation in this particular domain of knowledge without talking about the risks. So taking a quick step back, you're absolutely right. We always need to at least acknowledge and make sure that we've addressed the risks in this conversation, because they are there and they are real. But I would also argue, persist, I guess, is a better word, that the risks aren't new. They're just more apparent and are going to be realized more quickly because we have increased our interaction with the machine. So it's not necessarily the machine that is causing the risk. It is facilitating it greatly, because it's doing things faster and it's giving you information that you're going to act or react to, or it could come to a conclusion and then take an action because you told it to. But again, we've inserted that risk.

So everybody talks about the risk of the technology, but it starts earlier, and it starts with us. Because we are of limited intelligence, we are now trying to train an intelligence, and that intelligence is derivative of ours. Its reasoning is: I've learned, I've understood, I've conceptualized, and now I can run a series of probability determinations to theorize within a certain margin of error. And right there is a risk. But it's no different than with a person. I'm going to continue to draw that parallel.

That risk of probability determination, of making a judgment based off of nearest neighbors or risk rankings and prioritizations and contextual analysis of different domains of knowledge and information like language: you add all of those things together without a definitive source of truth, and by the very nature of a probability determination, you end up with compounded margins of error that are much higher by the time you get to the end of the problem than when you started it. 5% here, 2% there, a fraction of a percent somewhere else. You add all of that up across a very complicated question... I write questions to ChatGPT sometimes that are a page and a half long, because I want it to have very specific context when it answers me.

They call that a cascade, and right, wrong or indifferent, that type of cascade effect is the thing that would be very harmful and damaging specifically in internal audit. Because what we do is very much driven by framework compliance and regulatory requirements, we don't really have a lot of room for margin of error in terms of interpretation: the intent of the regulation and the language of the regulation, and then therefore the compliance with that regulation and the controls therein that make up that compliance. We need a pretty stringent level of fidelity, let's say, within that decision tree and that chain of things.
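The compounding Dave describes can be made concrete with a little arithmetic. As a rough sketch, assuming each determination in a chain has a small, independent chance of being wrong (the per-step error rates below are invented for illustration), the chance that the chain as a whole contains an error grows well beyond any single step:

```python
# Sketch: how small per-step error rates compound across a chained
# ("cascade") series of probabilistic determinations.
# The step error rates below are illustrative assumptions, not measured values.

def compounded_error(step_error_rates):
    """Probability that at least one step in the chain is wrong,
    assuming each step errs independently."""
    p_all_correct = 1.0
    for e in step_error_rates:
        p_all_correct *= (1.0 - e)  # chain survives only if every step is right
    return 1.0 - p_all_correct

# Five chained determinations, each individually quite reliable:
steps = [0.05, 0.02, 0.03, 0.05, 0.02]
print(round(compounded_error(steps), 3))  # prints 0.159
```

Five steps that are each 95% to 98% reliable leave the full chain wrong almost 16% of the time, which is the kind of margin a compliance interpretation cannot tolerate.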

And so that's not something that we want to take lightly. We need to make sure that we do that properly. But the nice thing is, because we've been doing this for so long and that particular field is so well-formed, very well structured, and we've taught people how to do it using frameworks, we very much made the process programmatic. We needed to templatize it to do it at scale and to do it quickly, and to make our people effective and efficient. All of that did a great thing, and this is where the job part comes in. If you're living in 2023 and you've been around technology for a while, especially if you're still in the workforce, I don't see any reason why there isn't time to learn, adjust and adapt, just like we learned, adjusted and adapted when the iPhones and the tablets came out. This is not new technology. This is a new way to use technology, which is fundamentally different. New technology was the advent of personal computing, the advent of pocket-sized cell phones that weren't the size of a brick.

My kids will never know what a landline is. Okay. That's new technology. We're talking about new use of technology, and so there's time to learn, there's time to grow. And by the way, I have spoken to a number of internal audit practitioners here at RSM that are like, "Man, I don't want to lose my job. This thing can do this so fast." Yes, but look what it's doing. It's doing what it's doing because you are giving it context. You are asking it questions, you are asking it to do specific analysis.

So yes, we can program those things in, but what risk do we put on ourselves by doing that, and how much time is it going to take me or somebody else to do the testing to prove, not only to me as a business owner or a consultant but to my client, that I have worked out all the kinks and it works 100% of the time? Because if I was the client, I would demand a very high degree of proof before you came in and told me that a machine was going to do something that Bob over here has been handling, and Bob has my trust. I trust Bob. Why am I going to trust Bob AI? That's not how people work. So that's my philanthropic and philosophical take on the situation.

This is not new technology. This is a new way to use technology...But the data fidelity and integrity, confidentiality, even the data security and data management in general, are really more important than ever.
Dave Mahoney, security, privacy and risk director, RSM US LLP

KL: And that makes a lot of sense. I mean, it's clear artificial intelligence is not going to completely take out the human element of internal audit. There's still going to be some professional judgment that's needed. I mean, we talk about completeness and accuracy of data every day, and we know that there are still errors that can result from the use of AI and ML. So with that in mind, looking forward, what will we need to do to gain more confidence in the use of AI as part of our internal audit procedures?

DM: Great question. So we haven't really talked about data. I'm so focused on the human element of this, the ability to interact, I apologize. But data fidelity and integrity, confidentiality, even data security and data management in general, if I'm going to be broad about it, are really more important than ever. So let's create an internal audit scenario. I'm going to have lists of regulations. I'm going to have lists of controls. I'm going to have lists of corporate guidelines and governance, and then I'm going to have a lot of evidence. There are going to be a lot of different kinds of potentially sensitive information that an entity doesn't want released to anybody.

Well, if I go and build a system, and that's what this all is, it's all systems. If I go and build a system and I interlink all of this data, or, which wouldn't be very smart, I consolidate all of that data into a single isolated spot, I've really just compounded my risk if I don't have good data management and data security controls. Because I didn't have control of the data to begin with; now I've matched it all together, and you can get to it through a single interface. And now I have no idea who's getting access to it, how they're interacting with it, and what the outcomes are on the other side, because we haven't even talked about how to control the system. We just fed it a bunch of data.

Now obviously that's overblown. Hopefully nobody's being quite that cavalier, depending on what data they're feeding it. But that's really what it all comes down to: what data are you feeding it? We could build 100 AI systems where absolutely no one's going to get hurt. It's a database of cars and paint colors. My kids would... Actually, we built one the other day. I told my kids, "Here's how you would design a large language model car dictionary in Microsoft." And they thought it was great. They're eight and 10, by the way. I don't think they understood a word I said, but they pretended to enjoy their dad's conversation.

If we don't take the mindset that we need to govern and control these particular systems, the risk is highly compounded, because there's so much data and there's so much that you can do with it. But again, the human element: why is that important, and how do we avoid this? The training of these systems is what you're always going to hear people say is paramount: how well you train it, and the kinds of thinking that you put into that training. This is where it gets a little complicated, but let me give you a simple example that makes it relatable. Let's say that we feed a list of regulations to the system, and then we also give it a large language model context so that it can try to read and interpret that list of regulations. And separately, I have a list of technical controls.

Well, just because I taught the system how to understand the list of regulations, it still has very minimal context on those controls and how those controls apply. So I have to teach it how to reason. I have to create connections. I have to tell the model these things, which it would not know if I hadn't told it, just like a child wouldn't know. You would not know that these were related, but I'm going to tell you. Now, here's where it gets tricky: I'm going to try to explain to you mathematically, graphically, how they're related.

Well, guess what? Most of the time, unless we're talking about math or physics, how is very subjective. And I think anybody in internal audit who has worked with regulators knows that there is a degree of subjectivity, and sometimes we are better lawyers than they are and say, that's not what that means. Given that that real circumstance happens, now you have to try to program thought processes and decision trees into a computer. Are we following where the problem could come in? This could get complicated really quickly. I'm the person drawing the connections. My team is drawing the connections. So we have to put guardrails and guidelines on how we draw those connections. We can see how this could get out of control really quickly.
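The connection-drawing Dave describes can be pictured as a small, human-curated map: someone asserts weighted links between regulations and controls, and the system can reason only over the links it was given, with the weights encoding the team's subjective judgment. A minimal sketch, where every name and weight is invented for the example:

```python
# Illustrative sketch: relating regulations to controls as human-asserted,
# weighted links. All regulation/control names and weights are invented;
# in practice a team curates these links, and the weights reflect their
# (subjective) judgment of how strongly a control supports a regulation.

from collections import defaultdict

class RegControlMap:
    def __init__(self):
        # regulation -> {control: weight}
        self.links = defaultdict(dict)

    def relate(self, regulation, control, weight):
        """Record a link. The model knows only what we explicitly tell it."""
        self.links[regulation][control] = weight

    def controls_for(self, regulation, min_weight=0.0):
        """Return linked controls, strongest judgment first."""
        ranked = sorted(self.links[regulation].items(),
                        key=lambda kv: kv[1], reverse=True)
        return [control for control, w in ranked if w >= min_weight]

m = RegControlMap()
m.relate("data-retention-rule", "backup-policy", 0.9)
m.relate("data-retention-rule", "access-logging", 0.4)
print(m.controls_for("data-retention-rule", min_weight=0.5))  # prints ['backup-policy']
```

The point of the sketch is exactly Dave's: nothing in the structure validates whether a 0.9 or a 0.4 is right. The guardrails have to live in how the humans decide to draw and weight those connections.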

KL: Absolutely. So Dave, obviously with all the buzz, there's so much to digest and wrap your arms around on this topic, from understanding how to use it as a practitioner to how it's going to impact our businesses. So if you were to give advice to an internal auditor who is looking to understand AI, machine learning and their use within their organization, what would you suggest they do? What are two or three things that would be a great next step to better understand the topic? And then how could that be taken back to the internal audit function and adopted, or audited, as part of their annual plan?

DM: I'm going to try to break this down. I'm going to try to give you some general counsel that I think is good for most everybody, just given the circumstances, and then I'll try to make it specific again. But in general, our perspective is this: we need to be very educated about this particular domain of knowledge. We need to understand it so that every conversation doesn't turn into a philosophical waxing of words, but we're actually talking about real things and driving value, because that's what interests most of us in business. So that's the first thing we all need to do: let's get educated. Let's understand the opportunities and how we use this to drive value. The world is your oyster. We do not believe in the mindset of, my word, I'm going to hang onto this ledge with my fingertips and be afraid. We don't believe in that. We want to be understood and we want to understand. So let's go and get educated.

An easy way to get educated: go read. There is so much information on this. I don't care if you go to Microsoft or Google, it doesn't matter. Read everything that you can get your hands on, and go and use some of the tools, the public AIs, the OpenAIs. Play with them, experience them, get to know what they do. Learn about prompting and how to prompt AI so that you can get back things that work well for you, even in a public domain. And everyone, whatever entity is the data owner or data steward, needs to understand this: if you do not control the model, the learning model, the language model, the processing model, or the underlying data that the system relies on, don't feed it information that you don't want other people to have access to, please.

So let's start with understanding, but then the next thing is, let's be intentional. We talked a lot about opportunity, and this is a great and transformative change. Great. But I'm also a risk professional, so while I see opportunity, I also see a whole bunch of things that I don't want to fall into. So let's do our best to understand where these risks are in the domains that are important to us in business. I don't want to damage my reputation. I don't want to hurt my clients. I can't do anything illegal. I have a set of values that I've built my company on. I need my people to operate within a framework that's acceptable to me: acceptable use policies and employee handbooks and things of that nature. All of that level of governance of how we run our business, depending on what we're going to use this tool for or what opportunity we want to pursue, we have to build into the system.

If you're going to have a person be accountable to your acceptable use policy, to your handbook, and to your applicable laws and regulations, then this system needs to be too. Artificial or not, if we're going to call it an intelligence, even a fake one, then it needs to be just as accountable as we are so that it can't put us in danger from a business perspective. And that's not hard to accomplish, by the way. It's actually a very accomplishable task. It takes some work, but it's accomplishable.

So in terms of internal audit, here's what I suggest. More tools are going to be released. People are dumping billions and billions of dollars into this technology. You are going to see rapid prototyping and rapid development like you have never seen before. Caveat: not a lot of that rapid prototyping and development is going to be done using secure systems engineering design principles, which concerns me and is part of the reason why I'm here today talking about this, because that's my passion, secure systems engineering design. I honestly don't believe that there are enough people who do what I do and know the kinds of things that I know, or are afraid of the things that I'm afraid of. And I want to make sure that those people are getting out to the forefront of this particular topic, because we can be the key enablers. We can keep people from those pitfalls and get to value faster.

But internal audit needs to play context games. You need to understand how to query these systems. And sorry, one more thing: because these tools are going to be developed, and developed rapidly, there are going to be all kinds of tools released specifically around internal audit, because enterprise risk management is a massive domain. It's a huge domain, with upmarket domains within it. This is going to get tooled like you've never seen before, I guarantee you, and it already is. That's why everybody's a little nervous, and why people are saying, "I'm going to lose my job."

Well, don't think of it that way. Again, this is an opportunity for you too. Learn to play with these tools, learn how to consume them, and learn how to consume them at the next level compared to the person next to you. Differentiate yourself in this space by understanding and learning, by being very intentional and pursuing it. Whether you're an individual or a corporation, this is now a competitive requirement to be able to operate. I guarantee you, without even thinking about prevaricating, I have probably doubled my output just by being able to consume ChatGPT and play contextual games with myself to create better understanding when I'm generating things, whether for my boss or for my staff or for my peers. And I thought I was already a pretty productive guy. That's pretty transformative.

So if you're an internal auditor, that's how you need to be assessing your life as well. Take control, understand and be understood, be intentional, and pursue your opportunity. This is an opportunity for everybody, for businesses and for individuals; we need to lean into this. We all leaned into the tablets and the phones. This shouldn't be too much harder.

KL: Absolutely. Well, Dave, clearly there's a lot on this topic, and I really appreciate the insights you shared today. There's a lot to unpack, and I'm sure this will be the first conversation of many, but my last question for you today is: what do AI and ML look like through the lens of internal audit 5, 6, 10 years down the road?

DM: It's going to be like the cell phone. What was the landline? You mean we used to do this in spreadsheets? You mean we used to fill out forms? You mean when we did an audit plan, we built Gantt charts in PowerPoint? That's what it's going to mean. High degrees of automation. You're going to be able to write reports. You're going to be able to query reports. You're going to be able to give answers to things that used to take hours or days, and you're going to be able to do it in seconds. That is all enablement.

KL: An exciting time indeed. But thank you again for your time today. Really enjoyed our conversation and look forward to learning more from you on the topic.

DM: Thank you so much for having me. It was great.

KL: Thank you to RSM's Dave Mahoney for his insights. And thank you to our listeners for joining today.
