KL: And that makes a lot of sense. I mean, it's clear artificial intelligence is not going to completely remove the human element from internal audit. There's still going to be some professional judgment that's needed. I mean, we talk about completeness and accuracy of data every day, and we know that there are still errors that can result from the use of AI and ML. So with that in mind, looking forward, what will we need to do to gain more confidence in the use of AI as part of our internal audit procedures?
DM: Great question. So we haven't really talked about data. I've been so focused on the human element of this, the ability to interact, so I apologize. But data fidelity and integrity, confidentiality even, data security and data management in general, if I'm going to be broad about it, are really more important than ever. So let's create a scenario for internal audit. I'm going to have lists of regulations. I'm going to have lists of controls. I'm going to have lists of corporate guidelines and governance, and then I'm going to have a lot of evidence. There's going to be a lot of different kinds of potentially sensitive information that an entity doesn't want released to anybody.
Well, if I go and build a system, and that's what this is, it's all systems, if I go and build a system and I interlink all of this data, or, which wouldn't be very smart, actually consolidate all of that data into a single isolated spot, I've really just compounded my risk if I don't have good data management and data security controls. Because I didn't have control of the data to begin with, and now I've matched it all together and you can get to it through a single interface. And now I have no idea who's getting access to it, how they're interacting with it, and what the outcomes are on the other side, because we haven't even talked about how to control the system. We just fed it a bunch of data.
So now obviously that's an overblown scenario, and hopefully nobody's being quite that cavalier, depending on what data they're feeding it. But that's really what it all comes down to: what data are you feeding it? We could build 100 AI systems where absolutely no one's going to get hurt. It's a database of cars and paint colors. My kids would... Actually, we built one the other day. I told my kids, "Here's how you would design a large language model car dictionary in Microsoft." And they thought it was great. They're eight and ten, by the way. I don't think they understood a word I said, but they pretended to enjoy their dad's conversation.
If we don't take a mindset that we need to govern and control these particular systems, the risk is highly compounded, because there's so much data and there's so much that you can do with it. But again, the human element, why is that important, and how do we avoid this? Training is always what you're going to hear people say about these systems: training is paramount. How well you train it, and the kinds of thinking that you put into that training. So again, this is where it gets a little complicated, but let me give you a simple example that makes it relatable. Let's say that we feed a list of regulations to the system, and then we also give it large language model context so that it can try to read and interpret that list of regulations. Well, now I also have a list of technical controls.
Well, just because I taught the system how to understand the list of regulations, it still has very minimal context on those controls and how those controls apply. So I have to teach it how to reason. I have to create connections. I have to tell the model these things, which it would not know if I hadn't told it, just like a child wouldn't know. You would not know that these were related, but I'm going to tell you. Now, here's where it gets tricky. I'm going to try to explain to you, mathematically and graphically, how it's related.
Well, guess what? Most of the time, unless we're talking about math or physics, "how" is very subjective. And I think anybody in internal audit who has worked with regulators would know that there is a degree of subjectivity, and sometimes we are better lawyers than they are and say, that's not what that means. So given that that real circumstance happens, now you have to try to program thought processes and decision trees into a computer. Are we following where the problem could come in? This could get complicated really quickly. I'm the person drawing the connections. My team is drawing the connections. So we have to put guide rails and guidelines on how we draw those connections. So we can see how this could get out of control really quickly.
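The idea of drawing explicit, human-reviewed connections between regulations and controls, rather than letting the system infer them, can be sketched in a few lines. Everything below, the identifiers, descriptions, and links, is invented for illustration and is not from any real framework or any system Dave describes.

```python
# A minimal sketch of human-drawn "guide rails": the links between
# regulations and controls are authored and reviewed by people, not
# inferred by the model. All identifiers here are hypothetical.

regulations = {
    "REG-1": "Customer data must be encrypted at rest.",
    "REG-2": "Access to financial records must be logged.",
}

controls = {
    "CTL-A": "Database encryption enabled on all customer tables.",
    "CTL-B": "Audit logging configured for the finance system.",
}

# The subjective "how these relate" judgment lives here, in a
# reviewable artifact, instead of inside an opaque model.
links = {
    "REG-1": ["CTL-A"],
    "REG-2": ["CTL-B"],
}

def controls_for(reg_id):
    """Return the human-approved control descriptions for a regulation."""
    return [controls[c] for c in links.get(reg_id, [])]

print(controls_for("REG-1"))
```

Keeping the mapping as plain data means the connections a team draws can themselves be versioned, reviewed, and audited, which is exactly the kind of guide rail the discussion calls for.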
KL: Absolutely. So Dave, obviously with all the buzz, there's so much to digest and wrap your arms around as it relates to this topic, just to understand how to use it from a practitioner standpoint, but then how it's going to impact our businesses. So if you were to give advice to an internal auditor who is looking to understand AI and machine learning within their organization, what would you suggest they do? What are two or three things that would be a great next step to better understand the topic? And then how could that be taken back to the internal audit function and adopted or audited as part of their annual plan?
DM: I'm going to try to break this down. I'm going to try to give you some general counsel that I think is good for most everybody, just given the circumstances, and then I'll try to make it specific again. But in general, our perspective is this. We need to be very educated about this particular domain of knowledge. We need to understand it so that every conversation doesn't turn into philosophical waxing of words, and we're actually talking about real things and driving value, because what interests most of us in business is driving value. So that's the first thing we all need to do: let's get educated. Let's understand the opportunities, how we use this to drive value. The world is your oyster. We do not believe in this "my word, I'm going to hang onto this ledge with my fingertips and be afraid" approach. We don't believe in that. We want to be understood and we want to understand. So let's go and get educated.
Easy way to get educated: go read. There is so much information on this. I don't care if you go to Microsoft, Google, it doesn't matter. Read everything that you can get your hands on, and go and use some of the tools, the beautiful AIs, the OpenAIs. Play with it, experience it, get to know what it does. Learn about prompting and how to prompt AI so that you can get back things that work well for you, even in a public domain. And everyone needs to understand this: whoever the entity is that is the data owner slash data steward, if you do not control the model, the learning model, the language model, the processing model, or the underlying data that the system relies on, don't feed it information that you don't want other people to have access to, please.
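As one way to practice the prompting Dave recommends, it can help to see how adding role and output-format context changes a prompt before it ever reaches a model. The template below is a toy sketch for practice; the wording and parameter names are invented, and no vendor API is called.

```python
# A toy prompt builder showing why context matters: the same question
# with and without role and output-format instructions. All wording is
# hypothetical and for practice only; no model is actually called.

def build_prompt(question, role=None, output_format=None):
    """Assemble a prompt from a question plus optional context."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    parts.append(question)
    if output_format:
        parts.append(f"Respond as {output_format}.")
    return " ".join(parts)

# A bare question versus one with audience and format guidance.
vague = build_prompt("Summarize this control deficiency.")
specific = build_prompt(
    "Summarize this control deficiency.",
    role="an internal auditor writing for an audit committee",
    output_format="three bullet points, each with a risk rating",
)

print(vague)
print(specific)
```

Comparing the two outputs against a public tool is a low-stakes way to experience how much the surrounding context shapes what comes back, without feeding the tool anything sensitive.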
So let's start with understanding, but then the next thing is, let's be intentional. We talked a lot about opportunity, and this is a great and transformative change. Great. But I'm also a risk professional, so I see opportunity, and I also see a whole bunch of things that I don't want to fall into. So let's do our best to understand where these risks are in the domains that are important to us in business. I don't want to damage my reputation. I don't want to hurt my clients. I can't do anything illegal. I have a set of values that I've built my company on. I need my people to operate within a framework that's acceptable to me: acceptable use policies and employee handbooks and things of that nature. All of that level of governance of how we run our business, depending on what use or opportunity we want to pursue with this tool, we have to build into the system.
If you're going to have a person be accountable to your acceptable use policy, if you're going to have a person be accountable to your handbook, if you're going to have a person be accountable to your applicable laws and regulations, then this system needs to be too. Artificial or not, we're going to call it an intelligence, even if it's a fake one, so it needs to be just as accountable as we are, so that it can't put us in danger from a business perspective. And that's not hard to accomplish, by the way. That's actually a very accomplishable task. It takes some work, but it's accomplishable.
So in terms of internal audit, here's what I suggest. More tools are going to be released. People are dumping billions and billions of dollars into this technology. You are going to see rapid prototyping and rapid development like you have never seen before. Caveat: not a lot of that rapid prototyping and development is going to be done using secure systems engineering design principles, which concerns me and is part of the reason why I'm here today talking about this, because that's my passion, secure systems engineering design. I honestly don't believe that there are enough people who do what I do and know the kinds of things that I know, or are afraid of the things that I'm afraid of. And I want to make sure that those people are getting out to the forefront of this particular topic, because we can be the key enablers, because we can keep people from those pitfalls and get to value faster.
But internal audit needs to play context games. You need to understand how to query... By the way, sorry, let me back up. Because these tools are going to be developed, and developed rapidly, there are going to be all kinds of tools released specifically around internal audit, because enterprise risk management is a massive domain. It's a huge domain, and there are upmarket domains. This is going to get tooled like you've never seen before, I guarantee you. It already is, and that's why everybody's a little nervous, and why people are saying, "I'm going to lose my job."
Well, don't think of it that way. Again, this is an opportunity for you too. Learn to play with these tools, learn how to consume them, and learn how to consume them at the next level compared to the guy next to you. Differentiate yourself in this space, again, by understanding and learning, by being very intentional and pursuing it. Whether you're an individual or a corporation, this is now a competitive requirement to be able to operate. I guarantee you, without any prevarication, I have probably doubled my output just by being able to consume ChatGPT and play contextual games with myself to create better understanding when I'm generating things, whether for my boss, my staff, or my peers. And I thought I was already a pretty productive guy. I mean, that's pretty transformative.
So if you're an internal auditor, that's how you need to be assessing your life as well. Take control, seek to understand and be understood, be intentional, and pursue your opportunity. This is an opportunity for everybody, for businesses and for individuals, and we need to lean into it. We all leaned into the tablets and the phones; this shouldn't be too much harder.
KL: Absolutely. Well, Dave, it's clear there's a lot to this topic, and I really appreciate the insights you shared today. There's a lot to unpack. I'm sure this will be the first conversation of many, but my last question for you today is: what does AI and ML look like through the lens of internal audit 5, 6, 10 years down the road?
DM: It's going to be like the cell phone. What was the landline? You mean we used to do this in spreadsheets? You mean we used to fill out forms? You mean when we did an audit plan, we built Gantt charts in PowerPoint? That's what it's going to mean. High degrees of automation. You're going to be able to write reports, query reports, and give answers to things that used to take hours or days, and you're going to be able to do it in seconds. It is all enablement.
KL: An exciting time indeed. But thank you again for your time today. Really enjoyed our conversation and look forward to learning more from you on the topic.
DM: Thank you so much for having me. It was great.
KL: Thank you to RSM's Dave Mahoney for his insights. And thank you to our listeners for joining today.