
Artificial intelligence and big data

A double-edged sword for risk management and internal audit

Apr 07, 2021

Napoleon Bonaparte once said that an army marches on its stomach. He, along with other enlightened generals of the 18th and 19th centuries, realized that success in long military campaigns depended on an effective supply line delivering the right food to soldiers at the right place and the right time; failure to do so could end in military disaster.

Today’s businesses are in growing need of a supply chain to succeed—not so much one of material goods, but of models and data. The fourth industrial revolution—this time, a digital one—is now fully underway. Remote working practices, established in light of the COVID-19 pandemic, have accelerated the pace of digital innovation, which has a voracious appetite for big data as well as more sophisticated tools that enable effective decision making.

Artificial intelligence (AI), once squarely in the realm of science fiction, has now become a business tool that can yield significant competitive advantage. Machine learning, a subset of AI, trains computers to make decisions using algorithms that learn from data: patterns of actual outcomes across large amounts of historical data tune the decision rules, which are then used to predict outcomes in new data.
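To make that train-then-predict cycle concrete, here is a minimal sketch in Python using scikit-learn and an entirely synthetic “historical loan” dataset; the features, figures and outcome rule are invented for illustration only, not drawn from any real lending model.

```python
# A hedged sketch of supervised machine learning on synthetic loan data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Hypothetical historical data: two features (income in $000s, debt-to-income
# ratio) and an observed outcome (1 = loan repaid, 0 = defaulted).
n = 1_000
income = rng.normal(60, 15, n)
dti = rng.uniform(0.05, 0.6, n)
repaid = (income / 100 - dti + rng.normal(0, 0.2, n) > 0.2).astype(int)

X = np.column_stack([income, dti])
X_train, X_test, y_train, y_test = train_test_split(
    X, repaid, test_size=0.2, random_state=0
)

# Training tunes the model's parameters to patterns in historical outcomes.
model = LogisticRegression().fit(X_train, y_train)

# The fitted model then predicts outcomes for data it has never seen.
new_applicant = np.array([[55.0, 0.35]])  # hypothetical income and debt-to-income
print("Predicted probability of repayment:", model.predict_proba(new_applicant)[0, 1])
print("Accuracy on held-out historical data:", model.score(X_test, y_test))
```

The point is the workflow rather than the model: historical outcomes shape the parameters, and the trained model is then asked to predict outcomes for cases it has never seen.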

AI algorithms are augmenting—and even replacing—human judgment in many decision-making tasks. Today, smart algorithms approve loans, provide interactive chat support, select and suggest products for us to buy, review contracts, price insurance, change traffic patterns and predict the weather. The digital decision maker, who never sleeps or gets tired, makes decisions continuously until told not to. As algorithms continually learn, they generate increasingly accurate outcomes over time.

Advances in processing technology mean that automated decision making will soon be capable of handling highly complex, real-time situations infinitely faster—and arguably, with better outcomes—than a human can.

This new era presents a double-edged sword: On one hand, AI introduces a new set of risks requiring careful management; on the other hand, it gives auditors and risk managers novel tools that advance their mission to protect the enterprise.

The risks

If the supply chain producing the data that fuels decisions is ineffective, unreliable, unavailable or insecure, the automated decisions and subsequent transactions will be flawed: open to error, manipulation and even fraud. Similarly, decision-making algorithms trained with bad data will make wrong decisions repeatedly, destroying any business advantage. The data used for training can contain errors, be incomplete or unrepresentative of the full population, or even mirror the unconscious or conscious human biases of the developer.

Unless training data is carefully chosen to fully and fairly represent the entire population of expected transactions, algorithms will not only produce suboptimal decisions but also harm an organization’s brand and reputation—or potentially even lead to regulatory scrutiny and sanctions.
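As one illustration of what “fully and fairly represent” can mean in practice, the short Python sketch below compares the customer-segment mix of a hypothetical training set against the expected population mix using a chi-square test; the segment names, counts and significance threshold are all assumptions made for the example.

```python
# A hedged sketch of a basic representativeness check on training data.
from scipy.stats import chisquare

# Assumed share of each customer segment in the full population of expected transactions.
population_share = {"retail": 0.55, "small_business": 0.30, "corporate": 0.15}

# Hypothetical counts of each segment actually present in the training data.
training_counts = {"retail": 7_200, "small_business": 2_100, "corporate": 700}

total = sum(training_counts.values())
observed = [training_counts[segment] for segment in population_share]
expected = [share * total for share in population_share.values()]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p-value = {p_value:.4g}")
if p_value < 0.01:
    print("Training mix differs materially from the population - investigate before training.")
```

Checks like this are only a starting point; they catch an obviously skewed sample, not subtler forms of bias.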

Furthermore, there are certain decisions that have consequences for the safety, well-being and health of a human population. Automated decision making in these situations can raise concern from an ethical standpoint: AI cannot yet mimic compassion, caring or even honesty, and thus does not take into consideration concepts such as societal good, truth or personal safety.

Selecting, cleaning and organizing the right data to train and run an algorithm can take up to 80% of the effort to develop an AI model. As data governance frameworks are more widely adopted, they will establish data structures and processes that make AI more efficient and reliable. Until then, data preparation remains one of the riskiest elements of an AI program.

Unfortunately, there is no single generally accepted standard of good governance over the development of AI, although a number of standard setters, such as the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE), are working toward this goal. With a lack of clarity over governance of both data and models, risk management and internal audit in this area remain rather subjective.

To be effective in this new environment, internal auditors and risk managers each require a strong working knowledge of how AI works and why good data hygiene matters. They must return to their core principles and continually ask, “What can go wrong?” This means applying the right blend of skepticism and challenge to strike the appropriate balance of risk and control, and then advising management accordingly.

The opportunity

AI promises to give risk managers and internal auditors a set of tools for making objective, increasingly accurate predictions of risk and of its likely impact on business controls. Consider the ability to judge whether an employee is more or less likely to make bad choices, commit fraud or be a “bad apple” based on available data (including social networks). In addition, imagine being able to do the following (a brief illustrative sketch follows the list):

  • Forecast with some degree of certainty when a business risk is likely to result in a significant loss, or when and under what circumstances a business control will fail
  • Predict cyberbreaches based on the monitoring of external messages and chats, or be able to identify the intent to commit fraudulent financial trades before they are made
  • Determine when a key project is likely to fall behind schedule or go significantly over budget, or when a new product will likely fail to deliver the promised benefits to the business
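As a sketch of one building block behind predictions like these, the Python example below applies unsupervised anomaly detection (an isolation forest) to flag unusual trades for human review; every feature, value and threshold is synthetic and hypothetical rather than drawn from any real monitoring program.

```python
# A hedged sketch of anomaly detection over synthetic trading activity.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=1)

# Hypothetical per-trade features: notional value ($m), booking hour, amendment count.
routine_trades = np.column_stack([
    rng.normal(5, 1.5, 2_000),   # typical notional values
    rng.normal(11, 2, 2_000),    # booked during normal business hours
    rng.poisson(1, 2_000),       # few post-booking amendments
])
new_trades = np.array([
    [5.2, 10.0, 1],              # looks routine
    [25.0, 23.0, 6],             # large, late-night, heavily amended
])

# Fit the detector to routine activity so it learns what "normal" looks like.
detector = IsolationForest(contamination=0.01, random_state=0).fit(routine_trades)

# predict() returns -1 for anomalies; flagged items go to a human reviewer.
for trade, label in zip(new_trades, detector.predict(new_trades)):
    verdict = "flag for review" if label == -1 else "looks routine"
    print(trade, "->", verdict)
```

The output of such a model is a prompt for human judgment, not a verdict; the reviewer still decides whether a flagged item matters.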

AI, if adopted well, enables a more informed business conversation about risk. Currently available technology allows risk managers and auditors to identify risks and more accurately quantify their likelihood and severity using a variety of AI and statistical techniques. These innovations have the potential to reshape the role of many functions, turning their respective teams into advisors focused on future outcomes rather than checkers of how well things went in the past.
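One long-established statistical technique of this kind is Monte Carlo simulation of loss frequency and severity. The short Python sketch below estimates an expected annual loss and a tail percentile from assumed distributions; the parameter values are illustrative, not benchmarks for any particular risk.

```python
# A hedged sketch of a simple Monte Carlo loss model with assumed parameters.
import numpy as np

rng = np.random.default_rng(seed=2)
simulations = 100_000

# Assumed parameters: an average of 3 loss events per year, lognormal loss sizes.
events_per_year = rng.poisson(lam=3, size=simulations)
max_events = events_per_year.max()
severities = rng.lognormal(mean=11, sigma=1.0, size=(simulations, max_events))

# Annual loss = sum of that simulated year's individual event losses.
event_mask = np.arange(max_events) < events_per_year[:, None]
annual_loss = (severities * event_mask).sum(axis=1)

print(f"Expected annual loss: ${annual_loss.mean():,.0f}")
print(f"95th percentile (a plausible bad year): ${np.percentile(annual_loss, 95):,.0f}")
```

Expressing a risk as a loss distribution rather than a single rating is what turns the conversation with management from “high, medium or low” into “how bad, how often and how sure are we.”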

The near-term future for risk managers and auditors will demand a significant change in skills. Teams will need to be versed in data science, statistical modeling and technology, as well as how to apply these techniques in a risk context. However, perhaps the primary need is to establish the same passionate drive to innovate that powered businesses to adopt AI in the first place. If risk managers and internal auditors fail to grasp the potential of new technologies to reshape their work, they will quickly become redundant.

Not since the advent of business computers has the promise of innovation had the capacity to drive such significant change in how enterprises are run. With that promise come new and emerging risks, as there is not yet a sound framework for establishing controls.

The quality of data in an organization, alongside how it is managed and used, has never been more important. The risk manager and internal auditor can play a key role in helping the organization stay in control as the adoption of new technologies transforms business.

Napoleon’s imperial dreams foundered during the Russian campaign: Despite his famous maxim, he overextended his supply lines, the enemy implemented a scorched earth policy and a deadly winter set in.

Securing your data supply chain, ensuring the risks in the AI development process are well managed and increasing your team’s “AI IQ” are key to success in this fast-moving environment.

Do nothing in this space, or move too slowly, at your peril.
