Enhancing internal audit’s impact with AI

What do we mean when we talk about artificial intelligence (AI) in an internal audit context? How can AI aid and support internal audit work? What are the emerging ethical dilemmas? How can internal audit seize the opportunities AI presents, while mitigating the risks? And what skills and training will internal auditors need to use it well? Last, but not least, will machines replace human internal auditors?

These were some of the questions discussed by participants at a roundtable hosted by the Chartered IIA and Workiva in July. While many of the audit leaders present were still experimenting with AI, the discussions showed that AI was already being used widely and innovatively.

Most agreed that AI could transform internal audit practices without removing the need for human internal audit skills. While how, when and why humans need to intervene would change, internal auditors equipped with AI capabilities would be able to undertake more work than ever before, more effectively and more efficiently. The new possibilities are exciting – but the risks cannot be ignored.

Host Ash Varsani began by likening the development of AI capabilities to driving an increasingly autonomous car: you start with basic assistants that provide information and directions, move on to semi-automation while still keeping your hands on the wheel, and finally decide to lift your hands altogether.

Each stage requires appropriate risk assessments and controls. For example, he asked: “If you take your hands off the wheel, at what point should you put your foot on the brake? When the car is going at 70mph, 80mph or 90mph?”

We need to think about these things now, because the technology is in its infancy but developing fast. “We’re teaching computers to take their first independent steps using algorithms,” he said. “You start with an algorithm that, for example, describes a vehicle as an object with four wheels, windows and a steering wheel, and then you add details such as colour and other characteristics.”

However, it won’t be long before these systems are walking confidently, and internal auditors must understand their uses and risks before that happens.

Half of those present said they were currently using AI in internal audit. However, the extent and sophistication of these uses varied widely, from Excel-based experiments to bespoke systems.

When asked whether AI was likely to replace their jobs, some expressed concerns. However, Varsani argued that, at least for the moment, AI is a vital tool for increasing efficiency and will not replace human internal auditors – although internal auditors who cannot use AI may be at risk. “AI still requires people to tell it what to do,” he said.


Controls

One common use of semi-automation is to maintain and improve controls. Clustering records by shared characteristics highlights overlaps that can be used to create controls and to identify exceptions to them. “For example, once you have a system that identifies a vehicle, you can group everything that’s red and has wheels,” Varsani said. The technology does not have to be sophisticated – it can be done in Excel – and it is often used to group and analyse transactions, correlate information and spot anomalies, for example, an unusually high volume of transactions on a Sunday morning.
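
To make this concrete, the sketch below shows the sort of analysis described above in Python rather than Excel – a minimal illustration, assuming a hypothetical transactions.csv export with a timestamp column – grouping transactions by day and hour and flagging unusually busy periods:

```python
import pandas as pd

# Hypothetical export of transaction data; the file and column names are assumptions.
df = pd.read_csv("transactions.csv", parse_dates=["timestamp"])

# Count transactions for each day-of-week and hour combination.
df["day"] = df["timestamp"].dt.day_name()
df["hour"] = df["timestamp"].dt.hour
counts = df.groupby(["day", "hour"]).size().rename("volume").reset_index()

# Flag periods whose volume is far above the overall mean -
# for instance, a spike of transactions on a Sunday morning.
threshold = counts["volume"].mean() + 3 * counts["volume"].std()
print(counts[counts["volume"] > threshold])
```

The same grouping and exception logic can be built with pivot tables in Excel; the point is the clustering, not the tool.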


Chatbots and NLP

Chatbots have been around for years, but they have only recently started to use AI. Like other AI tools, they need to be taught what data to use. Some participants were developing AI chatbots. “We’ve trained a basic chatbot using generative AI, but we’re using our own data, so we can control what it knows and says,” one explained.

Another said their team used Copilot for first queries. “We’re developing the art of the prompt,” they said. “If you want a better answer, you need a better question, and this is a new skill. We’re working out how to ask the best questions.”

Varsani agreed that the knowledge base is crucial. “If I ask OpenAI to generate an audit template, it will do it. But if I ask it to generate one using private data, then it will base it on what we already do in our organisation, drawing on our data and previous audits.”
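
As a minimal sketch of this kind of grounding – not a description of any participant’s system; the model name, file name and prompts are assumptions – an organisation’s own material can be passed into the prompt using the OpenAI Python SDK:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical in-house context: extracts from previous audits and templates.
context = open("previous_audit_extracts.txt").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model choice is an assumption
    messages=[
        {"role": "system",
         "content": "You are an internal audit assistant. Answer only from "
                    "the organisation's own material supplied below.\n\n" + context},
        {"role": "user",
         "content": "Draft an audit template for supplier payments."},
    ],
)
print(response.choices[0].message.content)
```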

One participant said they were testing ChatGPT’s ability to generate audit templates based on their own data to control the outputs and “eliminate rubbish”. “We tested this on internal auditors and it was really helpful, although some felt it was not very creative,” they said.


Words and pictures

One huge development is that AI can now extract information from images and scanned documents, such as PDFs and photos. This isn’t perfect, but it is developing rapidly. “Video recognition could be valuable in, for example, pharmaceutical equipment manufacturing, where it could enable AI to identify sub-standard items,” Varsani said.
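
As a simple illustration of extracting text from an image – a sketch using the open-source Tesseract engine via pytesseract, with a hypothetical file name, rather than any tool discussed at the roundtable:

```python
from PIL import Image
import pytesseract  # requires the Tesseract OCR engine to be installed

# Hypothetical scanned page; the file name is an assumption.
page = Image.open("scanned_invoice.png")

# Extract raw text from the image. Accuracy varies with scan quality,
# so outputs should be reviewed before being relied on in an audit.
text = pytesseract.image_to_string(page)
print(text)
```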


Data

However, data – its sources, uses, accuracy and risks – is also a concern. How do you control it, meet regulations, limit AI models to data you can trust, and know where all the information they use is stored? The growth of AI means that organisations must address these issues and put adequate controls in place before they start to test AI.

“We’re used to managing data, but AI brings in a whole range of new issues,” Varsani warned. “You need to put in semi-autonomous controls so that a specific series of events triggers a response – when you put your foot on the brake.”
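
In code terms, such a semi-autonomous control might look like the minimal sketch below – the event names and threshold are invented purely for illustration – where a defined series of events triggers escalation to a human:

```python
# A minimal sketch of a semi-autonomous control; purely illustrative,
# with made-up events and an arbitrary threshold.
ALERT_THRESHOLD = 3  # number of trigger events before intervening

def review_events(events: list[str]) -> str:
    """Escalate to a human reviewer once enough trigger events accumulate."""
    triggers = [e for e in events
                if e in {"failed_check", "unusual_volume", "new_vendor"}]
    if len(triggers) >= ALERT_THRESHOLD:
        return "escalate: human intervention required"  # the foot on the brake
    return "continue monitoring"

print(review_events(["ok", "failed_check", "unusual_volume", "new_vendor"]))
```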


Speech

When asked whether they were using speech transcription in audits, one participant said they used it in planning meetings and potentially in exit interviews. Another said they used Copilot to summarise speech in meetings and were planning to use it generatively to suggest recommendations and produce reports.
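
As an illustration of speech transcription – a minimal sketch using the open-source Whisper model with a hypothetical recording, not the Copilot workflow the participants described:

```python
import whisper  # the open-source openai-whisper package; requires ffmpeg

# Load a small general-purpose model; larger models are more accurate but slower.
model = whisper.load_model("base")

# Hypothetical recording of a planning meeting; the file name is an assumption.
result = model.transcribe("planning_meeting.wav")
print(result["text"])
```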

“We’re seeing people using AI to record and transcribe board meetings so the data can inform activities across the organisation. We expect more of this in the future,” Varsani said. “This makes it even more essential to have a clear policy about when and where data should be used, so you can create controls.”


Risks and ethics

Some organisations use AI to generate ideas for mitigating AI risks and creating controls. However, the ubiquity of AI in everyone’s lives can create unwarranted trust in its findings. “Customers and staff can be too trusting of AI – it’s too easy to forget that it has no sense of ‘truth’,” one participant said.

“Imagine you could ask AI to scan all videos and images relating to a case to provide evidence. What are the ethics of calling up videos not just of this case, but everything related to it?” Varsani asked. “Could I ask AI to scan faces and mannerisms in a meeting and use it as a lie detector? It is already being used to look for pupil dilation and stress indications. Where do you draw the ethical line?”

Internal audit also needs to be aware of tipping points. “When an AI control triggers a series of responses, you need to know when and how to intervene,” he said.

Further issues arise around how much you can trust information you don’t collate yourself. There is a risk of misusing data if you don’t understand its origins.

However, Varsani added that this is why internal audit will not be replaced by AI. “Risk analysis will be vitally important, and the profession needs to think about this and about appropriate policies. Internal audit jobs won’t go, but they will change.”