Open minds: the future of AI

Are we at the start of an AI revolution? Maybe. There are many unanswered questions about the direction in which commercially available AI is likely to go – but it is a hot topic that is unlikely to cool down soon.

At the very least, internal auditors need to be tracking developments that may soon be used in some form within their organisations and sectors. They need to be in the discussions as new AI systems start to be adopted more broadly, and they need to understand enough about them to be able to contribute positively to discussions about the opportunities and the risks that they create. This is true of all major corporate technology decisions, but AI carries a particular risk because AI systems will be able to learn and evolve themselves – in which case, the rules and safeguards need to be established before they begin.

At the same time, we should assume that AI will become part of the normal internal audit toolkit. You may not be able – or want – to be an early adopter, but you should watch closely how other teams are using the emerging technology and assess when and how you join them.

“Large language models (LLMs), like OpenAI’s ChatGPT, are one example of how AI can disrupt the internal audit world — and more developments are on the horizon,” says Alex Psarras, Analytics and Automation Lead for Next-generation Internal Audit at Protiviti. “The use of AI to support audit activities is not new. However, until recently, the skills required to implement and maintain AI techniques have created barriers to entry. On-premise LLMs, like Microsoft’s soon-to-be-released Copilot, which leverages OpenAI’s models, will make AI much more accessible.”

Copilot will integrate into Microsoft 365 applications, so it will have access to every document produced within Office 365. “That has enormous potential for automating audit activities. However, it also could create opportunities to exploit vulnerabilities and gain unauthorised access to documents, emails or call transcripts,” Psarras warns.

Despite this risk, the ability of AI systems to trawl vast data stores means there is little point in using them unless they can access your organisation’s data repositories. There must be a clear distinction between internal AI systems and third-party ones (such as ChatGPT and Copilot), and users will have to become more aware of which type they are using and of the data limitations that may affect the quality of responses.

Shehryar Humayun, Audit Director, Applications, Data and Applied Sciences, at Lloyds Banking Group, points out that “all data is inherently biased, so we must always look for verification, but in the near future we’ll be able to point this kind of tech at internal audit data and ask it for insights about, for example, culture, or to identify what issues could have a material impact, and get it almost instantly in the form of a draft report. More widely, others in the bank will be able to pull out anything from the bank’s data, which could be extremely useful. It will also make chatbots far more intelligent.”

 

Predictive auditing

Current developments in AI not only aid report-writing and data searches, but should soon mean that internal auditing is done in real time, rather than being based on historical data. “We’re already starting to plan on a vision that internal audit will shift from looking back over a six- or 12-month period to 100 per cent sampling happening all day every day, so you can highlight things as they happen, and the people responsible for managing them can act before they develop into a problem,” says Mark Burns, Managing Director UK and Europe, at consultancy Excelledia.
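As a rough illustration of what 100 per cent, always-on testing might look like in practice – a minimal sketch only, with all field names and thresholds invented for illustration, not any vendor’s product:

```python
# Minimal sketch of continuous, full-population transaction testing.
# Every field name and threshold here is an illustrative assumption.

APPROVAL_LIMIT = 10_000  # hypothetical single-approval threshold

def flag_exceptions(transaction):
    """Return a list of rule breaches for one transaction, as it arrives."""
    issues = []
    if transaction["amount"] > APPROVAL_LIMIT and not transaction.get("second_approver"):
        issues.append("amount above limit without second approval")
    if transaction["submitted_by"] == transaction.get("approved_by"):
        issues.append("submitter approved own transaction")
    return issues

# Every transaction is tested the moment it is posted -- no sampling.
stream = [
    {"id": 1, "amount": 4_500, "submitted_by": "ana", "approved_by": "ben"},
    {"id": 2, "amount": 12_000, "submitted_by": "cal", "approved_by": "cal"},
]
for txn in stream:
    for issue in flag_exceptions(txn):
        print(f"txn {txn['id']}: {issue}")
```

In a real deployment the rules would be far richer and the “stream” would be a live feed from the finance system, but the principle is the same: the whole population is checked as it happens, not a sample after the fact.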

This kind of proactive forward-looking internal auditing will not only help internal auditors to move closer to the strategic centre of the organisation, offering real-time support as things happen, but will also enable them to start to analyse future trends and possibilities. “For example, internal auditors will be able to say ‘we have found that this is occurring in factory 4 and we know from past experience that there is a correlation between this behaviour or event with an eventual crisis’. They can then recommend further reviews or actions,” Burns says.

Various forms of AI that enable this kind of predictive internal auditing are likely to become widely available and will change the focus for internal audit teams. However, it will be essential that internal audit makes it clear when AI has been used in a decision, and how. Tracking the decision-making process, and where the data comes from, will become ever more important as machines start to make decisions themselves. As machines do more, the human element – and people’s ability to understand and intervene in the systems – will become even more crucial.
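One way to make AI use in a decision traceable is simply to record, alongside each finding, what the model was asked, what data it saw and who reviewed the output. A minimal sketch – the field names and the model identifier are illustrative assumptions, not a standard schema:

```python
# Hypothetical provenance record for an AI-assisted audit decision.
import json
from datetime import datetime, timezone

def record_ai_decision(finding, model_name, prompt, data_sources, human_reviewer):
    """Capture what the AI was asked, what data it saw, and who reviewed it."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "finding": finding,
        "model": model_name,
        "prompt": prompt,
        "data_sources": data_sources,
        "human_reviewer": human_reviewer,
    }

entry = record_ai_decision(
    finding="Elevated duplicate-payment risk in factory 4",
    model_name="internal-llm-v1",  # assumed internal model identifier
    prompt="Correlate Q2 payment anomalies with prior incidents.",
    data_sources=["payments_q2.csv", "incident_log_2019_2023.csv"],
    human_reviewer="j.smith",
)
print(json.dumps(entry, indent=2))
```

A log like this gives internal audit something concrete to test later: which decisions involved AI, on what data, and whether a human actually signed them off.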

“This technology is moving so fast that all responses rapidly go out of date,” says Boldizsar Kavas, Senior Audit Manager at Lloyds Banking Group. “In future, I believe all internal auditors will use this kind of system all the time, but AI can’t have compassion or understand human mistakes. These systems can’t yet make the intuitive leap that humans can. Internal audit needs to understand how it works to instruct it properly and get the best responses.”

 

Self-programming tech

AI will change the nature of jobs. One day, Kavas predicts, organisations will be able to buy AI that will write the software they need and build its own systems. “This has huge implications for the tech industry and for AI,” he points out. “How much programming can you use and what do you need?”

Psarras adds that organisations are using LLMs to train the next generation of AI systems, so helping to produce more AI. “Human involvement is minimised in the process, creating a risk that the objectives of the AI do not align with human values and ethics. Moreover, AI systems can interact with the digital world around them using APIs, plug-ins and other online tools.

“The average internal auditor doesn’t understand enough about this yet,” he warns. “If internal auditors fail to acquire the skills to audit AI advancements and to enhance their methodologies, they will cease to be relevant. This tech is going to develop very, very fast.”

There will, inevitably, be an impact on jobs – and not always in the way we expect. AI that builds new AI systems could dramatically reduce demand for jobs in software development. We are already seeing how systems that are designed explicitly to extract and analyse data can reduce demand for previously high-value analysts.

 

Skills for the future

Internal audit leaders should be examining what skills they will need most in their own teams – the ability to operate AI systems, to exercise judgment about AI findings and spot errors, and the imagination to ask questions that use the technology effectively will be ever more important. However, they should also be asking their organisations to consider the jobs their business will need in the future. What jobs will be automated? Will data analysis be superseded by data cleansing? Will AI boost productivity and profits – or will it pose existential threats to their business model?

Internal audit skills need to develop fast. “We must understand the systems to spot potential problems and risks as they evolve. We’ve already seen specific skills requested that we didn’t think about months ago, let alone years ago,” says Esther Delgado, a Managing Director at Protiviti. “For example, people are looking for internal auditors who understand prompt engineering — the ability to formulate precise questions that eliminate bias or ambiguity and generate more accurate responses from an LLM.”
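The difference prompt engineering makes is easiest to see side by side. A sketch of the idea – the wording is invented for illustration and no particular LLM product or API is assumed:

```python
# Illustrative contrast between a vague prompt and an engineered one.

vague_prompt = "Summarise the audit findings."

def engineered_prompt(findings_text):
    """Build a precise prompt that constrains scope, format and tone."""
    return (
        "You are assisting an internal audit team.\n"
        "Summarise ONLY the findings below; do not add new facts.\n"
        "Output: three bullet points, each naming the risk, the root cause "
        "and the affected process. Flag any finding where the evidence is "
        "ambiguous rather than guessing.\n\n"
        f"Findings:\n{findings_text}"
    )

print(engineered_prompt("Duplicate supplier payments found in Q2."))
```

The engineered version tells the model what to include, what format to use and – crucially for auditors – what to do when the evidence is unclear, rather than leaving all of that to chance.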

While AI systems may be able to produce internal audit reports, human communication skills have never been more important. Internal auditors need to understand the impact made by an audit finding and use their judgment in a way that computer systems can’t. They also need to develop their critical thinking and scepticism and apply these to AI-generated output.

“We already know the systems can get it wrong. The art of the possible is huge, but we need people who can spot errors and bias and ensure that AI is used ethically, within strong corporate governance guidelines,” Delgado says.

Peter Williams, Deputy Director, Internal Audit Specialisms at the Government Internal Audit Agency, says that ChatGPT has “opened our eyes to potential developments within, and enhancements to, our internal systems”. While in-house LLM systems will revolutionise the way internal auditors can trawl data for information, they cannot (yet) rival humans for presenting arguments in an impactful way that influences the reader. However, they can produce good first drafts, so freeing people to focus on value-adding activity. Williams also believes they could potentially deliver training for internal auditors in complex, specialist areas, such as understanding liquidity.

 

Think differently

Bias will always be an issue with data, but Burns believes that AI used well could help to reduce it. “If you can get all the data flowing into the system from source, as it would do in an aircraft’s black box, you can ask AI to process it in an unbiased way. If you then ask it the right questions, you may find that it produces answers that are quite different from those a human being would come up with,” he says. “It won’t know all the answers, but it has the potential to make us think harder and differently. We need to be open-minded about what comes out.”

However, he warns that while machines are good at correlating facts, they are less good at identifying causation. Humans will have to do this, but we need to get better at asking the right questions, considering surprising answers and thinking critically about what those answers really mean.

Professor Rose Luckin, Professor of Learner Centred Design at UCL Knowledge Lab, talking on the BBC’s Today programme on 11 April, said that people will have to start differentiating between “information” and “knowledge”. While “information” will become ever more accessible via AI, she argued that schools, colleges – and, presumably, workplaces – must become better at teaching the knowledge to ask useful questions and to assess and deploy the information AI gives us.

“Do we have an education system that asks people to interpret the world around them better and more broadly?” Burns asks. “Our education is still based on a 19th-century model, so I don’t think we do.” 

 

This article was published in July 2023.