AI: Time to act

Love it or loathe it, artificial intelligence (AI) is here to stay – but there are huge questions about how and when it should be used and how best to seize the opportunities without incurring uncontrolled risks. The launch of ChatGPT and Bard has taken the AI story into a new chapter, prompting a rush to work out what this kind of large language model (LLM) can do for individuals, professions and organisations – and what it shouldn’t do. What is clear is that internal audit should be involved in the debate now. If it is not, the systems will develop further, organisations will use them and internal audit will be out of date and out of the loop.

There are many aspects of all such LLMs that should concern internal audit. The government’s Centre for Data Ethics and Innovation (CDEI) published a research report in late March entitled “Public Expectations for AI Governance (Transparency, Fairness and Accountability)”, which raises many issues around trust and governance. In the same month, the government published an AI White Paper to “guide the use of artificial intelligence in the UK, to drive responsible innovation and maintain public trust”.

Trust is a critical factor in both these documents and with good reason, according to Darren Roberts, assistant director for technology and digital audit at SWAP Internal Audit Services. “While it is important that we appreciate the wider potential societal benefits of these technologies, I think it’s crucial that IT internal auditors are included in the wider debate,” he says. “We need to look at these risks. The government has only just decided to ban TikTok on government devices, yet that’s been around for years. Internal audit should be involved with decisions on emerging technology risks such as these from the outset.”

One major concern about current LLMs is that they are third-party applications. “Once you hand your data over, you’re not getting it back – they are learning models and need data,” Roberts warns. “To ask an external LLM a question, you need to set up a profile, so it knows who you are and what knowledge you are potentially missing.”

Start talking

At the very least, internal audit should be talking to management about policies and controls over the use of LLMs in the workplace – all staff must be aware of the risks and management should decide if they are to be used and by whom. “Ask what the trade-off is for your organisation,” Roberts says. “It may be useful, but you need to ensure sensitive and private data is not exposed.” These risks apply to all third-party apps, so this is an opportunity to raise awareness at the highest level.

“When people are excited by technical hype, internal auditors can be seen as barriers,” he explains. “While as an IT auditor I don’t want to pop digital dreams, I do want to provide advice that will prevent digital nightmares.”

This technology is developing fast, and regulators and management are playing catch-up. Other key issues that require internal audit input around risk exposure and policies controlling staff use of LLMs centre on: the accuracy of data; deliberate manipulation of facts; breaches of data privacy laws; and risks of bias.

AI systems are trained on historical data, and this is dangerous even when the data is accurate. Using past data to screen job applicants’ CVs, write job descriptions, identify “high-risk” types of people or set fees or insurance premiums could lead to discrimination lawsuits. At best, it could cause organisations to reject good candidates or lose customers.

This is a wider problem with much of the AI now available. Visual recognition packages may struggle with darker skin tones, while voice recognition programmes work less well with some accents. Others promise to read “emotions” over video, which could particularly affect people who are neurodiverse.

While LLMs have exciting potential to increase efficiency and change the way we work, governance is a critical issue and internal audit must ensure that organisations have assessed the risks comprehensively and produced impact statements. The CDEI’s guidance on best practice includes making people aware when they are engaging with, or affected by, AI; publishing AI policies; and ensuring human intervention at every stage of AI processes, so individuals can contact a human operator to give feedback and complain if they are negatively affected.

Embrace super powers

Although ChatGPT and Bard are third-party systems, some organisations are already using language AI models internally to great effect. The new LLMs have raised the bar and made people excited about the potential when more sophisticated products become available. Iain McGregor, director of innovation and development at the Government Internal Audit Agency (GIAA), works at one of them.

“We already use an in-house AI language system which enables us to scan data from the 1,500 audit reports we produce across government each year,” he says. “It’s given us a far deeper understanding of how the whole government works and I can see a huge opportunity to improve the way we extract data and cross-reference it to gain greater insights in future.”

He adds that organisations on tight budgets need not be left out. “Much of what we’re using now is ‘AI-lite’, but it’s useful and easy to obtain and use,” he says. “We are using it to extract sentiment from reports – identifying positive and negative tones to spot opportunities to improve. For example, we are looking to increase staff wellbeing by identifying areas of best practice to promote. Basically, this tech gives us super-human scanning and reading skills.”
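The GIAA’s tools are not public, but a minimal sketch of the kind of sentiment scan McGregor describes – flagging positive and negative tones in report text – might look like the following, assuming Python with the NLTK VADER analyser. The report extracts are invented for illustration and are not drawn from real audit findings.

# Minimal sketch: flag positive/negative tone in audit report extracts.
# Assumes: pip install nltk, plus a one-off download of the VADER lexicon.
# The extracts below are invented examples, not real audit findings.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyser = SentimentIntensityAnalyzer()

report_extracts = [
    "Controls over supplier payments were well designed and operating effectively.",
    "We found significant weaknesses in access management and no evidence of review.",
]

for text in report_extracts:
    score = analyser.polarity_scores(text)["compound"]  # -1 (negative) to +1 (positive)
    tone = "positive" if score > 0.2 else "negative" if score < -0.2 else "neutral"
    print(f"{tone:>8}  {score:+.2f}  {text}")

Run over hundreds of reports, a simple scan like this is how “super-human reading” becomes a way to spot patterns worth a human auditor’s attention.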

LLMs such as ChatGPT offer huge potential to develop this work further. “I said ‘wow’ five times in five minutes when I started experimenting with it and I don’t think I’ve done that since I saw my first computer,” McGregor says. “It crosses another threshold.”

Of course, it can be inaccurate – but so can humans, he adds. The data it uses is flawed, because it comes from people, so internal audit should always seek verification. At best, LLMs prompt people to think better and more creatively. “We’ve been experimenting with ChatGPT using non-sensitive information and asked it questions about the consequences of hypothetical actions,” he says. “It was fantastic – it gave us six really interesting ideas which would be very useful to an auditor writing a report.” Furthermore, he found it good for creating terms of reference for a new audit area. “It gave us lots of ideas that we may not have considered.”
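As a purely illustrative sketch of the sort of experiment McGregor describes – asking an LLM for terms-of-reference ideas using only non-sensitive prompts – a call to the OpenAI chat completions endpoint could look like this. The model name, prompt wording and environment variable are assumptions for the example, not details from the article.

# Illustrative only: ask an LLM for terms-of-reference ideas for a new audit area.
# Assumes the OpenAI chat completions API and an OPENAI_API_KEY environment variable.
# Send nothing sensitive - treat the output as prompts for thought, not as facts.
import os
import requests

prompt = ("Suggest five areas to include in the terms of reference for an "
          "internal audit of how our organisation governs its use of AI tools.")

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",  # assumed model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

Whatever comes back is a starting point to be challenged and verified, not an answer – a point the contributors return to below.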

Be sceptical

Scepticism is as important as ever, warns Shehryar Humayun, audit director, models, data and applied sciences, at Lloyds Banking Group. “All data is inherently biased, so we must look for verification of the answers provided by models to avoid adverse outcomes,” he says.

“There is a risk that if a system sounds human, people may forget that it is not – and may think information is more reliable than it is,” warns Humayun’s colleague Boldizsar Kavas, senior audit manager at Lloyds Banking Group. “Don’t use it as an oracle.”

He argues that internal audit leaders must engage with the exciting possibilities, but also ensure that all auditors are trained to challenge what they read. “Internal audit can help the business to understand the implications because internal auditors understand risk – how to question things and the value of scepticism – as well as how the business works across operations,” he says.

“Our team is doing this in our business because we are small and agile and can adapt and move fast – we can experiment with natural language processing (NLP) models and question-and-answer engines,” he explains. “Our systems aren’t as sophisticated as ChatGPT, but it has shown us how such systems will develop.”

Peter Williams, deputy director, internal audit specialisms at GIAA, adds that internal audit should inform organisational decision-making about implementing new forms of technology through both its assurance and advisory activity and its influence in board-level conversations. The establishment of policies, procedures and guidelines on LLMs is essential – and urgent.

“Assess whether your organisation has a consistent and clearly understood definition of AI, strategy for its use and a grasp of both the risks and opportunities it presents,” he advises. “Senior leaders across all sectors are struggling, understandably, to get to grips with the speed of change – many may not even be aware of the extent to which AI has already been adopted within their organisation, and internal audit should play a key role in increasing awareness and visibility and informing risk appetites.”

Organisations should also beware of the hype – the old adage holds that if it looks too good to be true, it probably is. “Internal audit needs to advise on updating the governance and policy framework before anyone uses LLMs,” Roberts says. “You need a business case before you buy any new IT – start with the business need and the benefit you want to achieve, not with the product.”

An LLM is a tool, not an endgame, he adds. “If I walk into an Italian restaurant and ask for a lasagne and they give me a pre-packaged supermarket meal, I will be disappointed. That may be what an LLM will give you. It could tell you what you already know and you need to ensure you know where its ‘truth’ is coming from and add the extra value yourself.”

“Things will develop fast in the next 12 months,” McGregor predicts. “AI is not going to replace IA, but IA plus AI will replace IA without AI.”

This article was published in May 2023.