AI – new truths, new lies? (A&R magazine, Nov/Dec 2023)
We are at a tipping point in human evolution. That was the stark message from generative AI expert Nina Schick in her keynote speech at the Internal Audit Conference in October. Given that the term “generative AI” only entered mainstream use in 2022, we have already seen astonishing developments in both the capabilities and the use of this new type of artificial intelligence model. Netflix took 3.5 years to attract a million users and Facebook took 10 months to reach the same milestone – yet ChatGPT took just five days. Within 60 days, over 100 million people had used it.
Much of this uptake has been driven by the fact that both generative image models and large language models (LLMs), including ChatGPT, have been made freely available to the public – in some cases as open-source technology – meaning that anyone can use them. And the whole point of “generative” AI is that it generates new data, new images and new text.
Over the past 20 years, we have developed a vast infrastructure of computers, the internet and mobile technology. Data is already the most valuable asset of most companies and a massive privacy issue for individuals. Generative AI takes the capabilities of this infrastructure, and the use – and abuse – of this data, to a completely new level. The consequences for human life, politics and economics will be “seismic”, Schick warned. It will change what it means to be human.
Productivity and jobs
It is also unstoppable. The AI cat is out of the bag and it won’t go back in. The democratisation of the software has immense potential to increase profitability and productivity, which could benefit the developed countries that have been struggling with low growth. “There is no question about whether it will create value and increase productivity – it will,” Schick said.
McKinsey has predicted that generative AI could add up to $4.4trn a year to the global economy. Investment in the development of AI models and applications is soaring. While the US still leads the way by a considerable margin, China is also investing heavily, and the UK, the EU and Japan are all following these leaders. Prime Minister Rishi Sunak has launched a £100m fund for AI development and France’s President Macron has committed €500m.
Regulations will, of course, need to play catch-up. The EU has been redrafting its AI Act, which is expected to come into force in 2026, to take account of recent developments.
There are also obvious risks. Generative AI will affect jobs, and rapid growth raises important questions about how you ensure that the profits are distributed equitably. One early study quoted by Schick estimated that 80% of the US workforce could be affected by LLMs.
“It will cause disruption in the labour market because it will automate knowledge work – Goldman Sachs has estimated that 300 million jobs could be lost because of AI,” Schick said. “The picture is mixed, but the way we will work will change fundamentally.”
Furthermore, it has massive implications for the integrity and verification of data. Who and what do you trust when “evidence” can be created from nothing?
Internal auditors must keep abreast of the rapidly developing capabilities of generative AI, how it could be used by managers and staff, and what it means for their business processes and risks. They should also ask how their own function and others in their business could use it to generate new data and, therefore, fresh insights. They should consider what it might mean for future staffing requirements and question whether management is adequately assessing the strategic opportunities and risks it poses – some organisations may find they need to pivot rapidly.
Fact or fiction?
However, they should also be asking more fundamental questions. What does it mean for transparency, trust and documentation trails? Internal audit is all about monitoring and assessing evidence to come up with insights and conclusions. Increasingly, this has meant analysing data. In a world where generative AI can generate original data and where anybody can access the systems to create “deepfakes” that are indistinguishable from genuine videos, texts and data sets, how can we tell fact from fiction?
“It will be used for brilliant things – and it will be weaponised,” warned Schick. She pointed to Rule 34 of the internet: “If it exists online then there is pornography of it. No exceptions.” While there is a risk that generative AI could intelligently create data that is misleading or incorrect, but which looks convincing, there is also a certainty that malicious actors will use it to spread misinformation.
Furthermore, Schick, who wrote about these risks in her book Deepfakes and the Infocalypse in 2020, before the term generative AI was in common use, points out that it creates a “liar’s dividend”. If AI can generate anything at scale, it gives plausible deniability to everyone – “it never happened, it wasn’t me”. “How can you deal with digital information if it becomes so undermined that it means nothing?” she asked.
Humans are conditioned to trust their own senses. However, we have become increasingly aware of the role of unconscious bias and of an inbuilt human propensity for self-deception that makes us see what we expect or want to see, or believe what we want to believe. This has prompted us to place more reliance on “impartial” data. We have learnt to see data as more trustworthy than our own flawed and limited viewpoints and hunches.
This creates a new danger. Schick displayed photos of former US President Donald Trump being pursued and caught by police officers and of him sitting in prison. Another showed the Pope wearing a white Balenciaga puffer jacket. These all went viral and look convincing, yet they are wholly fake. On her website, Schick talks of a video of former US President Obama calling Trump a “complete dipshit”. This went viral and attracted more than 7.5 million views. It is also fake.
Fake data and fake sources pose a risk for decision-makers, but they also create reputational, cultural and employee risks. How do you counter false accusations backed by fake evidence? Could fake data undermine customer and employee trust in your organisation or your brand, or cause them to behave differently? Anyone who still believes the camera never lies needs a reality check – but where do they source reality?
Fortunately, technology provides solutions, just as it provides risks and opportunities. Schick spoke of her work with Truepic to publicise a way of embedding a cryptographic “signature” into digital content, so that documents and video – including AI-generated material – can be attributed to their source. Such solutions will help to restore transparency over the sources of information and could prove vital for internal audit and for any function seeking to verify or produce trustworthy data.
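To make the idea concrete, the sketch below shows, in Python, how a cryptographic signature can bind a piece of content to a known key, so that any later alteration is detectable. This is a minimal illustration of the general principle only – it is not Truepic’s scheme, and the function names are illustrative; real provenance standards, which embed signed metadata at the point of capture, are considerably more involved.

```python
# Minimal sketch of content signing and verification with an Ed25519 key pair,
# using the widely available "cryptography" library (pip install cryptography).
# Illustrative only: not Truepic's implementation, and the names are invented.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_content(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Produce a signature that can travel with the content as provenance metadata."""
    return private_key.sign(content)


def verify_content(public_key, content: bytes, signature: bytes) -> bool:
    """Return True only if the content is unchanged since it was signed."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    original = b"board minutes, 12 October"
    sig = sign_content(key, original)

    print(verify_content(key.public_key(), original, sig))     # True: intact
    print(verify_content(key.public_key(), b"tampered", sig))  # False: altered
```

In practice, the signing key would belong to a trusted capture device or publisher, and the signature and public key would accompany the content as metadata, allowing anyone downstream to check that what they are looking at is what was originally recorded.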
We already know from past experience with cybercrime that such solutions will help, but they are unlikely to prevent malicious use of rapidly developing generative AI capabilities. Internal auditors will need to ensure their organisations are aware of both the risks of fakes and the potential solutions, and are doing what they can to educate staff about these.
The AI cat will never go back in the bag. However, Schick was positive overall about the effect that generative AI will have on what it means to be human. “There will be disruption to the way we work and the way we live, but in that disruption, although there are risks, there are also immense opportunities,” she said.
There is on-demand access to Nina Schick’s keynote address and all other sessions from the Internal Audit Conference. Full details and costs are available on the Chartered IIA website.
This article was published in November 2023.