Interview with Professor Hannah Fry: Human and machine
If you think the pace of technological change is fast now, brace yourself – this is just the start. Professor Hannah Fry will urge delegates at the Internal Audit Conference in October to get to grips with the concept of exponential growth when considering how machines are going to change the way we work and what we can do. This isn’t about the future. It’s started already.
If anyone understands this complex and multi-layered topic, it’s Fry. A mathematician famous for applying mathematical concepts to solve practical problems on radio, podcasts and in writing, she also tackles the philosophical and ethical implications of technological change in her role as Professor in the Mathematics of Cities at London’s UCL. Her book Hello World: How to be Human
in the Age of the Machine is a bestseller (no easy feat for a book about algorithms). Her ability to explain arcane scientific and numerical concepts so that readers who parted company with maths after GCSEs can see how these ideas affect their lives is critical in a world dogged by misinformation and misunderstanding.
Exponential change
So, what do we mean by “rapid technological change”? “At the moment we’re in a period akin to March 2020 – when those who understood epidemiology were stocking up on tins and toilet roll, but everyone else was still meeting friends in the pub,” Fry says. “People who really understand the technological paradigm shift we’re going through are preparing and are already a long way ahead. Everyone else is a long way behind.”
Fry believes we are entering an age of human super-intelligence, when humans and machines working together will enable people to do far more, faster and more efficiently than ever before. This has exciting implications for world-improving inventions, for example in medicine and in combating climate change and environmental damage. It also has enormous potential for organisations’ processes and productivity. However, great power also brings great risks, both unintended and malicious.
“AI is creating so much opportunity and potential, but also the chance that many people will miss out,” she explains. “As Azeem Azhar explains in his book Exponential, when exponential change happens, nothing seems to shift at first and then everything changes at once.”
Individuals and organisations that have been developing their approach to AI in a linear fashion, cautiously waiting to see what happens and banking on long-term technological evolution, will suddenly find themselves falling further behind by the minute.
“I’m not so worried about a complete collapse of global economies and employment. I don’t think AI will replace people doing jobs, but I do think that a person with AI will replace someone without AI,” Fry warns.
By “person with AI”, she is not necessarily talking about biologically AI-enhanced humans, although she points to “some whacky stuff going on” in biological technology, including experiments with micro-computers that travel through the human bloodstream. Instead, she means the way in which experiments connecting existing AI programmes with data are already coming up with results that could transform our products and working lives.
She points to advances in science, such as an AI-generated database of folded proteins that enables biologists to skip years of laborious work and focus on finding new medical and commercial uses for them. Similarly, materials scientists using AI are speeding up the development of new materials that could perform a multitude of functions, such as transforming battery efficiency. Such a breakthrough would help people tackle problems from climate change to power security.
“Small advances in a nerdy niche science subject can lead to huge changes in the way we live and work,” she explains. “AI is accelerating progress and optimising the way people work in a way we’ve never seen before. We don’t need to understand the biology or material science, but their discoveries could transform everything we do.”
What we need to do now
Think hard about potential and opportunities – and be aware of the risks, Fry advises. We all need to consider what AI means for us, the way we do our jobs and our organisations.
“AI is not a superpower or a deity,” she adds. “It’s very good at what it does well, which is supporting projects that use a lot of data and have a clear definition of what success looks like.”
This is good news for internal auditors, since they too apply logic and imagination to large quantities of data. AI’s ability to accelerate scientific research by giving scientists a higher baseline of knowledge should similarly enable internal audit functions to reach better conclusions, faster, and to identify opportunities for improvement. Not only should this enable far greater efficiency, but it should also start to generate ideas for more transformative changes.
Threats and promises
Of course, there are also well-publicised risks. However, Fry is concerned that failing to keep up with change and to utilise the benefits of AI is itself a major risk.
“There are legitimate concerns about safety and employment that governments need to understand and address. We’ve learnt the hard way that there can be negative consequences to the best intentions, but we need to use AI if we are to find solutions to the greatest challenges we face, such as climate change, desertification and water shortages,” she says. “There is a lot of hope in science right now.”
She herself uses AI every day, asking Google’s Gemini questions and relying on it to trawl internet data for her. “You need to double check what it tells you, and you can’t upload sensitive data, but everyone needs to test it and understand what it does – while being careful,” she says.
And what of the future? Should we fear the machines becoming conscious and making humanity redundant? Fry is not worried about this yet. “I don’t think Gemini or ChatGPT will become conscious any more than Wikipedia will. They will just get bigger with a better conceptual understanding formed by fully integrating video, audio and written data sources.”
Her message to organisations and to internal audit is clear: understand the opportunities and risks of AI; take responsibility for seizing opportunities; boost resilience against negative outcomes; and recognise the transformative effects of exponential change and what this may mean for you and your organisation.
Until Google clones Fry and enables everyone to discuss the impact of technology on society with her from their laptop, anyone interested in technological change and the future of their organisation should come along to her keynote speech. Her insights will also provide a broader context for the many sessions focusing on specific technological risks and opportunities across the two days of the conference.
“I hope AI will change the world for the better,” Fry says. “Hope is active, not passive – it involves recognising risks, understanding the opportunities and taking responsibility for your own response to transformative change.”
This change is already happening and will accelerate. Internal auditors must harness the power of the machines and ensure that their boards do the same. We don’t yet know where we will end up, but it’s going to be one heck of a ride.
The Internal Audit Conference takes place at London’s QE II Centre on 2-3 October. Delegates can also attend virtually and access recordings of missed sessions after the event. Tickets and virtual passes, along with details of speakers, sessions and themes, are available on the Chartered IIA website. Attendees can also claim CPE points.
This article was published in July 2024.