Safely social? Social media risks
If there’s one thing all the political parties had in common during the election campaign, it was that some candidates had posted controversial opinions on social media. However old the post, their comments could be republished and taken as evidence of their beliefs – and of those of others in their party. Such risks also exist for organisations – and recent research suggests that most do not have the processes to identify and deal with social media risk. Many do not appreciate the damage social media could inflict on them.
Social media risks have many guises. There are reputational risks if employees post defamatory stories about the organisation or opinions that run counter to the organisation’s stated culture. These can anger customers and deter investors and may lead to legal or regulatory scrutiny. Social media speculation can also cause panic – for example, bank collapses in the US in 2023 were exacerbated by social media activity.
These risks can be exploited by malicious third parties (protesters, criminals or state-sponsored hackers). Misinformation can spread around the world almost instantly, causing wider political concerns and fuelling conspiracy theories (for example, anti-vaccine stories that targeted pharmaceutical firms and research institutions). Social media posts also give criminals information that exposes organisations to cyber attacks, and they can be used to commit fraud and steal identity data. US action to ban TikTok because of concerns about its Chinese owners’ access to US data shows how closely social media risks can relate to geopolitical ones.
Fines for poor social media compliance in regulated firms can be substantial – last year, financial services firms in the US were fined a total of $549m for failing to maintain records of staff communications on messaging apps.
Given these risks, it is surprising that a recent report, “The State of Financial Services Compliance” by surveillance tech firm SteelEye, found that 63% of financial institutions do not monitor WhatsApp for compliance – and that only 33% of firms have fully implemented monitoring of relevant e-communications channels. While 22% of firms said they were introducing e-communications monitoring, 28% said they were strengthening their policies (without introducing monitoring).
Financial services firms in the US and the UK were ahead of the trend, with 63% and 57% of respondents respectively saying they had begun, or had fully implemented, monitoring of e-communications channels. In the UK, 37% said they fully monitored all channels. However, 69% said they expected the value of regulatory fines for failing to record communications to rise, and 63% expected the volume of fines to rise.
The complex and interconnected nature of social media and e-communication risks means that strong policies – and, in regulated organisations, monitoring of work communications across all channels – are essential, but even these won’t necessarily prevent harm.
“There are lots of elements here. For example, when Elon Musk tweeted that he would buy Manchester United football club, its shares spiked, only for him to say it was a joke,” points out Matt Smith, CEO of SteelEye. “It’s illegal in some regions to distribute research information to manipulate the markets, so that creates legal risks. In India, some employees were caught sharing inside information via Facebook.”
It’s harder to deal with external actors spreading misinformation or rumours. Financial services governance consultancy Risk Business wrote in a blog in March that “S&P Global Ratings (S&P) has recommended banks monitor social media as part of their liquidity risk management, because ‘while social media is unlikely to be the sole driver of a bank run, five billion users, an ability to rapidly distribute (sometimes false) information,’ and its potential to stress liquidity buffers, shouldn’t be ignored.”
Smith believes that good practice requires robust policies and watching social media trends. Regulators, he says, want to see proof that regulated firms are doing as much as they can and this means well-communicated, effective policies and evidence that they are capturing employees’ work-related messages via all channels.
“If you are not confident that your organisation can do this, then you should ban the use of social media channels, such as WhatsApp, for work purposes – as many banks have already done,” he adds.
However, this may prove hard to enforce. Many companies will trawl prospective employees’ social media in their recruitment processes, but it is not common practice to monitor employees’ posts – and doing so could lead to privacy complaints. A survey by tech firm Global Relay last year found that just 3% of firms were confident that employees would comply with such a ban.
Another common mistake is for policies to refer to specific channels, such as WhatsApp or Facebook. They should cover all channels, including emerging ones. Organisations must always act when people break the rules, and they need a response plan so that everyone knows what to do if a problem occurs, Smith adds.
Artificial intelligence provides new ways to monitor social media interactions, but that does not necessarily make doing so ethical. “There are more and more grey areas between personal and organisational use of mobile technology and communications channels,” Smith admits. “Some financial services people are carrying two phones again to keep work and personal life separate.” This wouldn’t stop someone deliberately breaking the rules, but it can help those who want to do the right thing.
Endorsements and advertising
One emerging risk that regulators are taking seriously is third-party endorsements – for example, celebrities who promote financial products on social media. Such endorsements can be used to manipulate the market and to make personal gains. Like other social media risks, this is a particular problem for regulated organisations, but it also creates reputational risks for manufacturers of other products.
In March, the Financial Conduct Authority (FCA) finalised guidance for financial services firms on financial products publicised on social media. “Unauthorised persons, such as social media influencers, who promote a regulated financial product or service without approval of an appropriate FCA-authorised person may be committing a criminal offence,” it warned.
It added that “poor quality financial promotions on social media can lead to significant consumer harm due to their wide reach and the complex nature of many financial products and services.”
In May the FCA charged nine people with promoting unauthorised trading schemes on Instagram in the first crackdown on “finfluencers”. However, Hugh Fairclough, partner at consultancy RSM UK, wrote “It’s easy to blame the celebrities, but it is the operators themselves who must be held accountable. It also highlights the need for the financial services establishment to rethink how it reaches younger customers who are otherwise exposed to misleading advice.” He called for tighter regulation of social media platforms.
Last year the government announced measures to clamp down on fake product reviews and in May this year it passed the Digital Markets, Competition and Consumers Bill (DMCC), which comes into force in the autumn. In addition to imposing new obligations on large tech firms, this introduces consumer protection over fake reviews. The Competition and Markets Authority (CMA) will police this as well as legislation passed last year to tackle corporate “greenwashing” via online and social media channels.
Law firm Linklaters advised all consumer-facing businesses to consider the impact of the DMCC. “The advent of direct enforcement and fining powers, coupled with the CMA’s focus on protecting consumers amid the cost-of-living crisis, means consumer protection compliance policies (relating to both existing and new rules) should be reviewed,” its lawyers warned.
“Recent CMA enforcement in relation to green advertising claims and price urgency (among others) shows the rising importance of consumer protection on the CMA’s agenda and the DMCC will empower the CMA to take meaningful action where it finds the law has been broken,” they said.
Tech companies’ use of consumer data to target personalised advertising has also come under scrutiny – Meta was fined €1.2bn last year under EU GDPR rules for transferring data from the EU to the US.
Non-financial misconduct
The risks are not just financial. The FCA recently warned financial services firms that it would clamp down on non-financial misconduct – and this could involve checking staff social media profiles. Susan Thompson, Partner at law firm Simkins, wrote in March: “The added scrutiny on employees’ non-financial conduct will lead to an increase of resource time being required from management and HR departments, who may now wish to monitor employees’ social media accounts to ensure that they are demonstrating ‘acceptable’ social behaviour.”
The UK Treasury “Sexism in the City” report was published around the same time, and Thompson pointed out that new employer responsibilities to “take reasonable steps” to prevent sexual harassment among employees (also expected to come into force this autumn) could add another layer of responsibility to monitor social media communications. “These measures could prove to be quite controversial with employees,” she wrote.
There are clear privacy issues here, and “controversial” may be an understatement for employees who feel that work intrudes enough into their lives already.
Internal audit’s role
Internal audit must understand the risks and how they are evolving. It is easy for policies and processes to become out of date as practices, platforms and legislation develop simultaneously. Internal auditors should ensure that the audit committee and board are aware of the main issues for their sector and that they have an effective strategy and recovery plan.
When it comes to solutions, this requires input from many parts of the organisation – in particular IT and HR teams – to identify how much monitoring is possible and desirable, and how staff should be educated. Internal audit can offer assurance and advice on crisis management and recovery, and ensure that social media risk is considered in culture audits.
Internal auditors can work with marketing and PR teams to ensure they are aware of issues around endorsements and advertising, and may need to question third-party suppliers’ awareness of risks and best practice.
“Social media risks in 2024 have never been greater! An organisation must be concerned not only with how it uses social media, but also how its employees and even customers engage on these powerful platforms,” advises Richard Chambers, Senior Advisor, Audit and Risk, at AuditBoard.
“When auditing this risk, IIA Global recommends that the overarching objective of any audit of social media should be to assess if the organisation has a social media strategy, and whether it addresses the governance, use, oversight, and approach to social media. Key risks are often present for each of those elements and should serve as a basis for the objectives and scope of any social media audit.”
Internal audit can also suggest proactive measures to manage potential threats before they escalate, adds Martin Douglas, Director at Protiviti.
“These could include recommending stronger data encryption methods, tighter access controls to sensitive information shared on social platforms or implementing sophisticated monitoring tools that can detect irregular activities or red flags,” he says. “Internal auditors can also help to shape a more informed approach towards managing social media risks by educating management about evolving trends and issues such as changes in regulations or emerging cyber threats.”
What is clear is that work-life balance is no longer just about the division of employees’ time. It is also about demarcating the boundaries in their social communications. Internal audit must understand the current risks – and also adapt quickly as new ones emerge.
This article was published in July 2024.