FS-ISAC helps the financial sector guard against digital impersonations

Deepfakes are becoming increasingly sophisticated, but 60% of companies lack protocols to mitigate this growing risk. Cybersecurity non-profit FS-ISAC has released a whitepaper aimed at helping financial institutions safeguard stakeholder trust amid this growing threat.


The Financial Services Information Sharing and Analysis Center (FS-ISAC), a non-profit organisation focused on cybersecurity in the global financial system, recently released a whitepaper titled Deepfakes in the Financial Sector: Understanding the Threats, Managing the Risks. The paper aims to help senior executives, board members, and cyber leaders address the emerging cybersecurity risk posed by deepfakes.

Scale of the impact

Deepfakes have recently captured public attention through incidents outside finance, such as recent cases in South Korea. However, the risks apply across industries, including financial services.

The term "deepfake" combines “deep learning” and “fake” to describe synthetic media created through AI. This technology allows malicious actors to produce hyper-realistic impersonations of individuals' images and voices, enabling them to bypass security protocols and deceive victims into giving up sensitive information. Such cyber threats exploit human vulnerabilities in decision-making and interaction, adding a new layer of complexity to financial cybersecurity.

Although the concept of deepfakes isn't new, advances in AI have made them increasingly accessible and convincing, heightening their potential as cybersecurity threats. Yet, according to FS-ISAC, 60% of executives lack specific protocols to counteract these risks, indicating an urgent need for companies, especially financial institutions entrusted with money and sensitive data, to develop protective measures against the substantial impacts of this technology.

Specific to the finance industry

The paper outlines several risks deepfakes pose to financial institutions, such as market manipulation, direct fraud, and reputational damage through disinformation campaigns. 

For instance, last year a fabricated, AI-generated image of an alleged explosion near the Pentagon circulated on social media. The fake incident triggered an 85-point drop in the Dow Jones Industrial Average, illustrating the potential for deepfake-driven misinformation to move financial markets. A similar fake video featuring a C-suite executive could sway investors or clients, affecting both market stability and institutional trust.

“The potential damage of deepfakes goes well beyond the financial costs to undermining trust in the financial system itself,” said Michael Silverman, Chief Strategy & Innovation Officer of FS-ISAC. “To address this, organisations need to adopt a comprehensive security strategy that promotes a culture of vigilance and critical thinking to stay ahead of these evolving threats.”

Strategies to cope

To address deepfake threats, companies must invest in both preventive and detection strategies. Preventive controls focus on reducing risks before they materialise, while detection strategies help identify deepfakes that do infiltrate systems. Multi-factor authentication and biometric verification are key preventive tools, providing added security layers that deepfakes alone cannot circumvent. Additionally, measures such as customer call-backs for transaction verification reduce the chances of fraud from impersonations.

Detection strategies, like digital watermarks, help identify altered or false media by verifying authenticity markers. Watermark-based tools allow employees and clients to spot inconsistencies in deepfake media. However, as detection methods rely on the latest data to stay effective, these controls must be regularly updated to keep up with evolving deepfake technology.
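To make the idea of authenticity markers concrete, here is a minimal, illustrative sketch in Python. It is not the method described in the FS-ISAC report: real digital watermarking embeds signals in the media content itself, whereas this example simply attaches a keyed fingerprint (an HMAC) to the media bytes so that any alteration can be detected. The key and function names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared secret held by the publishing organisation.
SIGNING_KEY = b"example-shared-secret"

def make_marker(media_bytes: bytes) -> str:
    """Compute a keyed authenticity marker for a piece of media."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_marker(media_bytes: bytes, marker: str) -> bool:
    """Return True only if the media still matches its recorded marker."""
    expected = make_marker(media_bytes)
    return hmac.compare_digest(expected, marker)

original = b"official press video"
marker = make_marker(original)

print(verify_marker(original, marker))           # untampered media passes
print(verify_marker(b"altered video", marker))   # any alteration fails
```

The same principle underlies the report's point about keeping detection controls current: a verification scheme is only useful if the markers and keys it checks against are maintained alongside the media they protect.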

Certain measures recommended in the report serve dual purposes, combining prevention and detection. For example, FS-ISAC strongly encourages companies to implement employee training to improve understanding and recognition of deepfake threats. Role-specific examples in training sessions can help employees understand that cybersecurity threats affect everyone, and they also illustrate the organisational impact of deepfakes, fostering better awareness of their consequences.

Hiranmayi Palanki, Distinguished Engineer at American Express and Vice Chair of FS-ISAC’s AI Risk Working Group, said, “Addressing deepfake technology requires more than just technical solutions—it also demands a cultural shift. Building a workforce that is alert and aware is crucial to safeguarding both security and trust from the potential threats posed by deepfakes.”
