SymphonyAI recommends AI to prevent financial crimes

Despite the potential of AI, banking, financial services and insurance institutions barely use it. Companies like SymphonyAI hope to encourage more AI use to more efficiently combat financial crime.

Gerard O’Reilly, APAC Managing Director of SymphonyAI (Photo: SymphonyAI)

AI’s entry into the workplace is not only inevitable but already under way in organisations across the globe. In particular, the usefulness of AI in preventing cybercrime has become widely acknowledged. However, a report by SymphonyAI has found that few banking, financial services and insurance (BFSI) institutions in Singapore are actually using AI to combat financial crime.

Setting the Scene

The report, titled “Untapped Potential: AI-enabled Financial Crime Compliance Transformation in Asia – Maturity, Applications, and Trends,” was based on surveys and interviews with 126 financial crime compliance, operations, and technology practitioners from financial institutions across the Asia Pacific region.

It highlights that only 15% of institutions use Generative AI (GenAI) for anti-money laundering (AML) purposes. Meanwhile, financial crime, and money laundering in particular, accounts for up to 7% of global GDP. Southeast Asia is especially vulnerable, with Thailand, Singapore, Malaysia, Indonesia, and the Philippines identified as the region’s top five countries at risk of money laundering. Yet despite the urgency, adoption remains slow, raising questions about what is holding BFSI institutions back.

One of the primary obstacles lies in legacy systems. According to the report, 58.6% of institutions struggle to integrate AI into their existing technology infrastructure, which is often outdated and cumbersome to update. Another significant challenge is data quality: AI models like GenAI require high-quality data to learn from, but many institutions lack access to suitable datasets, with 58.6% of respondents citing data issues as a barrier.

Other factors contributing to the hesitancy include concerns over model explainability — where the opaque nature of AI decision-making processes raises trust and compliance issues — and challenges surrounding data privacy and protection. Together, these hurdles have made institutions wary of embracing AI, despite its clear potential.

Behind the report

Amid these doubts, however, there is growing recognition of AI’s potential to mitigate financial risks. A striking 78% of respondents identified financial crime risk mitigation as a top priority for AI deployment. This optimism underscores the importance of overcoming current barriers and unlocking AI’s full potential to transform financial crime compliance.

The report comes from SymphonyAI, a company that develops and provides AI services designed to drive digital transformation and improve outcomes such as fraud detection and operational efficiency. Its technology is tailored for businesses across various industries, including financial services, healthcare, and retail. Through a partnership with Microsoft, SymphonyAI leverages Generative AI (GenAI) technology from OpenAI, the creators of ChatGPT, to develop tools built specifically for financial crime investigations.

Unlike general-purpose AI models, SymphonyAI focuses on workflow-specific applications.

Gerard O’Reilly, APAC Managing Director of SymphonyAI, described the benefits of AI, saying: “AI can create operational efficiencies. These gains can then be reinvested into more targeted risk management, focusing on risk areas while streamlining the customer journey for the rest of the customers which require less targeted risk management.”

“This allows businesses to differentiate the experience for a specific group of customers who need an enhanced level of due diligence, rather than applying a one-size-fits-all approach that slows down processes like payments, onboarding and opening new accounts or services.”

In other words, this tailored approach allows institutions to streamline processes such as payments, onboarding, and account creation, avoiding the delays caused by one-size-fits-all compliance methods.

Addressing data issues

SymphonyAI’s technology enhances risk detection by focusing on customer-specific data, reducing reliance on external sources that may introduce irrelevant information or security vulnerabilities. Tools such as co-pilot systems and Large Language Models (LLMs) play a key role in this process.

Co-pilot tools can help BFSI institutions identify suspicious transactions or flag compliance issues, offering targeted support for investigative workflows, while LLMs can analyse data such as transaction details, reports, and communication logs that could indicate fraud or money laundering.

This combination of tools helps prioritise critical data, ensuring that AI outputs are precise and contextually relevant. SymphonyAI further streamlines the regulatory process by enabling large-scale data analysis, summarization, and actionable recommendations.
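To make the division of labour concrete, here is a minimal, purely illustrative sketch of the kind of rule-based screen a co-pilot might run before handing flagged cases to an analyst or an LLM for summarisation. This is not SymphonyAI’s actual system; the thresholds, country codes, and field names are hypothetical assumptions.

```python
# Illustrative sketch only: a simple rule-based transaction screen.
# Thresholds, risk list, and record fields are hypothetical.

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder codes, not a real risk list
AMOUNT_THRESHOLD = 10_000           # e.g. a large-transaction reporting line

def flag_transactions(transactions):
    """Return the transactions that trip any rule, annotated with reasons."""
    flagged = []
    for tx in transactions:
        reasons = []
        if tx["amount"] >= AMOUNT_THRESHOLD:
            reasons.append("large amount")
        if tx["country"] in HIGH_RISK_COUNTRIES:
            reasons.append("high-risk jurisdiction")
        if reasons:
            flagged.append({**tx, "reasons": reasons})
    return flagged

sample = [
    {"id": "t1", "amount": 12_500, "country": "SG"},
    {"id": "t2", "amount": 400, "country": "XX"},
    {"id": "t3", "amount": 250, "country": "SG"},
]
for tx in flag_transactions(sample):
    print(tx["id"], tx["reasons"])
```

In a production workflow, the flagged subset (rather than the full transaction stream) would be the data passed to downstream analysis, which is how this kind of pre-filtering keeps AI outputs focused on contextually relevant cases.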

Incorporating AI

While these tools may address financial institutions’ concerns about data quality, the problem of legacy systems remains.

O’Reilly gave the following recommendations: “Having a clear and simple place to start applying AI is important — so benefits can be measured, and any risk contained. We address this by recommending a “sandbox” approach — starting small with a clearly defined, low-risk project where benefits can be easily measured.”

This allows BFSI institutions to build confidence in AI’s capabilities before scaling up, while also letting staff across the company see the positive impact AI has on operational and financial outcomes.

SymphonyAI also takes steps to ensure its AI outputs are context-specific and reliable. O’Reilly explains, “We address the challenges of irrelevant outcomes and hallucinations by fine-tuning our systems to balance creativity with accuracy. Our approach ensures that the output from our LLMs is specifically tailored to financial crime scenarios, maintaining relevance and reliability in this domain.”

“We also work closely with clients to develop robust governance and transparency frameworks, so that the use of AI is consistent with their regulatory obligations and risk management responsibilities.”

The slow adoption of AI in financial crime compliance reflects the high stakes and complexities of the material being handled. As financial crime grows more sophisticated, tools like AI offer a vital opportunity for BFSI institutions to gain a proactive edge. These companies can harness AI to improve efficiency, better identify financial crime, and channel resources into prevention.
