To protect data and money in the digital space, financial institutions turn to artificial intelligence (AI) tools to recognise patterns in fraudulent transactions, but how useful are they?

According to Tim Phillipps, 60, Deloitte’s SEA leader of forensic & analytics, such systems are a double-edged sword. “It’s an advantage for us, but a great advantage for the criminals too, because they understand what we’re looking for. They take advantage of transaction models and less human intervention.”

This is because data protection systems aimed at thwarting fraud rely on machine learning – machines are “trained” to look for patterns of behaviour based on previous examples of misconduct. Their ability to rapidly analyse enormous amounts of data allows them to flag suspicious activity much more effectively than humans.
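The idea of "training" on past behaviour can be illustrated with a deliberately minimal sketch. This is not how Deloitte's or any bank's models actually work – real systems use far richer features and algorithms – but it shows the basic pattern: learn a profile of normal activity from historical examples, then flag transactions that deviate sharply from it. All names and numbers below are illustrative.

```python
from statistics import mean, stdev

# Toy anomaly detector: "learn" a profile of normal transaction
# amounts, then flag new amounts that fall far outside it.
def train(amounts):
    # The learned profile is just the mean and spread of past amounts.
    return mean(amounts), stdev(amounts)

def is_suspicious(amount, profile, threshold=3.0):
    # Flag anything more than `threshold` standard deviations from the mean.
    mu, sigma = profile
    return abs(amount - mu) / sigma > threshold

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]  # past legitimate amounts
profile = train(history)

print(is_suspicious(50.0, profile))    # prints False: typical amount
print(is_suspicious(5000.0, profile))  # prints True: far outside the learned pattern
```

Production systems replace this single statistic with models trained on thousands of behavioural signals, but the weakness Phillipps describes is already visible here: a fraudster who keeps each transaction close to the "normal" range never trips the threshold.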

(Related: How to stay relevant in an AI-powered future)

However, criminals are also getting more sophisticated. The digital landscape has created a new normal in financial crime, in which hackers leverage technology to be smarter, too. If companies focus only on anomalies identified by surveillance tools, fraudsters can circumvent them by avoiding the patterns of behaviour that would trigger attention. In this respect, reduced human oversight could actually aid misconduct.

Phillipps points out that the broader discussion also concerns AI itself, and not falling prey to "AI imposters". Anti-fraud models come with varying levels of technology, he explains, and business owners need to know exactly what they need, instead of buying software simply because it is labelled AI-driven.

(Related: Don’t go in unprotected – take these cybersecurity steps)