GenAI poised for rapid adoption in fight against online fraud, according to researchers
Generative artificial intelligence (GenAI) technology is poised for rapid adoption in the fight against cybercrime, helping to counter some of the fraud that AI itself has fuelled, recent research finds.
More than 83 per cent of organisations said they intend to incorporate GenAI into their anti-fraud efforts over the next two years, according to a report jointly published by the Association of Certified Fraud Examiners (ACFE) and SAS, a data analytics solution provider.
“GenAI has made great strides in these last few years, so it’s no surprise that organisations are incorporating it into their anti-fraud initiatives,” said ACFE research director Mason Wilder.
“As a society, we are still learning all the advantages and disadvantages of using the technology,” Wilder added.
GenAI is the technology that underpins powerful chatbots such as Microsoft-backed OpenAI’s ChatGPT.
However, the report also found major challenges to implementing GenAI solutions, with 60 per cent of survey respondents citing the accuracy of GenAI's output as a top concern.
Other major hurdles to AI anti-fraud adoption included security and data risks, regulatory concerns, and limits on staff skills, according to the survey.
The report is the third instalment of an anti-fraud technology report series by the researchers, which was initiated in 2019. The latest report was based on a survey of nearly 1,200 ACFE member organisations across more than 22 industries, spanning banking, healthcare, manufacturing and mining.
The emergence of ChatGPT as a powerful GenAI tool has brought opportunities for productivity breakthroughs, but also widened the scope for fraudulent online attacks.
John Gill, ACFE president, said that "the accessibility of GenAI-powered tools makes them incredibly dangerous in the wrong hands".
Online scammers and fraudsters have been using GenAI to help write malware and phishing emails and to produce fake identities, consultancy PwC said in a report analysing the risks of GenAI in May last year. PwC predicted a rise in large-scale fraud, privacy violations and cyberattacks at the time.
Hackers and online scammers are increasingly using chatbots like ChatGPT to generate malicious code, which helps lower the barrier to launching online attacks, a representative at Beijing-based online security firm Huorong said in an interview last year.