As ChatGPT expands the use of AI applications, Min Zhou, doctoral student at CityU, looks at challenges and opportunities in anti-fraud work in the field of finance.
ChatGPT, a generative large language model developed by the artificial intelligence research laboratory OpenAI, has received widespread attention and acclaim since its launch. Compared with traditional dialogue models, ChatGPT is trained on a massive amount of data, giving it a stronger understanding of natural language and enabling it to generate more fluent, context-aware dialogue. The integration of technology into finance is an inevitable consequence of the social division of labour, driven by advances in science and technology. ChatGPT will undoubtedly expand the use of AI applications in finance, greatly promoting reform and helping financial institutions serve the economy. The banking industry should strive to make full use of the opportunities brought about by such programmes and accelerate the implementation of beneficial applications such as personalised customer service, multilingual translation, intelligent financial management, sentiment analysis, improved customer experiences, automated risk assessment, and fraud detection. At the same time, however, the industry must be aware of the risks that may accompany AI implementation.
The banking industry has faced new and increasing challenges in recent years, of which one of the most significant is protecting customers from fraud. According to the Hong Kong Police Force, the number of fraud cases in Hong Kong has increased significantly in recent years, up 45% in 2022 year-on-year.
Investment scams and telephone scams cost huge sums each year: HKD 1.8 billion and HKD 1 billion respectively. Finding ways to detect and prevent fraud has become one of the greatest challenges that banks face today.
AI has allowed fraudsters to greatly increase their success rate. In the past, fraudsters often impersonated public security bureaus, procuratorates, courts, or government officials, exploiting the victim's panic to defraud them. Today, scammers can use AI models to generate fake voice and video messages that mimic real organisations. For example, the Baotou police in Inner Mongolia, China, recently shared a typical case on their public account: a fraudster first stole the WeChat account of the victim's friend, then used AI to mimic the friend's face and voice, successfully defrauding the victim of 4.3 million yuan in 10 minutes.
Data and privacy can also be leaked easily when users interact with AI tools like ChatGPT. Owing to some users' low awareness of financial information security and the lack of rules governing the use of large language models in specific banking scenarios, fraudsters may exploit leaked data to carry out targeted fraud.
While AI technologies such as ChatGPT bring about new challenges for banks, opportunities also arise.
ChatGPT can also aid banks in conducting their own anti-fraud operations.
AI technology has been increasingly applied to anti-fraud work in recent years. For instance, the "AI police officer" deployed in China's Yuhang District, Zhejiang Province, makes use of technologies such as intelligent speech, semantic recognition, automated processing and big data analysis to carry out anti-fraud activities. The introduction of these "AI police officers" has led to a 71.4% decrease in fraud cases and a 95.1% reduction in case losses in the region. Banks, then, must make good use of emerging AI technologies to strengthen their anti-fraud capabilities and better serve their customers.
This is a condensed version of an AIFT article. For the full version, see:
https://hkaift.com/chatgpt-challenges-and-opportunities-in-anti-fraud-operations-for-banks/