Navigating the AI Age: Adapting to the Shifting Landscape of Payment Fraud

Artificial Intelligence (AI) is a buzzword in today's society. People are adopting AI tools in their personal and professional lives, and they are confronted with AI-generated images and videos on social media daily. AI is here, and it is not going anywhere.

As with every new digital technology, fraudsters are among the first to adopt it, and we have already seen examples of how they are using it. So how will this develop, and how will AI affect the fraud landscape? What will the impact be? And what can your organization do to prepare?

Changes to the fraud landscape
Due to AI, we expect the number of fraud attempts to increase. Because attacks will become more sophisticated, there is also a risk that fraudsters will become more successful. So how are fraudsters using AI now, and what can we expect in the future?

  1. Social engineering fraud using AI is a new and significant threat to companies and consumers.
    Using only a few seconds of video of you, taken from social media, AI can now clone your voice and face. This deepfake clone can be used to convince people in your network that they are dealing with you, gain their trust, and steal their money. We have seen three different modi operandi for pulling off a fraud attack with a deepfake clone:
    1. Train a specialized AI model to convincingly create a real-time deepfake for a malicious video call. This is a complex, expensive, and time-consuming endeavor, but it can still be worthwhile for the fraudster if the gains are high, for example in CEO fraud. The first successful fraud attack with this modus operandi took place in Hong Kong. By creating deepfake models of the CFO and three other colleagues, the fraudsters convinced an employee of the finance department to transfer an amount equal to EUR 25 million. (1)
    2. Use existing models of celebrities to manipulate people into investing in a scam. In South Korea, a woman was manipulated in a romance scam with the help of deepfake images and a deepfake video call with Elon Musk. The fraudsters gained USD 50,000 in this attempt. (2)
    3. Train a model for a deepfake audio call. There have been attempts with this modus operandi. In one of them, a father received a call from his “son”, requesting money to post bail after a car crash. This is easy to do with currently available tools, but it is not very convincing yet, because the fraudster has to type the responses in real time; the delayed responses make the conversation feel unnatural. (3)
  2. Traditional phishing fraud is easier and more scalable with the use of AI.
    Large Language Models (LLMs) like ChatGPT have made it very simple to instantly generate phishing messages in the most common languages. Data on targets is easy to purchase online, as are automation tools to run the attacks. All a fraudster needs to do is sit back and wait until a victim takes the bait.
  3. Malware using AI to breach bank and payment protections is on the rise but comes in waves.
    In China, the mobile malware dubbed "GoldDigger" has been able to capture the user's face and employs AI to use this data to breach face authentication in other mobile applications, such as banking apps. Although this is a genuine concern, it is also to be expected. Whenever a defense becomes strong enough, people will try to breach it and eventually find a way. Once losses become substantial, the defenders will do what they can to patch the problem and eventually manage to do so. AI is simply a new tool here; in that sense, nothing new.

Due to advancements in AI technology, we expect these kinds of attacks to become easier to carry out. Training deepfake models will become easier, cheaper, and less time-consuming. Where fraudsters now use phishing-as-a-service kits, we expect tools to emerge that carry out automated fraud attacks, for example romance or investment scams where the fraudster subscribes to a service that supports the attack. Contact with the victim can be fully automated with AI chatbots, and all the fraudster needs to do is plan and execute the cash-out.

How will these changes impact your organization?

  1. Higher workload
    We anticipate a significant increase in fraud attempts, a higher volume of cases requiring investigation, and a rise in customer inquiries related to fraud. 
  2. More fraud to reimburse
    With more fraud attempts, there is also an increased risk that customers will become victims. One moment of distraction or inattention could result in a successful fraud. More sophisticated fraud attacks will also lead to more victims and higher losses per case. Under PSD3, impersonation scams must be reimbursed as well, which will further drive up fraud reimbursements. (4)

How can your organization prepare for this new era in fighting fraud?
We expect the way fraudsters approach their victims to change: they will use different stories and technologies to manipulate the victim. From that point on, the modus operandi remains the same as in recent years, and the data your organization receives will remain the same as well. However, we expect victim-initiated fraud to continue to rise. This has driven increasing demand for banking behavior monitoring in recent years, and that remains true as we enter the AI-driven era. So, what can your organization do to prepare?

  1. Create 100% visibility on the customer / criminal journey
    To do banking behavior monitoring successfully, your organization needs 100% visibility on the customer and criminal journey. A customer who is being manipulated will show different behavior than the same customer making a legitimate transaction. Having 100% visibility on the customer / criminal journey allows you to monitor all interactions with your channels and to detect deviations from the normal patterns of your customers. This increases your chances of stopping fraud before it happens, which is always the best way to reduce workload.
  2. A holistic approach is needed – collaborate internally but also externally
    Customers want a seamless, frictionless experience when interacting with your bank. Banks are adapting to those demands by digitalizing their product portfolios and processes. While this facilitates a good customer experience, it also opens doors for criminals. Work together with the fraud experts in your organization when making changes to your products and processes, and let them help you identify the risks and define mitigating measures that still allow a frictionless customer experience. Add friction where it is needed, for example a waiting period when a customer wants to increase their card or payment limits. Currently, banks are the only line of defense against online payment fraud. However, fraud only ends at the bank; the criminal journey starts on other platforms. In most cases, fraud starts on social media, via e-mail, or by phone. Collaboration with external parties such as big tech, telecom providers, and internet service providers can lead to successful solutions to combat fraud. A good example is how telecom providers in the Netherlands currently block spoofed calls that use banks' phone numbers.
  3. Raise awareness
    While banks are already doing a lot to raise awareness, this new AI-driven era demands a new approach. In addition to informing customers about the different modi operandi and the mitigating measures they should take, it is now also important to help your customers recognize deepfakes and AI-generated content. Customers who can recognize this are less likely to become victims.
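The behavioral-deviation monitoring described in point 1 can be sketched in a few lines. The sketch below is a deliberately minimal illustration, not a production fraud engine: the feature names (`amount_eur`, `session_seconds`), the baseline data, and the three-standard-deviations threshold are all assumptions chosen for the example. The idea it demonstrates is the one from the text: a manipulated customer behaves differently from their own historical baseline, and that deviation is measurable.

```python
from statistics import mean, stdev

def deviation_score(history: list[float], current: float) -> float:
    """Standard score of the current value against the customer's own history."""
    if len(history) < 2:
        return 0.0  # not enough data to judge deviation
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if current == mu else float("inf")
    return abs(current - mu) / sigma

def is_suspicious(session: dict, baseline: dict, threshold: float = 3.0) -> bool:
    """Flag the session if any monitored feature deviates strongly from baseline."""
    return any(
        deviation_score(baseline[feature], value) > threshold
        for feature, value in session.items()
        if feature in baseline
    )

# Hypothetical per-customer baseline built from past legitimate sessions.
baseline = {
    "amount_eur": [25.0, 40.0, 31.5, 28.0, 35.0],          # typical payment sizes
    "session_seconds": [90.0, 120.0, 105.0, 110.0, 95.0],  # typical session length
}

# A manipulated customer often looks different: a far larger amount, and an
# unusually long session while a scammer coaches them over the phone.
session = {"amount_eur": 4500.0, "session_seconds": 1800.0}
print(is_suspicious(session, baseline))  # → True
```

A real deployment would use far richer journey features (device, navigation patterns, typing cadence) and a trained model rather than a fixed z-score threshold, but the principle is the same: the more of the customer journey you can see, the more deviations you can measure.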

Conclusion
With the AI era knocking on our doors, banks must remain vigilant and proactive in the fight against fraud. By embracing innovation, collaboration, and awareness, organizations can navigate the complexities of the AI age and safeguard against fraud risks. Maintaining 100% visibility on customer and criminal interactions is crucial for effective fraud detection and prevention. As we continue to evolve in the digital era, prioritizing fraud prevention strategies and investing in comprehensive visibility will be essential for maintaining trust, security, and resilience in the ever-changing landscape of payment fraud.

(1) https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
(2) https://www.independent.co.uk/asia/east-asia/elon-musk-romance-scam-dupes-south-korean-b2533764.html
(3) https://www.independent.co.uk/news/world/americas/ai-phone-scam-voice-call-b2459449.html
(4) https://kpmg.com/nl/en/home/insights/2023/06/psd3---psr-.html