
Cybercriminals in the Middle East are leveraging the best of tech tools. Can they be stopped?

AI and deep fakes are revolutionizing bad actors' ability to threaten everyday life. Experts say remaining vigilant online will be challenging

[Source photo: Anvita Gupta/Fast Company Middle East]

Tech is taking over, and nowhere is this more true than in finance, where a digitized approach to services has facilitated the take-off of digital banks like Weyay in Kuwait and Rewire in Israel. Bank accounts are easier to open than ever, and money that once took days to move can now be sent and received within minutes. But to reverse an old expression, with every silver lining there comes a cloud. In banking, that cloud is ever-evolving cybercrime, carried out by fraudsters with growing technical know-how. Familiar with AI, deep fakes, and SIM swapping, they are making a beeline for your bank balance.

Deloitte’s 2021 Middle East Fraud Survey found that 48% of respondents had witnessed more fraud than in the previous year, and 35% thought Covid-19 had made the problem worse. With Callsign data showing MENA customers receive up to three fraudulent messages on their phones each day, it’s little wonder that both citizens and companies are alert to the big business of fraud.

But few of us realize just how big that business is. “Cybercrime pays a lot,” says Sergey Gribov, Israel-based partner of the investment firm Flint Capital. “In 2021, it was worth about $8 trillion in damages globally.” Far from the image of a lone hacker, cybercriminal outfits are run like companies, and Gribov explains that, much like large companies, they have significant sums of money to invest in growth.

FACIAL RECOGNITION HACKING 

Facial recognition is one such space in which hackers are investing. “When opening a bank account, there’s been a shift away from physical branch-based processes to identification being done online,” explains Saeed Patel, Group Product Development Management Director at Eastnets, a Jordanian compliance, payments, and fraud-protection firm. “Typically, identification is done through sending your passport or taking selfies,” he says. “We’re seeing fraudsters start manipulating that, cloning identities and opening accounts based on your facial recognition.” This presents issues for countries in the region that use biometric data.

Anton Nazarkin, Global Business Development Director of VisionLabs, a facial recognition specialist that works with Emirates NBD, says these attacks can take many forms, from printed images of your face held up to the camera to silicone masks ordered from Chinese factories.

“Silicone masks can be ordered online, and for $300, you will have a pretty high-quality match of someone’s face, which you can put on your own face and use to spoof some facial recognition systems that, unlike VisionLabs, may have poor accuracy,” he says. While these methods sound oddly simplistic, Nazarkin explains that “the biggest problem today is that most phones do not have sophisticated camera systems that reliably check liveness.” That means they find it hard to distinguish depth in an image and, therefore, whether a person is real or not. 

As the tech develops, we will use our faces more often as a contactless identification method. Not only for banking but also for travel, hospitality, and government purposes. As we do so, “the use of deep fake videos will rise,” says Patel. 

Nazarkin explains that “a deep fake is when your real face is substituted with a 3D modeled animated render.” This fake video or image created with artificial intelligence (AI) is highly believable. Patel predicts that in the Middle East, we may see similar cases to those witnessed in the United States and the United Kingdom, where fraudsters mimicked CEOs of companies. Using deep fakes, criminals sent requests for cash transfers, posing as the CEO. “In the United States, one company lost $10 million that way,” says Patel. 

USING AI FOR CYBERCRIME 

Security specialists use AI to spot and predict fraud. But cybercriminals are also putting its most potent tool, predictive analysis, to work. “AI goes to a website and tries to fill out the information, then sees what sticks; it’s a numbers game,” says Gribov. By using bots in this way, cybercriminals can collect information about you. This information is then sold on the dark web in packages of hundreds of thousands of profiles.

“AI bots are also used for ransomware,” Gribov says. “Bots will try to guess your email address and put together a text that forces you to click on an email and download a malicious program that runs on your computer and encrypts your disk drive.” Locked out of your data and services — a debilitating pain-point for businesses — criminals will then hold your information for ransom. Unlike easier-to-detect email scams of years gone by, this supercharged ransomware uses AI to collect your data, meaning it can personalize the emails you receive, making them more believable. 

THE FUTURE OF CYBERCRIME 

With the speedy transition to digital services brought on by the pandemic, remaining vigilant online will present real challenges. Nowhere is this more true than in the case of account take-overs, where fraudsters access retail merchant accounts, social media accounts, airline accounts where loyalty points can be stolen, or bank accounts. The main way fraudsters are doing this, says Saeed Ahmad, Managing Director for the Middle East and North Africa at Callsign, is through SIM swapping, also known as SIM hijacking. In a SIM swap, a criminal steals your phone number and assigns it to a new SIM card. By putting that SIM into a new phone, they can access all your accounts. “SMS one-time passcodes (OTPs) are increasingly used to confirm user identities during online transactions,” says Ahmad, but he adds that these passcodes are vulnerable to SIM swapping.

And as we look to the future, fraudsters are only set to get more sophisticated. “Financial criminals are increasingly leveraging the digital footprints consumers have on the web,” says Patel. This is paired with the fact that we are now more at ease with AI and online chatbots. “Cybercriminals and fraudsters have taken note, and chatbots are almost certain to become one of their new attack vectors in the coming months,” says Ahmad, adding that chatbots are “becoming more common as people increasingly use online channels for shopping and other transactions.”

Experts say the ongoing work of banks and fraud experts will be critical. More specialists in cybercrime are needed as fraudsters grow ever-smarter. “New attack vectors are constantly appearing,” says Gribov. “You can expect criminals to invent many new tools, especially as our lives become more controlled by computers. It’s a never-ending game.” 


ABOUT THE AUTHOR

Parisa Hashempour is a contributing writer who covers tech, culture, and international affairs.
