Cybercriminals cashing in on holiday sales rush

Cybercrime is a costly affair, with nearly £11.5 million stolen last Christmas alone, according to the UK’s National Cyber Security Centre. That’s £695 on average per victim. With the festive season in full swing, the rush to snag post-Black Friday bargains and buy Christmas gifts online has led to a sharp rise in threat levels. For cybercriminals, this is the perfect opportunity to deploy their latest techniques, targeting unsuspecting shoppers to steal money and personal data.

UK Fraud Minister Lord Hanson issued a stark warning in November on the dangers of holiday scams. However, the sheer volume of online interactions, the sophistication of cyberattacks, and the increasing reliance on digital shopping during the holiday season make it far more challenging to identify a scam at first glance.

AI-Driven Phishing: More Deceptive Than Ever

Phishing has long been one of the most common forms of cybercrime, but the emergence of AI has revolutionized the way these attacks are carried out. Previously, phishing emails were easy to spot, often riddled with spelling mistakes and strange phrasing. However, with AI, cybercriminals can now analyze the communication styles of businesses, studying their marketing emails and messages to replicate the tone, branding, and even the content of legitimate communications.

Attackers can now seamlessly impersonate colleagues, executives, and even customers, making it harder for targets to identify a scam. It has become easier and cheaper than ever to undertake these targeted spear-phishing attacks, which are much more likely to succeed.

AI and Human Behavior: Exploiting Vulnerabilities

AI’s ability to analyze human behavior has also made it easier for cybercriminals to exploit psychological triggers. By studying past interactions and identifying patterns in behavior, attackers can craft messages that play on an individual’s emotions. For example, during the busy holiday season, cybercriminals exploit the stress of missed package deliveries. Imagine receiving a seemingly legitimate text from a courier service, urging payment for redelivery. One victim, distracted and eager to resolve the issue, entered card details on a convincing fake site, only realizing later that the text had come from an unknown mobile number, not the courier. It’s a reminder that vigilance can’t take a holiday.

AI can also be used to time phishing emails or fake social media ads to coincide with busy shopping periods such as Black Friday and Christmas sales. Cybercriminals can also create fake websites offering massive discounts or time-limited offers, hoping to lure in shoppers eager to make a purchase quickly. Under pressure, people are more likely to fall for scams.

In the same way, AI can be used to create fake bank alerts or financial notifications that play on a customer’s fear of fraud or account security issues. These phishing attacks, which often contain urgent warnings or threats, push the recipient into a state of panic, encouraging them to click on a malicious link or provide sensitive details. Such messages can be very hard to spot when the destination site or notification looks identical to the official source.

In fact, while it may seem simple to check if a website is secure by looking for the HTTPS prefix or a padlock icon, these are no longer foolproof indicators of a secure site. Cybercriminals have become adept at creating fake sites that look identical to trusted brands, making it easy for consumers to be misled.
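To see why a padlock proves so little, consider how easily a fraudulent domain can pass a casual glance. The short Python sketch below is purely illustrative: it flags domains that closely resemble, but do not exactly match, a small hypothetical list of known brands, using simple string similarity. Real-world detection relies on far more signals than this.

```python
# Minimal sketch: flag lookalike domains with a simple similarity check.
# The brand list and 0.8 threshold are illustrative assumptions, not a
# vetted security control; real detection needs far more signals.
from difflib import SequenceMatcher

KNOWN_BRANDS = ["amazon.co.uk", "paypal.com", "royalmail.com"]  # hypothetical allowlist

def looks_like_known_brand(domain: str) -> str | None:
    """Return the brand a domain closely resembles (but doesn't match), if any."""
    domain = domain.lower().strip()
    for brand in KNOWN_BRANDS:
        if domain == brand:
            return None  # exact match: the genuine domain
        similarity = SequenceMatcher(None, domain, brand).ratio()
        if similarity > 0.8:  # suspiciously close but not identical
            return brand
    return None

for candidate in ["amazon.co.uk", "arnazon.co.uk", "paypa1.com"]:
    hit = looks_like_known_brand(candidate)
    print(f"{candidate}: {'possible lookalike of ' + hit if hit else 'no flag'}")
```

Here ‘arnazon.co.uk’ and ‘paypa1.com’ are flagged while the genuine domain passes, exactly the kind of near-miss a hurried shopper can overlook.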

Deepfake Technology: Social Engineering with a New Face

Alongside phishing, AI is increasingly being used in social engineering attacks, particularly through deepfake technology. Early last year, engineering firm Arup lost $25 million to fraudsters after an employee was tricked into believing he was carrying out the orders of his CFO. And everyday people aren’t immune either. A kitchen fitter from Brighton was scammed out of £76,000 after falling for a deepfake advert purporting to feature Martin Lewis, the money-saving expert.

This method is highly effective because it bypasses the traditional security measures we rely on, such as email filters, multi-factor authentication, or the simple ‘sniff test’, that gut feeling that something is awry. Deepfakes create a sense of urgency and authority, making it easier to manipulate people into taking actions they would otherwise refuse. And their realism, especially when combined with duplicate social media profiles, makes such scams harder to detect, even for those with extensive training.

Protecting Against AI-Enhanced Threats

As the sophistication of AI-driven phishing and social engineering attacks grows, it is essential for both businesses and consumers to adopt proactive security measures. For individuals, vigilance is key. Avoid clicking links in unsolicited or junk emails, in texts claiming to come from businesses or government agencies, and even in ads on social media platforms. Always type a website’s URL manually, rather than clicking embedded links, to ensure you are visiting the legitimate site.
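One reason embedded links are risky is that the text you see and the destination you get can differ entirely. As a rough illustration of that mismatch, and assuming a hypothetical email snippet, the following Python sketch uses only the standard library to flag anchors whose visible text names a domain the href doesn’t actually point to. It is a teaching aid, not a production email scanner.

```python
# Minimal sketch: spot links whose visible text names one domain while the
# actual href points somewhere else: a classic phishing tell. This is an
# illustration of the idea, not a complete email scanner.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (visible text, real destination) pairs for every <a> tag."""
    def __init__(self):
        super().__init__()
        self._href = None
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")

    def handle_data(self, data):
        if self._href is not None:
            text = data.strip()
            dest = urlparse(self._href).hostname or ""
            # Flag when the visible text mentions a domain the link doesn't go to.
            if text and "." in text and text.lower() not in dest.lower():
                self.findings.append((text, dest))

    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None

# Hypothetical email snippet, for demonstration only.
email_html = '<p>Pay here: <a href="https://evil.example/redeliver">royalmail.com</a></p>'
auditor = LinkAuditor()
auditor.feed(email_html)
for shown, actual in auditor.findings:
    print(f"Warning: text says '{shown}' but link goes to '{actual}'")
```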

Multi-factor authentication should also be implemented wherever possible, as it adds an additional layer of security beyond traditional login credentials. Password managers can also help users create and store strong, unique passwords for each account, reducing the risk of credential theft. Passkeys, which replace passwords with device-bound cryptographic credentials typically unlocked by biometrics, are the next level of protection and are slowly being adopted.
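To make that ‘additional layer’ concrete: the six-digit codes generated by authenticator apps come from the TOTP algorithm (RFC 6238), which combines a shared secret with the current time, so a stolen password alone is useless. Below is a minimal standard-library Python sketch of that derivation; the secret shown is a placeholder, and in practice you should use an established authenticator app rather than rolling your own.

```python
# Minimal sketch of how a TOTP authenticator code is derived (RFC 6238),
# using only the standard library. The secret is illustrative; real
# deployments should use an established authenticator app or library.
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # current 30-second window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret, for demonstration only.
print(totp("JBSWY3DPEHPK3PXP"))  # the code an authenticator app would show
```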

For businesses, investing in advanced threat detection and response systems is essential. These systems can identify and mitigate phishing and social engineering attacks before they cause significant damage. Machine learning algorithms within these systems can detect patterns of malicious activity that traditional security measures might miss. Regular employee training is also crucial, as the human element remains one of the most vulnerable points of attack.
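As a toy illustration of what such pattern detection involves, the following sketch trains a tiny text classifier to score messages for phishing-style language. The dataset is invented purely for demonstration, it assumes scikit-learn is installed, and commercial detection systems draw on far richer signals such as headers, URLs, and sender reputation.

```python
# Toy sketch of the idea behind ML-based phishing detection: learn word
# patterns from labelled examples, then score new messages. The tiny
# dataset below is invented for illustration. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Urgent: your account is locked, verify your details now",
    "Final notice: pay the redelivery fee to receive your parcel",
    "Hi team, the Q3 report is attached for Friday's meeting",
    "Your invoice from last month is ready to download",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate (hand-labelled toy data)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

suspect = ["Act now: confirm your payment details or lose access"]
print(f"Phishing probability: {model.predict_proba(suspect)[0][1]:.2f}")
```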

Moreover, businesses should work to ensure that their employees and customers are aware of the risks posed by deepfakes and other forms of AI-driven social engineering. Implementing robust verification processes, such as requiring multiple confirmations for financial transactions, can also help reduce the risk of falling victim to these kinds of scams. Ultimately, staying ahead of evolving AI threats requires collective vigilance and a stronger commitment to safeguarding personal information.
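A verification process like that can be as simple as a ‘four-eyes’ rule: no payment above a threshold moves until two different people have signed off. The Python sketch below illustrates the idea with hypothetical names and limits; a real workflow would live in a payments or approval system, not a script.

```python
# Minimal sketch of a dual-approval ("four-eyes") check: a transfer above a
# threshold is released only once two *different* people have confirmed it.
# The names, threshold, and workflow here are hypothetical.
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000  # illustrative limit in pounds

@dataclass
class PaymentRequest:
    amount: float
    payee: str
    approvers: set[str] = field(default_factory=set)

    def approve(self, employee: str) -> None:
        self.approvers.add(employee)

    def can_release(self) -> bool:
        required = 2 if self.amount >= APPROVAL_THRESHOLD else 1
        return len(self.approvers) >= required  # the set keeps approvers distinct

payment = PaymentRequest(amount=25_000, payee="New Supplier Ltd")
payment.approve("alice")
print(payment.can_release())   # False: one approval is not enough
payment.approve("alice")       # a duplicate approval is ignored
print(payment.can_release())   # still False
payment.approve("bob")
print(payment.can_release())   # True: two distinct approvers
```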

Check out our list of the best business password managers.

This article was produced as part of TechRadar Pro’s Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro