McAfee conducted a study to analyze the use of AI tools in modern-day dating. It found that a growing number of men are likely to use these tools to express their feelings to their love interests, and that a majority of participants could not distinguish between a love letter written by an AI and one written by a human.

AI may enable people to express their feelings better, but it also allows threat actors to easily catfish victims, craft convincing phishing messages, and support other criminal schemes. “Catfishing can be very difficult to spot, especially with new AI tools such as ChatGPT which can help cybercriminals scale their communications and target more people,” McAfee said.

ChatGPT is arguably the most popular AI tool today, but its capabilities are limited to text. Cybercriminals have many other tools at their disposal, including ones that mimic a person’s voice for phone conversations, generate images for social media profiles, or create realistic-looking deepfake videos.
AI-Written Love Letters Are Hard to Distinguish
McAfee surveyed 5,109 adults from nine countries for the study, using an online questionnaire with several questions about AI tools and their use in modern-day dating. Over a quarter of respondents (26%) said they were planning to use AI to create a letter expressing their affection for their love interests. When shown a love letter generated by ChatGPT, 69% could not tell whether it had been written by a human or by AI.

“Two-thirds of adults (69%) were unable to tell that this ChatGPT love letter was written by AI and not a human. Globally, Japanese and German adults were the most discerning with 53% and 59% respectively unable to tell, compared to 78% of Indians and 76% of Americans,” the study stated.

Given this, it’s no surprise that 51% of participants had either been a victim of catfishing or knew someone who had fallen for the scam. The figure rises to 66% for adults under 30.
AI Tools as ‘Threat Actors’
Cybercriminals can also abuse legitimate AI tools, such as Murf.ai and Fotor, to con victims in online dating scams. The Canadian Centre for Cyber Security highlighted AI tools as potential “threat actors” in its National Cyber Threat Assessment report for 2023/2024, warning of their role in misinformation, disinformation, and malinformation (MDM) campaigns. “As deepfakes become harder to distinguish from genuine content and the tools to create convincing deepfakes become more widely available, cyber threat actors will very likely further incorporate the technology into their MDM campaigns, allowing them to increase the scope, scale, and believability of influence activities,” the report states.

Valentine’s Day is likely to see a spike in malicious activity, especially romance scams, so it is important to stay cautious. AI-generated text is impressive, but there are some telltale signs to spot it, at least for now. McAfee says AI tools tend to use short sentences and reuse the same words. “Additionally, AI may create content that says a lot without saying much at all. Because AI is programmed to not form opinions, their messages may sound substance-less,” McAfee stated.

Always treat messages containing requests for money with suspicion, especially if you have not met the individual in person. Check out our detailed guide to catfishing to learn how to keep yourself safe online.