A new form of cryptocurrency scam is proliferating across YouTube and social media, using advanced artificial intelligence to impersonate public figures and defraud victims. Security teams are struggling to keep up with the epidemic of deepfake crypto scam videos targeting high-profile figures like MicroStrategy’s Michael Saylor.
Key Points
- Michael Saylor’s security team takes down around 80 fake AI-generated YouTube videos per day that depict him promoting Bitcoin giveaway scams
- The videos use deepfake technology to impersonate Saylor and other crypto figures like Brad Garlinghouse to scam victims out of their crypto
- There is a growing epidemic of these deepfake crypto scam videos on YouTube and social media using advanced AI
- Detecting and removing the videos is very difficult and requires collaboration between tech companies and raising public awareness
- AI tools could also be used by authorities to more efficiently hunt down illegal deepfake scam activities
Saylor recently warned his 3.2 million followers on X (formerly Twitter) about the alarming rate of fake AI-generated videos on YouTube falsely depicting him promising to double people’s Bitcoin holdings. He revealed that his team takes down approximately 80 of these fraudulent videos per day, but “the scammers keep launching more.” The deepfakes employ sophisticated techniques to manipulate Saylor’s image and voice, luring viewers into sending crypto to a scam address.
⚠️ Warning ⚠️ There is no risk-free way to double your #bitcoin, and @MicroStrategy doesn't give away $BTC to those who scan a barcode. My team takes down about 80 fake AI-generated @YouTube videos every day, but the scammers keep launching more. Don't trust, verify. pic.twitter.com/gqZkQW02Ji
— Michael Saylor⚡️ (@saylor) January 13, 2024
Saylor is not alone in this fight. Ripple CEO Brad Garlinghouse has also faced a rash of deepfake giveaway scams impersonating him to steal XRP from victims. Earlier in January, a deceptive video featuring Solana co-founder Anatoly Yakovenko began circulating across social media and YouTube. According to Austin Federa, Head of Strategy at the Solana Foundation, “there has been a substantial increase in deepfakes and other AI-generated content recently.”
Detecting and removing deepfake content poses a monumental challenge. The AI algorithms used to create fake videos are rapidly advancing, producing incredibly realistic forgeries that traditional content moderation tools struggle to identify. To make matters worse, the technical barrier to entry for generating deepfakes is lowering over time, making it easier for scammers to pump out fake videos en masse.
While the threat is complex, the solution requires collaboration on multiple fronts. Tech companies and platforms need to pool resources to build specialized deepfake detection capabilities directly into their networks and systems. Simultaneously, authorities are exploring how AI could help accelerate the policing and takedown of illegal deepfake scams. Just as importantly, crypto figures and security experts must urgently spread awareness to educate the public on spotting and avoiding deception attempts.
With crypto scams already on the rise in 2024, the emergence of AI-powered deepfakes signals a dangerous new phase poised to trick savvy and unsuspecting users alike. The threat will only grow more severe until the tech world makes inroads against it. For the sake of consumer protection and fraud prevention, the time for action is now.