A bank manager received a call from a company director he knew well. The director claimed the company was about to make an acquisition and needed the bank to authorize transfers amounting to $35 million. The bank manager even had emails in his inbox from the director and a lawyer responsible for the deal.
Everything appeared to be in order, so the bank manager completed the transfers. Little did he know, he had become part of a massive scam: fraudsters had used "deep voice" technology to clone the director's speech.
This marks only the second known time fraudsters have allegedly used such tools to carry out a fraud. The first incident happened in 2019, when scammers impersonated the CEO of a U.K.-based energy firm, resulting in a $243,000 fraudulent transfer to a Hungarian supplier.
“Audio and visual deepfakes represent the fascinating development of 21st-century technology yet they are also potentially incredibly dangerous posing a huge threat to data, money, and businesses,” Jake Moore, a cybersecurity expert with ESET, told Forbes.
“We are currently on the cusp of malicious actors shifting expertise and resources into using the latest technology to manipulate people who are innocently unaware of the realms of deepfake technology and even their existence.”
And the technology is indeed advancing and becoming more ubiquitous. In May 2021, tech company Veritone announced plans to monetize deepfakes by allowing celebrities to make spoken endorsements without ever saying a word. But deepfakes have also drawn their fair share of concerns and warnings.
As this heist illustrates, deepfakes can cause serious harm in the wrong hands. Still, there is no stopping the technology's evolution, so the best we can do is move forward with diligence.