Deepfakes as a Service
Deepfakes continue to be a growing security concern. As the technology to alter video footage and replace one person's face with another’s has advanced in ease, sophistication, and availability, the use of deepfakes has become more broadly prevalent, extending beyond novelty use to become another tool in the adversary’s playbook.
Our guest today is Andrei Barysevich, co-founder and CEO of fraud intelligence firm Gemini Advisory. He shares his insights on the growing criminal market for deepfakes, and how organizations can best prepare themselves to defend against them.
This podcast was produced in partnership with the CyberWire.
For those of you who’d prefer to read, here’s the transcript:
This is Recorded Future, inside threat intelligence for cybersecurity.
Dave Bittner:
Hello everyone, and welcome to episode 197 of the Recorded Future podcast. I'm Dave Bittner from the CyberWire.
Deepfakes continue to be a growing security concern. As the technology to alter video footage and replace one person's face with another’s has advanced in ease, sophistication, and availability, the use of deepfakes has become more broadly prevalent, extending beyond novelty use to become another tool in the adversary’s playbook. Our guest today is Andrei Barysevich, co-founder and CEO of fraud intelligence firm Gemini Advisory. He shares his insights on the growing criminal market for deepfakes, and how organizations can best prepare themselves to defend against them. Stay with us.
Andrei Barysevich:
I started researching the deep and dark web pretty much at its inception, so I've been in this field for close to 15 years now. I started as a translator and consultant to the FBI, then worked as a consultant for law enforcement for a number of years. I moved into private industry when the first threat intelligence companies were launched, first as Director of Eastern European Research at a company called Flashpoint in New York. I later joined Recorded Future, also as a director of research, and stayed there for about three and a half years before launching Gemini Advisory in 2017.
Dave Bittner:
And so, what is your day-to-day like these days? What are the things that you all do there at Gemini?
Andrei Barysevich:
Well, lately, I would say it's been the purely bureaucratic and boring work of running the company, like writing checks and paying people.
Dave Bittner:
The glamorous side of the business?
Andrei Barysevich:
Yeah, indeed. And I do miss doing research and actually talking to the bad guys, because previously I used to do it pretty much on a daily basis. Sadly, not anymore. But I do get to engage with fraudsters from time to time, especially when my teammates find something of really high interest, or when my professional opinion is needed, or when they could use my help. And I do read a lot. Every bit of research that my team publishes still goes through me, so every day I probably read anywhere between five and seven different intelligence reports that our team produces.
Dave Bittner:
Well, I wanted to dig in with you today on deepfakes, and where we find ourselves when it comes to that. Can you give us a little background on the origin of deepfakes, and how that's led to where we find ourselves today?
Andrei Barysevich:
Well, I think we should probably start a little farther back. Deepfake technology itself is fairly new. It's based on neural networks: anybody can take video of a known person, and the networks will construct a fake video that is almost identical to the real person. And that, on its own, is nothing new; a lot has been written on the topic. What is new is that the bad guys have finally started to pay attention to deepfake technology. They're beginning to use it, cautiously and slowly, to bypass security controls at many firms, especially financial companies and cryptocurrency companies.
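To make the face-swap idea concrete, here is a minimal sketch of the shared-encoder, two-decoder autoencoder design popularized by early open source face-swap tools. It's an illustrative toy in PyTorch, not any vendor's actual pipeline; the layer sizes and the 64x64 input are arbitrary assumptions.

```python
# Toy sketch of the shared-encoder, two-decoder autoencoder behind early
# face-swap tools. Illustrative only; sizes are arbitrary assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # assumes 64x64 input faces
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

# One shared encoder learns identity-agnostic pose and expression; one
# decoder per identity learns to render that structure as a specific face.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training: reconstruct person A with decoder A, person B with decoder B.
# Swapping: encode a frame of person A, decode it with decoder B.
frame_a = torch.rand(1, 3, 64, 64)      # stand-in for a real video frame
swapped = decoder_b(encoder(frame_a))   # renders A's pose with B's face
print(swapped.shape)                    # torch.Size([1, 3, 64, 64])
```

The key design choice is the shared encoder: because it's trained on both identities, it learns pose and expression in a form either decoder can render, which is what makes the swap possible.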
Dave Bittner:
Well, take us through it. What are you all seeing?
Andrei Barysevich:
Just to give you a little bit of history: fake document vendors, and I don't necessarily mean physical documents, I mean copies of the documents that you and I and anybody listening have probably at some point provided to a bank or an online service, were a staple of the criminal underground. From day one, when the first criminal forum was launched, there were vendors offering to fake any type of document: not just a passport or a driver's license, but also a utility bill, and so on. However, companies are turning to more sophisticated ways of detecting fraud, especially identity verification and know your customer regulations, which dictate that companies must do everything they can to validate the identity of a person they're working with.
So the bad guys now find themselves in a position where the old-fashioned way of producing a fake driver's license is no longer sufficient. Companies like Coinbase, for example, can detect fakes very easily, and it's almost impossible nowadays for a bad guy to use outdated techniques to bypass those security controls. What we found, as of late, is that a couple of criminal actors have popped up who are now offering commercial deepfakes, where they will produce a video on demand, based on your requirements. So imagine you're a bad guy trying to gain access to, and I'm not saying Coinbase is the primary target here, but it's one of the largest cryptocurrency exchanges in the U.S., and therefore it has been targeted pretty heavily by the bad guys. But they're not alone.
We know that bad guys are attempting to take over Coinbase accounts all the time, because once you steal someone's Bitcoin and transfer it to a wallet you control, there is no recourse. You cannot cancel the transfer, unlike a bank transfer. So online cryptocurrency exchanges are prime targets. And what we're finding is that when the bad guys try to take over someone's account, Coinbase will typically, if they suspect anything, say a user logging in from a different IP address or from a device that hasn't been registered before, ask you to provide identification. One form of identification is a video of yourself, where they instruct you to, let's say, look straight at the camera, look to the left, look to the right.
Sometimes they would even say, "Okay, you need to write something on a piece of paper and hold it in front of the camera, so that we can validate that it's you." What we think the bad guys will attempt to do, or probably are doing already, quietly, without revealing their methods, is that once they identify a high-value target, and if they've found that person's video on social media such as Facebook or Instagram, they could produce a deepfake video identical to the real person and use it to fool a company's fraud and security controls. We found two, possibly even three, different vendors who are now offering deepfake videos made on demand. And the price is all over the place: some vendors ask only about 20 to 30 dollars per minute of video, while others charge roughly 100 to 150 dollars per minute.
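The verification flow Andrei describes is essentially a challenge-response protocol, and its weakness is predictability: if the prompts are known in advance, a made-to-order deepfake clip can satisfy them. Here is a toy sketch of a randomized, session-bound challenge; all names are hypothetical rather than drawn from any real KYC product.

```python
# Toy sketch of a randomized liveness challenge, illustrating why a
# pre-recorded deepfake only works when the prompts are predictable.
# Hypothetical names throughout; no real KYC vendor's API is shown.
import secrets

POSE_PROMPTS = [
    "look straight at the camera",
    "look to the left",
    "look to the right",
]

def issue_challenges() -> list[str]:
    """Return shuffled pose prompts plus one prompt bound to a one-time
    code, so the response can't be fully scripted in advance."""
    code = secrets.token_hex(3)  # e.g. 'a3f29b', unique per session
    prompts = POSE_PROMPTS.copy()
    secrets.SystemRandom().shuffle(prompts)
    prompts.append(f"hold a handwritten note reading {code}")
    return prompts

# If the verifier always asks the same fixed sequence, an attacker can
# order a matching deepfake clip in advance; the per-session code forces
# the video to be produced only after the challenge is issued.
print(issue_challenges())
```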
Dave Bittner:
And so we're not quite to the point where these things could be generated in real time? This is a thing where you would go to them, you'd know what was going to be asked of you, and that video would be custom made?
Andrei Barysevich:
Exactly. It's not real-time validation yet, where you could jump on a video call and talk live to a person on the other end while they see a completely different face. Luckily, we're not there yet, but things are definitely heading in that direction, and that's of significant concern. And if you think about deepfake technology, the implications go much further. If the bad guys can use it to fool a bank, what stops them from producing a video and using it to blackmail someone, for example? We haven't seen the weaponization of deepfakes on a massive level yet, but we're probably just one step away from the time when bad guys start doing that.
Dave Bittner:
Yeah. I think when deepfakes first hit the news, a lot of it had to do with people taking footage from adult websites and putting celebrities' faces on it, that sort of thing. And I could imagine combining that capability with many of the phishing attempts we see, where people attempt sextortion: they threaten someone and say, "Oh, I have video of you doing something you'd be embarrassed about." Well, if I can actually present a deepfake video, whether or not that video is real, I wouldn't want it shared with my friends and family. I could imagine a new type of ransomware, if you will.
Andrei Barysevich:
Indeed, yeah. It could become another form of extortion. I wouldn't be surprised if ransomware gangs start using deepfake technology to pressure their victims into paying, so it may become yet another method in the bad guys' blackmail arsenal.
Dave Bittner:
And I suppose it's inevitable that we're going to reach a point where it's possible to do these things in real time, or near real time. Right?
Andrei Barysevich:
Well, it seems like the technology is definitely going that way. And as I mentioned, I wouldn't be surprised to see technology like that widely deployed in the next few years.
Dave Bittner:
What do you suppose this means for trust in the media in general? There's that old saying that a picture is worth a thousand words, and the political implications of something like this obviously run deep.
Andrei Barysevich:
Indeed. And we haven't seen adversaries leveraging deepfake technology for political gain yet. There's been a lot of discussion about the capabilities of deepfakes, and people thought adversaries would start using them more proactively. We haven't seen that yet, so it's hard to ascertain how damaging this technology could be. But leveraging my experience researching bad guys, I can say that if it actually works, it's just a matter of time before they start using it. Think about ransomware, to borrow your example: the first ransomware, the first lockers, appeared around 2013, back when Bitcoin wasn't yet popular and payments were made in gift cards.
But nonetheless, within three years we saw the first evolution into ransomware as a service. That was the point when the first gang came onto the dark web market and said, "Look, now you don't have to buy the software. You don't have to spend $1,000 on it. We're going to give it to you for free, but then you've got to share the profits with us." The next step was a massive level of infections of ordinary people. And within the security community, we were saying that this was just the beginning, that we were going to see attacks on companies, because the bad guys would learn fairly quickly that instead of trying to collect 500 dollars from 1,000 victims, they could just infect one company and collect half a million dollars.
And within a year, we saw a massive level of attacks on businesses. The next step was the extortion element, where the bad guys would not only infect the victims but first steal their information, and then attempt to extort money by threatening to release the data on the dark web or elsewhere, or by going public and announcing that they have the company's information and will release it, incriminating data, whatever it may be, unless the company pays. So I think it's just a matter of time before the bad guys realize there's a way for them to make money using deepfake technology. And I don't necessarily mean hackers; I mean pretty much any type of adversary. It could be a nation-state adversary. We could envision North Korea publishing a video incriminating the U.S. government, or maybe Iran, or elsewhere.
Dave Bittner:
For the organizations that have found themselves having to deal with this, those financial organizations you described earlier, how are they reacting? What measures are available to them to push back against this sort of thing?
Andrei Barysevich:
Some of them have deployed software that supposedly has the capability of detecting deepfakes, although based on our research, the best software out there offers only about a 60 to 65 percent detection rate. So there's still a long way to go before they'll be able to detect deepfakes with close to 100 percent accuracy, and I think for a while it's going to be a cat and mouse game. Honestly, in my opinion, voice recognition currently offers better protection than deepfake video detection does. And if you combine voice detection with video detection, then I think you get the best of both worlds.
So I think a lot of companies will start leveraging voice recognition more in user authentication. I actually experienced it myself recently with one of my bank accounts. When I called the bank, they just asked me one or two questions that had nothing to do with my PII. They didn't ask for my Social Security number or my date of birth, nothing like that. And yet they were able to verify with 100 percent accuracy that it was me calling, and I was able to conduct my business with them.
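Andrei's "best of both worlds" point can be made concrete with back-of-the-envelope math: if the two checks fail independently, a fake has to fool both of them to get through. The 65 percent video figure comes from his estimate above; the 80 percent voice figure is purely an assumed placeholder.

```python
# Back-of-the-envelope fusion of two independent detectors.
p_video = 0.65  # video deepfake detector catch rate (from the discussion)
p_voice = 0.80  # voice-clone detector catch rate (assumed placeholder)

# Under independence, a fake slips through only when it fools both checks,
# so the combined catch rate is 1 minus the product of the miss rates.
p_combined = 1 - (1 - p_video) * (1 - p_voice)
print(f"combined detection rate: {p_combined:.0%}")  # 93%
```

Even two mediocre detectors combine into a much stronger one, which is why layering modalities tends to beat refining any single check.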
Dave Bittner:
Did they let you know that's what they were doing? Or did it raise your suspicions, and you said, "Wait a minute. You haven't asked me anything. How do you know it's me?"
Andrei Barysevich:
No, actually, I remember several months back when I was talking to them, they asked if I would opt in to using my voice as another method of authentication. And I said yes, because I was really curious to see how it would work. In my opinion, it worked quite well. So I would say, once you combine both technologies, you have a pretty robust and secure system.
Dave Bittner:
It's interesting, just to speak for myself, that sort of thing makes some of us who have hundreds, if not thousands, of hours of extremely high quality recordings of our voices out there just a little bit nervous.
Andrei Barysevich:
Yes. That's the downside of being a public person.
Dave Bittner:
Right. Absolutely. Well, where do you suppose this is going? Certainly it seems like it's going to be cat and mouse for a little while here. Are we inevitably going to have to find other methods of authentication?
Andrei Barysevich:
I think biometrics has made huge advancements, and some combination of different technologies will win the race. But all in all, I think biometrics, especially when you combine it with smartphone technology, because we keep our phones with us all the time. Just yesterday I was at a grocery store and realized I had left my wallet at home, but within seconds I was at the register paying with my phone using Apple Pay. So we'll have our phones with us pretty much 24/7, at least I do. I think biometrics will be the gold standard; that's where we're heading. Voice recognition, biometrics, and video validation, all three technologies will morph into some form of unified product, or capability, that we'll be using for validation in the future. I know for sure that we won't be using passwords.
Dave Bittner:
Our thanks to Andrei Barysevich, from Gemini Advisory for joining us.
Don't forget to sign up for the Recorded Future Cyber Daily email, where every day you'll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.
We hope you've enjoyed the show and that you'll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast production team includes Coordinating Producer Caitlin Mattingly. The show is produced by the CyberWire, with Executive Editor Peter Kilpe, and I'm Dave Bittner.
Thanks for listening.