Good evening, everyone. I'm glad there are still a few people who are either still awake or not drinking yet, so at least someone is here to listen to me. We're entering a new era where the line between reality and fiction is becoming increasingly blurred. Deepfakes are just the beginning of this new revolution. This is from a great book, if you haven't read it. I don't know if you can see it back there, but it's Deepfakes: The Coming Infocalypse, which I feel is exactly where we're going: we can no longer trust anything that we see. Just because you hear someone's voice or see someone's face, it doesn't mean anything. And that's what we're going to talk about a little bit tonight. If my clicker will work. There we go. Decoding Deepfakes: AI's Dual Role in Digital Deception and Detection. Before I get into it, a little bit about me. Kyle Hinterberg. I'm a QSA, CISSP, CISA, and AWS SCS, a senior manager at LBMC, and also the president-elect of the ISACA Madison chapter. If you want to join ISACA, you really should. Here in Milwaukee, there's an awesome chapter. Over in Madison, we have an awesome chapter, so if you're not part of either of them, you really should be. If you've never heard of LBMC, we're celebrating our 40th anniversary right now; we're a big tax, audit, and advisory firm. I specifically work in cybersecurity. We do just about anything under the sun related to cybersecurity. I myself am a QSA, so I work in PCI. I know everyone loves PCI if you get to deal with it. That's the reason I'm talking about deepfakes, because not everyone loves PCI all the time. So tonight we're not going to dig into PCI; we're going to be talking about deepfakes. But before we get into that, a little bit of the fun side of Kyle. I love my dogs, and I've got a lot of dogs. That's Ace, Oakley, and Apollo, the main pack.
I love that picture. I don't know how it worked out that I got a picture of them all lined up like that, but I could never do it again if I wanted to. This is the newest member of the pack, Artemis, but I have some more because she just had puppies. So right now I've got 12 dogs in my house, and my wife is wondering when I'm getting back home because she doesn't want to take care of them anymore. And that's the one I'm going to keep. That's Strider, so pretty soon I'll have five dogs, and this is the whole start of my plan, so you can understand how I think a little bit. The white dog up there in the big picture is my favorite dog ever, so I got the chocolate lab to have puppies with him, and now I've got Strider. In eight years I'll have him have puppies and continue the line and the heritage, so when I'm a 90-year-old man, I'll be able to have Apollo's great-great-great-grandson. That's the whole plan here. So over the course of the next 60 years, if you ever need a dog, give me your number; I'll be having cycles of dogs. If you need a lab, let me know. Let's get into some deepfakes. So what is a deepfake? There are a lot of different definitions. To make sure we have a solid definition of exactly what it is, I figured the best thing to do was to ask the experts. So I went to ChatGPT. Its definition of a deepfake: synthetic media created using artificial intelligence; manipulated audio, video, et cetera. That's what a deepfake is. And that turns into a number of different types of deepfakes. I think we all just think about images. That's the big one. Obviously here you've got the Pope in his puffy jacket. This came out two or three years ago. People thought the Pope got fashion. It was just a deepfake. There are also audio deepfakes. You can pretend to be whoever you want. You can steal their voice.
With literally 30 seconds of someone's voice, you can copy their inflection and their tone and sound just like them. There are text deepfakes. You can, again, use ChatGPT: say, hey, write me this article, write me this blog in the style of whoever, like Snoop Dogg. Write it like the president. Whoever you want, and you can deepfake that person's personality. There are video deepfakes. And this one for me is probably the most memorable deepfake I ever saw. It came out, I think, five years ago now. When it starts, it has President Obama talking, and he's saying things that did not sound very president-like. And then it turns out it's Jordan Peele deepfaking himself into President Obama. It was the first time, at least for me, that I ever saw a deepfake where I honestly thought it was real and didn't realize it was a deepfake. And again, this was years ago, and he had a little more production behind him than your average individual, being a producer and an actor and a director and all of that. But now anyone can use the same technology that he had then. And that gives you the scariest thing now: live deepfakes. With software you can download off the internet for free, you can very easily do a live deepfake. If no one has had the chance yet, over in the villages I'm running a deepfake village, and you can come walk by. They have some deepfake technology set up. As you walk past, you can be whoever you want to be. Right now I think it's usually running Nic Cage. I've found that's the best; it tends to go onto people's faces the best. But you can be Nic Cage, you can be Keanu, you can be whoever you want to be with just a click of a button. One thing, too, on the technical side of a deepfake, and specifically for image deepfakes and how we're able to get images that look so realistic: the main technology, if you've ever heard of it, is a GAN, or Generative Adversarial Network. Essentially, you pit two AIs against each other.
This was really the breakthrough a number of years ago that gave us the technology that allows for such convincing deepfakes and such good image creation. You have one AI, the generator, that's trying to create a fake image of an individual, and another AI, the discriminator, that judges it. The generator generates a picture, the discriminator decides, yeah, that's not good, and the generator tries again, and again, a thousand times, a million times, until it produces a picture that the discriminator believes to be real. That's how we've been able to advance and get such good AI-generated images: by essentially having those AIs fight with each other. And to show you how good these are, we can have a little game here. Who here thinks this is a real person? Is this a real person, or is this a deepfake? Anyone think it's real? Okay, what about this one? Real or deepfake? Raise your hand if you think it's real. Is that a real guy? Okay, we've got a few people thinking it's real. What about this one? Raise your hand if you think he's fake. Okay, a few people think he's fake. And this dude, is he real or is he a deepfake? Okay, all of them are deepfakes. Every single one came from thispersondoesnotexist.com. It's one of the most fun websites you can go to. Every time you refresh the page, you'll get an entirely new person to look at. You can refresh it a thousand times and see a thousand faces; they're all different, and they're all fake. And all of them look just as good as anyone you're going to find on LinkedIn, so you can never trust that anyone is real at all. We now have this technology, specifically using GANs, where you can't tell if it's real or fake; there's no absolute way to know unless you know these individuals.
To have a little more fun, we can see how deepfake video has evolved. Back in 2023, for some reason, an AI video of Will Smith eating spaghetti went very viral. This is that video, of an AI making Will Smith eat spaghetti. As you can see, it is Will Smith eating spaghetti, but it is not very good. The next year, in 2024, a new video came out. This one looks really good. That's because it's actually Will Smith eating spaghetti; he heard about the video, so he decided to do it for real. This is the AI version of Will Smith eating spaghetti in 2024. Obviously quite a bit more advanced than 2023. It maybe doesn't look the best, and it has him in a bathtub, but it looks fairly realistic. You could put that in a movie, and for a shot or two you wouldn't notice anything. Maybe you would on that one. But then you get to 2025, and this is where we're at. This is currently AI Will Smith eating spaghetti. Maybe it doesn't look perfectly like Will Smith, and I feel like the face isn't always the best at some angles, but it does look like a real person. If you just saw this, I believe most of us would think, that's a dude who looks like Will Smith, and he's eating spaghetti. Even the liquid, the water, the hands: he actually has the right number of fingers. There's no way to tell it's fake, or at least not many ways. So now that we've seen a little bit about deepfakes, let's do a little Deepfakes 101 and understand how these are made, what's going on behind the scenes. First of all, I'll say don't try this at home, but that's just so I don't have any liability. If you do try this at home, and you should, understand how these things work and go play with it. I think it is fun; I'm just not endorsing it.
You can obviously go to most app stores and get face-swap apps, and that's kind of what this is doing, but it's not as good or as powerful as what you can do on your home computer with a little bit of free technology. Right now the most popular, I would say, is DeepFaceLab. If you've looked at deepfake videos on YouTube, there's one creator out there who will swap Arnold Schwarzenegger into just about any movie. He hasn't announced it, but I'm pretty sure he's using DeepFaceLab. It's one of the most powerful tools out there, and it's also entirely free. One thing, as you can guess, is that it's usually not used for positive purposes. Other than making humorous videos on YouTube, there's not a lot you should be doing with this that's legal or ethical. As a result, it's not always downloadable, but if you look long enough, you can find it and still download it. Essentially, when you download it, you get a whole bunch of batch files, and then it's as simple as walking down the numbers. You have batch file one, batch file two, batch file three; you run the first one, you run the second one, and each one does the next step of the process. You start out and give it a data source and a data destination. In this case, I'm taking me as the destination and one of my coworkers as the source. And obviously we don't look very similar, but I'm going to deepfake us onto each other. Then you start running the software, and it goes through that entire video, however long it is, and chops it up into frames. Once you have the video chopped up into individual frames, it starts grabbing all of the faces.
And if you want to do this well, you have to go through and essentially map out the faces. If you look at really good deepfakes, they have done this: they've gone through numerous individual frames and mapped out the face so that the deepfake program knows exactly where the face is. This is especially important if you're trying to capture a beard or hair or glasses; all of those things tend to make it harder for the program to find the face on its own. Then it starts matching. In this case, it's taking that video of me and the video of Brian, it has all of those faces from every single frame, and now it goes through and starts saying, hey, this frame looks like this frame. It adds them up together to build the model, so that whenever my face is turned a little to the left with my mouth open, it has an image of Brian with his face turned to the left with his mouth open. And then you get something like this. This is obviously a very poor-quality version, but it shows you what it's capable of. This is literally a half hour's worth of work. If you take more time, you can make Schwarzenegger videos that look quite a bit better. And you can also do this live. This is DeepFace Live. Again, over in the villages I'm using Deep-Live-Cam; that's another option, another version of this that you can see. It's a little easier, less involved to set up than the DeepFace Live here. But you can see that with just a click of a button, I can be Brian right there. And I can swap, and maybe I want to be a billionaire. Or I can swap, and maybe I want to be a president. Some of them, you can obviously tell, fit my face a little better and look a little more realistic. And those are all ones that are based off of a single image. That's all that's being used to create them: just a single image. But you can also use bigger models.
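The frame-matching step described above, where every destination frame gets paired with a source face in a similar pose, can be sketched roughly like this. This is a stand-in for the idea, not DeepFaceLab's actual code: faces are reduced to a made-up (yaw, mouth_open) pose tuple, and "matching" is just nearest-neighbor over those tuples, where real tools use dense facial landmarks.

```python
# Toy sketch of the face-matching step: pair each destination frame with
# the source face whose pose is most similar. Pose is a made-up
# (yaw_degrees, mouth_open) tuple for illustration only.

def pose_distance(a, b):
    """Similarity between two poses: small = same head angle and expression."""
    return abs(a[0] - b[0]) + 10 * abs(a[1] - b[1])


def match_faces(dest_frames, source_faces):
    """For every destination frame, pick the closest-pose source face."""
    pairs = []
    for frame_id, dest_pose in dest_frames:
        best = min(source_faces, key=lambda s: pose_distance(dest_pose, s[1]))
        pairs.append((frame_id, best[0]))
    return pairs


# Destination video: my face, frame by frame (frame id, (yaw, mouth_open)).
dest = [(0, (-20, 0)), (1, (0, 1)), (2, (15, 0))]
# Source footage: Brian's face in various poses (face id, (yaw, mouth_open)).
source = [("brian_left", (-18, 0)), ("brian_talk", (2, 1)), ("brian_right", (14, 0))]

print(match_faces(dest, source))
# frame 0 (head turned left, mouth closed) pairs with Brian's left-turned face
```

The point of the sketch is the data flow: once every frame of both videos is labeled with a pose, swapping faces is a lookup problem, which is why more source footage in more poses makes a better deepfake.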
So now I'm switching over here. I'm going to grab one of the models. I want to be someone a little more cool, so I'll choose Keanu Reeves, because I don't know if you can get much more cool than Keanu. Give it a chance. There we go. I'm looking pretty good there. But again, it doesn't entirely match up with my face, and my glasses are messing it up a little bit. So instead of Keanu, I should probably choose a face that fits me a little better. There we go. I think that's the face for me right there. As you can see, it's literally just a click. And then you say, well, that's only the video side; you want to do the audio side too. Well, you can. One of the really cool tools outside of just deepfakes is Pinokio. If you've never heard of or seen Pinokio, you should write that down and go download it. It's essentially a search engine and installer for AI tools. The individual and organization that runs it makes it a really easy way for you to identify and install different types of AI tools. In this case, I used it to install RVC, a retrieval-based voice conversion tool. You can grab someone's voice, load it in, just 30 seconds to a couple of minutes of audio, and in a little bit of time you can build a model. And then here, you can listen. "Hello. I am most obviously Brian Willis and not Kyle Hinterberg, and I just wanted to tell you that I think Kyle is really pretty awesome, and I really hope that you're enjoying his presentation." I don't know if I can turn that up. If you heard that, that was me pretending to be Brian telling you how awesome I am. I think that might be as loud as I get. But anyway, it's as simple as a five-minute exercise to steal someone's voice. If you just have a couple of minutes of their audio, from anyone who's ever spoken on YouTube or called you on the phone, you can steal that audio, and using free software, you can go and copy it.
Again, over in the villages, I have some instructions. There's a website you can go to right now, fakeyou.com. It's a free site, and you can do voice conversion to whoever you want to be, or create your own voice model: copy their voice or have them copy your voice, either way. So now we've talked through how awesome deepfakes are, and it's honestly the coolest, most fun thing in the world, right? You can do a whole lot of fun stuff just like this. But then we have the opposite side. And this is really the bad side of deepfakes: even though they can be a lot of fun, when you really think about it, most of the things that happen with deepfakes are negative. Most of the things are bad. This is one news story about a teen who died by suicide because of deepfakes that were created of her. And she's not the only one; this isn't just one individual that this happened to. There have been many cases of this happening. And it goes beyond that. There are politicians. As you know, we just went through a very contentious political race, and there were a lot of deepfakes involved. You can clone voices. One that I think was really egregious: someone took MLK Jr.'s voice and used it to endorse a candidate for political office, which just seems beyond unethical. Using it for blackmail, pretending to make people say things they didn't say. There have been cases of people making individuals appear to do things they didn't do. People pretending to be doctors or other individuals of renown, saying, hey, you should do this, when they're just deepfaking them. And one of the big things for people here, trying to protect organizations and companies, is all the people who've been scammed out of millions.
I'm sure you've all heard the stories of individuals deepfaking people and saying, hey, I'm the CFO, here's my picture, you can see me, you can talk to me, now authorize my $20 million transaction. And because you can see me and hear me, you think it really is me, but it's not. This is a slide that I really like. It's from last year, and it shows the explosive growth of AI-powered fraud from 2022 to 2023. If you look at the numbers on there, you'll see, over in Japan, it's 2,800. That's a 2,800% increase. So that's not 2,800 more instances; it's 2,800% more. Even here in America, a 3,000% increase from 2022 to 2023. And you can only assume this is continuing to explode, continuing to grow from 2023 to 2024 to 2025. This isn't going to stop. Before we get too negative and talk about everything horrible, I do want to call out that there are some potentially beneficial uses for deepfakes. One of the easy ones, which I noticed with at least some of my coworkers during the COVID times, is that you can have Zoom or Teams or whatever tool you're using put makeup on your face. There are some individuals in this world that we as a people have decided are supposed to be wearing makeup if they're going to be on a business call, and that's a pain in the butt for them. So if they want to, they can just turn on a filter, and even though that's just a simple filter, it's essentially deepfaking their face and adding that makeup. Something simple, but it is a beneficial use of the technology. One thing I don't have currently, but think would be cool: NVIDIA has made technology that makes it always look like you're looking at the camera. So you can be on a call, scrolling on your phone, doing whatever you want, and everyone thinks you are completely plugged into that call. I think that should just be automatic going forward. Everyone needs that.
Something cool, and kind of local for me and my company, since LBMC is based out of Nashville, is Randy Travis. I'm assuming most people know him; he's a country star, a country singer. A number of years ago, he had a massive stroke, and he lost the ability to sing. He's still cognitively there and still wants to sing and create music, but he no longer has the physical capability to do so. So now, he has written songs, they have taken another singer, had that singer sing his song, and then run it through a voice conversion algorithm trained on Randy Travis's voice, and Randy Travis has released new music on the radio. He is now back as a professional, practicing artist, using AI to make that possible. One of the most important positive uses for deepfakes I've seen is for journalists. In this case, there was a group of journalists in Venezuela who were afraid of being persecuted by their government. They don't want people to know what they look like or what they sound like, but they still want to be able to connect with their audience. They want to have faces and voices that their audience can see, and not just be letters on a screen. So they have created deepfakes of themselves, and they do their journalism and their reports through their deepfake personas, so people can still connect with them at that human level, but no one knows what they look like. So now that we understand that, we can talk through some of the protections in place to protect us. This isn't a complete list of every single law out there on the books, but it's a decent look at what laws we currently have. This is what we have going on at the federal level in the United States. As you can see, most of these are just proposed or pending.
One of the issues, which you can say is either good or bad for America, is the fact that we have the right to freedom of speech. Not all other countries understand that the same way we do. And because of that, it's really hard to regulate deepfakes, because most things that you create, you can create under the right of freedom of speech. So even if other people don't like what you're doing, you can still say, freedom of speech, I can create this, which is why some of these laws have had a hard time making it through. I have to call out my favorite one, though: the DEEPFAKES Act, because whoever is naming these really needs to get paid for figuring out how they make these acronyms. It's the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act. There has to be someone in the government whose whole job is just trying to figure out how to make these names work. Here's a quick look at some of the state laws. You can see there is one in Wisconsin: we have Act 123, which mandates the disclosure of AI content in political ads. A lot of this legislation is related to politics and political ads, which is good, but I feel like some of the deepfake pornography laws are more important, and you do see there are a number of those as well. One other interesting one, to call out Tennessee again, is the ELVIS Act. That's another one where they obviously made sure they knew how they were writing that name, but it protects the images and likenesses of, essentially, stars and singers. Obviously Nashville has an important number of individuals who want to make sure their songs and likenesses are protected. This is a look that WIRED did last year, showing deepfake pornography laws by state. All of the bright blue states have passed laws, so those do have some law on the books that protects individuals from deepfake pornography.
The purple have proposed and failed, the light blue have proposed, and the gray haven't done anything yet. So as you can see, right now we're still in a big mix. In some states you can't, in some states you can, and even in the states where it's against the law, we still have issues because of freedom of speech. That's one place where I think Korea has possibly the hardest law. If you look here, this is a look at some of the international laws, and South Korea has decided they really do not want this to happen. If you make any type of deepfake pornography, that's instantly something you can get jail time for. We do have something a little bit like that here in America, which I find interesting, when it comes to child pornography: it's not that it's more illegal, in a sense, to create child pornography using deepfakes; it's that the law currently sees it exactly the same as the real thing. So whether you're making it for real or through a deepfake, the law sees it the same, and you get punished the same way. So obviously we do have some protection from the government, but when it comes to technology, the government tends to lag behind, and a lot of the time we have to protect ourselves. There are a lot of tools out there that you can use to protect yourself or try to identify deepfakes; I'm just not sure any of them are where they need to be yet. What those tools look at is essentially the same set of signals you would use to identify a deepfake yourself, but they can analyze them at a level that we can't necessarily see or understand just by looking at an image. Inconsistencies in facial artifacts: maybe they see pixels that are jumping out of line, something you might not be able to pick up. Audio-visual mismatch: seeing that the audio isn't matching the face, though a lot of the time that could just be a faulty Zoom connection causing the issue.
Pixel-level analysis, motion analysis, metadata and hash analysis. That last one could work, but it's really reliant on the software application, be it Zoom, be it Teams, whatever, to be able to confirm it. If you're looking at a video, that's a case where at least the video could have a hash, and you could confirm that hash to make sure you really are looking at what you assume you're looking at and it's not a deepfake. Eye movement analysis: making sure the eyes are both pointed the same way, since deepfakes sometimes make the eyes look in two different directions. Shading and lighting: one interesting thing some companies are doing from a shading and lighting perspective, which tends to be pretty accurate at determining whether you're looking at a deepfake, is changing the color of the screen on the monitor that the other individual is looking at. So you're on a Teams call with someone, and the software essentially makes their screen slightly more red. Not enough that they would necessarily notice, but enough that it should reflect a little bit of red light off of their face. Their camera should then pick up that red light, and the program can say, yeah, we made red light, it bounced off their face, they're really there. Or: we made red light, we didn't pick it up, this is obviously a deepfake. So that's one of the technologies that can help identify whether there is a deepfake. Temporal artifacts: again, things jumping around. Biometric features: this is another one where most cameras, even in laptops, have the ability to see blood pumping in your face. Little things like that. You might not be able to pick it up yourself, but a camera, even an iPhone camera, has enough ability to pick that up and see that, yes, this is an actual living individual. But even with that, one of the problems is escalation.
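The screen-tint liveness check described above can be sketched as a simple before-and-after color comparison. Everything here is simulated: the "frames" are just lists of (r, g, b) pixel tuples, and a real implementation would have to control the remote participant's screen and read their webcam. But the logic is the same: emit a tint, then check whether the face actually reflects it.

```python
# Toy sketch of the red-tint liveness check: during the challenge window
# the remote screen is tinted red, so a live face should reflect some of
# that light and the average red channel of the webcam frames should rise.

def avg_red(frame):
    """Average red channel over a frame of (r, g, b) pixels."""
    return sum(p[0] for p in frame) / len(frame)


def passes_liveness(baseline_frames, challenge_frames, min_shift=2.0):
    """Compare average red before vs. during the tint challenge."""
    before = sum(avg_red(f) for f in baseline_frames) / len(baseline_frames)
    during = sum(avg_red(f) for f in challenge_frames) / len(challenge_frames)
    return (during - before) >= min_shift


# Simulated live subject: the red channel rises while the screen is tinted.
live_before = [[(100, 90, 80)] * 4]
live_during = [[(106, 90, 80)] * 4]

# Simulated deepfake: the synthesized face ignores the tint entirely.
fake_before = [[(100, 90, 80)] * 4]
fake_during = [[(100, 90, 80)] * 4]

print(passes_liveness(live_before, live_during))  # True
print(passes_liveness(fake_before, fake_during))  # False
```

In practice the threshold, the tint strength, and compensation for ambient light changes are the hard parts; this sketch only shows why the trick works at all.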
As soon as that's what we're all relying on, they'll just make the deepfakes fake the pulses too. One of the coolest things that came out last year was eye analysis, and this was a bit of happenstance: we use technology to identify galaxies in space and determine how far away they are based on the way light from stars and other bodies bounces off of them. They're now using that same technology to look at people's eyes and see how the light is bouncing off of them. Most of the time, as you can see laid out in the image on the right, even though those eyes look like they're both looking at you, if you actually track the way the light is bouncing off of them, there would have to be multiple light sources to create those reflections, which implies that it's a deepfake. I do not believe we are where we need to be, though, when it comes to these identification methods. I feel like we're back in the 90s when it comes to phishing. We've now advanced over 20 years in the world of phishing to where we finally, for the most part, are protected a decent amount. Obviously, phishing is still a big issue, but we're a lot better off. We have technologies that can stop and save your customers or your employees from being phished. We're in that same 90s place when it comes to deepfakes, where maybe in 10 or 20 years we'll have technologies that really give us some level of protection. But right now, the big issues are, first, availability. Does anyone here, to their knowledge, have any software tools at their company that help them identify deepfake videos or audio? We've got one hand. So out of this whole group, there's maybe one company that is doing something to detect it. That's one of the big issues.
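The reflection check described above can be sketched as a consistency test on the specular highlights in the two eyes: in a real photo, both corneas reflect the same light source, so the highlight should sit at roughly the same position relative to each eye. The coordinates below are invented for illustration; the actual research compares full reflection patterns using astronomy-derived metrics, not single points.

```python
# Toy sketch of the eye-reflection consistency check. Coordinates are
# made-up pixel positions: (eye center) and (specular highlight).

def highlight_offset(eye_center, highlight):
    """Position of the specular highlight relative to the eye's center."""
    return (highlight[0] - eye_center[0], highlight[1] - eye_center[1])


def eyes_consistent(left_eye, left_hl, right_eye, right_hl, tol=3.0):
    """One light source should put the highlight in the same spot in both eyes."""
    lx, ly = highlight_offset(left_eye, left_hl)
    rx, ry = highlight_offset(right_eye, right_hl)
    return abs(lx - rx) + abs(ly - ry) <= tol


# Real photo: both highlights sit up-and-left of each pupil.
print(eyes_consistent((40, 50), (38, 47), (80, 50), (78, 47)))  # True

# Deepfake: the generator rendered each eye independently, so the
# implied light sources disagree.
print(eyes_consistent((40, 50), (38, 47), (80, 50), (84, 53)))  # False
```

This is exactly the physical inconsistency the talk describes: inconsistent reflections imply multiple light sources that weren't actually in the room.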
Unless you're maybe working in the government, or doing something where you have to do know-your-customer-type controls, you probably aren't doing this. The other thing is false positives. Something that I think is going to be an issue, the more I've dug into this, is that we're coming to an era where people are really choosing how they want to be seen, how they want to be portrayed. And that might be with cat ears. As you can tell, a lot of folks around this conference are wearing cat ears. If an individual has decided they want a filter that gives them cat ears, what right do we have to say, you can't wear cat ears, if that's who they are and how they want to portray themselves? But at that point, it's a deepfake, and it would trigger any tool that's trying to identify deepfakes. Some of this has caused me to think that we're probably just chasing down the wrong path, and that we can't rely on an individual's likeness as their primary identifier. Just because I see you doesn't mean you are you. I have to have some other way to confirm who you are. To go off on a little bit of a tangent here: back in 1784, there was a gentleman named Joseph Bramah. He developed a lock, the Bramah lock. It might seem impossible, but it was unpickable. Literally unpickable. Anyone who had this lock knew that whatever they locked with it was absolutely secure. It was known to be secure; there was no one who could in any way pick this lock. For 67 years, that lock stayed unpickable. And then it was picked, in 1851. After that, we lost all sense of security from that lock. We are in that same state right now with images. For many years, we have had images we could trust. My face is my face; if you see my face, you know it's me. We have now moved to 1851. We can no longer trust our faces. We have lost that sense of security.
We've lost that sense of trust. We need to find a new way to confirm identity. So, maybe we have some ideas of what we can do. The MIT Media Lab went through and identified their top list of recommendations for identifying deepfakes. Pay attention to the face. It seems pretty freaking obvious, but that was MIT. Look at the cheeks and the forehead. That's also part of the face. Look at the eyes and the eyebrows. Pay attention to glasses. Look at facial hair, or the lack thereof. Look at facial moles; that can be a good one if a mole is jumping left to right, right to left. Look at blinking: make sure the person does actually blink. Look at lip movements. I feel like all of these are pretty obvious, but none of them are necessarily going to help us too much. One thing that I have noticed myself in playing with deepfakes, and something a lot of places that have to do know-your-customer-type controls are recommending, is to have someone wave their hand in front of their face. That, in my opinion, is one of the best tells, because it tends to break the fake. But all of this is undercut by a report that came out called Fooled Twice. In this report, they had a bunch of people look at deepfake images and at real images, and then asked them which was which. After it was all completed, they determined that people cannot reliably detect deepfake images. Raising awareness or giving incentives to better detect deepfakes did not improve detection: when they told people, we'll pay you money if you can tell us which is the deepfake, they got worse. People tend to mistake deepfakes for authentic videos, not vice versa: they tend to think the deepfakes are authentic, rather than thinking authentic videos are deepfakes. So we can't figure any of this out. And they overestimate their ability.
This is where, if I asked for a show of hands, most people would probably say they're a better-than-average driver, but we're probably all at most average. We all think we're better than average at detecting deepfakes too, but we're at most average, probably less. And ultimately, this report identified that people are highly susceptible to deepfake manipulation. The real nail in the coffin for me is that this came out in 2021. Since then, the technology and our ability to create deepfakes have obviously advanced enormously, and we have not gotten, at least as far as I'm concerned, any better at detecting them. So trying to identify and detect a deepfake yourself is a losing game. You can't do it. So my recommendation is that you completely forget all of that. Don't worry about whether it's a deepfake or not, and just go back to social engineering awareness. If you get a call, or a video call, or whatever it might be, out of the blue: Are they trying to create urgency or pressure? Are they making unusual requests? Is there emotional manipulation? Is it unexpected? Is your boss all of a sudden calling you through WhatsApp? Requests for money, trying to make you transfer $20 million. Inconsistencies in communication. Unfamiliar links: don't click the link. Impersonation of authority, someone claiming to be your CEO because they, you know, like to FaceTime you on the weekend. Too-good-to-be-true offers. Unusual behavior. This is really all we have. In my opinion, it kind of sucks. We have all this technology that is able to make such awesome deepfakes, and we do have some technology that can detect them, but I don't think it's worth trusting it, or at least I wouldn't recommend relying on it. It just goes back to the basics. Last year gave one of the best examples, I think. There was a Ferrari executive.
He was supposedly contacted by the CEO of Ferrari, and this was, I believe, actually through WhatsApp. At first this exec was like, hey, why are you contacting me through WhatsApp? And the supposed CEO said, well, we're going to be doing this big deal over in Europe, and I want to make sure it's kind of on the down-low, that no one knows about this. I have to keep it secret and quiet. We can't be talking about this; I don't want it in our email system. And so this went on for a while, and the exec really thought it was real, but eventually he just couldn't determine whether it was real, and this person was asking him to do things, and he thought it was his boss. So then he asked the guy, hey, what was the name of the book I lent you last week? And that was the immediate end of the conversation. And so that's where we're at. If you are talking to someone, it's to the point where maybe you have a secret word that you agree on with your family, and maybe even your team at work, and say, hey, this is the secret word. We don't ever share it through digital means, only in person. If there's ever a time that I call you and you don't know if it's me, I should be able to provide you with this secret word. Or maybe it's something more like this: hey, what did we have for breakfast yesterday? When did I pick you up? What's my car like? Something that only they would know, something that only you would know, and not relying on something like a voice or a video, because you can't trust that. The truth of the matter is, we live in this unreality of deepfakes. And we're not going to get back. It's only going to get better. It's only going to get worse. You know, again, come check out the ward and see how easy it is to make it look like Nick Cage. And right now that's just running on my personal computer. Nothing high tech, nothing high end.
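The secret-word idea can be taken one step further so the word itself is never spoken over a channel that might be recorded: a standard challenge-response over a shared secret. This is only a sketch of the general technique, assuming both sides can run the computation (in practice an authenticator app plays this role); the function names and the 8-character response length are illustrative:

```python
import hashlib
import hmac
import secrets

# The shared secret is agreed in person and never sent over any
# digital channel -- in the talk's terms, the "secret word" you
# only ever exchange face to face.


def make_challenge():
    """Step 1: the person receiving the suspicious call generates a
    random, single-use challenge and reads it out loud."""
    return secrets.token_hex(8)


def respond(shared_secret, challenge):
    """Step 2: the (alleged) caller computes an HMAC of the challenge
    with the shared secret and reads back a short response."""
    return hmac.new(shared_secret.encode(), challenge.encode(),
                    hashlib.sha256).hexdigest()[:8]


def verify(shared_secret, challenge, response):
    """Step 3: constant-time comparison. A deepfaked voice without
    the secret cannot produce the right response."""
    expected = respond(shared_secret, challenge)
    return hmac.compare_digest(expected, response)
```

Because each challenge is random and single-use, an attacker who records one call can't replay the response on the next one, which a static secret word can't guarantee once it's been spoken on a compromised line.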
And in another year it's going to get better, or it's going to get worse, depending on the way you look at it. So that's where we're at. I don't know where I'm at for time. I've got some time for questions, if anyone has any. If not...

The question was whether the government is looking into this from an InfoSec and intelligence perspective. The most I know is that a lot of government organizations do leverage some level of technology to identify deepfakes, but at the same time it's not working. Just last year there was a case where an individual contacted a senator, I believe it was a senator, either a senator or a representative, and was asking about our war plans or tactical strategy in Ukraine. The questions got very specific, and the senator or representative, whichever it was, kept answering them, because he assumed he was talking to an individual from Ukraine, until they got to be very precise and strangely specific. At that point, when the questions got weird, they ended the call, and it ultimately turned out to be a Russian hacker who was deepfaking an individual from Ukraine and had contacted an American senator or representative, and we didn't catch it. It wasn't detected as a deepfake or an issue until it had gone too far and was asking pointed questions that that individual likely shouldn't have been asking. So yes, they are doing some things; there are some know-your-customer controls and protocols in place to try to identify these, but it gets back to that availability side: it's not being rolled out everywhere.

I believe they were just constructing fakes, using something like thispersondoesnotexist.com, and creating entirely AI-generated personas. Exactly.
I'm totally of the same mind: I think we're moving to a point where in another five years the assumption is going to be that whoever you're talking to is either fake or in some way manipulated, be it makeup, be it cat ears, be it whatever. You can't assume there is or isn't technology altering an individual's likeness, so you're better off just relying on the social engineering side of it. If they're trying to get you to do something stupid, don't do it. Rely on your gut more than just what your eyes are seeing.

Probably? We haven't quite made it, I don't think, to the world of masks where you can entirely steal someone's appearance, but I don't know. I think face-to-face is probably safe.

Yes and no. Technically you do own your likeness, to a degree. If you are a famous person, you have more ownership over it, because that's always the fair way to do things. As just an individual, you don't have as much as you would like, and the big thing is going to be whether you can prove that whatever they're doing is in some way illegal. That's where a lot of the deepfake pornography laws are coming from. But if they are just using your likeness to do something they can claim is parody, then you don't necessarily have any ability to stop them. Again, there's already been some precedent where people are having different political figures say things, and the opposing side is claiming it's freedom of speech to make manipulations as long as they're calling it parody or labeling it as an AI deepfake. But we all know that a little snippet might get taken out of context, and then people believe it's real.

If there are no other questions, thank you all. Come check me out at the war... Yeah, I would agree, but it depends on the specific uses.
So if I just steal your face and make you say something stupid, and I say that I'm doing it as, you know, fun parody, then I can get by under American law, and that's not necessarily illegal, depending on who has the better lawyer. And again, if you were a movie star and your likeness is your job, then you have more ability to fight that. Us common folk don't necessarily have as much ability there, which, again, is not to say that's right, but that's the point we're at right now, until we get better laws around this. This is where the law has to catch up a little with your likeness being your identity.

It does. It makes it so I'm absolutely protected: no one can get me, the aliens can't steal my likeness, the FBI can't read my brainwaves.

Yes. Yeah, they can do all of that. And, you know, potentially parts of that are still legal. Even spoofing your voice is a very hard thing to pin down, to even prove that someone is or isn't doing it. An easy example: most comedians obviously pretend to be other individuals, and they can do it pretty well. That's not illegal; that all just falls under parody. And unless they're doing it in some malicious way, it's hard to even claim it's a bad thing. A lot of times, even if they're stealing your voice and doing, you know, whatever, it's not generally illegal, or at least not easily proven to be illegal, until they've actually committed the illegal act of stealing money or whatever it might be. Thank you all very much. Have a good night.