[00:55.010 --> 00:56.590] Hey everybody. [00:56.610 --> 00:58.570] We are going to go ahead and get started. [00:58.570 --> 01:02.390] Welcome to the AI and Cybersecurity Executive Panel. [01:02.430 --> 01:06.750] I'm going to take a minute to tell you a little bit about the group that helped us pull this together. [01:06.950 --> 01:11.010] The Chief Architect Network is a group I started maybe a couple years back. [01:11.010 --> 01:13.110] It's a 501(c)(3) non-profit. [01:13.110 --> 01:16.250] It has about 450 chief architect executives. [01:16.250 --> 01:19.470] I like to say from New Zealand to Hawaii the long way. [01:19.690 --> 01:31.490] And we really like to partner with events like this as a place to invest in sharing knowledge, collaboration, and also talent. [01:31.530 --> 01:36.670] So this group is not the extent of what we have in mind for CypherCon. [01:36.670 --> 01:39.870] Michael is a member of this group, or you might know him as Monster. [01:39.970 --> 01:49.170] And he co-leads the security track of this group with Sandesh and another gentleman named Rameshwar, who's in Texas. [01:49.290 --> 01:52.610] And that is kind of how these two things came together. [01:52.610 --> 01:58.410] There are about 100-some people in the cybersecurity network of the Chief Architect Network. [01:58.490 --> 02:04.170] What we want to do is use that group... 10 folks from that network have joined us today. [02:04.170 --> 02:06.890] Thank you for being here to create this. [02:06.890 --> 02:09.610] And we're going to try to add some critical mass. [02:09.710 --> 02:16.030] So I'm very excited to have today's conversation with Tina and John, who aren't members of the Chief Architect Network. [02:16.030 --> 02:18.250] They're executive guests with us today.
[02:18.250 --> 02:28.470] And my hope is that next year at CypherCon we have all kinds of IT executives on the stage with us and we have a lot of other engagement sources with that group. [02:28.470 --> 02:31.930] This is the largest technology conference in all of Wisconsin. [02:31.930 --> 02:34.470] And it's on a really cool topic: cybersecurity. [02:34.710 --> 02:42.390] We've got to have the executives here as well to kind of nurture this community and make sure that it is supported as it continues to grow. [02:42.510 --> 02:43.870] So that's a little bit about the group. [02:43.870 --> 02:46.330] I'm going to have everyone get a chance to introduce themselves. [02:46.570 --> 03:02.610] Today we're going to be talking about creating resilient and secure architectures, leveraging AI to manage cyber risks and to improve security posture, with a focus on current challenges, integrated business-aligned strategies, pragmatic practices, and innovative approaches. [03:02.610 --> 03:08.150] We've got a set of questions, but what's really cool is we have these amazing people with us. [03:08.150 --> 03:09.590] And you're all in the room. [03:09.790 --> 03:14.030] So what you have as a question is far more interesting than what we've come up with. [03:14.030 --> 03:14.770] I promise you. [03:14.770 --> 03:17.110] And we'll probably get our points in one way or the other. [03:17.110 --> 03:25.210] So what I'm going to have the panel do is we're all going to quickly take a few minutes, talk about ourselves and how AI and cybersecurity are relevant for us. [03:25.210 --> 03:26.170] We'll go a little long. [03:26.170 --> 03:30.690] We'll maybe spend two to three, maybe four, even five minutes each on kind of that topic. [03:30.750 --> 03:37.670] And that will hopefully give you folks ideas around what you want to know from this audience, okay, from this group of leaders. [03:37.710 --> 03:40.750] And if we have no questions, we've got them.
[03:40.750 --> 03:42.930] I can jump over to the questions and they're pretty good. [03:42.930 --> 03:45.270] But I think what you have in mind is even better. [03:45.310 --> 03:46.910] So why don't we just start down the line. [03:46.910 --> 03:49.350] John, do you want to go first, grab a mic and say hello? [03:50.150 --> 03:50.990] Hi. [03:50.990 --> 03:51.330] Yeah. [03:51.330 --> 03:52.590] My name is John Poteas. [03:52.590 --> 03:55.030] I run AI ops for FIS. [03:55.390 --> 03:58.370] I had a vendor here ask me, why are you here? [03:58.710 --> 04:03.890] Because I'm not directly in security, but I am your customer. [04:04.350 --> 04:09.330] So I use AI and RPA automation to maintain stability in operations. [04:10.110 --> 04:13.190] And to be honest, sometimes you people mess with me. [04:14.170 --> 04:26.470] So it's very important for us to have a holistic view from both an operations and an AI and a security perspective, because ultimately security for security's sake isn't all that useful. [04:26.650 --> 04:32.910] The first time I talked at CypherCon, I said, the only reason we need security people is because of security people. [04:33.090 --> 04:34.050] That's pretty funny. [04:34.050 --> 04:36.010] So nobody laughed then either. [04:36.010 --> 04:36.810] I did. [04:36.810 --> 04:37.870] I did. [04:39.150 --> 04:42.030] So it's of keen interest to me. [04:42.130 --> 04:46.290] And I look forward to hearing what everyone has to say and whatever questions you have. [04:47.630 --> 04:50.250] Grant, thanks for having us here today. [04:50.250 --> 04:52.210] And thank you to you all. [04:52.210 --> 04:53.110] I'm Tina Chang. [04:53.110 --> 04:54.790] I'm the CEO of SysLogic. [04:54.790 --> 04:56.830] I have a couple of team members here. [04:56.970 --> 04:58.570] Shout out to SysLogic. [04:58.570 --> 05:05.990] I get the privilege to lead a handful of amazing technologists and cybersecurity professionals.
[05:05.990 --> 05:13.810] One of ours, Tyler Grant, is running the 3D printing village downstairs, if you want to sample that. [05:13.810 --> 05:23.810] In addition to SysLogic, I'm also a partner at GhostScale, which is also running the Bluetooth and wireless hacking village downstairs. [05:23.810 --> 05:27.370] And you can make yourself an entire card down there. [05:27.370 --> 05:40.110] And if I'm not working in those two facets, I have an operating partner role at a private equity company in New York, where I am the cyber coach to their portfolio companies. [05:40.210 --> 05:54.410] And in addition to that, I have the privilege of serving on three corporate boards, two of them publicly traded, following SEC rules, where we get to talk about, finally, technology and cyber in the boardroom. [05:54.530 --> 05:58.590] Tina, before you move on, tell us what passions you have for this topic. [05:59.570 --> 06:08.870] Boy, I have a passion for being here; my goal is to bring executives into this topic a bit more at conferences like this. [06:08.870 --> 06:11.770] So that's what originally brought me here today. [06:11.770 --> 06:27.390] But specifically related to cybersecurity and AI, boy, what gets me so excited is that, starting with cybersecurity and now AI, technology is finally discussed at a strategic level in the boardroom. [06:27.510 --> 06:29.630] And it hasn't been. [06:29.630 --> 06:38.830] So the fact that CIOs have had trouble in the past getting a seat at the table, that is now quickly changing, largely because of these two topics. [06:38.830 --> 06:39.810] Thank you. [06:39.810 --> 06:40.550] Phil? [06:40.750 --> 06:41.730] Hey. [06:41.730 --> 06:43.030] My name is Phil Finucane. [06:43.030 --> 06:49.050] I'm the CTO of CareCentrix, a company that provides home healthcare for patients. [06:49.050 --> 06:56.050] We essentially drive down costs for health plans and provide better outcomes for patients in those plans.
[06:56.230 --> 06:57.310] Why am I here? [06:57.310 --> 06:58.470] Well, so let's see, a couple of things. [06:58.470 --> 07:06.950] So first off, the formative job of my career early on was I ran login at Yahoo from 2003 until 2006 or 2007. [07:06.950 --> 07:17.950] And so that was just sort of a trial by fire, learning how to deal with hackers, how to build secure APIs, how to do rate limiting and identify bad guys who were out there. [07:17.950 --> 07:24.270] And it was just a fascinating experience and it sort of got me excited about being in this space throughout my entire career. [07:24.730 --> 07:25.470] I'm blown away. [07:25.470 --> 07:29.910] I actually did a brief stint working production engineering on AI at Meta. [07:30.470 --> 07:34.150] And so I can understand some of the stuff that you're working on. [07:34.150 --> 07:38.410] It is amazing how far the technology has come and all the things that we can do today. [07:38.410 --> 07:44.510] And so the confluence of all those things, I think, is why I'm excited to be here and excited to hear what you guys have to say as much as anything. [07:44.650 --> 07:45.670] That's awesome. [07:48.150 --> 07:50.570] Good afternoon, everyone. [07:50.570 --> 07:52.570] I'm Rahul Trivedi. [07:52.890 --> 08:01.070] I bring over 30 years of experience in technology, product development, engineering and operations. [08:01.730 --> 08:08.510] My earlier career was focused more on the product development and product engineering side. [08:08.510 --> 08:16.730] The last 10 years I have been focused more on operational and performance improvements using RPA and AI. [08:16.830 --> 08:21.330] And that's why this topic is of great interest to me. [08:21.330 --> 08:37.530] I remember back when I did my master's in computer science, my thesis was on knowledge-based system design, which is what AI was known as in the 1980s and '90s.
[08:37.530 --> 08:40.530] So I've been associated with AI since then. [08:40.530 --> 08:44.870] And I think of AI as a superpower. [08:44.870 --> 08:57.670] We used to hear about these zero-day vulnerabilities five, six years back, and at that time very few people were able to exploit zero-day vulnerabilities. [08:57.710 --> 09:15.890] Now, once the vulnerability is out, an average programmer, or not even a programmer, a business analyst, could potentially write code to exploit that zero-day vulnerability, and that is why the role of AI in addressing your cybersecurity is important. [09:16.210 --> 09:16.950] Thank you. [09:16.950 --> 09:21.570] So imagine you have all these leaders in front of you and they're willing to answer any question you might have. [09:21.570 --> 09:23.230] That is the exact moment you're in. [09:23.230 --> 09:25.810] So does anyone want to be that first brave soul, raise their hand? [09:25.810 --> 09:29.370] Otherwise we can go to the backup questions that I've got on the next slide. [09:29.410 --> 09:30.450] What do you think? [09:31.410 --> 09:32.630] Right in the back. [09:43.000 --> 09:46.440] I'm so glad we have Tina here, and I'm going to repeat the question for the recording. [09:46.480 --> 09:49.920] So is AI something that you're hearing in the boardroom? [09:49.920 --> 09:51.000] Tina, go ahead. [09:51.000 --> 09:58.060] And I would love to hear from the rest of you if you've been invited into the boardroom to talk about AI, because that's where it starts. [09:58.060 --> 10:16.560] We've definitely had AI in the boardroom, but where that conversation is starting is: what is AI, and are we missing the boat, aka losing opportunities to either innovate or grow market share, or soon to be obsolete if we're not thinking about AI. [10:16.560 --> 10:27.340] And I've got to be honest with you, I have been telling all of my boards, don't worry, it's not exactly what you think, it's not all the hype that you're hearing, but it's coming.
[10:27.380 --> 10:33.560] And so where that really needs to start with AI in the boardroom is: how do we get our arms around governance? [10:33.560 --> 10:37.080] How do we make sure that we are setting the right policies? [10:37.120 --> 10:57.060] And then how are we introducing these very technical concepts, not unlike cybersecurity, across the C-team, across the leadership team, so that the proper functions like legal and risk and technology can all work together to come up with great governance and great policies that are appropriate for the organization. [10:57.060 --> 10:59.500] If you're legal, you want to shut down everything. [10:59.500 --> 11:03.600] If you're IT, you want to innovate around everything and adopt everything. [11:03.600 --> 11:11.540] And there's probably a really good sweet spot that really needs to draw on multi-function, multi-departmental brainpower today. [11:11.540 --> 11:14.700] But we're definitely hearing it in the boardroom. [11:17.010 --> 11:21.660] So I would say the answer is kind of nuanced. [11:21.780 --> 11:25.220] It depends from organization to organization. [11:25.570 --> 11:33.680] Every organization has a different tolerance for risk, and the pace of innovation is different. [11:33.980 --> 11:54.940] My previous organization, TransUnion, which was a highly data-heavy organization, put a lot of emphasis on cybersecurity, because they were getting attacked by the likes of the People's Republic of China, government-sponsored hackers from Korea, [11:54.940 --> 12:02.220] and basically the world's best hackers, dedicated companies; they were attacking 40,000 to 60,000 times a day. [12:02.380 --> 12:04.320] So they had to be up on their game. [12:04.320 --> 12:07.320] So those organizations place a lot of emphasis on this. [12:07.580 --> 12:11.680] And then there are organizations that say, no, not this one. [12:11.680 --> 12:13.580] We will probably not be the first adopter.
[12:13.580 --> 12:20.580] We will probably wait and watch and see if there are returns, then we are going to invest in this. [12:21.720 --> 12:26.540] Yeah, from my perspective, I'm going to build on what you were saying about the nuance that's involved here. [12:26.540 --> 12:42.380] As I talk to the board and to others in the C-suite, I think when we say AI, somebody who's outside of technology immediately goes to Skynet or HAL or some other evil entity that goes and controls the world. [12:42.520 --> 12:53.480] And when you go sit down and play with ChatGPT, it's really easy to think, oh, my goodness, this thing understands everything and can really control the world, not understanding that it's an amazing tool but has limited capabilities. [12:53.480 --> 13:02.720] And so I think the place that I like to start with is just describing different types of machine learning, how they're built, how they work, the applicability of each. [13:02.720 --> 13:15.380] And I think when you're talking to the board about generative AI, which is nondeterministic, it can be scarier when you say, hey, this thing might be asked the same question by two of your customers and give slightly different answers each time. [13:15.380 --> 13:17.360] And how do you know it's going to do the right thing? [13:17.360 --> 13:19.520] Those things can be really scary for the board. [13:19.520 --> 13:24.980] But then when you go in and say, hey, I've got a linear regression model, and it is deterministic. [13:24.980 --> 13:28.080] Every time I put in a set of inputs, it's going to give you the same outputs. [13:28.080 --> 13:29.220] I can explain it. [13:29.280 --> 13:47.360] It really demystifies some of that for people and allows you to use machine learning, which, as we all know, is used for tons and tons of different things across everything that we do in our lives, and separate it from that scary generative AI that comes along with a lot of unknowns. 
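[Editor's note: Phil's determinism point can be made concrete with a short sketch. This is illustrative only, with made-up data: an ordinary least-squares fit maps identical inputs to identical outputs every time, unlike a sampled generative model.]

```python
# Minimal illustration: a linear regression model is deterministic --
# the same inputs always produce the same outputs.
# Data points here are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

a, b = fit_line([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 7.8])

def predict(x):
    return a * x + b

# Ask twice, get the same answer twice -- the explainability story
# that calms a boardroom, in contrast to a sampled generative model.
assert predict(5.0) == predict(5.0)
```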
[13:48.660 --> 13:59.820] Yeah, I mean, I would just add that, you know, the world kind of changed in November of '22, when, you know, the public release of the research version of ChatGPT came out. [13:59.820 --> 14:11.880] And I got to attend Gartner the following October, where 22,000 C-level people sat in a room being told that if you're not on this, your competitive edge is gone. [14:12.420 --> 14:17.640] And the end state is, I think, that it's been more of a topic at a board level. [14:17.900 --> 14:20.620] But, you know, some of us represent very small companies. [14:20.620 --> 14:22.860] Some of us represent very large companies. [14:23.000 --> 14:30.160] And at the board level alone, it's not sufficient to really drive an adoption of AI. [14:30.700 --> 14:33.380] You know, it's a great start. [14:34.020 --> 14:41.380] But, you know, as some people mentioned, there are great misconceptions about what it is. [14:41.680 --> 14:47.600] You know, there's the Skynet, you know, the Terminators, and all that great stuff. [14:47.600 --> 14:56.540] But then there's also the fact that most of us can't afford the level of complexity that you get with ChatGPT. [14:56.540 --> 15:06.480] It takes as much energy as it would to run a small city in Kansas to retrain that model, which is why it's often months out of date. [15:07.000 --> 15:13.840] You know, which means that as companies adopt AI, and this is probably something that board members need to start to comprehend, [15:14.080 --> 15:25.280] to your point, it starts with integrating machine learning in your basic operations, building and expanding on that, and then growing into this generative AI approach. [15:25.320 --> 15:31.760] Because, you know, right now the misconception is... I had one person ask me, why do I even need you? [15:31.760 --> 15:33.500] I'll just ask ChatGPT. [15:33.500 --> 15:36.860] I'm like, go ahead and get the bill. [15:37.260 --> 15:39.520] And then you'll see that I'm much cheaper.
[15:40.160 --> 15:50.340] So, you know, it's great that it's talked about at the board level, but it really needs to be understood and talked about at every level below that for a true revolution in our companies. [15:50.340 --> 15:51.120] I love that. [15:51.120 --> 15:53.060] And this is actually something that's measured. [15:53.060 --> 15:56.540] There's a group... I kind of follow a gentleman named Jeff Winter on LinkedIn. [15:56.540 --> 16:04.100] And one of the things he posts about is which topics are in the boardroom and how much that changes over time. [16:04.100 --> 16:08.720] And AI has topped those charts since 2023. [16:08.720 --> 16:17.260] It wasn't quite there when it was just a research release in '22, but from early '23 onward, it has sustained as one of the top topics. [16:17.260 --> 16:19.180] Like, it's next to tariffs right now. [16:19.180 --> 16:21.280] And we all know tariffs are huge. [16:21.280 --> 16:26.320] And other topics have come and gone, but AI has stayed firmly in that conversation. [16:26.320 --> 16:28.240] So, I mean, I think it's incredibly important. [16:28.240 --> 16:31.080] And it's not just the chatbot helping you out. [16:31.080 --> 16:40.540] We're also seeing a big trend in boardrooms and in IT executive teams where CIOs at progressive companies are now owning P&Ls, you know, profit and loss centers. [16:40.540 --> 16:43.520] They're now looking at IT as not just a cost center; it's a profit center. [16:43.520 --> 16:45.940] So how are we making digital products? [16:45.940 --> 16:49.760] And if we're going to make a digital product, you bet it's going to have AI on it somewhere. [16:49.760 --> 16:51.280] They've got AI toothbrushes now. [16:51.280 --> 16:52.800] I don't know what that is, right? [16:52.880 --> 16:55.500] So, yeah, it's all over. [16:55.500 --> 16:56.940] And it's a great question. [16:56.940 --> 16:58.260] We have another brave soul.
[16:58.260 --> 16:59.400] We do right here. [17:10.580 --> 17:14.500] How are we protecting... One more time, just so I make sure I restate it correctly. [17:14.500 --> 17:16.660] How are we protecting our... [17:22.080 --> 17:23.000] Ah, got it. [17:23.000 --> 17:23.460] Sorry about that. [17:23.460 --> 17:24.440] Thank you for the repeat. [17:24.440 --> 17:36.980] So, how are we protecting ourselves from training third parties on our data and effectively creating knowledge graphs that take our competitive advantage and our IP and make them part of the general knowledge? [17:37.140 --> 17:38.400] And it's a great question. [17:38.400 --> 17:40.520] I'd love to know who on the panel might like to speak first. [17:40.560 --> 17:41.780] I'll give it a shot if you're ready. [17:41.780 --> 17:42.380] Yeah. [17:42.720 --> 17:44.340] Yeah, it is a great question. [17:44.720 --> 17:55.760] In early releases of ChatGPT, people, particularly at Amazon, were able to find proprietary code, their code, in the ChatGPT backend. [17:56.060 --> 17:56.300] Okay? [17:56.300 --> 17:58.960] So, it is a very legitimate, important question. [17:58.960 --> 18:03.140] And there are two ways that I've seen companies approach it. [18:03.220 --> 18:11.080] The first way is to build your own LLMs, which is not as daunting as it sounds. [18:11.380 --> 18:12.840] You can do it. [18:12.840 --> 18:20.140] They are not nearly as good, you know, because that's what OpenAI and Microsoft are charging you for. [18:20.140 --> 18:31.020] But if your data can be analyzed using one of these, you know, often openly available open-source models, it can be done internally. [18:31.020 --> 18:33.520] And so, many companies have followed that approach. [18:33.520 --> 18:39.580] They find an internal model or an updated model; Hugging Face, the Hub, is great. [18:39.620 --> 18:42.980] And then they build their own analytics around it.
[18:42.980 --> 18:49.320] Other companies I've seen engage in private partnerships with large AI providers. [18:49.500 --> 19:02.600] Copilot, for instance: if your enterprise is using Copilot internally, you're probably using it on an internal instance of Copilot that has a direct connection to, you know, the big brain at Microsoft, but that nobody else has access to. [19:02.600 --> 19:05.640] However, Microsoft does. [19:05.640 --> 19:13.420] And if they're using any level of recursive learning, they're taking everything you put into it and using that to make their model better. [19:14.200 --> 19:21.820] So, I mean, those two are probably the larger ways when you're talking about large-scale models and big amounts of data. [19:23.480 --> 19:27.940] I'm going to take a different angle to that, which is back to the governance angle. [19:27.940 --> 19:35.320] And I'll be honest, for all three of the boards that I'm on, I'm the only technologist on the board. [19:35.320 --> 19:41.820] So, it's very scary, and also a lot of responsibility, to be the one giving this level of guidance. [19:41.820 --> 19:51.680] But my guidance has been, from the governance perspective on AI, because of data and data privacy and cybersecurity: no AI allowed. [19:51.860 --> 19:58.680] That's where I'm advising them to start until they catch up with their governance and they catch up with their policies. [19:58.680 --> 20:04.780] And essentially, for this crowd, you'll understand: I'm taking a zero-trust approach to AI adoption. [20:04.780 --> 20:15.080] I'm basically saying, before you allow anybody to do anything, create some good rules of engagement and allow them to come to you to check for exceptions. [20:15.120 --> 20:23.880] Go through a good protocol to understand why they want to use it, how they're going to use it, what data is going into it, and how they're going to be responsible for the results.
[20:24.140 --> 20:26.960] And then you can say yes to that. [20:26.960 --> 20:35.780] And oh, by the way, what that helps you do is, oftentimes you're going to recognize that, because of data protection and because of data privacy, you're not using the free versions out there. [20:35.780 --> 20:52.140] So you get ahead of what that budgeting process needs to look like, as opposed to somebody going down the road of some very exciting skunkworks project and all of a sudden having to come back to management and say, oops, the version I need is actually going to require X amount of dollars. [20:52.220 --> 20:54.620] Executives don't like to be surprised that way. [20:56.540 --> 21:01.720] Yeah, for my part, actually, this is something that we're actively grappling with at CareCentrix. [21:01.720 --> 21:18.860] Because we have lots of PHI and PII in-house, and because we do lots of processing that requires skilled nurses, for example, to look at medical charts and make determinations, it seems like there's great opportunity to use some of these generative tools to be able to go out there and expedite those processes. [21:19.000 --> 21:22.240] At the same time, we don't have the scale to run those things in-house. [21:22.240 --> 21:30.940] And while we're using machine learning in lots of places to improve outcomes for patients and whatnot, we don't have the scale to be able to bring... [21:30.940 --> 21:36.300] maybe we do, and we just don't know it... a model in-house to be able to do all of that work. [21:36.800 --> 21:39.020] And so at the moment, we're grappling with that. [21:39.020 --> 21:47.040] And I don't know that I have a great answer for the generative piece of the puzzle, but for basically all other kinds of machine learning, which we are using heavily, those things we just keep in-house.
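[Editor's note: Tina's "no AI by default, exceptions by request" posture can be sketched as a tiny policy gate. The tool names and data classifications below are hypothetical, not any real registry; the point is the deny-by-default shape of a zero-trust approach to AI adoption.]

```python
# Hedged sketch of a zero-trust (deny-by-default) AI usage policy.
# Tool names and data classifications are hypothetical examples.

APPROVED_EXCEPTIONS = {
    # tool -> data classifications the governance group has cleared it for
    "internal-copilot": {"public", "internal"},
}

def may_use(tool: str, data_classification: str) -> bool:
    """Deny by default; allow only tools with an explicit, scoped exception."""
    allowed = APPROVED_EXCEPTIONS.get(tool)
    return allowed is not None and data_classification in allowed

assert may_use("internal-copilot", "internal")
assert not may_use("internal-copilot", "pii")   # outside the approved scope
assert not may_use("free-chatbot", "public")    # no exception on file
```

The design choice mirrors the protocol Tina describes: nothing is permitted until someone has come to the governance group, explained the use, and had an exception recorded for a specific tool and data scope.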
[21:47.040 --> 21:55.440] And like John said, we don't let them outside of our doors, and so we don't really have to worry about, at least explicitly, leaking that data out into the public, right? [21:55.670 --> 21:56.440] Yeah. [21:57.400 --> 22:14.580] Just to add to what John, Tina, and Phil have already said: in order to prevent your knowledge graph from escaping to the outside world, one option is to develop your own language models. [22:14.820 --> 22:17.640] Not necessarily large language models. [22:17.640 --> 22:24.700] You can do small language models, because you don't want to solve world problems in your enterprise or for your enterprise customers. [22:25.060 --> 22:28.860] You want to solve just a limited, very small set of problems. [22:28.860 --> 22:34.620] You don't need it to be trained on the New York Times and Tolstoy and Shakespeare, basically. [22:34.660 --> 22:44.640] You just want to train it on your business model, your customer support documents, your employee-related questions, whatever health benefit questions are there. [22:44.640 --> 22:46.900] So those are the kinds of things you want to train on. [22:46.900 --> 22:52.720] Or even if there are proprietary things, like engineering designs and models, you can train it on that. [22:52.720 --> 23:01.480] And for that, you don't need a large language model with, say, 470 billion parameters. [23:01.480 --> 23:02.560] So you don't need that. [23:02.560 --> 23:06.320] And that can be done in a much more cost-effective way. [23:06.320 --> 23:07.200] So that is one way. [23:07.200 --> 23:28.760] Second is you can host these other publicly available models in your environment and have controls on your firewalls and in your data sharing so that those things are not getting shared back to somewhere else.
[23:28.760 --> 23:37.220] And I think Microsoft already provides that kind of a model, where the data or the parameters, whatever you train, will remain private to your organization. [23:37.220 --> 23:48.500] So those are the things which can help in keeping control of your data, your business strategy, your edge over the competition. [23:48.500 --> 23:53.520] Yeah, I'll just add one more thing, and it really ties back to what Tina said. [23:53.780 --> 23:57.740] The governance piece is uber-critical in your environment. [23:57.820 --> 24:03.600] We have yet to see the full onslaught of AI-related legislation across the country. [24:03.620 --> 24:18.660] Colorado has passed legislation, New York has passed legislation, and portions of Europe have as well, all requiring that any sort of discriminatory model, like one deciding how much your insurance is going to be, needs to be auditable. [24:18.940 --> 24:26.020] Now you take the complexity of your environment, where even now we're starting to push AI to the edge. [24:26.300 --> 24:35.880] Instead of AI running in some giant central brain, you have AI distributed across 100,000 devices. [24:36.200 --> 24:39.920] All of those may need to be auditable. [24:40.440 --> 25:00.380] So as we develop and as we implement, there are going to be steps back, as Wisconsin and every other state start to go, wait a second, I don't know if I want an AI model determining if I'm a good employee, or a good employee candidate. [25:00.380 --> 25:10.820] I worked with one person that developed a model for the city of Detroit that was basically designed to pick out the best candidates who apply for a job. [25:11.480 --> 25:14.060] I'll tell you, I used to be a history teacher. [25:14.740 --> 25:16.860] And before that I was a principal. [25:17.200 --> 25:19.640] And now I run AI ops for FIS. [25:19.640 --> 25:21.820] I would never have been selected.
[25:22.880 --> 25:30.960] So those types of models, you know, don't always select the right person, first of all. [25:32.040 --> 25:40.100] And, you know, part two, they will be more and more scrutinized as legislation grows and grows over time. [25:40.200 --> 25:53.800] So that's another thing to keep in mind, and the reason why a zero-trust approach is probably the safest approach: we just don't know what we'll be bound to over the coming months and years, et cetera. [25:53.800 --> 25:58.200] So one thing I want to add builds on both of your points, Tina and John. [25:59.020 --> 26:04.800] When you're doing this work and trying to understand what your North Star is, you create an AI policy. [26:04.800 --> 26:12.160] And you do that, and you create an AI governance group that actually goes through and figures out how to actually activate AI in your organization. [26:12.260 --> 26:17.760] And they leverage all of their black-box processes with offshoots to, like, legal. [26:17.760 --> 26:19.640] Okay, let's have an external legal review. [26:19.640 --> 26:32.520] So that when we're going to bring in third-party software that's going to have an AI product, we're looking at it and making sure that they are agreeing that they will not train their third-party knowledge graph on our data. [26:32.520 --> 26:34.400] So that's, like, the legal side. [26:34.400 --> 26:37.180] Or you talked about some of the ethical concerns, right? [26:37.180 --> 26:39.680] Like, how do we feel about Sales Copilot? [26:39.680 --> 26:43.380] It's a great tool that shows me the sentiment analysis as I'm talking to customers. [26:43.380 --> 26:52.540] Well, what if I hang up with the customer, and now I call an internal person or my boss, and now I have a sentiment analysis graph of my boss and how that went? [26:52.540 --> 26:54.960] And do I use that in an ER case against him? [26:54.960 --> 26:57.060] Look, I'm in a toxic work environment.
[26:57.060 --> 27:07.140] So there are a lot of these kinds of concerns, where they go back up to the governance board and it goes, no, in HR we're not going to use AI for hiring and firing decisions without human review. [27:07.140 --> 27:13.220] No, we're not going to allow haves and have-nots on analysis of employee interactions, right? [27:13.220 --> 27:18.080] So these kinds of policies get created over time as things come up to that group. [27:18.140 --> 27:22.180] And then the next thing I would talk about is, this is a real concern. [27:22.320 --> 27:26.660] So in the early days, Miro released an AI add-on. [27:26.660 --> 27:37.520] And you had to click the newsletter and then click into the terms to see that they were going to train their knowledge graph on the usage of that AI button across all customers. [27:37.560 --> 27:44.620] So anything you did for knowledge creation in Miro, you were contributing to a big borg of knowledge beyond your organization. [27:44.620 --> 27:46.580] It was basically AI spyware. [27:46.740 --> 27:51.220] Now, a couple of companies tried this, like Zoom, and they immediately had backlash. [27:51.240 --> 27:52.960] But Miro didn't. [27:52.960 --> 27:54.100] Nobody noticed. [27:54.180 --> 27:59.300] But then, about three to six months later... I don't think they ever built on it, but the terms allowed them to. [27:59.300 --> 28:02.080] And it was a user click-through agreement to activate the feature. [28:02.600 --> 28:04.360] They got smart and they turned that off. [28:04.360 --> 28:06.340] And I don't think they ever did anything with it.
[28:06.340 --> 28:19.380] But the fact is, we have to be looking at it both from a tech perspective, as my colleagues have spoken about, and from a legal and contract perspective, because ultimately, when we're using somebody else's software and somebody else's cloud, [28:19.380 --> 28:27.480] are they going to treat all the data, including the insights created from the data, as our customer data, or do they claim some ownership of it? [28:27.600 --> 28:34.040] And just remember: if you aren't paying for an AI product, you are the product, and they're absolutely training on anything you put into it. [28:34.040 --> 28:41.140] So Grant, was your example hypothetical, or do you have some real experience with someone on your team doing sentiment analysis of you? [28:41.140 --> 28:42.100] Sorry, it was just a joke. [28:43.160 --> 28:47.540] You know, I'm still fighting that ER case, but I think it's going to go okay. [28:48.160 --> 28:50.620] I have a funny example on this. [28:50.620 --> 28:54.160] I won't name which organization I was part of. [28:54.160 --> 29:00.240] So the organization turned on a meeting summary note-taking tool. [29:00.840 --> 29:09.740] After a 45-minute meeting with seven participants, the tool's summary was: you guys went round and round on the same thing. [29:09.820 --> 29:11.480] Oh, that's fantastic. [29:11.480 --> 29:12.200] Insightful. [29:12.200 --> 29:15.500] Wow, I would love to be called to the carpet by the AI machine. [29:15.500 --> 29:17.360] Who's got the next question to take us into? [29:17.600 --> 29:19.280] Right there in the green. [29:40.810 --> 29:42.090] I'll repeat the question. [29:42.090 --> 29:43.230] Thank you for it. [29:43.250 --> 29:49.550] Are we seeing demand for, and are we supplying, training for AI and cybersecurity, or the combination of both? [29:49.550 --> 29:50.710] What do you guys see? [29:54.780 --> 30:03.100] So I think it's more like using AI to improve the efficiency of cybersecurity.
[30:03.100 --> 30:09.360] And of course, using cybersecurity for your AI assets, especially your data and the models. [30:09.360 --> 30:19.840] But mostly it is using AI for threat detection, for patching, for response to any kind of threat. [30:19.840 --> 30:26.000] A lot of things are there, like behavior analysis of individual actors. [30:26.000 --> 30:29.220] But of course, you've got to protect your AI models and the data. [30:30.320 --> 30:39.240] I will add that any investment where AI can be leveraged for cybersecurity, that's not in the boardroom. [30:39.460 --> 30:54.540] And I'll make an appeal to the people who are really smart in this room. The reason it's not in the boardroom is that when you're thinking about securing your organization, the C-levels and the board members don't worry about how you do it. [30:54.540 --> 30:55.520] Just do it. [30:55.520 --> 31:04.680] If you're going to leverage AI to do it, that's on you; it's not something that's actually understood, or that can be approved, that high up the chain. [31:04.680 --> 31:21.840] So what you need to do better, as cybersecurity professionals or people who want to innovate with AI in the cyber field, is help make the business case that, as cybersecurity becomes a bigger problem for organizations (aka, we have to invest more), [31:21.840 --> 31:28.960] AI is going to help scale to meet that need without just hiring more people or buying more tools. [31:28.960 --> 31:36.900] The tools and the stacks are really good, but they're oftentimes expensive, oftentimes not affordable for small and medium-sized organizations. [31:37.060 --> 31:39.520] Hiring people isn't always much better. [31:39.520 --> 31:48.040] And so how do you leverage AI to help you with that scalability, to meet an ever-changing threat and vulnerability environment?
[31:48.040 --> 31:56.400] So from a training perspective, I've actually seen training come up the pike, and we say, well, are you going to pick training on that or something more critical to the business? [31:56.400 --> 31:57.620] They don't understand it. [31:57.620 --> 32:03.900] So if you can help make that case, it will end up being a part of every training curriculum in the near future. [32:05.900 --> 32:09.920] When I was at Yahoo, our security team was called the Paranoids. [32:10.340 --> 32:17.200] And we were a big company that was kind of a conglomeration of a bunch of acquisitions, and so it was the Wild West. [32:17.200 --> 32:20.240] And our central security team couldn't stay on top of everything. [32:20.240 --> 32:23.600] So what they did was set up what they called their local Paranoid program. [32:23.720 --> 32:31.720] And what they did with local Paranoids was pull in a person from each team, an engineer, and basically teach them how to hack their own application. [32:31.760 --> 32:34.880] And that in and of itself is a huge eye-opener. [32:34.880 --> 32:42.920] If you've never seen an engineer go, my application is secure, and then they learn how to use an API in a way that's unintended and they're shocked... [32:42.940 --> 32:44.900] It's always an amazing thing to see. [32:44.900 --> 32:47.680] But it's great because you embed this talent across your teams. [32:47.680 --> 32:57.340] And that's a model that I've used in every place I've been since, essentially: bringing in a white hat to teach somebody in each part of the organization a little bit about how to hack. [32:57.340 --> 33:03.600] I love the question, because I don't think we've ever had any of them open up a Jupyter notebook or something along those lines as part of that training. [33:03.600 --> 33:06.980] But it's something that I will definitely bring back and see if we can figure out.
[33:07.680 --> 33:14.940] One thing I would add is that security has a bit of a different problem than other groups. [33:14.940 --> 33:25.660] So when it comes to training for the use of AI for a use case, you know, you guys are faced with the fact that both the cops and the robbers have a big gun. [33:26.340 --> 33:26.760] Right? [33:26.760 --> 33:29.320] And sometimes the robbers are better with it. [33:29.660 --> 33:39.180] So what it comes down to is: how do you get relevant training to the people who need it in a timely fashion? [33:39.560 --> 33:45.360] Can an organization teach us all how to use AI tools? [33:45.360 --> 33:46.200] Sure. [33:46.200 --> 33:52.520] And in six months, a year, you may be competent in it. [33:52.520 --> 33:59.000] But by that time, the use case you had is now gone. [33:59.000 --> 34:00.980] And you now have a new one. [34:01.380 --> 34:16.540] So training, particularly in cybersecurity as it relates to AI, is a very daunting task, which is why I've not seen many companies adopt a specific training strategy for cybersecurity. [34:16.540 --> 34:23.240] There are definite training strategies for just about every other aspect of the business, except that. [34:23.280 --> 34:25.800] Because keeping pace with it is almost impossible. [34:25.800 --> 34:27.120] Boy, boy, do I love this. [34:27.120 --> 34:29.720] And it really dovetails well with Phil's comment. [34:29.720 --> 34:31.000] Tina, were you about to say something? [34:31.000 --> 34:31.440] Okay. [34:31.440 --> 34:32.040] Sorry. [34:32.060 --> 34:35.760] So one of the things that I'm seeing is people are asking me, hey, where can I get trained up? [34:35.760 --> 34:36.680] Where can I learn? [34:36.680 --> 34:39.540] And my answer is: go try this stuff. [34:39.540 --> 34:40.540] Go download it. [34:40.540 --> 34:41.060] Go work with it. [34:41.060 --> 34:45.060] Because the problem is, the training isn't keeping up with the pace of innovation.
[34:45.120 --> 34:57.240] So if you're getting trained on AI, even in these fancy programs at big schools like Wharton or even MIT, the stuff that they're teaching ends up being... actually, I should take MIT out of the equation. [34:57.240 --> 34:58.040] I know about their program. [34:58.040 --> 34:59.700] They probably have one of the better ones, honestly. [34:59.700 --> 35:03.100] But a lot of them are six to nine months behind. [35:03.380 --> 35:07.760] Because by the time they've developed that curriculum, they're training you on how to make a chatbot. [35:07.760 --> 35:09.660] And we're all learning about agents, right? [35:09.660 --> 35:18.980] So I find the best way to learn about things is to bring in somebody that really knows the space, a white hat, right, that can teach other teams, like Phil's talking about, and get their hands into it. [35:19.060 --> 35:21.240] Or, you know, do the two-in-a-box. [35:21.240 --> 35:29.580] Bring in a consulting partner that does these kinds of agent buildouts, then start to work with them and have your people learning along the way. [35:29.580 --> 35:33.520] Because you're solving a relevant problem right now that matters. [35:33.520 --> 35:42.200] And with today's tech, not taking some kind of one-way course that's, you know, three to, at best, nine months old. [35:42.200 --> 35:46.040] So it's honestly learning by doing, in both the security and AI spaces. [35:46.040 --> 35:52.440] And Grant, to your point, there are some great programs out there to actually do applied learning. [35:52.440 --> 36:14.620] So for those of you here in Wisconsin, I'm sure there are others, but WCTC not only has a great legacy cybersecurity program that led the nation when this first became part of curricula, but they just opened their applied AI lab, which really tries to couple AI concepts with cybersecurity. [36:14.620 --> 36:15.980] So go check that out.
[36:15.980 --> 36:22.740] I know that we have instructors here from UW Oshkosh who are educating on cybersecurity and AI. [36:22.740 --> 36:25.500] And so don't wait for your companies to necessarily do it. [36:25.500 --> 36:29.960] There are lots of little labs and ecosystems that are popping up. [36:29.960 --> 36:39.540] Milwaukee Tech Hub is another one that's doing a series of trainings where you can converge on AI and/or cybersecurity outside of your organizations. [36:39.740 --> 36:41.060] I love that ad. [36:41.520 --> 36:48.120] Actually, is there anyone else in the audience that has a plug they want to make for a training or a resource that you found valuable? [36:50.020 --> 36:50.980] Keep it in mind. [36:50.980 --> 36:52.780] I'll take those at the end if you'd like. [36:52.780 --> 36:53.980] And the questions have been excellent. [36:53.980 --> 36:56.280] I'm going to give you these just to look at, in case they inspire you. [36:56.280 --> 36:59.960] Is there another brave soul that might have a question for us to jump into? [37:04.260 --> 37:05.560] Yeah, go ahead. [37:19.960 --> 37:21.320] What a great question. [37:21.340 --> 37:36.840] So when we're looking at, you know, putting in our controls and trying to meet regulations, are we looking at those historical things that we are accountable for today, or are we looking forward to the emerging types of AI governance? [37:36.840 --> 37:38.100] What was the one you said again? [37:38.100 --> 37:39.880] NIST AI RMF. [37:39.940 --> 37:42.200] NIST AI RMF, for example. [37:42.200 --> 37:43.540] Hopefully I said that correctly. [37:43.540 --> 37:45.120] So what does the panel think? [37:47.120 --> 37:50.420] I'll tell you, at the highest level, I love CISA. [37:50.540 --> 37:54.900] Now, CISA, if you're not familiar, is the Cybersecurity and Infrastructure Security Agency. [37:54.900 --> 37:55.360] Okay. [37:55.820 --> 38:03.560] And they're going through a lot of changes: new leadership, new roadmaps, new executive orders, et cetera.
[38:03.560 --> 38:07.140] But they have an AI roadmap. [38:07.140 --> 38:13.240] They have a number of topics on what to think about from a governance perspective. [38:13.240 --> 38:22.720] And key for me, they are focusing in on the critical infrastructure that protects our nation, and also our reliance on that infrastructure as private businesses. [38:22.720 --> 38:37.840] If you haven't spent much time at utilities and water treatment plants and the other types of things we take for granted but very much depend on every day, especially at the municipality level: they are ripe for failure. [38:38.120 --> 38:44.120] They're often not invested in enough, and they often can't afford the know-how to keep them safe. [38:44.120 --> 38:59.740] And so for me, starting with our most critical infrastructure, being supportive of understanding what those roadmaps look like, helping them align to them, helping us align to them, and the general infrastructure overall, is where I tend to start. [38:59.740 --> 39:07.660] It's something that's easily adoptable, because it's publicly available and at the top of everybody's minds as these executive orders come down the pipe. [39:11.260 --> 39:17.080] So it's kind of a difficult question, [39:17.080 --> 39:30.100] in the sense that most organizations, commercial enterprises, are using AI. [39:30.100 --> 39:33.300] They are not doing a lot of research in AI. [39:33.300 --> 39:48.020] Very few organizations are doing true research in the field of AI, gen AI, LLMs, and working on things like those NIST standards, or explainability, or ethical AI. [39:48.020 --> 39:50.420] Those are the things those few organizations are trying to work on. [39:50.420 --> 39:53.800] Or in the academic field, basically, those things are being worked on.
[39:53.800 --> 40:03.800] The vast majority of organizations are trying to cope, adopt, and implement whatever best is available. [40:03.800 --> 40:05.740] So that is where we are right now. [40:05.740 --> 40:19.680] Hopefully, as the field matures, we will have solutions that are compliant with the latest standards, equivalent to HIPAA and GDPR and other things in the data field. [40:19.680 --> 40:22.620] So with similar standards, AI will be compliant. [40:22.620 --> 40:26.740] But right now, most solutions are probably not compliant. [40:29.160 --> 40:41.280] Yeah, I would only add that, as you brought up, I was at a conference a year ago, and I asked a group of people, where are you in your AI journey in your company? [40:41.480 --> 40:43.480] Are you at an advanced level? [40:43.480 --> 40:45.340] Are you at a very junior level? [40:45.340 --> 40:48.760] And then I said, how many people are just barely getting started? [40:48.760 --> 40:52.280] And the just-barely-getting-started group was the vast majority. [40:52.500 --> 40:59.340] And the result is that there isn't a tremendous amount of governance in many organizations. [40:59.340 --> 41:01.840] You brought up the term skunkworks. [41:01.840 --> 41:11.380] I would say that a good majority of AI projects within more established businesses really start there. [41:12.560 --> 41:18.180] Now, our company, for instance, realizing this, has built a center of excellence for AI. [41:18.380 --> 41:22.760] I mean, if you look at the vast org charts, we have a tremendous number of data scientists. [41:22.900 --> 41:24.420] Nobody knows what they're doing. [41:25.020 --> 41:29.340] I mean, they were hired for a certain purpose for a certain organization. [41:29.340 --> 41:30.200] That's fine. [41:30.320 --> 41:33.840] But there was no centralized approach.
[41:33.840 --> 41:40.680] So finally, we've taken the step of building that governance, where any AI project needs to go through a certain level of scrutiny. [41:40.820 --> 41:43.340] I'll say the scrutiny isn't terrible. [41:43.420 --> 41:48.360] It will probably get more intense as time goes on, but at least it's a start. [41:49.020 --> 41:55.780] So I think the point is that many companies are still very immature in this aspect. [41:56.260 --> 42:00.000] Many are looking at, you know, how do I buy it instead of build it? [42:00.000 --> 42:02.000] Which is a very legitimate approach. [42:02.040 --> 42:04.940] But it's also a critical part of your vendor selection. [42:04.940 --> 42:10.820] Because everything we've been talking about, well, you've now laid that in your vendor's lap. [42:10.820 --> 42:18.600] So when the government or a regulatory agency comes to you, the faults of your vendor are now your faults. [42:19.120 --> 42:29.080] And so be more picky: for any sign you see that says "we use AI," okay, as a vendor, what is your governance process? [42:29.300 --> 42:31.420] You know, I think these are important questions. [42:31.420 --> 42:41.420] And John, to your point, I love it when one of the companies I'm part of, or one of our clients, says, hey, there's this vendor or this platform, what do you think? [42:41.420 --> 42:42.580] Should I adopt it? [42:42.580 --> 42:49.680] I say, I don't know, tell me what your third-party vendor risk management program looks like. [42:49.840 --> 43:00.020] And trust me, organizations are way behind the times even getting to third-party vendor risk management for cybersecurity. [43:00.160 --> 43:07.100] So they've got a long way to go before they understand how to adopt a critical third party for AI as well. [43:08.500 --> 43:10.260] Yeah, I'll just echo that.
[43:10.540 --> 43:24.440] I'm only five months into my current role, and we are definitely on the early side of adopting machine learning; our governance around things like generative AI is still such that we kind of stay away from it in large part, [43:24.440 --> 43:30.380] but most of our consideration is around third-party risk at this point, because we have tons of vendors who are leveraging it. [43:30.380 --> 43:41.340] And we're finding that in some spaces, when we're looking at functionality that's critical to our core business, it's harder for us to figure out how to move forward. [43:41.360 --> 43:49.240] What we are seeing, and we found a case this week, is that anomaly detection using machine learning in the tools that are out there, [43:49.240 --> 43:54.080] and hopefully everyone has those kinds of tools in your ecosystem, is incredibly effective. [43:54.580 --> 44:06.860] And yeah, I mean, just this week: our company has contractual obligations to ensure that none of our patients' data leaves the United States, so all of our employees with access to PHI are in the U.S. [44:07.040 --> 44:20.320] We discovered earlier this week that there was a person offshore, tunneling in through a public VPN into Utah, coming from Thailand, to essentially work for us. [44:20.460 --> 44:21.740] And you can understand why. [44:21.740 --> 44:24.320] U.S. wages while living in Thailand, that's a good schtick. [44:24.320 --> 44:32.800] But we were able to identify the pattern because we have those models out there doing the searching for us, and, you know, it saves us a lot of liability. [44:33.540 --> 44:34.500] That's incredible. [44:34.680 --> 44:35.320] Wow. [44:35.400 --> 44:36.520] Great question. [44:36.520 --> 44:41.140] I think we all want to say yes to the question, right? [44:41.140 --> 44:42.720] Are you looking forwards and backwards? [44:42.720 --> 44:43.640] On all sides, right?
[44:43.640 --> 44:46.340] Of course, we want to have all of that in mind. [44:46.340 --> 44:47.620] But it is hard. [44:47.620 --> 44:48.200] It is new. [44:48.200 --> 44:52.600] And I think a lot of us are looking to third parties to help us manage that risk. [44:52.600 --> 44:58.920] And that's why third-party risk management also means we need to understand and have a good list of what we're asking them for. [44:58.920 --> 45:03.760] Because ultimately, if it's not on the menu and we didn't know to ask for it, then it's not going to show up in our meals. [45:03.760 --> 45:08.540] So even if we do outsource that, we can't outsource knowing what good looks like. [45:08.540 --> 45:16.680] And having the viewpoint to even ask the smart question is something we all have to get schooled up on, so we can be effective in our roles. [45:16.920 --> 45:33.380] Well, and just to add one more thing, and not to beat a dead horse, but, you know, ultimately, one of the realities of either your development or your vendor selection is: what is your capability to stay current with the most appropriate and available models? [45:33.540 --> 45:39.680] You know, it takes time to build, you know, an image classification model. [45:40.060 --> 45:40.380] Right? [45:40.380 --> 45:45.800] I mean, but here's your problem with an image classification model in the cybersecurity space. [45:45.800 --> 45:49.460] You know, you can train it on 10 million pictures that look like a duck. [45:49.460 --> 45:52.440] And you can give it a new picture and it's going to go, that's a duck. [45:53.140 --> 45:57.380] But what happens when a new duck emerges with four legs? [45:58.180 --> 45:59.740] It's not a duck. [46:00.140 --> 46:08.680] You know, the ability to train, retrain, and evolve is critical, both in your own development and in picking your vendor.
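The four-legged-duck problem above can be shown in a few lines. This is a toy sketch with invented packets-per-second numbers, not anyone's production system: a threshold fit once to old "normal" traffic eventually flags legitimately drifting traffic forever, while a continually updated baseline keeps up.

```python
# Baseline "normal" traffic (packets per second) used to fit the model once.
NORMAL = [10, 12, 11, 9, 13, 10, 11, 12]
STATIC_MEAN = sum(NORMAL) / len(NORMAL)  # fit once, never updated: 11.0
TOLERANCE = 5

def static_normal(pps):
    """Frozen model: anything near the original training mean is 'normal'."""
    return abs(pps - STATIC_MEAN) <= TOLERANCE

# Legitimate traffic slowly drifts upward (new release, more users, etc.).
drifting = [11, 13, 15, 17, 19]
static_verdicts = [static_normal(x) for x in drifting]

# An adaptive baseline (exponential moving average) tracks the new normal.
ema = STATIC_MEAN
adaptive_verdicts = []
for x in drifting:
    adaptive_verdicts.append(abs(x - ema) <= TOLERANCE)
    ema = 0.5 * ema + 0.5 * x  # lightweight "retraining" on each observation

print(static_verdicts)    # [True, True, True, False, False]: frozen model starts crying wolf
print(adaptive_verdicts)  # [True, True, True, True, True]: retrained model keeps up
```

The same trade-off cuts the other way, of course: a baseline that adapts too eagerly can be slowly walked into accepting an attack as the new normal, which is exactly why the panel keeps stressing asking vendors how their retraining works.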
[46:08.680 --> 46:17.300] I mean, if you were to ask me what the most fascinating new development in cybersecurity is, and it's not even all that new, it's really graph neural networks. [46:17.780 --> 46:20.560] Graphs change; pictures of ducks don't. [46:20.560 --> 46:27.600] Just like your environment changes, your attack surfaces change, the modes of attack change. [46:28.300 --> 46:34.400] How do you proactively detect that when your model's still looking for the duck? [46:34.800 --> 46:41.000] So these are things you also have to be aware of as you build your own team or look for a vendor. [46:41.000 --> 46:42.720] How are you staying current? [46:42.720 --> 46:47.060] And in order to do that, you have to yourself be somewhat current. [46:47.620 --> 46:51.020] You know, how do you know what to ask if you don't know what to ask? [46:51.520 --> 46:56.060] You know, these are things that are all part of this maturity process that we're talking about. [46:56.800 --> 46:59.120] All right, do we have a next question from the audience? [47:01.660 --> 47:03.980] I would love to take the one that's at the top of the list here. [47:03.980 --> 47:05.420] I think it's a good one for us. [47:05.420 --> 47:06.820] We have one up there. [47:06.820 --> 47:07.380] What's that? [47:07.740 --> 47:08.360] Oh, we do. [47:08.360 --> 47:08.620] Great. [47:08.620 --> 47:09.480] I'm sorry I missed you. [47:49.680 --> 47:50.920] That's a great question. [47:50.920 --> 48:00.020] So just to restate it for the recording and for folks who may not have heard, the question's really around: okay, so you build all these great in-house, compliant AI solutions. [48:00.020 --> 48:08.360] How do you stop somebody from basically just going onto the internet and grabbing one of those free tools to get the insights they're trying to achieve? [48:08.360 --> 48:13.300] How do you think Amazon's private code got into ChatGPT's repository? [48:13.300 --> 48:14.660] I think it's the same answer.
[48:14.660 --> 48:16.240] So what does the panel think? [48:16.400 --> 48:18.500] Okay, so that's the easy one. [48:18.580 --> 48:21.660] You prevent USB drives from being plugged into the computer. [48:21.660 --> 48:31.000] So you prevent that, and then you don't allow any data to be transferred through any other means: not through Bluetooth, not through anything else. [48:31.380 --> 48:36.140] And many banks and financial institutions have those kinds of policies in place. [48:36.160 --> 48:37.540] Yeah, data exfiltration. [48:37.540 --> 48:48.320] You've got to be protected against it, and, well, as you all know, nothing is foolproof, but you can at least make it difficult for folks to exfiltrate data. [48:48.320 --> 48:58.080] I mean, because, you know, pulling out my iPhone and taking a picture of my screen, there's nothing that's going to stop that, but you're not going to be able to exfiltrate petabytes worth of data that way. [48:59.340 --> 49:05.160] I'd also say that as an employer, no strategy beats great culture. [49:05.160 --> 49:12.080] Create great cultures within your company and your teams, so that your own insiders don't want to harm you. [49:13.240 --> 49:21.980] Yeah, I wish I had a really funny or eloquent answer to that, but I think the panel has brought up the key areas. [49:22.040 --> 49:29.740] You know, can somebody take a CSV of a couple hundred thousand lines? [49:29.940 --> 49:32.760] Yeah, we are all creative people. [49:32.760 --> 49:34.460] We work with creative people. [49:34.800 --> 49:36.800] Yeah, is it impossible? [49:36.800 --> 49:37.520] No. [49:37.660 --> 49:38.620] Are you going to find it?
[49:38.620 --> 49:57.100] Probably not, but there are definitely preventive measures, and I think Tina's point about a positive work culture is critical, because, you know, these are things that could potentially damage your organization and your reputation, and if you're all in lockstep and have the same mission as a company, [49:57.100 --> 50:00.400] those things become far less frequent. [50:01.320 --> 50:12.920] And also, there are basic things organizations need to do in order to prevent data leaks, and you would be surprised how many organizations don't do them. [50:12.920 --> 50:25.660] So when I was in consulting, and I won't name which company or which client, one of my clients emailed me a PDF with 10,000-plus Social Security numbers. [50:26.060 --> 50:31.080] Now, that kind of behavior is inexcusable. [50:31.080 --> 50:43.300] First of all, the person was irresponsible, and second, the company did not have enough filters in place to catch SSNs going out over email; it should have been prevented or caught there. [50:43.940 --> 50:46.660] I find this an unequal posture, right? [50:46.660 --> 51:01.660] And I find that, you know, banks and financial institutions and insurance and government are locking down everything, and then other companies are, you know, on a graduating scale down to just kind of blind trust. [51:01.680 --> 51:17.520] But I think at the end of all of it, if somebody wants to steal something from the company, and steal includes using it inappropriately outside of policy, they're going to, and they're going to find a way in most cases. [51:18.080 --> 51:22.540] Having that culture as a backstop, I think, is one of the most important things we can do. [51:23.580 --> 51:32.880] Let's hope they actually aren't effective if they do try, but the culture, and having people you trust in your organization, is the best security you can have.
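The missing outbound filter in that SSN story comes down to pattern matching. Here is a minimal, hypothetical sketch of such a check; a real DLP product also inspects attachments, images, and context, and `quarantine` is an invented name for illustration, but the core idea is this simple:

```python
import re

# Strings shaped like U.S. Social Security numbers, e.g. 123-45-6789.
# Word boundaries keep us from matching inside longer digit runs.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def quarantine(message: str) -> bool:
    """Return True if an outbound message should be held for review."""
    return bool(SSN_PATTERN.search(message))

print(quarantine("Q3 report attached, thanks!"))         # False
print(quarantine("Claimant SSN: 123-45-6789, see PDF"))  # True
```

Even a crude rule like this, wired into the mail gateway, would have caught the PDF-full-of-SSNs scenario before it left the building, which is the panelist's point about basics.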
[51:33.240 --> 51:34.820] Is there another question from the group? [51:36.380 --> 51:37.900] Well, let's take this top one here. [51:38.360 --> 51:39.300] Oh, good. [51:39.300 --> 51:39.920] Thank you. [51:39.920 --> 51:43.480] How can AI enhance an organization's cybersecurity? [51:43.480 --> 51:45.140] What a great question. [51:45.140 --> 51:45.780] Wow. [51:45.780 --> 51:48.540] How can AI enhance an organization's cybersecurity? [51:48.540 --> 51:49.760] That's fantastic. [51:49.760 --> 51:51.100] Cybersecurity posture. [51:51.100 --> 51:52.420] What does the panel think? [51:53.820 --> 51:58.740] Yeah, I'll go back to the answer I gave a little earlier, and that is doing anomaly detection. [51:58.740 --> 52:11.260] You know, being able to look at the patterns of the people inside of your company and the behavior of your infrastructure, and when you can spot that something's out of the norm, you can react quickly to it. [52:11.260 --> 52:19.300] And I think that's something that, with the models and tools that are out there today, becomes far more tractable. [52:19.300 --> 52:21.620] And I always like telling this story. [52:21.620 --> 52:32.540] So for a while, I was working at Zynga, the game maker, and Zynga Poker was one of our more profitable products, right? [52:32.540 --> 52:44.620] And the way the game worked when I was there was that everybody playing a hand would connect to a single server, and, you know, if the server crashed midway through the hand, whoever had the best hand would get all the chips. [52:44.620 --> 52:48.320] And so whether or not you like that approach, okay. [52:48.580 --> 52:53.680] But hackers had figured this out, and so basically there was this huge secondary market of selling chips to players.
[52:53.780 --> 53:02.780] And the way you would do that is, you know, you'd gain chips by using this exploit, and then you'd go to another table and lose the hand in order to give chips to somebody who would pay you for them in the end. [53:02.780 --> 53:07.860] Well, we had, you know, a hacker out there who was using this exploit a lot. [53:07.860 --> 53:17.120] We finally figured out what the problem was, fixed it in the code, got it remediated, and the hacker who was making a lot of money off of this got really annoyed. [53:17.120 --> 53:31.280] And so what they did is they went to a different game and embedded some malicious script in a place it would never appear in that game, but when a customer care person pulled it up, the attacker could actually take control of the browser. [53:31.280 --> 53:40.540] He got in, scanned the network, found our source control, figured out where the fix for the code was, reverted the fix, deployed it, and then went back to scamming folks. [53:40.540 --> 53:55.460] And it's those sorts of things: if you've got anomaly detection and you realize that your customer care agent for some reason is accessing source control and updating code... those things are really, really hard to spot unless you've got something that's doing more holistic, [53:55.460 --> 54:00.140] you know, network-wide views of normal behavior versus abnormal behavior. [54:02.000 --> 54:08.560] Just to second what you said, and this kind of comes back to one of our topics earlier. [54:08.560 --> 54:20.340] There are very approachable and usable models that can be used in-house to do things like anomaly detection, event correlation, you know, root cause isolation. [54:20.340 --> 54:26.820] I mean, these things can be used as a starting point as people evolve in their space.
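As a sketch of how approachable that starting point can be, here is a tiny, hypothetical behavioral baseline in pure Python: historical per-hour source-control commit counts for an account, with anything several standard deviations off the baseline flagged for review, much like the customer-care-agent-touching-source-control scenario above. Real tools use far richer features and models; this only shows the shape of the idea.

```python
from statistics import mean, stdev

# Hypothetical per-hour commit counts for one service account: a stretch
# of normal behavior, then a burst of out-of-role activity.
baseline = [2, 3, 1, 2, 4, 3, 2, 1, 3, 2, 2, 3]  # historical normal
observed = [2, 3, 47, 2]                          # 47 is the burst

mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(x, threshold=3.0):
    """Flag values more than `threshold` standard deviations from baseline."""
    return abs(x - mu) / sigma > threshold

flags = [x for x in observed if is_anomalous(x)]
print(flags)  # [47]: only the burst stands out
```

The same z-score idea generalizes to logins per hour, bytes transferred, hosts touched, and so on; the in-house models the panel mentions are, at heart, richer versions of this baseline-and-deviation loop.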
[54:26.820 --> 54:50.040] We've kind of, as a whole data science community, gone from a skateboard to a Ferrari: from, you know, a handful of data scientists to now everyone wanting gen AI, when there are tremendous use cases that can be applied now, securely and governed effectively, within your own environment. [54:50.040 --> 54:55.060] So, point being, you know, you don't need the Ferrari for every use case. [54:55.060 --> 54:56.840] Sometimes the Yugo works. [54:57.120 --> 55:04.580] It may not be as sexy, but, you know, it'll get the job done in the places you need it done. [55:06.660 --> 55:17.520] Just to add to what Philip and John have already said: AI is great at pattern recognition, and that's how you detect anomalies. [55:17.520 --> 55:28.960] So anomalies in terms of the traffic flowing, anomalies in terms of the users accessing, different kinds of things that should generate red flags. [55:28.960 --> 55:35.500] And then on the corrective side, there is patching, there is response. [55:36.020 --> 55:43.200] All of those can be done quickly with the help of AI, and that is how you improve cybersecurity with the help of AI. [55:44.440 --> 55:55.600] One thing to note about anomaly detection: when you work with a vendor that says they have it, you know, have them explain how they do it, because one thing base models aren't good at is recognizing a new normal. [55:56.360 --> 56:09.660] And so, you know, again, it comes back to your question: you all as cybersecurity professionals have to become junior data scientists, so that you can ask the right questions both internally and externally. [56:10.000 --> 56:12.520] I will say I have a really fun one. [56:12.520 --> 56:22.340] For those of you that work at small companies that don't have security teams: I think I mentioned I work at a PE company, and I'm the cyber coach to some of their portfolio companies.
[56:22.340 --> 56:29.640] And you've got to think like car washes and restaurants and things of that nature that know nothing about this. [56:29.640 --> 56:36.780] It was great because one of these organizations recognized that something was going on in their organization. [56:36.780 --> 56:44.540] JD Vance made a visit to this restaurant, and all of a sudden this restaurant started to get attacked, and all this activity was going on. [56:44.540 --> 56:49.120] And they ended up having an incident, and what did the CEO do? [56:49.120 --> 56:53.900] Went to ChatGPT and said, what do I do if I've been breached? [56:53.900 --> 57:01.960] And he kept prompting, and it kind of gave him a little bit of an incident response playbook, which they had never developed in the past. [57:01.960 --> 57:14.300] So that's a great example of how AI can help with response and maybe even building a little bit of resilience when they don't have the luxury of having experts like you in-house. [57:14.760 --> 57:15.360] Amazing. [57:15.360 --> 57:16.660] I want to thank this audience. [57:16.660 --> 57:22.260] You guys have provided fantastic questions that really helped us put a pretty good conversation together. [57:22.260 --> 57:23.380] Especially that last one. [57:23.380 --> 57:24.360] That was a doozy. [57:24.360 --> 57:26.360] Yeah, that was so creative. [57:26.820 --> 57:28.020] It was really awesome of you. [57:28.020 --> 57:29.080] Thank you for that. [57:29.520 --> 57:35.900] What I'd like to do, because we're right about at time, is give each of the panelists just a chance to give you some closing thoughts. [57:35.920 --> 57:40.900] I pulled up the list of questions in case one of them inspires your final thoughts for the group. [57:40.900 --> 57:43.720] But why don't you start us off and we'll work this way. [57:44.340 --> 57:45.060] Sure. [57:45.160 --> 57:57.040] So AI in cyber security, if you are trying to do it all on your own, it will seem very costly.
[57:57.140 --> 57:58.700] It will seem hairy. [57:58.700 --> 58:00.500] It will seem very difficult. [58:00.580 --> 58:13.780] But the good news is that a lot of vendor products or a lot of your existing SaaS providers are incorporating AI in their products by default. [58:13.780 --> 58:15.840] And you are getting that benefit for free. [58:15.840 --> 58:20.420] For example, Microsoft has phishing detection. [58:20.420 --> 58:22.020] Google has that as well. [58:22.020 --> 58:31.100] So a lot of your existing vendors, the big software services providers, have AI, and that will help improve the cybersecurity on your end. [58:31.840 --> 58:36.860] Yeah, I guess the only thing I'd add is that, as we all know, this space is moving quickly. [58:36.860 --> 58:47.700] And so if it's not you, you should have somebody on your team or in your organization who you are allocating time and budget to, to go keep current. [58:47.700 --> 58:49.220] Play with new models. [58:49.220 --> 58:51.100] Stay on top of what's going on out there. [58:51.100 --> 58:56.620] Because it is really, really easy for a vendor to walk in and it slices, it dices, it does julienne. [58:56.620 --> 58:59.700] They'll sell you something that sounds fantastic. [58:59.700 --> 59:06.620] And if you don't have somebody there in the room with a bit of a BS meter, then, you know, you really can get dragged into some strange places. [59:07.520 --> 59:11.500] I'll add on to Phil, you know, continue to stay current. [59:11.500 --> 59:19.900] But remember, from that executive communication lens, we all as technicians and technologists tend to get into the how. [59:19.900 --> 59:21.120] And we love the how. [59:21.120 --> 59:28.540] By the time you're talking to them, try to elevate to talk about the why and the very broadest parts of the what. [59:29.780 --> 59:30.720] Great call. [59:30.720 --> 59:33.100] And I'll just, you know, partner with that.
[59:33.100 --> 59:36.680] Your journey in AI for your company doesn't start at the top. [59:36.880 --> 59:39.780] We started this session with a question about the boardroom. [59:40.040 --> 59:40.560] Okay. [59:40.560 --> 59:44.200] That's not where it starts, and it doesn't end at the bottom. [59:44.500 --> 59:46.580] It's a bi-directional communication. [59:47.020 --> 01:00:03.480] So again, as cybersecurity professionals, when you go to your leadership and say, I want this, or I want to do this, that has to be translated into the how, what, where, when, how much, et cetera, so that those discussions can legitimately happen. [01:00:03.680 --> 01:00:09.040] You know, because as you had mentioned at the top, you know, security is just a given. [01:00:09.040 --> 01:00:10.220] I have a security team. [01:00:10.220 --> 01:00:12.040] Why do I need to talk about it? [01:00:12.200 --> 01:00:16.780] If I get a major breach, well, heads will roll and then I'll have a new security team. [01:00:17.300 --> 01:00:26.040] So, you know, make sure that, you know, you experiment yourself. [01:00:26.280 --> 01:00:29.020] I'm, like I said, a former history teacher. [01:00:29.020 --> 01:00:29.940] I love what I do. [01:00:29.940 --> 01:00:32.000] First of all, I get to dress like this every day. [01:00:32.620 --> 01:00:37.000] And secondly, you know, my entire life is non-stop experimentation. [01:00:37.500 --> 01:00:38.840] I mean, how cool is that? [01:00:38.840 --> 01:00:40.180] I want to add one last thing. [01:00:40.180 --> 01:00:48.000] My daughter's in the audience and she swore she wasn't going to listen to a word I said because she listens to me too much at home, but I saw you watching. [01:00:48.560 --> 01:00:49.360] That's awesome. [01:00:49.360 --> 01:00:49.480] Yeah. [01:00:49.480 --> 01:00:51.120] My daughter's never listened to me at home either.
[01:00:51.120 --> 01:00:54.200] So this would be a first if my daughter came and actually listened. [01:00:54.820 --> 01:00:55.540] Nice. [01:00:55.540 --> 01:00:56.660] Well, thank you, panel. [01:00:56.660 --> 01:00:58.780] This was an absolutely incredible conversation. [01:00:59.040 --> 01:01:08.180] And to each of you, consider how you can help us in this strategy of bringing executives and leaders into CypherCon. [01:01:08.180 --> 01:01:17.040] So if you've got a sponsor in your company that you think would benefit from meeting people and experiencing the event, or who's interested in speaking, have them reach out to me. [01:01:17.040 --> 01:01:23.700] I'm going to run the executive track for next year, and I'm looking to really put this together as a way to continue to expand and support this community. [01:01:23.700 --> 01:01:24.840] Thank you so much. [01:01:24.840 --> 01:01:27.180] I'm looking forward to CypherCon 9. [01:01:27.180 --> 01:01:28.320] Have a great day.