[00:25.120 --> 00:27.740] Remember the message. [00:27.980 --> 00:31.040] The future is not set. [00:41.400 --> 00:42.100] This [00:54.700 --> 01:01.180] is America's global dominance in AI, which is exactly what... [01:01.180 --> 01:01.860] Hello! [01:01.860 --> 01:03.460] Thanks for joining me. [01:03.620 --> 01:04.320] I... [01:04.320 --> 01:07.640] Gavin sort of heard me before, but I'm gonna... [01:10.020 --> 01:11.640] Yeah, yeah, yeah. [01:11.640 --> 01:12.360] Thank you. [01:14.100 --> 01:15.300] Awesome. [01:15.300 --> 01:34.380] So first, I am a lawyer, and this is, of course, not legal advice, and the reality of someone from a large law firm talking about anything that has to do with the federal government is that my firm is very... [01:34.380 --> 01:45.580] One of the many firms that are potentially being targeted by the administration, because every large firm is potentially being targeted by the administration. [01:46.160 --> 01:58.000] And so in all of my public speaking, I am trying to keep it extremely factual and not express any opinions about whether what is happening is good or bad. [01:58.620 --> 02:10.080] So as I say here, the opinions expressed in my presentation are my personal opinions and do not necessarily reflect the opinions of anyone at my firm or any of my clients. [02:10.080 --> 02:21.100] And when listening to someone talk about AI, I think it's really important for an audience member to understand what the speaker's rules of engagement are when they're speaking in public. [02:21.260 --> 02:29.820] And so I should disclose to you, I am disclosing to you, that I give legal advice to big tech companies. [02:29.960 --> 02:41.200] I give policy advice to deployers and developers of AI who would have large compliance costs if there were a lot of regulation. [02:42.420 --> 02:47.600] So besides that, my background is not just on the corporate side. [02:47.600 --> 02:52.080] I spent a decade, over a decade, within the federal government. [02:52.380 --> 02:58.060] And that decade included time at both the National Labor Relations Board and the Equal Employment Opportunity Commission. [02:58.060 --> 03:03.260] So I am a former federal regulator on AI. [03:03.260 --> 03:05.840] And during the Biden administration... [03:05.840 --> 03:08.200] Why does that keep on going back? [03:08.460 --> 03:09.080] Huh. [03:09.360 --> 03:10.220] Interesting. [03:10.660 --> 03:11.460] That's not good. [03:11.460 --> 03:14.360] Okay, so I should pay attention to my screen if you can actually see my slides. [03:14.360 --> 03:16.320] I don't think I have any timing on that. [03:16.380 --> 03:22.520] By the way, the picture there, that's from Stable Diffusion, that's part of our demo over there. [03:22.520 --> 03:37.060] I LoRA'd myself last year, I keep on saying I deepfaked myself, and so you can put me on a dinosaur and put me on other stuff if you use the right tag on our Stable Diffusion demo back there. [03:37.820 --> 03:47.460] So back during the Obama years, the Obama folks were fond of saying personnel is policy. [03:47.460 --> 03:59.780] And I think that is very true when you look at the Trump administration and the kinds of people, and their backgrounds, whom President Trump has appointed to senior advisory roles in the government. [04:00.060 --> 04:05.460] So first at the cabinet level, we have David Sacks, who is both the crypto and AI czar. [04:05.460 --> 04:08.920] We actually haven't heard that much from David Sacks in that role.
[04:08.920 --> 04:14.800] I think we've heard a lot from him on the crypto side, but not so much on the AI side. [04:14.820 --> 04:22.060] Michael Kratsios has just been confirmed as the director of the OSTP, the Office of Science and Technology Policy. [04:22.060 --> 04:34.660] OSTP did a giant RFI asking for public comment on what the administration's national AI strategy and priorities should be. [04:34.980 --> 04:43.420] And so what OSTP does with that, and what NTIA and these other federal agencies do with that, we shall see. [04:43.740 --> 04:49.980] Kratsios, of course, was the CTO of the United States during the first Trump administration. [04:49.980 --> 04:52.880] So he knows how government works. [04:52.880 --> 04:56.840] He knows how government policy and regulation work. [04:57.300 --> 05:02.560] And he was confirmed in a vote with a lot of Democratic support. [05:02.860 --> 05:08.920] Now, interestingly, usually the OSTP director is not that controversial a nominee. [05:09.760 --> 05:30.000] The first time the Senate actually took a roll call, you know, having to record the votes, was for Biden's OSTP director, Arati Prabhakar, who I think, you know, you all might recognize or remember that name from DEFCON, because Director Prabhakar was at DEFCON talking about OSTP's work on AI red teaming and some other collaborations. [05:30.000 --> 05:37.580] So that was the first vote, and Kratsios, they took the vote, and a lot of Democrats voted for him. [05:37.700 --> 05:38.500] Okay. [05:38.520 --> 05:40.840] Here's what he said at his hearing. [05:40.840 --> 05:50.440] The shape of future global order will be defined by whomever leads across AI, quantum, nuclear and other critical and emerging technologies. [05:50.440 --> 05:57.860] Chinese progress in nuclear fusion, quantum and autonomous systems presses on the urgency of the work. [05:57.860 --> 06:17.300] As President Trump has said, as our global competitors race to exploit these technologies, it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance. [06:17.440 --> 06:26.520] This is the person at the White House, confirmed by the Senate, to lead the country's technology policies. [06:26.520 --> 06:28.120] That's what OSTP does. [06:28.120 --> 06:29.630] That includes AI. [06:29.630 --> 06:49.210] So for the rest of my presentation, I am going to try to explain to you what we think, what I predict, it means, and I have even better quotes coming up, to make America first on AI, to achieve global technological dominance in AI. [06:49.310 --> 07:00.610] And so the biggest way that I invite you to think, or that I tell people to think, about what that means is: we are competing with China. [07:00.610 --> 07:12.930] President Trump and his closest advisors believe that in AI, this is a global competition, and we are competing first and foremost with China. [07:13.050 --> 07:27.810] And the solution to competing with China, the way we make American AI, the American industry, dominant, the way we achieve global dominance, is to be hands-off on regulation. [07:29.830 --> 07:30.310] Okay. [07:30.310 --> 07:44.510] So other senior advisors: Sriram Krishnan, again, he's a VC, a senior advisor on AI, and I expect we'll hear more from him. [07:44.510 --> 07:49.510] And of course, Elon Musk has had a lot to say about AI safety. [07:49.510 --> 07:51.610] And I should just pass on that. [07:51.610 --> 07:52.210] Okay.
[07:52.210 --> 08:01.350] So on February 11th, the Vice President spoke in Paris at an AI safety summit. [08:01.350 --> 08:22.130] Before Paris, there was Bletchley Park, and then-Vice President Harris spoke at Bletchley Park, and the Bletchley Park conference was supposed to be about the existential threat of AI; the folks who organized Bletchley Park wanted to talk about the existential threat. [08:22.230 --> 08:24.430] And you know, will the robots take over? [08:24.630 --> 08:45.190] Vice President Harris went in there and sort of kicked the sandcastle over, and she said, we need to talk about the ways that AI not just might pose these existential threats, but also the ways that the misuse of AI, or using models in a bad way, can harm people right now. [08:45.190 --> 08:58.010] Before even thinking about AGI, before even thinking about any of these other issues, Vice President Harris was saying that AI safety needed to include these other risks of discrimination. [08:58.730 --> 08:59.330] Okay. [08:59.330 --> 09:15.250] So AI safety, according to Vice President Vance: in February, he just came in and kicked the sandcastle over again and said, I am not here to talk about AI safety, I'm here to talk about AI opportunity. [09:15.290 --> 09:29.330] And so, in Paris, announcing the U.S.'s position on AI, the policy positions on AI, he is literally saying, I am not here to talk about AI safety. [09:29.330 --> 09:42.710] He is characterizing AI safety, or fear about AI safety, as getting in the way of AI opportunity. [09:43.050 --> 10:05.990] The other thing that the Vice President did in Paris, in front of a European audience, was deliver a very explicit and very direct criticism of the EU AI Act and the way that European regulators were thinking about AI safety, calling it self-conscious and risk-averse. [10:06.990 --> 10:18.410] So, you know, onerous international rules, and his other quote is, excessive regulation of the AI sector could kill a transformative industry. [10:18.410 --> 10:23.930] And again, citing our competition with authoritarian regimes like China. [10:23.930 --> 10:43.390] So the philosophy that we've seen from the President's top advisors and the Vice President is that deregulation, or avoiding heavy-handed regulation, is imperative to making America first in AI. [10:43.510 --> 10:50.710] The other thing that the Vice President said in Paris is he characterized this as a worker-first policy. [10:50.750 --> 10:55.830] We had, of course, President Biden talking about worker-friendly AI policies. [10:55.830 --> 10:59.230] I know folks could disagree with whether that's true or not. [10:59.230 --> 11:10.790] The Vice President in Paris says that his agenda, that President Trump's agenda, is worker-first because it enables the U.S. [11:10.790 --> 11:13.510] AI economy to grow. [11:13.510 --> 11:17.530] And with that growth, that lifts up the workers. [11:18.130 --> 11:27.430] Okay, sometimes it's helpful to just listen to President Trump's own words, to think about how President Trump thinks about AI. [11:28.050 --> 11:35.870] AI is very scary, but we absolutely have to win, because if we don't win, then China wins, and that's a very bad world. [11:38.210 --> 11:40.650] Okay, words of the man. [11:41.090 --> 11:43.110] Day one executive order. [11:43.110 --> 11:45.710] He revoked President Biden's EO. [11:45.730 --> 11:48.270] That's a campaign promise.
[11:48.270 --> 11:57.750] So on the campaign trail, he said everything that the Biden administration did on AI was harmful and was hurting America's competitiveness. [11:57.750 --> 11:59.350] I'm going to revoke it on day one. [11:59.350 --> 12:01.730] He fulfilled that campaign promise. [12:01.730 --> 12:11.430] January 23rd, he signs a separate executive order on AI and lays out his priorities. [12:11.570 --> 12:17.470] So that's where we're getting the global dominance, the achieving global dominance on AI. [12:17.470 --> 12:19.950] That's straight out of his executive order. [12:19.950 --> 12:30.070] He's also emphasizing in the EO the need for AI to be free from ideological bias or engineered social agendas. [12:30.510 --> 12:33.310] Now, what does that mean? [12:33.310 --> 12:45.650] How do we as AI security people or AI researchers think about ideological bias or engineered social agendas that are baked into AI models? [12:45.650 --> 12:50.980] I look forward to hearing what the administration has to say about that. [12:53.110 --> 13:00.410] I mean, you know, Grok, you can get some fascinating output out of Grok. [13:01.330 --> 13:02.800] Fascinating output. [13:03.250 --> 13:07.580] Okay, so what... so OMB has already blown this deadline. [13:08.210 --> 13:23.110] What President Trump's executive order directed OMB to do, by March 24th, is to revise M-24-10 and M-24-18, which are binding OMB memoranda. [13:23.110 --> 13:31.670] So the framework that we're under is that President Trump killed Biden's executive order. [13:31.670 --> 13:37.530] But we don't explicitly have M-24-10 and M-24-18 revised. [13:37.530 --> 13:49.950] For those of you that aren't deep in AI policy, those are the federal government's own AI risk management and AI purchasing requirements. [13:50.070 --> 13:56.830] And so, you know, I expect that with Kratsios only recently being confirmed, they're holding it until he gets to weigh in. [13:56.830 --> 13:58.710] But yeah, we'll see about that. [13:58.710 --> 14:12.150] And we have another deadline, July 22nd, for agencies to look at what they did pursuant to the Biden executive orders and to either revoke it, revise it, or adopt it, say it's okay. [14:12.150 --> 14:25.750] So it's not as breakneck a pace as some of the other things we've seen at the federal level, with respect to, say, DEI or the other things that are administration priorities. [14:26.310 --> 14:33.450] And I expect we'll see more from the federal government on all of this in the months to come. [14:34.630 --> 14:50.270] In the absence of federal regulation or legislation, because Congress ain't acting, not on AI safety, we will see a lot of pressure from the states. [14:50.270 --> 14:55.250] By the way, so, you know, I'm an employment lawyer, in addition to doing AI risk management. [14:55.250 --> 15:03.030] Most of my AI risk management conversations are about employment law and non-discrimination laws and how that applies. [15:03.030 --> 15:09.710] And so I tend to focus on my chunk of things, which is the EEOC and employment law. [15:09.710 --> 15:14.270] OK, so AI safety, what does AI safety mean? [15:14.270 --> 15:16.290] It is in the eye of the beholder. [15:16.290 --> 15:18.090] It depends on who you talk to.
[15:18.550 --> 15:42.230] So as a former EEOC and fed on the civil rights enforcement side, my starting point is that our existing civil rights laws apply to the use of AI, and that President Trump revoking President Biden's executive orders, or the pronouncements in those executive orders, [15:42.230 --> 15:44.690] doesn't change the underlying law. [15:44.690 --> 15:47.750] Executive orders are not laws. [15:47.850 --> 15:52.710] Breaking an executive order by itself doesn't mean that you can get sued. [15:52.710 --> 15:56.950] Now, the agencies can do other things, and they can talk about enforcement priorities. [15:56.950 --> 16:01.670] But executive orders, the president's executive orders, are not laws. [16:02.150 --> 16:13.370] What we have is we have civil rights laws that have long said it's unlawful to discriminate on the basis of race, on the basis of sex. [16:13.370 --> 16:21.030] There is nothing in the existing law that says, but it's OK if a robot does it. [16:21.750 --> 16:33.350] There is nothing in our existing civil rights laws that says you have an affirmative defense, you have a get-out-of-jail card, if you say, oh, I didn't know how the AI model worked. [16:33.350 --> 16:37.770] I shouldn't be responsible for the results of using this AI model. [16:38.150 --> 16:42.470] "The robot made me discriminate" isn't a legal defense. [16:42.470 --> 16:49.590] So if we do it with automation or if we do it with humans, it's the same result under the law. [16:49.590 --> 16:51.190] That's my starting point. [16:51.250 --> 16:58.010] So a lot of people asked me during the Biden administration, do we need new AI laws? [16:58.010 --> 17:05.250] It's like, well, our starting point is we have all these existing civil rights laws that already make it illegal to discriminate. [17:05.290 --> 17:06.690] Why do we need new AI laws? [17:06.690 --> 17:08.390] Well, there's a lot of answers to that. [17:08.390 --> 17:10.890] But that's my starting point. [17:10.890 --> 17:25.290] OK, so what happened during the Biden administration is that the EEOC and the Department of Labor issued a bunch of documents that essentially said what I just said, that our existing civil rights laws apply to AI. [17:25.290 --> 17:29.430] If you use AI to make decisions about workers, you may get in trouble. [17:29.430 --> 17:41.770] And one of the first things that the EEOC and the Department of Labor did during the Trump administration is they very quietly, or not quietly, removed all of this technical assistance from their websites. [17:42.330 --> 17:58.310] If it helps to frame this, the other documents that the EEOC not quietly removed from their website during the first week of the Trump administration related to gender identity and discrimination, or nondiscrimination, against transgender individuals. [17:58.310 --> 18:01.190] They just said, no, we're not doing that anymore. [18:01.190 --> 18:03.510] That's not aligned with the administration's priorities. [18:03.510 --> 18:04.510] Going away. [18:06.550 --> 18:14.470] They didn't issue a press release about the AI stuff, but it was taken down in the same cycle as the gender identity documents. [18:14.470 --> 18:18.570] And so, you know, I think you can infer things from that. [18:18.570 --> 18:28.800] Same thing with the Department of Labor, and really a lot of Department of Labor guidance on wage and hour and minimum wage and other things. [18:30.350 --> 18:32.030] Attorney General Bondi.
[18:32.030 --> 18:39.770] So let me be a lawyer and do some basic legal terminology education here. [18:39.770 --> 18:46.410] So when we talk about discrimination, there's disparate treatment, which is overt. [18:46.410 --> 18:54.330] I am going to make a decision on the basis of race, on the basis of sex, you know, on the basis of one of these protected categories. [18:54.330 --> 18:57.230] I'm going to not hire you because you're a woman. [18:57.370 --> 18:59.850] That's disparate treatment. [19:00.230 --> 19:14.450] Disparate impact is saying I'm going to have a neutral, or a facially neutral, policy that has a disparate impact against someone with a protected characteristic, [19:14.450 --> 19:16.930] someone falling into one of these protected categories. [19:16.930 --> 19:19.290] So how do we get the law of disparate impact? [19:19.290 --> 19:25.130] Well, back during the civil rights era, there was a Supreme Court case involving Duke Power. [19:25.270 --> 19:36.050] And right after Title VII, these non-discrimination laws, got passed, Duke Power decided we're going to require all of these jobs to have a high school degree. [19:36.490 --> 19:39.790] And that sounds like a facially neutral policy. [19:39.790 --> 19:57.930] But saying all of these jobs need a high school degree had a disparate impact against black people, because, statistically speaking, the share of black people in the South who had a high school degree in the 1960s and 70s was a lot smaller than for non-black people. [19:58.070 --> 20:07.130] And what the Supreme Court decided in the Duke Power case was that the high school degree requirement had no relation to the job. [20:07.130 --> 20:14.770] They just put that requirement in there because the end result that they wanted was not to hire black people. [20:14.770 --> 20:18.290] So that's my disparate impact. [20:18.290 --> 20:19.930] Where did it come from? [20:19.930 --> 20:21.160] That's where it came from. [20:21.390 --> 20:31.270] Attorney General Bondi has said that she wants the Department of Justice to have a narrower view of disparate impact theories. [20:31.270 --> 20:44.850] And she wants a narrower view under which statistical disparity alone does not automatically constitute unlawful discrimination. [20:45.790 --> 20:59.230] This is huge for people doing AI work and dealing with models that have data about people. [20:59.230 --> 21:18.150] Because a lot of the work that we do when we're looking at models and potential unlawful bias involves looking at statistical disparities and making inferences about what those statistical disparities will result in or what's causing them.
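To make that concrete, the kind of statistical-disparity screen this analysis often starts from is the EEOC's four-fifths rule of thumb: if one group's selection rate is less than 80% of the highest group's rate, the disparity gets flagged for a closer look. Here is a minimal sketch in Python; the applicant and selection counts are invented purely for illustration:

```python
# Minimal sketch of the EEOC "four-fifths" (80%) rule of thumb used to
# screen for adverse impact in selection procedures. The counts below
# are hypothetical, invented for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who were selected."""
    return selected / applicants

# group: (selected, applicants) -- hypothetical numbers
groups = {
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest  # compare each group to the top rate
    flag = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={impact_ratio:.2f} -> {flag}")
```

Here group_b's ratio is 0.30 / 0.48 = 0.625, below the 0.8 threshold, so it gets flagged. The point is that this kind of ratio is a screening heuristic that prompts scrutiny, not by itself a legal conclusion, which is why a narrower official view of disparate impact changes so much about this model-bias work.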
[21:18.410 --> 21:28.030] And so here is the attorney general saying, of all of this body of case law that's been developing since the 1970s, I have a very different view. [21:28.030 --> 21:37.590] This week, President Trump announced his nominee for the Solicitor of Labor, which is the number three position at the Department of Labor. [21:37.590 --> 21:49.510] By the way, the number two position at the Department of Labor, the Deputy Secretary of Labor, is Keith Sonderling, a former AI Village keynote speaker in, what, 2021? [21:49.510 --> 21:51.500] So yeah, we're connected. [21:51.910 --> 22:03.060] But the Solicitor of Labor nominee has testified before Congress that he believes that the entire theory of disparate impact discrimination is unconstitutional. [22:03.280 --> 22:10.200] It's a very aggressive, cutting-edge legal theory. [22:10.200 --> 22:13.640] And so we'll see where that goes. [22:13.640 --> 22:14.850] We will see where that goes. [22:15.920 --> 22:17.860] Okay, so I talked about China. [22:17.860 --> 22:19.220] So let's talk about DeepSeek. [22:19.220 --> 22:20.860] Okay, people who aren't Gavin. [22:20.860 --> 22:23.020] Y'all know what DeepSeek is, right? [22:23.020 --> 22:24.240] Yeah, no? [22:24.240 --> 22:25.600] Yeah, okay, nod your heads. [22:25.600 --> 22:37.620] So when we talk about, well, hands off on AI regulation, hands off on laws that make it harder to use AI, but not China. [22:38.020 --> 22:40.740] But not China, but definitely not China. [22:40.740 --> 22:59.960] So in February, we saw a flurry of activity relating to DeepSeek, with laws decoupling America's artificial intelligence capabilities from China, essentially making it harder or impossible for the government to use DeepSeek for anything. [23:01.820 --> 23:09.780] And literally, we have the intelligence community calling DeepSeek a five-alarm national security fire. [23:09.780 --> 23:33.240] So, you know, whatever attitudes I might have, or might predict, about hands off on regulation, to the extent we think DeepSeek is a real threat, or a credible threat, I expect legislation relating to DeepSeek to have quite a bit of momentum. [23:35.380 --> 23:53.620] So, NIST, the National Institute of Standards and Technology, certainly a lot of my friends there over the previous decade or so; under the Biden administration, we had what I've publicly said is very good work with the NIST AI RMF, the Risk Management Framework. [23:54.040 --> 24:03.700] In response to President Biden's executive order on AI, we had something called the NIST AI Safety Institute stand up. [24:04.740 --> 24:09.780] And we've certainly seen some interesting changes under the Trump administration. [24:09.840 --> 24:13.680] So this is from a Wired article on March 14. [24:13.680 --> 24:35.520] Again, the mission of the AI Safety Institute is being refocused, from wherever it was before, however we want to characterize that, to address ideological bias in foundation models, in our cutting-edge models. [24:36.300 --> 24:53.700] The other thing, apologies for the small text here: this directive, according to Wired, says that if you want to be in partnership with the AI Safety Institute, you have to sign an agreement. [24:53.700 --> 25:06.340] The latest version of this agreement eliminates references to AI safety, responsible AI, and AI fairness, and only says reducing ideological bias. [25:08.360 --> 25:15.660] Those are the priorities of the Trump administration and what it means to focus the work of the AI Safety Institute. [25:19.120 --> 25:21.720] Okay, practical considerations. [25:22.000 --> 25:30.520] This is a slide I made for non-lawyers, not for actual AI people, but we have all of these pressures. [25:30.580 --> 25:47.240] We have media coverage of spectacular failures of AI systems, or funny failures of AI systems, plus worker rights, and all of these other pressures saying to lawmakers and regulators, do something. [25:47.820 --> 25:58.000] And if the federal government isn't going to do something, then that increases the pressure on state legislatures and people who aren't the federal government to do something. [25:58.920 --> 26:09.360] And so, as I've said with my crystal ball, in the absence of federal action, we're really going to get the state action, which is what I'll talk about in a couple of hours, on the schedule there. [26:10.060 --> 26:12.040] Practical considerations, though. [26:12.240 --> 26:25.540] So, under the Biden administration, the discussion about AI safety was often, in my personal opinion, mutually unintelligible. [26:25.540 --> 26:49.860] Because what, like, Scale AI, or OpenAI, or folks talking about AI safety for foundation models, for GPAI, if we're using the European term, for these cutting-edge models with huge data centers, that's very different from how someone thinking about potential bias, [26:49.860 --> 26:55.520] or discrimination, or even misinformation or disinformation, might think about AI safety. [26:55.540 --> 27:10.880] And that's very different from an AI security-focused discussion about model vulnerabilities, about perturbation attacks, about evasion attacks, about model poisoning, data poisoning, all of these nuts-and-bolts things.
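For the security-flavored sense of those terms, a minimal sketch of what an evasion, or perturbation, attack looks like may help: push an input a small step along the sign of the gradient of the model's score, FGSM-style, until the classifier changes its answer. The toy logistic-regression weights here are invented for illustration; against a real model the gradient would come from backpropagation rather than being read directly off the weights:

```python
# Toy evasion (perturbation) attack in the FGSM style against a
# logistic-regression "model". Weights and input are invented.
import numpy as np

w = np.array([1.0, -2.0, 0.5])  # pretend these are trained weights
b = 0.1

def predict_proba(x: np.ndarray) -> float:
    """Probability of class 1 under the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.2, 0.4, 0.3])   # benign input, classified as class 0
print(predict_proba(x))          # ~0.41 -> class 0

# For a linear model, the gradient of the score w.r.t. x is just w,
# so the FGSM step that pushes the score up is +eps * sign(w).
eps = 0.3
x_adv = x + eps * np.sign(w)
print(predict_proba(x_adv))      # ~0.67 -> flipped to class 1
```

A small, targeted nudge flips the output while the input still looks close to the original; that, scaled up to images or malware features, is the evasion-attack problem.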
[27:10.880 --> 27:30.560] And so, if we think that AI safety and AI security is limited to foundation models, and AGI, or NBC, oh my gosh, nuclear, biological, and chemical weapons of mass destruction, that's what AI safety is about. [27:30.560 --> 27:32.900] Well, to some people, yes. [27:33.140 --> 27:42.640] But there's a different component of AI safety that talks about model misuse, or unlawful bias, or, you know, all of these other things that come in there. [27:42.720 --> 27:44.700] And so, it's a broad field. [27:45.040 --> 27:51.120] And when J.D. Vance says, I'm not here to talk about AI safety, what exactly does he mean by that? [27:51.120 --> 27:53.100] I don't know. [27:53.100 --> 28:05.080] I think it would be a mistake to say that the federal government is going to exit the sphere of AI safety, if we think about that as including AI security. [28:05.520 --> 28:19.260] Because, let's be real here, if we think about APTs and we think about global competitiveness and global adversaries, we need secure infrastructure. [28:19.580 --> 28:37.100] The federal government has a very big interest in making sure that the AI supply chain is secure, and in doing all the things that the government is notionally good at to ensure safety and security on those issues. [28:37.200 --> 28:57.170] So, you know, refocusing, yes, but the AI security aspect of AI safety, I think, has to continue, and I would hope we will see things on this front, because that's the nature of safety and security, at least when I look at it that way. [28:58.060 --> 29:00.810] So, AI red teaming. [29:01.440 --> 29:17.360] I know that my friends, you know, that are running the village here, we have varied opinions about AI red teaming and how some of us feel that that term has been misapplied. [29:17.360 --> 29:32.900] The notion that AI red teaming is only trying to jailbreak an LLM and getting some funny output, like, you know, hey, can I get lock-picking instructions out of ChatGPT, or Llama, or getting Grok to, like, give me ridiculous output. [29:32.900 --> 29:37.300] By the way, it's really not that difficult to get ridiculous output out of Grok. [29:37.300 --> 29:39.680] That's, like, you know, foundational stuff, right? [29:39.760 --> 29:40.380] Yeah, sure. [29:40.380 --> 29:41.760] Is that AI red teaming? [29:41.760 --> 29:42.980] Well, I guess. [29:42.980 --> 30:01.080] But the other part of AI red teaming, where we're talking about really doing red teaming in the traditional sense, you know, pickle vulnerabilities, supply chain vulnerabilities, exploits that evade classifiers, [30:01.080 --> 30:11.720] exfiltration attacks against training data, that's part of red teaming also.
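On the pickle-vulnerability point specifically, a minimal sketch of why it belongs in traditional red-team scope: Python's pickle format can execute arbitrary code at load time, so a tainted model file is a supply-chain exploit, not just a bad model. The payload below is deliberately harmless, and the safetensors mitigation at the end assumes that library is installed:

```python
# Why pickled model files are a supply-chain risk: unpickling can run
# arbitrary code via __reduce__. This payload only prints a message,
# but it could just as easily be a shell command.
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # Runs at *load* time, before any "model" code is touched.
        return (print, ("arbitrary code ran inside pickle.loads",))

tainted_model_file = pickle.dumps(MaliciousPayload())

# The victim thinks they are loading weights; the payload fires instead.
pickle.loads(tainted_model_file)

# Mitigation sketch (assumes the safetensors package is available):
# formats like safetensors hold only raw tensor bytes plus metadata,
# so loading them cannot execute code the way pickle.loads can.
# from safetensors.torch import load_file
# tensors = load_file("model.safetensors")
```

Loading weights, in other words, can be code execution, which is why supply-chain checks on model artifacts are red teaming in the classic sense and not just prompt games.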
[30:11.720 --> 30:26.020] And so, you know, just because we might not care about certain aspects of AI safety on that front, I think some of these other fundamental security issues need to continue, and should continue, having emphasis from the feds. [30:26.020 --> 30:27.800] But we shall see. [30:28.140 --> 30:35.480] My last point: the complex interactions between federal, state, and international regulation are going to be a mess. [30:36.460 --> 30:39.840] Full employment for folks doing this work, at least, I hope. [30:39.840 --> 30:41.080] Anyway, that's my talk. [30:41.080 --> 30:46.140] Thank you for coming, and I will be back to talk about state stuff later on this afternoon.