(American Habits)—Coming out of a presidential election season—and heading into a new presidential administration—it can be easy to think all politics is national politics, and that the solutions (or lack thereof) to our problems will be found in the District of Columbia. But we may in fact be entering a new golden age of federalism, with the pace of change outrunning the federal Congress’ output, and executive actions bouncing back and forth between administrations, generating no consistent policy. Combined with the Supreme Court’s much-belated rejection of deference to executive branch actions, there are many areas the federal government is simply not going to resolve any time soon, and where state solutions could well lead the way.
And indeed, the past year or two have seen state laws aplenty regulating, for instance, children’s access to social media—in both red and blue flavors. The Supreme Court’s decision in Dobbs likewise ‘sent abortion back to the states,’ and different states have, needless to say, gone in very different directions with that mandate. But today let’s focus on perhaps an even newer area with a lot of ground still to tread: state approaches to AI regulation.
At the moment, there isn’t really a federal approach to AI one way or the other. The closest thing to a federal policy here is President Biden’s executive order of a year ago, which is essentially an articulation of various risks and fears about the harms from the technology—everything from the dramatic “Skynet” scenarios to the already apparent tendency for these “deep learning” systems to deeply learn some biases of us humans who created them. But all indications are that the incoming Trump administration will toss the lot of that out in favor of what is expected to be a more industry-friendly approach advocated by the sort of tech moguls who flocked to the President-(re)elect during the campaign—but we’ll have to wait and see, and you’ll notice I don’t even bother predicting any particular congressional action.
In the meantime, the primary engine of AI regulation around the country has been and will continue to be at the state level—and many states have already gotten in on the act. Those laws take a variety of forms—reflecting a variety of concerns. Where Democrats hold power, the focus has often been on curtailing the perceived dangers of AI-generated ‘misinformation.’ Washington passed a law last year targeting the use of “synthetic media” in election communication, and other states have since followed suit. One doubts that anyone sentient enough to fill out a ballot actually believed Trump and Biden were an elderly married couple that likes to bake together and visit farmers markets, but there have been less frivolous incidents of AI-generated balderdash designed to mislead voters.
California values its role as innovator so much that Gov. Gavin Newsom signed some 18 different AI bills this year—and that’s after vetoing the most controversial one. Republican-led states, including Virginia, Texas, and Montana, have passed a slate of AI privacy bills—perhaps motivated by the same fear of ‘Big Tech’ censorship that animated the social media moderation laws recently considered by the Supreme Court (Texas AG Ken Paxton has explicitly promised to crack down on such censorship).
Political valences aside, states have a great deal of leeway to serve as Justice Brandeis’ federalist laboratories, experimenting with laws and policies and restrictions and subsidies. Georgia’s HB 203 legalizes AI eye exams for contact lenses—which sounds like a great idea if it actually works. State approaches can also innovate in response to local concerns—Tennessee’s new AI law aims to protect country music singers from digital competition; California’s AB 1836 does much the same for Hollywood actors.
Bad laws are of course less desirable than good laws, but the devil between the two is often in the details, particularly in these highly technical contexts. Washington’s synthetic media law in theory targets AI, but as written applies to any media “that has been intentionally manipulated with the use of generative adversarial network techniques or other digital technology in a manner to create a realistic but false image, audio, or video . . .”; you don’t need to use AI, just “digital technology”—so make sure the memes you photoshop don’t look too “realistic.” Is that a good definition? It’s very broad—and the broader such laws get, the greater the First Amendment concerns, as they cover more and more forms of speech until the teenager making goofy pictures of Kamala Harris and JD Vance picking out drapes together gets a call from the police. Ideally, states will tread carefully here in ensuring that the rights of citizens to communicate their political preferences extend to creative uses of machine learning.
Exactly how the courts will extend free speech principles to these applications remains to be seen. On the one hand, the constitutional protections for parody and even for political untruths are well established. On the other, the Supreme Court long ago held that ‘virtual’ child pornography can be criminalized consistent with the First Amendment, even though no real child ever actually existed to be harmed, so long as the ‘distributor’ pretends it’s real. The Department of Justice recently succeeded in convicting Douglass “Ricky Vaughn” Mackey for the crime of posting factually inaccurate memes about Hillary Clinton ahead of the 2016 election that prosecutors claim misled voters—it remains to be seen whether that will survive on appeal (disclosure: I filed an amicus brief in support of Mr. Mackey’s appeal).
For better or worse, that is another way the states will serve as laboratories here: in generating litigation through which courts will work out how to apply newsprint-era legal standards to cases about pixels and bits. But that can be valuable too, and well-written laws can establish good precedent in being upheld, just as bad laws can when struck down. And in this new area, with the full field in front of us, states that take these issues seriously can give us more of the former than the latter.
Reilly Stephens serves as Counsel at the Liberty Justice Center.