This article was written by Thomas Bundy and Andrew D. Herman and first appeared in Maryland Matters.

Shortly after his inauguration, Maryland Gov. Wes Moore (D) visited a research institute addressing artificial intelligence, machine learning, and virtual and augmented reality. He touted the project as “a perfect example of how Maryland can become more economically competitive by creating opportunities through innovative partnerships.”

As the state embraces the promise of AI, however, it must also address the risks presented by the technology. For example, AI is a major element in the current Hollywood strikes. SAG-AFTRA's president, Fran Drescher, summarized the concern: “If we don’t stand tall right now, we are all going to be in trouble, we are all going to be in jeopardy of being replaced by machines.”
Other public figures who rely on visual media for promotion will also confront this issue. But, unlike Hollywood talent, this group can address the threat unilaterally. A recent editorial in The Washington Post summarized the problem: “Get ready for lots of literally unbelievable campaign ads. AI could wreak havoc on elections.” Maryland’s elected officials should therefore move decisively on this issue.

AI’s threat to political discourse is real. Candidates for the Republican presidential nomination have already shared AI-enabled parodies mocking their opponents, and the Republican National Committee recently aired a fake video depicting a future hellscape under President Biden. Some of these ads disclosed the use of AI; some did not.

And things can get worse. As the elections draw closer, the temptation to fabricate more extreme ads may prove irresistible. After all, if an AI-enabled deception is effective, it is far easier to ask for forgiveness afterward, especially if no specific legal constraints exist. The wide latitude courts currently grant to political speech hampers effective responses to these tactics. Victory in a defamation suit months after an election will provide little recompense for a losing candidate smeared by an AI invention. Further, the last decade has produced a raft of foreign attempts to interfere with domestic elections through social media and other venues. It is not hard to envision foreign actors deploying AI in 2024 to wreak havoc and discredit American candidates and officeholders.

The best solution would, of course, be a federal law imposing nationwide standards for the use of AI in political discourse, penalizing violations, and authorizing victims to remove clear violations expeditiously. In May, Sen. Amy Klobuchar (D-Minn.) and Rep. Yvette Clarke (D-N.Y.) introduced bills in their respective chambers; the REAL Political Advertisements Act would require full disclosure of AI-generated content in political ads. Other, more restrictive proposals, including a bill establishing criminal penalties for the creation of “fake electronic media that appears realistic,” have fizzled in Congress. Capitol Hill’s current dysfunction makes it unlikely that Congress will impose effective reforms soon.

The chance for regulation in the executive branch is slightly better. In June, the regulator with authority to address this issue, the Federal Election Commission, deadlocked on proposed regulations on political ads using AI. The FEC tried again this August, seeking public comment on a request for a rulemaking specifying that using false AI-generated content, or “deepfakes,” in campaign ads violates the federal prohibition on fraudulent misrepresentation of campaign authority. Although he voted to publish the request, Commissioner Allen Dickerson said that AI remains an issue for Congress, identifying “serious First Amendment concerns lurking in the background of this effort.”

Things are more promising in the states: California, Minnesota, Texas, and Washington have all enacted restrictions on AI use since 2019. While these laws vary in scope, they offer a variety of options for Maryland to emulate. Existing state laws establish the pillars of a sound AI policy that will survive First Amendment scrutiny from the federal courts, especially a skeptical Supreme Court. An effective law should include the following elements: