By: Tatiana Avdienko
Volume X – Issue I – Fall 2024
I. INTRODUCTION
On July 26, 2024, X Chairman Elon Musk reposted a Kamala Harris campaign video on X in which Harris appeared to state that she “did not know the first thing about running a country.” [1] Musk, however, did not disclose that the video was a deepfake created using artificial intelligence (“AI”). Artificial intelligence is a form of technology that allows machines to simulate human creativity, autonomy, comprehension, and learning. [2] A deepfake is an AI-generated audio recording, video, or photo of someone, made to look real, that depicts actions or words the person never produced. [3] Deepfakes have existed since the late 2010s, and the rise of AI has led to more advanced audio and visual techniques. Misleading deepfake technology poses a threat not only to the lives of individuals but also to the democratic processes at the core of American politics: it can spread false information about candidates, influencing the people’s vote and the outcome of state and federal elections. While deepfakes have officially made their way into the United States political sphere, no federal law restricts how they are used. Calls for AI legislation from activist groups, congresspeople, and even technology companies have produced state laws, such as Alabama’s Distribution of Materially Deceptive Media Act, and federal proposals, such as the NO FAKES Act of 2023. On an international scale, bodies such as the European Union have taken action with the EU AI Act. As AI technology continues to develop in the United States, passing effective federal legislation that protects individuals while allowing technological innovation is crucial to preventing the spread of misinformation.
II. HISTORY OF DEEPFAKE TECHNOLOGY
Deepfake technology has evolved alongside machine learning and artificial intelligence. One of the first pioneers of machine learning algorithms was mathematician Alan Turing, who published the paper “Computing Machinery and Intelligence” in 1950. In the paper, Turing proposed a method for evaluating whether machines could think, laying the groundwork for what would later become the field of artificial intelligence. [4] Artificial intelligence is “technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.” [5] Throughout the 1990s and into the 21st century, AI flourished as new technologies yielded novel neural network architectures, voice assistants, natural language data collection, and machine reading systems. The first generative adversarial network, a model that pits two neural networks against each other to create media that appears authentic, was introduced in 2014. [6] A neural network is a machine learning model that mimics the way biological neurons function to make decisions. [7] The term “deepfake” was first used in 2017 by a Reddit user sharing pornographic images that used face-swapping technology. Deepfake technology often relies on generative adversarial networks or variational auto-encoder networks, models that “encode images into low-dimensional representations and then decode those representations back into images.” To create a deepfake, two such components are combined: an encoder trained on a wide variety of faces and a decoder trained on the deepfake subject’s face, as sketched below. [8]
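To make the shared-encoder, per-subject-decoder scheme described above concrete, the following Python sketch (using the PyTorch library) trains one encoder and two decoders on stand-in data. It is a minimal illustration only: the network sizes, variable names, and random “photo sets” are invented for this example, and real deepfake systems use far larger convolutional models.

# Minimal sketch of the autoencoder face-swap scheme (illustrative only).
# A single shared encoder learns a compact representation of faces; each
# decoder learns to reconstruct one specific person from that representation.
import torch
import torch.nn as nn

IMG_DIM, LATENT_DIM = 64 * 64, 128  # illustrative sizes

encoder = nn.Sequential(nn.Linear(IMG_DIM, 512), nn.ReLU(),
                        nn.Linear(512, LATENT_DIM))
decoder_a = nn.Sequential(nn.Linear(LATENT_DIM, 512), nn.ReLU(),
                          nn.Linear(512, IMG_DIM), nn.Sigmoid())  # person A
decoder_b = nn.Sequential(nn.Linear(LATENT_DIM, 512), nn.ReLU(),
                          nn.Linear(512, IMG_DIM), nn.Sigmoid())  # person B

loss_fn = nn.MSELoss()
opt = torch.optim.Adam([*encoder.parameters(),
                        *decoder_a.parameters(),
                        *decoder_b.parameters()], lr=1e-3)

faces_a = torch.rand(16, IMG_DIM)  # stand-ins for real photo sets
faces_b = torch.rand(16, IMG_DIM)

for step in range(200):
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The face swap: encode person A's pose and expression, then decode with
# person B's decoder, producing B's likeness performing A's movements.
swapped = decoder_b(encoder(faces_a))

Because the shared encoder captures pose and expression while each decoder supplies a specific identity, swapping decoders at inference time is what turns a reconstruction into a deepfake.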
Since its development, deepfake technology has been weaponized in global contexts. In March 2022, shortly after the onset of the Russian invasion of Ukraine, Russian propagandists released a deepfake video of Ukrainian President Volodymyr Zelenskyy in which he appeared to ask citizens to surrender to Russian forces; it was the first known example of a deepfake being weaponized during an armed conflict. [9] Generative AI is not only weaponized during international disputes but is also raising questions in domestic courtrooms. In recent federal cases, defense attorneys accused the prosecution of manipulating audio and video evidence using deepfake technology. For example, defense attorneys for rioters charged in the January 6, 2021 Capitol insurrection claimed that “the jury could not trust the videos because there was no assurance they were not fake or had not been altered.” While this “deepfake defense” was unsuccessful, it demonstrates that generative AI blurs the line between real and fake evidence, raising issues of reliability. [10] The rise of undisclosed deepfakes, meaning deepfakes that are not labeled as AI-generated, could lead courtrooms to lose trust in all forms of digital media, making it difficult for victims to obtain justice. The true nature of events could easily be distorted, leaving juries ill-informed. Because of these dangers, legislators at the state, federal, and international levels have begun passing legislation addressing the issue.
III. CURRENT LEGISLATION
i. International Legislation
Foreign governmental bodies such as the European Union and China have taken swift action to prevent the further spread of misinformation. The European Union adopted the EU AI Act, the world's first comprehensive AI law. The Act establishes a framework for regulating AI on a defined scale of risks ranging from “minimal” to “unacceptable.” Systems deemed “unacceptable,” such as “real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems,” will be banned. [11] These systems are considered unacceptable because they infringe on personal privacy and overreach into the everyday lives of citizens. High-risk AI technology, including systems used in toys, cars, medical devices, education, employment, and law enforcement, will be thoroughly assessed before and during its time on the market.
Additionally, generative AI will be subject to EU copyright law, which includes disclosing when content is AI-generated and preventing models from creating illegal content. Deepfakes must be clearly labeled as AI-generated (a minimal technical illustration follows this paragraph). [12] Having passed in March 2024, the EU AI Act will be implemented in phases through 2027. [13] The EU has historically taken an aggressive approach to regulating the ethical use of technology by companies; its General Data Protection Regulation (GDPR), for example, is among the toughest consumer privacy laws in the world. The GDPR imposes regulations on any organization that collects data from EU citizens and promotes data minimization, privacy policy transparency, consent for data collection, and secure handling of personal data. [14] The EU’s approach centers on citizen interests, such as privacy and protection against weaponized generative AI. This approach may, however, raise costs for AI companies, which must implement the proper oversight and forgo profits from some types of generative AI. Ryan Peeler, a member of the Forbes Technology Council, observed that “[r]egular review and updates to maintain compliance in a dynamic regulatory landscape can significantly inflate costs over time.” AI models require constant oversight because they are trained by humans and therefore have implicit biases ingrained in their systems. [15] As further restrictions on AI systems take effect, these oversight practices will continue to grow in scope.
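The Act’s labeling mandate is a legal requirement rather than a technical specification, and it does not prescribe an implementation. As a loose illustration of what machine-readable disclosure could look like, the following Python sketch (using the Pillow imaging library; the label keys are invented for this example) embeds an “AI-generated” notice in an image’s metadata.

# Illustrative only: writes a hypothetical AI-disclosure label into a PNG's
# metadata. Real compliance regimes contemplate more tamper-resistant marks,
# such as cryptographic provenance records or invisible watermarks.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256))       # stand-in for a generated image
metadata = PngInfo()
metadata.add_text("ai_generated", "true")  # hypothetical label key
metadata.add_text("generator", "example-model-v1")
image.save("labeled_output.png", pnginfo=metadata)

# A platform could then check the label before displaying the image.
with Image.open("labeled_output.png") as img:
    print(img.text.get("ai_generated"))    # prints "true"

A plain metadata tag like this is trivial to strip, which is why regulators and standards bodies also pursue watermarks that survive editing and re-encoding.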
China has also passed legislation restricting AI. The Cyberspace Administration of China (CAC) passed legislation that “prohibits the production of deepfakes without user consent and requires specific identification that the content had been generated using artificial intelligence (AI).” [16] Ensuring that viewers are aware of AI-generated content is a significant step toward curbing the misinformation that AI can spread. However, China has come under scrutiny due to allegations from Graphika, a company that researches online disinformation. Graphika identified a “state-aligned operation promoting video footage of [artificial intelligence]...” in which Beijing was able to “disseminate disinformation by creating a synthetic avatar posing as a news anchor and reading a story on the divisive issues of gun control in the United States.” [17] China has not enforced its deepfake prohibitions forcefully, leaving room for deepfake propaganda to spread across the country. While the Chinese legislation is a notable example of AI regulation in an age of misinformation, it shows that application and enforcement matter as much as the law itself. Governments should also enforce deepfake laws without bias, even when violations align with state interests.
ii. Domestic Federal Legislation
The United States has not yet passed a comprehensive federal act regulating AI technology. Rather, the federal government has left state governments and AI companies to determine the details of AI restrictions, introducing federal bills that outline only vague regulatory frameworks. These bills include the Federal Artificial Intelligence Risk Management Act of 2024, which would require the National Institute of Standards and Technology (NIST) to create standards, specify cybersecurity strategies, and set developmental requirements that comply with the Artificial Intelligence Risk Management Framework. NIST developed that framework to “equip organizations and individuals…with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time.” [18] This set of standards is meant to be incorporated into AI risk management practices in federal agencies. [19] Close collaboration and compromise between legislators’ and companies’ interests may benefit the American economy: instead of prohibiting certain behaviors, the federal government has favored a framework that companies can adapt and incorporate into their policies. However, this form of legislation is not aggressive enough to combat the potentially dangerous and unforeseen outcomes of AI technology. The majority of existing federal bills focus on overseeing continued innovation rather than restricting current AI capabilities. Given the lack of preventative legislative measures in the early stages of generative AI, the federal government should focus on preventing the further mishandling of currently unregulated technology.
iii. Domestic State Legislation
State governments have passed AI legislation unevenly, with some states taking more significant strides than others to prevent election interference and protect individuals. For example, in 2024, Alabama enacted the Distribution of Materially Deceptive Media Act, which criminalizes the dissemination of deceptive media concerning an election. Colorado’s Candidate Election Deepfake Disclosures Act similarly criminalizes distributing AI-generated election media without disclosing the use of AI in its creation. [20] Many states have gained traction in regulating generative AI, but struggles to balance economic interests with AI regulation remain. In California, politicians have moved to regulate election-related deepfake technology through bills such as the Defending Democracy From Deepfake Deception Act of 2024 and the Elections: Deceptive Media in Advertisements Act. However, Governor Gavin Newsom recently vetoed one of the first company-centered AI regulatory bills in the United States, which would have subjected most AI models to comprehensive safety testing and required a “kill switch” in case generative AI became too powerful. In his veto, Governor Newsom reasoned that the bill could encourage AI companies to leave the state and stifle technological advancement. [21] The veto demonstrates the economic power that technology corporations hold in the states, which makes it difficult for individual states to pass restrictive AI legislation.
Approximately 25 states, including Nevada, Montana, Virginia, and Georgia, have yet to pass legislation on deepfakes in elections. [22] This imbalance renders state-level regulation largely ineffective because deepfake media easily crosses state lines.
IV. COMPANY REGULATION
Robust standards are more common at the company level than as legislative restrictions on company practices. Microsoft, one of the leading companies developing AI technology, uses its Responsible AI Standard, which closely aligns with NIST’s AI Risk Management Framework. Under the Standard, engineering teams identify potential harms of AI and build mitigations to address them, while red teams test and retest AI systems. Red teams are groups of ethical hackers who carry out simulated cybersecurity attacks to probe a system’s defenses. [23] The Responsible AI Standard is “responsible-by-design,” since it is built to address issues before products enter the market. [24] While rigorous, the Standard has failed to prevent the spread of misinformation and the weaponization of AI against individuals, governments, and even the democratic political system.
Some companies embrace the idea of allowing Congress to play a role in AI regulation: “OpenAI’s Sam Altman endorsed the idea of a federal agency dedicated to AI oversight… Microsoft’s Brad Smith and Meta’s Mark Zuckerberg have previously endorsed the concept of a federal digital regulator.” It is important to note that these company leaders have endorsed oversight, not regulation, of their AI models. Oversight alone gives companies more freedom than a set of clear, strict regulatory standards that would limit the directions in which they can take AI innovation. While companies encourage a governmental role in the development of generative AI, some companies and tech industry leaders appear to prefer minimal intervention. Former Google Executive Chairman Eric Schmidt has stated that he “would much rather have the current companies define reasonable boundaries.” [25] If companies were the only entities regulating themselves, the risk that economic self-interest would override AI safety and moderation would be too great. Companies need more involved input from other stakeholders, such as the federal government, which is less concerned with companies’ profits and more concerned with the preservation of democracy.
V. PROPOSED SOLUTIONS
Most proposed solutions seek to strike a balance among innovation, economic advancement, and individual safety. The viability of these solutions rests in their ability to balance the interests of companies, the government, and concerned citizens. At the federal level, Representatives Madeleine Dean and María Elvira Salazar introduced the “Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act,” which would give citizens the federal right to “control digital replicas of their voice and likeness.” [26] Because this bill focuses on citizen empowerment more than on regulating AI, companies would have to work closely with government officials to meet regulatory standards during and after development. Bills such as the NO FAKES Act have been introduced in the House of Representatives but have not gained traction. The lack of federal regulation of AI companies disincentivizes states from creating regulatory measures, since companies may relocate to a state with fewer AI restrictions. However, leaving AI regulation to the state governments may allow the federal government to measure the success of legislation when designing and implementing national regulatory frameworks. Instead of imposing restrictions and transparency requirements on companies, state governments have been forced to regulate those who use generative AI. With the rapid growth of AI technology, the federal government must work swiftly to implement boundaries around generative AI in the interest of democracy; the continued development and weaponization of these technologies outpaces the state regulatory measures already in place. Congress should look to states’ enacted deepfake restrictions when determining policy, since any economic or social consequences of regulation have likely already played out in those states, which can be treated as microcosms of the implications of broader regulatory policy.
VI. POLICY RECOMMENDATIONS AND IMPLICATIONS
The United States government must pass comprehensive AI legislation that prioritizes company regulation while allowing freedom for innovation. It would be beneficial to adopt a framework similar to the European Union’s, which allows lawmakers and companies to identify and regulate high-risk AI technologies, including weaponized deepfakes.
Arguments against regulating companies hold that legislation could stifle innovation and negatively impact the economy. One study found that because United States regulatory measures often constrain the number of employees a company can hire, regulated companies may be hesitant to hire. [27] With many regulatory acts increasing the amount of human oversight needed, companies may have to rebalance the resources they dedicate to innovation and supervision. However, if the federal government continues to favor the economic growth produced by technological innovation over safety concerns, unforeseen dangers will continue to arise, and legislators will lag behind technological innovations at an alarming rate. Tradeoffs such as stifled innovation should be considered in legislation, but so should the recent weaponization of the latest technological advancements.
With the prominence of technological globalization, international regulation may seem like the best solution. However, the United States has already struggled to reach a domestic consensus on regulatory measures. Additionally, as one of the most developed countries in the world, the United States can set an example for other countries in regulating AI. Ideal legislation should ensure that AI-generated media is clearly labeled or watermarked, and it should curb technology designed to undermine the labeling of AI-generated media. Companies would continue to be subject to oversight, but more preventative measures would be required. In any case, effective AI legislation will create harmony among economic prosperity, innovation, and a safer future for democracy.
Endnotes
[1] Ken Bensinger, “Elon Musk Shares Manipulated Harris Video, in Seeming Violation of X's Policies,” The New York Times, July 27, 2024, https://www.nytimes.com/2024/07/27/us/politics/elon-musk-kamala-harris-deepfake.html.
[2] Eda Kavlakoglu and Cole Stryker, “What is AI?,” IBM, https://www.ibm.com/topics/artificial-intelligence.
[3] “Deepfake, n.,” Oxford English Dictionary, n.d., https://www.oed.com/dictionary/deepfake_n.
[4] Scott Monteith, Tasha Glenn, John R. Geddes, Peter C. Whybrow, Eric Achtyes, and Michael Bauer, “Artificial intelligence and increasing misinformation,” The British Journal of Psychiatry 224, no. 2 (2024): 33-35, https://www.cambridge.org/core/journals/the-british-journal-of-psychiatry/article/artificial-intelligence-and-increasing-misinformation/DCCE0EB214E3D375A3006AA69FFB210D.
[5] Cole Stryker and Eda Kavlakoglu, “What Is Artificial Intelligence (AI)?” IBM, 2024, https://www.ibm.com/topics/artificial-intelligence.
[6] Keith D. Foote, “A Brief History of Generative AI,” Dataversity, March 5, 2024, https://www.dataversity.net/a-brief-history-of-generative-ai/.
[7] “What is a neural network?,” IBM, accessed November 24, 2024, https://www.ibm.com/topics/neural-networks.
[8] Meredith Somers, “Deepfakes, explained,” MIT Sloan, 2020, https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained.
[9] “The Rise of Artificial Intelligence and Deepfakes,” Northwestern Buffett Institute for Global Affairs, n.d., https://buffett.northwestern.edu/documents/buffett-brief_the-rise-of-ai-and-deepfake-technology.pdf.
[10] Herbert B. Dixon, “The ‘Deepfake Defense’: An Evidentiary Conundrum,” American Bar Association, 2024, https://www.americanbar.org/groups/judicial/publications/judges_journal/2024/spring/deepfake-defense-evidentiary-conundrum/.
[11] Hanna Ziady, “Europe is leading the race to regulate AI. Here’s what you need to know,” CNN, 2023, https://www.cnn.com/2023/06/15/tech/ai-act-europe-key-takeaways/index.html.
[12] “EU AI Act: first regulation on artificial intelligence,” European Parliament, https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
[13] “EU Artificial Intelligence Act (Regulation (EU) 2024/1689),” n.d., https://www.artificial-intelligence-act.com/.
[14] Ben Wolford, “What is GDPR, the EU's new data protection law?,” GDPR.eu, n.d., https://gdpr.eu/what-is-gdpr/.
[15] Ryan Peeler, “The Hidden Costs Of Implementing AI In Enterprise,” Forbes, 2023, https://www.forbes.com/councils/forbestechcouncil/2023/08/31/the-hidden-costs-of-implementing-ai-in-enterprise/.
[16] Asha Hemrajani, “China's New Legislation on Deepfakes: Should the Rest of Asia Follow Suit?,” The Diplomat, 2023, https://thediplomat.com/2023/03/chinas-new-legislation-on-deepfakes-should-the-rest-of-asia-follow-suit/.
[17] Diego Laje, “China's Deep Fake Law Is Fake,” AFCEA International, June 1, 2023, https://www.afcea.org/signal-media/cyber-edge/chinas-deep-fake-law-fake.
[18] “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” NIST Technical Series Publications, January 2023, https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf.
[19] “S.3205 - 118th Congress (2023-2024): Federal Artificial Intelligence Risk Management Act of 2024,” Congress.gov, n.d., https://www.congress.gov/bill/118th-congress/senate-bill/3205.
[20] “Deceptive Audio or Visual Media (“Deepfakes”) 2024 Legislation,” National Conference of State Legislatures, November 22, 2024, https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation.
[21] João da Silva, “California governor Gavin Newsom vetoes landmark AI safety bill,” BBC, September 30, 2024, https://www.bbc.com/news/articles/cj9jwyr3kgeo.
[22] “Tracker: State Legislation on Deepfakes in Elections,” Public Citizen, n.d., https://www.citizen.org/article/tracker-legislation-on-deepfakes-in-elections/.
[23] Evan Anderson, Jim Holdsworth, and Matthew Kosinski, “What is Red Teaming?,” IBM, November 7, 2024, https://www.ibm.com/think/topics/red-teaming.
[24] “Governing AI: A Blueprint for the Future,” Microsoft, https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW14Gtw.
[25] Tom Wheeler, “The three challenges of AI regulation,” Brookings Institution, June 15, 2023, https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation/.
[26] “Dean, Salazar Introduce Bill to Protect Americans from AI Deepfakes,” Madeleine Dean, September 12, 2024, https://dean.house.gov/2024/9/dean-salazar-introduce-bill-to-protect-americans-from-ai-deepfakes.
[27] Betsy Vereckey, “Does regulation hurt innovation? This study says yes,” MIT Sloan, June 7, 2023, https://mitsloan.mit.edu/ideas-made-to-matter/does-regulation-hurt-innovation-study-says-yes.