U.S. Legislative Trends in AI-Generated Content: 2024 and Beyond

Standing in front of the U.S. flag and dressed as Uncle Sam, Taylor Swift proudly proclaims that you should vote for Joe Biden for President. In a nearly identical image circulated by former President Trump himself, she instead urges you to vote for Donald Trump. Both images, and the sentiments they purport to convey, are fabricated: the output of a generative AI tool for creating and manipulating images. In fact, shortly after Donald Trump circulated his version of the image, and citing fears about the spread of misinformation, the real Taylor Swift posted a genuine endorsement of Vice President Kamala Harris to her Instagram account.

Generative AI is a powerful tool, both in elections and more generally in people’s personal, professional, and social lives. In response, policymakers across the U.S. are exploring ways to mitigate risks associated with AI-generated content, also known as “synthetic” content. As generative AI makes it easier to create and distribute synthetic content that is indistinguishable from authentic or human-generated content, many are concerned about its growing potential for use in political disinformation, scams, and abuse. Legislative proposals to address these risks often focus on disclosing the use of AI, increasing transparency around generative AI systems and content, and placing limitations on certain synthetic content. While these approaches may address some challenges with synthetic content, they also face a number of limitations and tradeoffs that policymakers should weigh going forward.

1. Legislative proposals to regulate synthetic content have primarily focused on authentication, transparency, and restrictions.

Generally speaking, policymakers have sought to address the potential risks of synthetic content by promoting techniques for authenticating content, establishing requirements for disclosing the use of AI, and/or setting limitations on the creation and distribution of deepfakes. Authentication techniques, which involve verifying the source, history, and/or modifications of a piece of content, are intended to help people determine whether they are interacting with an AI agent or AI-generated content, and to provide greater insight into how content was created. Authentication requirements often include offering the option to embed, attach, or track certain information alongside content so that others know where it came from, such as the following (illustrated with a simplified sketch after the list):

  • Watermarking: embedding information into content for the purpose of verifying the authenticity of the output, determining the identity or characteristics of the content, or establishing provenance (see below). Also referred to as “digital watermarking” in this context, to distinguish from traditional physical watermarks.
  • Provenance tracking: recording and tracking the origins and history of content or data (also known as “provenance”) in order to determine its authenticity or quality.
  • Metadata recording: recording information that describes a piece of data or content (its “metadata”), rather than its substance, for the purpose of authenticating the content’s origins and history.
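
To make these techniques more concrete, the sketch below shows a minimal, hypothetical form of metadata-style provenance recording: a small sidecar record is written alongside a piece of content, capturing a fingerprint of the content, the tool that produced it, and a timestamp, so that the record can later be checked for tampering. The sidecar format, field names, and helper functions are illustrative assumptions, not an existing standard such as C2PA or a scheme required by any of the bills discussed below.

```python
# Minimal, hypothetical sketch of metadata-based provenance recording.
# The sidecar format and field names are illustrative assumptions, not an
# existing standard such as C2PA.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def record_provenance(content_path: str, tool_name: str, action: str) -> None:
    """Write a sidecar JSON record describing how the content was produced."""
    content_hash = hashlib.sha256(Path(content_path).read_bytes()).hexdigest()
    record = {
        "content_sha256": content_hash,  # fingerprint of the content itself
        "tool": tool_name,               # e.g., the generative AI system used
        "action": action,                # e.g., "generated" or "edited"
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    Path(content_path + ".provenance.json").write_text(json.dumps(record, indent=2))


def verify_provenance(content_path: str) -> bool:
    """Check that the content still matches the hash in its provenance record."""
    sidecar = Path(content_path + ".provenance.json")
    if not sidecar.exists():
        return False  # no provenance information attached
    record = json.loads(sidecar.read_text())
    current_hash = hashlib.sha256(Path(content_path).read_bytes()).hexdigest()
    return current_hash == record["content_sha256"]
```

Real provenance systems embed signed records in the content itself and chain successive edits together; an unsigned sidecar like this one could be trivially stripped or rewritten, which is part of why standards work in this area matters.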

A number of bills require or encourage the use of techniques like watermarking, provenance tracking, and metadata recording. Most notably, California AB 3211, the “Digital Content Provenance Standards” bill, which was proposed in 2024 but did not pass, sought to require generative AI providers to embed provenance information in synthetic content and to provide users with a tool for detecting synthetic content, as well as to require recording device manufacturers to offer users the ability to embed authenticity and provenance information in content. At the federal level, a bipartisan bill, the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act, has been introduced that would direct the National Institute of Standards and Technology (NIST) to develop standards for watermarking, provenance, and synthetic content detection, and would require that generative AI providers allow content owners to attach provenance information to content. If passed, the COPIED Act would build on NIST’s existing efforts to provide guidelines on synthetic content transparency techniques, as required by the White House Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

Relatedly, policymakers are also exploring ways to improve transparency regarding synthetic content through labeling, disclosures, and detection. Some legislation, such as the recently-enacted Colorado AI Act and the pending federal AI Labeling Act of 2023, requires individuals or entities to label AI-generated content (labeling) or to disclose the use of AI in certain circumstances (disclosure). Other legislation focuses on synthetic content detection tools, which analyze content to determine whether it is synthetic and provide further insight into it. Detection tools include those that evaluate the likelihood that a given piece of content is AI-generated, as well as tools that read watermarks, metadata, or provenance data to inform people about the content’s origins. For example, the recently-enacted California AI Transparency Act requires, among other things, that generative AI system providers make an AI detection tool available to their users. Separately, the Federal Communications Commission (FCC) is exploring rules around the use of technologies that analyze the content of private phone conversations to alert users that the voice on the other end of the line may be AI-generated.
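
As a simplified illustration of what a provenance-aware detection or labeling tool might surface to users, the helper below reads the hypothetical sidecar record sketched above and returns a plain-language label. This is purely illustrative; neither the California AI Transparency Act nor any other law prescribes this format or logic.

```python
# Hypothetical detection helper that reads the sidecar provenance record from
# the earlier sketch and returns a user-facing label. Purely illustrative; no
# statute prescribes this format or logic.
import json
from pathlib import Path


def label_content(content_path: str) -> str:
    """Return a human-readable label based on any attached provenance record."""
    sidecar = Path(content_path + ".provenance.json")
    if not sidecar.exists():
        return "No provenance information available"
    record = json.loads(sidecar.read_text())
    tool = record.get("tool", "an unknown tool")
    if record.get("action") == "generated":
        return f"AI-generated by {tool}"
    return f"Edited with {tool}"
```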

Another common approach to addressing synthetic content risks has been to place legal restrictions on the production or distribution of certain AI-generated content, particularly “deepfakes” that use AI to appropriate a person’s likeness or voice. In contrast to more technical and organizational approaches, legal restrictions typically involve prohibiting certain uses of deepfakes, providing mechanisms for those affected to seek relief, and potentially placing liability on platforms that distribute or fail to remove prohibited content. Over the past few years, many states have passed laws focused on deepfakes in political and election-related communications, non-consensual intimate imagery (NCII), and child sexual abuse material (CSAM), with some applying to deepfakes more generally. This year at the federal level, a number of similar bills have been introduced, such as the Candidate Voice Fraud Prohibition Act, DEEPFAKES Accountability Act, and Protect Victims of Digital Exploitation and Manipulation Act. The Federal Trade Commission (FTC) has also taken this approach, recently finalizing a rule banning fake reviews and testimonials (including synthetic ones), and exploring rulemaking on AI-driven impersonation of individuals. The FCC has also considered engaging in rulemaking on disclosures for synthetic content in political ads on TV and radio.

2. Legislative approaches to synthetic content need to be carefully considered to assess feasibility and impact.

While legally-mandated safeguards may help address some of the risks of synthetic content, they also currently involve a number of limitations, and may conflict with other legal and policy requirements or best practices. First, many of the technical approaches to improving transparency are relatively new and often not yet capable of achieving the goals they may be tasked with. For example, synthetic content detection tools—which have already been used controversially in schools—are, generally speaking, not currently able to reliably flag when content has been meaningfully altered by generative AI. This is particularly true when a given tool is used across different media and content types (e.g., images, audio, text) and across languages and cultures, where accuracy can vary significantly. And because they often make mistakes, detection tools may fail to slow the distribution of misinformation while simultaneously exacerbating skepticism about their own reliability.

Even more established techniques may still have technical limitations. Watermarks, for instance, can still be removed, altered, or forged relatively easily, creating a false history for a piece of content. Techniques that are easy to manipulate could end up creating mistrust in the information ecosystem, as synthetic content may appear as non-synthetic, and non-synthetic content may be flagged as synthetic. Additionally, because watermarking only works when the watermark and detection tool are interoperable—and many are not—rolling this technique out at scale without coordination may prove unhelpful and exacerbate confusion. Finally, given that there is no agreement or standard regarding when content has been altered enough to be considered “synthetic,” techniques for distinguishing between synthetic and non-synthetic content are likely to face challenges in drawing a clear line.
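
The interoperability point can be illustrated with a toy example: a detector built for one watermarking scheme simply cannot see marks embedded by another. The scheme names and the trivial append-a-tag embedding below are assumptions for illustration only; real digital watermarks are embedded imperceptibly in pixels, audio samples, or token choices.

```python
# Toy illustration of watermark interoperability: a detector only recognizes
# marks produced by the scheme it was built for. Scheme names and the
# append-a-tag "embedding" are illustrative stand-ins for real watermarking.
TAGS = {"scheme_a": "[WM-A]", "scheme_b": "[WM-B]"}


def embed_watermark(text: str, scheme: str) -> str:
    """Append a scheme-specific tag to the content (stand-in for real embedding)."""
    return text + TAGS[scheme]


def detect_watermark(text: str, scheme: str) -> bool:
    """Return True only if the tag for this detector's scheme is present."""
    return text.endswith(TAGS[scheme])


marked = embed_watermark("An AI-generated caption.", "scheme_a")
print(detect_watermark(marked, "scheme_a"))  # True: matching detector finds the mark
print(detect_watermark(marked, "scheme_b"))  # False: a non-interoperable detector misses it
```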

Certain techniques that are intended to provide authentication through tracking, like metadata recording and provenance tracking, may also conflict with privacy and data protection principles. Provenance and metadata tracking, for example, may reveal individuals’ personal data, and digital watermarks can be individualized, which could then be used to monitor people’s personal habits or online behavior. These techniques require collecting more data about a piece of content, and keeping records of it for longer periods of time, which may be in tension with mandates to minimize data collection and limit retention. As previously mentioned, the FCC is investigating third-party AI call detection, alerting, and blocking technologies, which require real-time collection and analysis of private phone conversations, often without the other party’s knowledge. Notably, FCC Commissioner Simington has said the notion of the Commission putting its “imprimatur” on “ubiquitous third-party monitoring” tools is “beyond the pale.”

Beyond issues with technical feasibility and privacy, some approaches to addressing synthetic content risks are likely to face legal challenges under the First Amendment. According to some interpretations of the First Amendment, laws prohibiting the creation of deepfakes in certain circumstances—such as in the case of election-related content and digital replicas of deceased people—are a violation of constitutionally-protected free expression. For example, in early October a federal judge enjoined a recently-enacted California law that would prohibit knowingly and maliciously distributing communications with “materially deceptive” content that could harm a political candidate and portrays them doing something they did not do—such as a deepfake—without a disclosure that the media is manipulated. According to the judge, the law may violate the First Amendment because its disclosure requirement is “overly burdensome and not narrowly tailored,” and because the law’s over-broad conception of “harm” may stifle free expression.

Finally, some have raised challenges on the intersection between regulation of synthetic content and other regulatory areas, including platform liability and intellectual property. Critics argue that laws holding republishers and online platforms liable for prohibited content run afoul not only of the First Amendment but also of Section 230 of the Communications Decency Act, which largely shields interactive computer service providers from liability for third-party content. Under the latter argument, exposing platforms to liability for failing to remove or block violative synthetic content that users have not reported to them contradicts Section 230 and places an unreasonable logistical expectation on platforms. There is also concern that holding platforms responsible for removing “materially deceptive” content—such as in the context of elections and political communications—would put them in the position of determining what information is “accurate,” a role for which they are not equipped. In recognition of these technical and organizational limitations, some have pushed for legislation to include “reasonable knowledge” and/or “technical feasibility” standards.

3. More work lies ahead for policymakers intent on regulating synthetic content.

2024 has been called an election “super year”: by the end of the year, up to 3.7 billion people in 72 countries will have voted. This convergence has likely motivated lawmakers to focus on the issues surrounding deepfakes in political and election-related communications. By contrast, there will be significantly fewer elections in the coming years. At the same time, emerging research is challenging the notion that deepfakes have a noticeable impact on either the outcome or the integrity of elections. Additionally, the U.S. Federal Election Commission (FEC) recently declined to make rules regarding the use of AI in election ads, stating that it does not have the authority to do so, and has clashed with the FCC over that agency’s attempt to regulate AI in election ads.

While political and election deepfakes may get less policymaker attention in the U.S. in 2025, deepfakes are only becoming harder to distinguish from authentic content. At the federal level, U.S. regulators and lawmakers have signaled strong interest in continuing to push for the development and implementation of content authentication techniques to allow people to distinguish between AI and humans, or between AI-generated content and human-generated content. NIST, for example, is currently responding to the White House EO on AI and finalizing guidance for synthetic content authentication, to be published by late December 2024. In May 2024 the Bipartisan Senate AI Working Group, led by Sen. Chuck Schumer, published its Roadmap for AI policy, recommending that congressional committees consider the need for legislation regarding deepfakes, NCII, fraud, and abuse. The FTC is also currently considering an expansion of existing rules prohibiting impersonation of businesses and government officials to cover individuals as well, including AI-enabled impersonation. Given generative AI’s increasing sophistication, and integration into more aspects of people’s daily lives, interest in content authentication will likely continue to grow in 2025.

In the same way that age verification and age estimation tools got a boost in response to children’s privacy and safety regulations requiring differential treatment of minors online, authentication tools may see a similar boost. The FCC is already interested in exploring real-time call detection, alerting, and blocking technologies to distinguish human callers from AI callers. Similar solutions, such as “personhood credentials,” build on existing techniques like credentialing programs and zero-knowledge proofs to provide assurance that a particular individual online is in fact a human, or that a given online account is the official one and not an imposter.
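
At a very high level, a personhood credential involves an issuer attesting that an account holder passed a humanity check, and a service later verifying that attestation. The sketch below is a deliberately simplified stand-in: it uses a shared-secret HMAC, whereas real proposals rely on asymmetric signatures and zero-knowledge proofs so that third parties can verify the credential without learning who the person is. The names and flow shown are illustrative assumptions, not any deployed scheme.

```python
# Deliberately simplified sketch of a "personhood credential" flow. Real
# systems use asymmetric signatures and zero-knowledge proofs; this
# shared-secret HMAC stand-in only illustrates the issue-then-verify pattern.
import hashlib
import hmac

ISSUER_KEY = b"hypothetical-issuer-secret"  # held by the credential issuer


def issue_credential(account_id: str) -> str:
    """Issuer attests that the holder of this account passed a humanity check."""
    return hmac.new(ISSUER_KEY, account_id.encode(), hashlib.sha256).hexdigest()


def verify_credential(account_id: str, credential: str) -> bool:
    """Check the attestation (here only the issuer can verify, since the key is shared)."""
    expected = hmac.new(ISSUER_KEY, account_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential)


cred = issue_credential("account-123")
print(verify_credential("account-123", cred))   # True: credential matches the account
print(verify_credential("imposter-456", cred))  # False: credential does not transfer
```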

As generative AI becomes more powerful, and synthetic content more convincing, malicious impersonation, disinformation, and NCII and CSAM may pose even greater risks to safety and privacy. In response, policymakers are likely to ramp up efforts to manage these risks, through a combination of technical, organizational, and legal approaches. In particular, lawmakers may focus on especially harmful uses of deepfakes, such as synthetic NCII and CSAM, as well as encouraging or mandating the use of transparency tools like watermarking, content labeling and disclosure, and provenance tracking. 

To read more about synthetic content’s risks, and policymaker approaches to addressing them, check out FPF’s report Synthetic Content: Exploring the Risks, Technical Approaches, and Regulatory Responses.

