Gallup Sun

Saturday, Jul 27th

Senators introduce bill to combat A.I. deepfakes

Legislation would put journalists, artists, songwriters more in control of their creative content

WASHINGTON, D.C. — On July 12, U.S. Sens. Martin Heinrich, D-N.M., co-founder and co-chair of the Senate AI Caucus and a member of the Bipartisan Senate AI Working Group; Maria Cantwell, D-Wash., chair of the Senate Commerce Committee; and Marsha Blackburn, R-Tenn., a member of the Commerce Committee, introduced the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act to combat the rise of harmful deepfakes.

The legislation would set new federal transparency guidelines for marking, authenticating and detecting AI-generated content, protect journalists, actors, and artists against AI-driven theft, and hold violators accountable for abuses.

Cantwell, who wrote the bill, provided more detail about it.

“The bipartisan COPIED Act I introduced with Senator Blackburn and Senator Heinrich will provide much-needed transparency around AI-generated content,” Cantwell said. “The COPIED Act will also put creators, including local journalists, artists and musicians, back in control of their content with a provenance and watermark process that I think is very much needed.”

Heinrich showed his support for the bill.

“Deepfakes are a real threat to our democracy and to Americans’ safety and well-being,” Heinrich said. “I’m proud to support Senator Cantwell’s COPIED Act that will provide the technical tools needed to help crack down on harmful and deceptive AI-generated content and better protect professional journalists and artists from having their content used by AI systems without their consent. Congress needs to step up and pass this legislation to protect the American people.”

Specifically, the COPIED Act:

Creates Transparency Standards: Requires the National Institute of Standards and Technology to develop guidelines and standards for content provenance information, watermarking and synthetic content detection. These standards will promote transparency to identify if content has been generated or manipulated by AI, as well as where AI content originated. The bill also directs NIST to develop cybersecurity measures to prevent tampering with provenance and watermarking on AI content.

Puts Journalists, Artists and Musicians More in Control of Their Content: Requires providers of AI tools used to generate creative or journalistic content to allow owners of that content to attach provenance information to it and prohibits its removal. The bill prohibits the unauthorized use of content with provenance information to train AI models or generate AI content. These measures give content owners—journalists, newspapers, artists, songwriters, and others—the ability to protect their work and set the terms of use for their content, including compensation.

Gives Individuals a Right to Sue Violators: Authorizes the Federal Trade Commission and state attorneys general to enforce the bill’s requirements. It also gives newspapers, broadcasters, artists, and other content owners the right to bring suit in court against platforms or others who use their content without permission.

Prohibits Tampering with or Disabling AI Provenance Information: Currently, there is no law that prohibits removing, disabling, or tampering with content provenance information. The bill prohibits anyone, including internet platforms, search engines and social media companies, from interfering with content provenance information in these ways.

By Sen. Martin Heinrich