Big-Tech Lobbyists Are Trying to Weaken Copyright Protections in the EU’s AI Act

July 19, 2023


LONDON — When the European Union announced plans to regulate artificial intelligence in 2021, legislators started focusing on “high risk” systems that could threaten human rights, such as biometric surveillance and predictive policing. Amid increasing concern among artists and rights holders about the potential impact of AI on the creative sector, however, EU legislators are also now looking at the intersection of this new technology and copyright.

The EU’s Artificial Intelligence Act, which is now being negotiated among politicians in different branches of government, is the first comprehensive legislation in the world to regulate AI. In addition to banning “intrusive and discriminatory uses” of the technology, the current version of the legislation addresses generative AI, mandating that companies disclose content that is created by AI to differentiate it from works authored by humans. Other provisions in the law would require companies that use generative AI to provide details of copyrighted works, including music, on which they trained their systems. (The AI Act is a regulation, so it would pass directly into law in all 27 member states.)

Music executives began paying closer attention to the legislation after ChatGPT launched in November 2022. In April, around the time that “Heart on My Sleeve,” a track featuring AI-powered imitations of vocals by Drake and The Weeknd, drove home the threat the technology poses to artists and rights holders, industry lobbyists convinced lawmakers to add the transparency provisions.

So far, big technology companies, including Alphabet, Meta and Microsoft, have publicly stated that they, too, support AI regulation, at least in the abstract. Behind the scenes, however, multiple music executives tell Billboard that technology lobbyists are trying to weaken these transparency provisions by arguing that such obligations could put European AI developers at a competitive disadvantage.

“They want codes of conduct” — as opposed to laws — “and very low forms of regulation,” says John Phelan, director general of international music publishing trade association ICMP.

Another argument is that summarizing training data “would basically come down to providing a summary of half, or even the entire, internet,” says Boniface de Champris, Brussels-based policy manager at the Computer and Communications Industry Association Europe, which counts Alphabet, Apple, Amazon and Meta among its members. “Europe’s existing copyright rules already cover AI applications sufficiently.”

In May, Sam Altman, CEO of ChatGPT developer OpenAI, emerged as the highest-profile critic of the EU’s proposals, accusing the bloc of “overregulating” the nascent business. He even said that his company, which is backed by Microsoft, might consider leaving Europe if it could not comply with the legislation, although he walked back this statement a few days later. OpenAI and other companies lobbied — successfully — to have an early draft of the legislation changed so that “general-purpose AI systems” like ChatGPT would no longer be considered high risk and thus subject to stricter rules, according to documents Time magazine obtained from the European Commission. (OpenAI didn’t respond to Billboard’s requests for comment.)

The lobbying over AI echoes some of the other political conflicts between media and technology companies — especially the one over the EU Copyright Directive, which passed in 2019. While that “was framed as YouTube versus the music industry, the narrative has now switched to AI,” says Sophie Goossens, a partner at global law firm Reed Smith. “But the argument from rights holders is much the same: They want to stop tech companies from making a living on the backs of their content.”

Several provisions in the Copyright Directive deal with AI, including an exception that allows text- and data-mining of copyrighted content, such as music, in certain cases, provided rights holders have not expressly reserved that right. Another exception allows scientific and research institutions to mine works to which they have lawful access.

So far, the debate around generative AI in the United States has focused on whether performers can use state laws on right of publicity to protect their distinctive voices and images — the so-called “output side” of generative AI. In contrast, both the Copyright Directive and the AI Act address the “input side,” meaning the ways rights holders can either stop AI systems from using their content for training or restrict that use so they can license it.

Another source of tension created by the Copyright Directive is the potential for blurred boundaries between research institutions and commercial businesses. Microsoft, for example, refers to its Muzic venture as “a research project on AI music,” while Google regularly partners with independent research, academic and scientific bodies on technology developments, including AI. To close potential loopholes, Phelan wants lawmakers to strengthen the bill’s transparency provisions, requiring specific details of all music accessed for training instead of the “summary” that’s currently called for. IFPI, the global recorded-music trade organization, regards the transparency provisions as “a meaningful step in the right direction,” according to Lodovico Benvenuti, managing director of its European office, who says he hopes lawmakers won’t water them down.

The effects of the AI Act will be felt far outside Europe, partly because its rules will apply to any company that does business in the 27-country bloc and partly because it will be the first comprehensive set of rules on the use of the technology. In the United States, the Biden administration has met with technology executives to discuss AI but has yet to lay out a legislative strategy. On June 22, Senate Majority Leader Chuck Schumer, D-N.Y., said that he was working on “exceedingly ambitious” bipartisan legislation on the topic, but political divides in the United States as the next presidential election approaches would make passage difficult. China unveiled its own draft laws in April, although other governments may be reluctant to look at legislation there as a model.

“The rest of the world is looking at the EU because they are leading the way in terms of how to regulate AI,” says Goossens. “This will be a benchmark.”
