State Champ Radio

by DJ Frosty



AI

08/23/2023

Here are all the music stars who have spoken out about the growing technology.

08/23/2023

Ask 100 people how they feel about AI-generated songs and you will likely get 100 different answers. But ask Selena Gomez how she feels about someone who cobbled together an AI version of The Weeknd’s “Starboy” featuring her computer-generated vocals layered next to those of her ex and, well, her answer is swift and succinct. […]

This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: A federal judge rules that works created by A.I. are not covered by copyrights; an appeals court revives abuse lawsuits against Michael Jackson’s companies; Smokey Robinson beats a lawsuit claiming he owed $1 million to a former manager; SoundExchange sues SiriusXM for “gaming the system” on royalties; and much more.

Want to get The Legal Beat newsletter in your email inbox every Tuesday? Subscribe here for free.

No Copyrights For A.I. Works – But Tougher Questions Loom

The rise of artificial intelligence will pose many difficult legal questions for the music business, likely requiring some combination of litigation, regulation and legislation before all the dust settles. But on at least one A.I. issue, a federal judge just gave us a clean, straightforward answer.

In a decision issued Friday, U.S. District Judge Beryl Howell ruled that American copyright law does not cover works created entirely by artificial intelligence – full stop. That’s because, the judge said, the essential purpose of copyright law is to encourage human beings to create new works.

“Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them,” the judge wrote.

Though novel, the decision was not entirely surprising. Federal courts have long strictly limited copyrights to content created by humans, rejecting protection for works created by animals, by forces of nature, and even those claimed to have been authored by divine spirits, like religious texts.

But the ruling was nonetheless important because it came amid growing interest in the role that so-called generative AI tools, similar to the much-discussed ChatGPT, could play in the creation of music and other content. The issue of copyright protection is crucial to the future of AI, since works that are not protected would be difficult to monetize.

Trickier legal dilemmas lie ahead. What if an AI-powered tool is used in the studio to create parts of a song, but human artists then add other elements? How much human direction on the use of AI tools is needed for the output to count as “human authorship”? How can a court filter out, in practical terms, elements authored by computers?

On those questions, the current answers are much squishier – something that Judge Howell hinted at in her decision. “Undoubtedly, we are approaching new frontiers in copyright as artists put AI in their toolbox to be used in the generation of new visual and other artistic works. The increased attenuation of human creativity from the actual generation of the final work will prompt challenging questions.”

“This case, however, is not nearly so complex.”

Other top stories this week…

MJ ABUSE CASES REVIVED – A California appeals court revived lawsuits filed by two men who claim Michael Jackson sexually abused them as children, ruling that they can pursue negligence claims against his companies. A lower court dismissed the cases on the grounds that staffers had no power to control Jackson, who was the sole owner of the companies. But the appeals court called such a ruling “perverse” and overturned it: “A corporation that facilitates the sexual abuse of children by one of its employees is not excused from an affirmative duty to protect those children merely because it is solely owned by the perpetrator.”

SMOKEY ROBINSON TRIAL VICTORY – The legendary Motown singer won a jury trial against a former manager who claimed he was owed nearly $1 million in touring profits, capping off more than six years of litigation over the soured partnership. Robinson himself took the stand during the case, telling jurors that the deal was never intended to cover concert revenue.

“GAMING THE SYSTEM” – SoundExchange filed a lawsuit against SiriusXM claiming the satellite radio giant is using bookkeeping trickery in order to withhold more than $150 million in royalties owed to artists. The case centers on allegations that SiriusXM is manipulating how it bundles satellite services with web streaming services to “grossly underpay the royalties it owes.”

TIKTOK JUDGE RESPONDS – A judge in New Jersey defended himself against misconduct allegations over TikTok videos in which he lip-synced to Rihanna’s “Jump” and other popular songs, admitting “poor judgment” and “vulgar” lyrics but saying he should receive only a light reprimand for what he intended as “silly, harmless, and innocent fun.”

LAWSUIT OVER TAKEOFF SHOOTING – Joshua Washington, an assistant to the rapper Quavo, filed a lawsuit over last year’s shooting in Houston that killed fellow Migos rapper Takeoff. He claims injuries sustained during the attack are the fault of the bowling alley where the shooting took place, which he says failed to provide adequate security, screening or emergency assistance.

GUNPLAY FACING FELONY COUNTS – The rapper Gunplay was arrested in Miami and hit with three felony charges over an alleged domestic violence incident in which he is reportedly accused of drunkenly pointing an AK-47 assault rifle at his wife and child during an argument.

FRENCH DIDN’T CLEAR SAMPLE? – The rapper French Montana was hit with a copyright lawsuit claiming his 2022 song “Blue Chills” features an unlicensed sample from singer-songwriter Skylar Gudasz. She claims he tentatively agreed to pay her for the clip – both in an upfront payment and a 50 percent share of the publishing copyright — but then never actually signed the deal.

YOUTUBE FRAUDSTER SENTENCED – Webster “Yenddi” Batista Fernandez, one of the leaders of the largest-known YouTube music royalty scam in history, was sentenced to nearly four years in prison after pleading guilty to one count of wire fraud and one count of conspiracy. Under the name MediaMuv, Batista and an accomplice fraudulently collected roughly $23 million in royalties from over 50,000 songs by Latin musicians ranging from small artists to global stars like Daddy Yankee.

A federal judge ruled Friday (Aug. 18) that U.S. copyright law does not cover creative works created by artificial intelligence, weighing in on an issue that’s being closely watched by the music industry.

In a 15-page written opinion, Judge Beryl Howell upheld a decision by the U.S. Copyright Office to deny a copyright registration to computer scientist Stephen Thaler for an image created solely by an AI model. The judge cited decades of legal precedent that such protection is only afforded to works created by humans.

“The act of human creation — and how to best encourage human individuals to engage in that creation, and thereby promote science and the useful arts — was … central to American copyright from its very inception,” the judge wrote. “Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them.”

In a statement Friday, Thaler’s attorney Ryan Abbott said he and his client “disagree with the district court’s judgment” and vowed to appeal: “In our view, copyright law is clear that the public is the main beneficiary of the law and this is best achieved by promoting the generation and dissemination of new works, regardless of how they are created.”

Though novel, the decision was not entirely surprising. Federal courts have long strictly limited copyrights to content created by humans, rejecting protection for works created by animals, by forces of nature, and even those claimed to have been authored by divine spirits, like religious texts.

But the ruling was nonetheless important because it came amid growing interest in the role that so-called generative AI tools, similar to the much-discussed ChatGPT, could play in the creation of music and other content. The question of copyright protection is crucial to the future of AI, since works that are not protected would be difficult to monetize.

“Undoubtedly, we are approaching new frontiers in copyright as artists put AI in their toolbox to be used in the generation of new visual and other artistic works,” the judge wrote. “The increased attenuation of human creativity from the actual generation of the final work will prompt challenging questions.”

The current case, however — dealing with a work that was admittedly created solely by a computer — “is not nearly so complex,” the judge wrote. Given the lack of any human input at all, she said, Thaler’s case presented a “clear and straightforward answer.”

Though Friday’s ruling came with a clear answer, more challenging legal dilemmas will come in the future from more subtle uses of AI. What if an AI-powered tool is used in the studio to create parts of a song, but human artists add other elements to the final product? How much human direction on the use of those tools is needed for the output to count as “human authorship”?

Earlier this year, a report by the U.S. Copyright Office said that AI-assisted works could still be copyrighted, so long as the ultimate author remains a human being. The report avoided offering easy answers, saying that protection for AI works would be “necessarily a case-by-case inquiry,” and that the final outcome would always depend on individual circumstances.


Futureverse — a multi-hyphenate AI company — published a new research paper on Thursday (June 9) to introduce its forthcoming text-to-music generator. Called Jen-1, the unreleased model is designed to improve upon issues found in currently available music generators like Google’s MusicLM, providing higher fidelity audio and longer, more complex musical works than what is on the market today.

“Jen is spelled J-E-N because she’s designed to be your friend who goes into the studio with you. She’s a tool,” says Shara Senderoff, co-founder of Futureverse and co-founder of Raised in Space, about the model in an exclusive first look with Billboard. Expected to be released in early 2024, Jen can compose songs of up to three minutes and can also help producers finish half-written songs through ‘continuation’ and ‘in-painting.’

‘Continuation’ allows a music maker to upload an incomplete song to Jen and direct the model to create a plausible idea of how to finish the song, and ‘in-painting’ refers to a process by which the model can fill in spaces of a song that are damaged or incomplete in the middle of the work. To Aaron McDonald, the company’s co-founder, Jen’s role is to “extend creativity” of human artists.
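The ‘in-painting’ idea can be illustrated with a toy sketch. To be clear, this is purely conceptual and is not Jen’s actual method, which is a learned generative process: treat the track as a sequence of samples and fill a missing span from what surrounds it. A model like the one described would synthesize musically plausible material in the gap; plain linear interpolation only shows the shape of the fill-the-gap problem.

```python
def inpaint(signal, gap_start, gap_end):
    """Fill signal[gap_start:gap_end] by linear interpolation
    between the intact samples just outside the gap.

    A real in-painting model would generate new content here;
    interpolation is used only to illustrate the task.
    """
    out = list(signal)
    left, right = signal[gap_start - 1], signal[gap_end]
    n = gap_end - gap_start
    for i in range(n):
        # Blend from the left edge value toward the right edge value.
        out[gap_start + i] = left + (right - left) * (i + 1) / (n + 1)
    return out

# A tiny "song" with a damaged middle section (None = missing samples).
song = [0.0, 0.0, None, None, None, 3.0, 4.0]
filled = inpaint(song, 2, 5)
# filled is [0.0, 0.0, 0.75, 1.5, 2.25, 3.0, 4.0]
```

‘Continuation’ is the open-ended variant of the same task: there is no right-hand anchor, so the model must extrapolate a plausible ending rather than bridge two known endpoints.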

When asked why Jen is a necessary invention during a time in which producers, songwriters and artists are more bountiful than ever, McDonald replied, “I think musicians throughout the ages have always embraced new technology that expands the way they can create music,” pointing to electronic music as one example of how new tools shape musical evolution. “To imply that music doesn’t need [any new] technology to expand and become better now is kind of silly… and arbitrary.”

He also sees this as a way to “democratize” the “high end of music [quality],” which he says is now only accessible to musicians with the means to record at a well-equipped studio and with trained technicians. With Jen, Johnson and Senderoff hope to satisfy the interests of professional musicians and to encourage newcomers to dabble in songwriting, perhaps for the first time. The two co-founders imagine a world in which everyday people can create music, and have nicknamed the products of this type of user as ‘AIGC,’ a twist on the term User Generated Content (or ‘UGC’).

Futureverse was formed piecemeal over the last 18 months, merging eleven different pre-existing AI and metaverse start-ups together into one company to make a number of creative AI models, including those that produce animations, music, sound effects and more. To power their inventions, the company employs the AI protocol from Altered State Machine, a company that was founded by Johnson and included in the merger.

Senderoff says Jen will also be a superior product because Futureverse created it with the input of some of music’s top business executives and creators, unlike its competitors. She does not reveal who those industry partners are or how Jen will be a more ethical and cooperative model for musicians, but she says an announcement with more information will be released soon.

Despite its proposed upgrades, Futureverse’s Jen could face significant challenges from the other text-to-music generators named in the new research paper, some of which were built by the world’s most established tech giants and have already hit the market. But McDonald is unperturbed. “That forces us to think differently. We don’t have the resources that they do, but we started our process with that in mind. I think we can beat them with a different approach: the key insight is working with the music industry as a way to produce a better product.”

Ed Sheeran is hoping we heed Hollywood’s warnings against artificial intelligence as it becomes more prominent in mainstream media, music and everyday life. The “Shape of You” singer shared his thoughts on the technology while chatting with Audacy Live before his private show at the Hard Rock Hotel in New York recently. “What I don’t […]

“Fake Drake” and similar controversies have gotten most of the attention, but not all uses of artificial intelligence in music are cause for concern.

Meta has launched AudioCraft, a new suite of AI models that generate music and audio based on text prompts, the company announced on Wednesday (Aug. 2). The technology consists of three models: MusicGen (music), AudioGen (sound effects) and EnCodec (higher quality music). It acts as new competition for Google’s MusicLM, a text-to-music generator that launched […]

President Joe Biden said Friday that new commitments by Amazon, Google, Meta, Microsoft and other companies that are leading the development of artificial intelligence technology to meet a set of AI safeguards brokered by his White House are an important step toward managing the “enormous” promise and risks posed by the technology.

Biden announced that his administration has secured voluntary commitments from seven U.S. companies meant to ensure their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of commercial AI systems, though they don’t detail who will audit the technology or hold the companies accountable.

“We must be clear eyed and vigilant about the threats emerging technologies can pose,” Biden said, adding that the companies have a “fundamental obligation” to ensure their products are safe.

“Social media has shown us the harm that powerful technology can do without the right safeguards in place,” Biden added. “These commitments are a promising step, but we have a lot more work to do together.”

A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.

The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.

That testing will also examine the potential for societal harms, such as bias and discrimination, and more theoretical dangers about advanced AI systems that could gain control of physical systems or “self-replicate” by making copies of themselves.

The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish real images from AI-generated ones, known as deepfakes.
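As a toy illustration of the watermarking concept only (not the scheme any of these companies has committed to, which would need to survive compression, cropping and editing), an identifier can be hidden in the least-significant bits of pixel values and read back later by a verifier:

```python
# Toy least-significant-bit watermark: hide an ID in pixel values.
# Real provenance watermarks for AI-generated media are far more
# robust; this only demonstrates the embed/extract idea.

def embed_watermark(pixels, watermark_bits):
    """Write each bit into the LSB of successive pixel values."""
    out = list(pixels)
    for i, bit in enumerate(watermark_bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_watermark(pixels, n_bits):
    """Read the LSBs back out of the first n_bits pixel values."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [200, 13, 97, 54, 180, 66, 23, 250]  # hypothetical image data
mark = [1, 0, 1, 1]                           # identifier to embed
stamped = embed_watermark(pixels, mark)
# The image is visually unchanged (each value shifts by at most 1),
# but the mark is recoverable: extract_watermark(stamped, 4) == mark
```

Because each pixel changes by at most one intensity level, the mark is imperceptible to a viewer, which is exactly why such signals must be made robust in practice: trivial LSB schemes are destroyed by any re-encoding.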

They will also publicly report flaws and risks in their technology, including effects on fairness and bias, the White House said.

The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology. Company executives plan to gather with Biden at the White House on Friday as they pledge to follow the standards.

Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.

“A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” said Amba Kak, executive director of the AI Now Institute. “We need a much more wide-ranging public deliberation, and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models.”

Senate Majority Leader Chuck Schumer, D-N.Y., has said he will introduce legislation to regulate AI. He said in a statement that he will work closely with the Biden administration “and our bipartisan colleagues” to build upon the pledges made Friday.

A number of technology executives have called for regulation, and several went to the White House in May to speak with Biden, Vice President Kamala Harris and other officials.

Microsoft President Brad Smith said in a blog post Friday that his company is making some commitments that go beyond the White House pledge, including support for regulation that would create a “licensing regime for highly capable models.”

But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.

The White House pledge notes that it mostly only applies to models that “are overall more powerful than the current industry frontier,” set by currently available models such as OpenAI’s GPT-4 and image generator DALL-E 2 and similar releases from Anthropic, Google and Amazon.

A number of countries have been looking at ways to regulate AI, including European Union lawmakers who have been negotiating sweeping AI rules for the 27-nation bloc that could restrict applications deemed to have the highest risks.

U.N. Secretary-General Antonio Guterres recently said the United Nations is “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year.

Guterres also said he welcomed calls from some countries for the creation of a new U.N. body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.

The White House said Friday that it has already consulted on the voluntary commitments with a number of countries.

The pledge is heavily focused on safety risks but doesn’t address other worries about the latest AI technology, including the effect on jobs and market competition, the environmental resources required to build the models, and copyright concerns about the writings, art and other human handiwork being used to teach AI systems how to produce human-like content.

Last week, OpenAI and The Associated Press announced a deal for the AI company to license AP’s archive of news stories. The amount it will pay for that content was not disclosed.

LONDON — When the European Union announced plans to regulate artificial intelligence in 2021, legislators started focusing on “high risk” systems that could threaten human rights, such as biometric surveillance and predictive policing. Amid increasing concern among artists and rights holders about the potential impact of AI on the creative sector, however, EU legislators are also now looking at the intersection of this new technology and copyright.

The EU’s Artificial Intelligence Act, which is now being negotiated among politicians in different branches of government, is the first comprehensive legislation in the world to regulate AI. In addition to banning “intrusive and discriminatory uses” of the technology, the current version of the legislation addresses generative AI, mandating that companies disclose content that is created by AI to differentiate it from works authored by humans. Other provisions in the law would require companies that use generative AI to provide details of copyrighted works, including music, on which they trained their systems. (The AI Act is a regulation, so it would pass directly into law in all 27 member states.)

Music executives began paying closer attention to the legislation after the November launch of ChatGPT. In April, around the time that “Heart on My Sleeve,” a track that featured AI-powered imitations of vocals by Drake and The Weeknd, drove home the issue posed by AI, industry lobbyists convinced lawmakers to add the transparency provisions.

So far, big technology companies, including Alphabet, Meta and Microsoft, have publicly stated that they, too, support AI regulation, at least in the abstract. Behind the scenes, however, multiple music executives tell Billboard that technology lobbyists are trying to weaken these transparency provisions by arguing that such obligations could put European AI developers at a competitive disadvantage.

“They want codes of conduct” — as opposed to laws — “and very low forms of regulation,” says John Phelan, director general of international music publishing trade association ICMP.

Another argument is that summarizing training data “would basically come down to providing a summary of half, or even the entire, internet,” says Boniface de Champris, Brussels-based policy manager at the Computer and Communications Industry Association Europe, which counts Alphabet, Apple, Amazon and Meta among its members. “Europe’s existing copyright rules already cover AI applications sufficiently.”

In May, Sam Altman, CEO of ChatGPT developer OpenAI, emerged as the highest-profile critic of the EU’s proposals, accusing the bloc of “overregulating” the nascent business. He even said that his company, which is backed by Microsoft, might consider leaving Europe if it could not comply with the legislation, although he walked back this statement a few days later. OpenAI and other companies lobbied — successfully — to have an early draft of the legislation changed so that “general-purpose AI systems” like ChatGPT would no longer be considered high risk and thus subject to stricter rules, according to documents Time magazine obtained from the European Commission. (OpenAI didn’t respond to Billboard’s requests for comment.)

The lobbying over AI echoes some of the other political conflicts between media and technology companies — especially the one over the EU Copyright Directive, which passed in 2019. While that “was framed as YouTube versus the music industry, the narrative has now switched to AI,” says Sophie Goossens, a partner at global law firm Reed Smith. “But the argument from rights holders is much the same: They want to stop tech companies from making a living on the backs of their content.”

Several of the provisions in the Copyright Directive deal with AI, including an exception in the law for text- and data-mining of copyrighted content, such as music, in certain cases. Another exception allows scientific and research institutions to engage in text- and data-mining on works to which they have lawful access.

So far, the debate around generative AI in the United States has focused on whether performers can use state laws on right of publicity to protect their distinctive voices and images — the so-called “output side” of generative AI. In contrast, both the Copyright Directive and the AI Act address the “input side,” meaning ways that rights holders can either stop AI systems from using their content for training purposes or limit which ones can in order to license that right.

Another source of tension created by the Copyright Directive is the potential for blurred boundaries between research institutions and commercial businesses. Microsoft, for example, refers to its Muzic venture as “a research project on AI music,” while Google regularly partners with independent research, academic and scientific bodies on technology developments, including AI. To close potential loopholes, Phelan wants lawmakers to strengthen the bill’s transparency provisions, requiring specific details of all music accessed for training, instead of the “summary” that’s currently called for. IFPI, the global recorded-music trade organization, regards the transparency provisions as “a meaningful step in the right direction,” according to Lodovico Benvenuti, managing director of its European office, and he says he hopes lawmakers won’t water that down.

The effects of the AI Act will be felt far outside Europe, partly because they will apply to any company that does business in the 27-country bloc and partly because it will be the first comprehensive set of rules on the use of the technology. In the United States, the Biden administration has met with technology executives to discuss AI but has yet to lay out a legislative strategy. On June 22, Senate Majority Leader Chuck Schumer, D-N.Y., said that he was working on “exceedingly ambitious” bipartisan legislation on the topic, but political divides in the United States as the next presidential election approaches could make passage difficult. China unveiled its own draft laws in April, although other governments may be reluctant to look to legislation there as a model.

“The rest of the world is looking at the EU because they are leading the way in terms of how to regulate AI,” says Goossens. “This will be a benchmark.”