A new song purportedly created with AI that mashes up Bad Bunny and Rihanna was uploaded on Tuesday (April 25), apparently by the same mystery maker behind the controversial AI Drake/Weeknd tune “Heart On My Sleeve” that led to immediate takedown notices earlier this month.

Uploaded directly to SoundCloud, YouTube and TikTok (but not Spotify, Apple Music or other major streaming services) under a different handle, Ghostwrider777, the 1:07 song, “Por Qué,” features a spot-on yearning Bad Bunny vocal over a shuffling reggaeton beat along with RihRih singing “Baby you shine so bright/ Like diamonds in the sky/ I feel alive/ But now you got me thinkin’.”

The message accompanying the clip reads, “they tried to shut us down,” and the video running underneath the song features a character in a white bucket hat, green ski goggles, winter gloves and a white sheet covering their face and upper body. As the song plays out, the on-screen graphics read: “i used AI to make a Bad Bunny song ft. Rihanna… this video will be removed in 24 hours… they tried to quiet my brother, but we will prevail.”

Representatives for Bad Bunny and Rihanna did not immediately respond to Billboard’s requests for comment.

The Drake/Weeknd fake was quickly pulled from most streaming platforms after their label, Universal Music Group, condemned it in a statement on April 17 that said the track was “infringing content created with generative AI.” That song was posted by the anonymous TikTok user Ghostwriter977 and credited to “Ghostwriter” on streaming platforms; at press time it was not clear if both songs were created by the same person. Before it was pulled, “Heart On My Sleeve” racked up millions of views and listens on Spotify, TikTok and YouTube, with the creator saying “This is just the beginning” in a comment beneath the YouTube clip.

The short-lived success of the song amplified a growing worry in the music industry over the potential impact of AI, even as a newly label-free Grimes mused this week about “killing copyright” by offering to split royalties 50/50 with fans who create a successful AI-generated song using her voice.

Spotify CEO Daniel Ek said Tuesday (April 25) that, contrary to the widespread backlash artificial intelligence (AI) tools have faced, he’s optimistic the technology could actually be a good thing for musicians and for Spotify.

In comments made on a Spotify conference call and a company podcast, Ek acknowledged the copyright infringement concerns presented by songs like the AI-generated Drake fake “Heart on My Sleeve” — which racked up 600,000 streams on Spotify before the platform took it down — but said AI tools could ease the learning curve for first-time music creators and spark a new era of artistic expression.

“On the positive side, this could be potentially huge for creativity,” Ek said on a conference call discussing the company’s first-quarter earnings. “That should lead to more music [which] we think is great culturally, but it also benefits Spotify because the more creators we have on our service the better it is and the more opportunity we have to grow engagement and revenue.”

Ek’s entrepreneurial confidence that AI can be an industry boon in certain instances stands in contrast to a steady campaign of condemnation for generative machine learning tools coming from Universal Music Group, the National Music Publishers’ Association (NMPA) and others.

At the same time, companies including Spotify, Warner Music Group, HYBE, ByteDance, SoundCloud and a host of start-ups have leaned into the potential of AI, investing in or partnering with machine-learning companies.

The industry is still sorting out the ways in which AI can be used and attempting to delineate between AI tools that are helpful and those that are potentially harmful. The use case causing the most consternation employs a machine-learning process to identify the patterns and characteristics that make songs irresistible and to reproduce them in new creations.

Functional music — i.e., sounds designed to promote sleep, studying or relaxation — has become a fertile genre for AI, and playlists featuring AI-enhanced or AI-generated music have racked up millions of followers on Spotify and other streaming services. This has led to concerns among some record executives, who note that functional music eats into major-label market share.

For Spotify’s part, in February the platform launched an “AI DJ,” which uses AI technology to gin up song recommendations for premium subscribers based on their listening history, with narration delivered by an AI voice.
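
Spotify has not disclosed how the AI DJ picks songs, but the general shape of history-based recommendation can be sketched in a few lines. Everything below (the feature vectors, track names and scoring) is invented for illustration; it is not Spotify’s actual method:

```python
import numpy as np

# Hypothetical taste-profile vectors for a tiny candidate catalog; real
# systems model tracks with far richer signals.
catalog = {
    "track_a": np.array([0.90, 0.10, 0.30]),
    "track_b": np.array([0.20, 0.80, 0.50]),
    "track_c": np.array([0.85, 0.20, 0.40]),
}

# Represent the listener as the average of tracks they already play.
history = [np.array([0.90, 0.15, 0.35]), np.array([0.80, 0.10, 0.30])]
profile = np.mean(history, axis=0)

def cosine(u, v):
    # Cosine similarity between the listener profile and a candidate track.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rank candidates by similarity to the profile and surface the top match.
ranked = sorted(catalog, key=lambda name: cosine(profile, catalog[name]), reverse=True)
print(ranked[0])  # closest match to this listener's history: "track_a"
```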

“I’m very familiar with the scary parts … the complete generative stuff or even the so-called deep fakes that pretend to be someone they’re not,” Ek said on Tuesday’s episode of Spotify’s For the Record podcast. “I choose to look at the glass as more half-full than half-empty. I think if it’s done right, these AIs will be incorporated into almost every product suite to enable creativity to be available to many more people around the world.”

When Universal Music Group emailed Spotify, Apple Music and other streaming services in March asking them to stop artificial-intelligence companies from using its labels’ recordings to train their machine-learning software, it fired the first Howitzer shell of what’s shaping up as the next conflict between creators and computers. As Warner Music Group, HYBE, ByteDance, Spotify and other industry giants invest in AI development, along with a plethora of small startups, artists and songwriters are clamoring for protection against developers that use music created by professionals to train AI algorithms. Developers, meanwhile, are looking for safe havens where they can continue their work unfettered by government interference.

To someday generate music that rivals the work of human creators, AI models use machine learning to identify and mimic the patterns and characteristics that make a song irresistible: the sticky verse-chorus structure of pop, the 808 drums that define the rhythm of hip-hop, the meteoric drop that anchors electronic dance. These are distinctions human musicians learn over their lives, whether through osmosis or music education.

Machine learning is exponentially faster, though; it’s usually achieved by feeding millions, even billions, of so-called “inputs” into an AI model to build its musical vocabulary. Because of the sheer scale of data needed to train current systems, that material almost always includes the work of professionals, and, to many copyright owners’ dismay, almost no one asks their permission to use it.
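
As a toy illustration of that inputs-to-vocabulary process, the sketch below learns note-to-note patterns from a handful of invented sequences and then mimics them. Production systems are deep networks trained on enormous corpora; nothing here reflects any real model’s design:

```python
import random
from collections import Counter, defaultdict

# Invented "inputs": short note sequences standing in for training songs.
training_inputs = [
    ["C", "E", "G", "C"],
    ["C", "E", "G", "E"],
    ["A", "C", "E", "A"],
]

# Learn the patterns: count which note tends to follow which.
transitions = defaultdict(Counter)
for seq in training_inputs:
    for cur, nxt in zip(seq, seq[1:]):
        transitions[cur][nxt] += 1

def generate(start, length=4):
    # Mimic the learned patterns by sampling likely next notes.
    out = [start]
    for _ in range(length - 1):
        options = transitions[out[-1]]
        if not options:
            break
        notes, weights = zip(*options.items())
        out.append(random.choices(notes, weights=weights)[0])
    return out

print(generate("C"))  # e.g. ['C', 'E', 'G', 'C']
```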

Countries around the world have various ways of regulating what’s allowed when it comes to what’s called the text and data mining of copyrighted material for AI training. And some territories are concluding that fewer rules will lead to more business.

China, Israel, Japan, South Korea and Singapore are among the countries that have largely positioned themselves as safe havens for AI companies in terms of industry-friendly regulation. In January, Israel’s Ministry of Justice defined its stance on the issue, saying that “lifting the copyright uncertainties that surround this issue [of training AI generators] can spur innovation and maximize the competitiveness of Israeli-based enterprises in both [machine-learning] and content creation.”

Singapore also “certainly strives to be a hub for AI,” says Bryan Tan, attorney and partner at Reed Smith, which has an office there. “It’s one of the most permissive places. But having said that, I think the world changes very quickly,” Tan says. He adds that even in countries where exceptions in copyright for text and data mining are established, there is a chance that developments in the fast-evolving AI sector could lead to change.

In the United States, Amir Ghavi, a partner at Fried Frank who is representing open-source text-to-image developer Stability AI in a number of upcoming landmark cases, says that though the United States has a “strong tradition of fair use … this is all playing out in real time,” with decisions in cases like his setting significant precedents for AI and copyright law.

Many rights owners, including musicians like Helienne Lindevall, president of the European Composers and Songwriters Alliance, are hoping to establish consent as a basic practice. But, she asks, “How do you know when AI has used your work?”

AI companies tend to keep their training process secret, but Mat Dryhurst, a musician, podcast host and co-founder of music technology company Spawning, says many rely on just a few data sets, such as LAION-5B (as in 5 billion data points) and Common Crawl, a web-scraping tool used by Google. To help establish a compromise between copyright owners and AI developers, Spawning has created a website called HaveIBeenTrained.com, which helps creators determine whether their work is found in these common data sets and, free of charge, opt out of being used as fodder for training.

These requests are not backed by law, although Dryhurst says, “We think it’s in every AI organization’s best interest to respect our active opt-outs. One, because this is the right thing to do, and two, because the legality of this varies territory to territory. This is safer legally for AI companies, and we don’t charge them to partner with us. We do the work for them.”
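
Spawning has not published how HaveIBeenTrained.com works internally, but the basic workflow it offers (search a data set’s index for your work, then register a free opt-out) might be sketched like this, with entirely hypothetical index entries and matching logic:

```python
# Hypothetical index entries standing in for a scraped training data set;
# real sets like LAION-5B are searched via purpose-built services.
dataset_index = [
    {"url": "https://example.com/art/sunrise.jpg", "caption": "sunrise painting"},
    {"url": "https://example.com/music/demo.mp3", "caption": "synth demo"},
]

opt_outs = set()  # the opt-out registry; honoring it is voluntary, per Dryhurst

def find_my_work(my_urls):
    # Return index entries that match one of the creator's works.
    return [entry for entry in dataset_index if entry["url"] in my_urls]

def opt_out(urls):
    # Record a free opt-out request for the given works.
    opt_outs.update(urls)

mine = {"https://example.com/art/sunrise.jpg"}
matches = find_my_work(mine)
opt_out(entry["url"] for entry in matches)
print(f"{len(matches)} match(es) found, {len(opt_outs)} opt-out(s) recorded")
```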

The concept of opting out was first popularized by the European Union’s Copyright Directive, passed in 2019. Though Sophie Goossens, a partner at Reed Smith who works in Paris and London on entertainment, media and technology law, says the definition of “opt out” was initially vague, its inclusion makes the EU one of the strictest jurisdictions when it comes to AI training.

There is a fear, however, that passing strict AI copyright regulations could result in a country missing the opportunity to establish itself as a next-generation Silicon Valley and reap the economic benefits that would follow. Russian President Vladimir Putin believes the stakes are even higher. In 2017, he stated that the nation that leads in AI “will be the ruler of the world.” The United Kingdom’s Intellectual Property Office seemed to be moving in that direction when it published a statement last summer recommending that text and data mining be exempt from opt-outs in hopes of becoming Europe’s haven for AI. In February, however, the British government put the brakes on the IPO’s proposal, leaving its future uncertain.

Lindevall and others in the music industry say they are fighting for even better standards. “We don’t want to opt out, we want to opt in,” she says. “Then we want a clear structure for remuneration.”

The lion’s share of U.S.-based music and entertainment organizations — more than 40, including ASCAP, BMI, RIAA, SESAC and the National Music Publishers’ Association — are in agreement and recently launched the Human Artistry Campaign, which established seven principles advocating best practices for AI intended to protect creators’ copyrights. No. 4: “Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.”

Today, the idea that rights holders could one day license works for machine learning still seems far off. Among the potential solutions for remuneration are blanket licenses, something like the blank-tape levies used in parts of Europe. But given the patchwork of international law on this subject, and the complexities of tracking down and paying rights holders, some feel these fixes are not viable.

Dryhurst says he and the Spawning team are working on a concrete solution: an “opt in” tool. Stability AI has signed on as its first partner for this innovation, and Dryhurst says the newest version of its text-to-image AI software, Stable Diffusion 3, will not include any of the 78 million artworks that opted out prior to this advancement. “This is a win,” he says. “I am really hopeful others will follow suit.”

A song featuring AI-generated fake vocals from Drake and The Weeknd might be a scary moment for artists and labels whose livelihoods feel threatened, but does it violate the law? It’s a complicated question.

The song “Heart on My Sleeve,” which also featured Metro Boomin’s distinctive producer tag, racked up hundreds of thousands of spins on streaming services before it was pulled down on Monday evening, powered to viral status by uncannily similar vocals over a catchy instrumental track. Millions more have viewed shorter snippets of the song that the anonymous creator posted to TikTok.

It’s unclear whether only the soundalike vocals were created with AI tools – a common trick used for years in internet parody videos and deepfakes – or if the entire song was created solely by a machine based purely on a prompt to create a Drake track, a more novel and potentially disruptive development. 

For an industry already on edge about the sudden growth of artificial intelligence, the appearance of a song that convincingly replicated the work product of two of music’s biggest stars and one of its top producers and won over likely millions of listeners has set off serious alarm bells.

“The ability to create a new work this realistic and specific is disconcerting, and could pose a range of threats and challenges to rights owners, musicians, and the businesses that invest in them,” says Jonathan Faber, the founder of Luminary Group and an attorney who specializes in protecting the likeness rights of famous individuals. “I say that without attempting to get into even thornier problems, which likely also exist as this technology demonstrates what it may be capable of.”

“Heart On My Sleeve” was quickly pulled down, disappearing from most streaming services by Monday evening. Representatives for Drake, The Weeknd and Spotify all declined to comment when asked about the song on Monday. And while the artists’ label, Universal Music Group, issued a strongly worded statement condemning “infringing content created with generative AI,” a spokesperson would not say whether the company had sent formal takedown requests over the song. 

A rep for YouTube said on Tuesday that the platform “removed the video in question after receiving a valid takedown notice,” noting that the track was removed because it used a copyrighted music sample.

The debacle highlights a monumental legal question for the music industry that will likely be at the center of legal battles for years to come: To what extent do AI-generated songs violate the law? Though “Heart on My Sleeve” was removed relatively quickly, the question is more complicated than it might seem.

For starters, the song appears to be an original composition that doesn’t directly copy any of Drake or the Weeknd’s songs, meaning that it could be hard to make a claim that it infringes their copyrights, like when an artist uses elements of someone else’s song without permission. While Metro Boomin’s tag may have been illegally sampled, that element likely won’t exist in future fake songs.

By mimicking their voices, however, the track represents a clearer potential violation of Drake and The Weeknd’s so-called right of publicity – the legal right to control how your individual identity is commercially exploited by others. Such rights are more typically invoked when someone’s name or visual likeness is stolen, but they can extend to someone’s voice if it’s particularly well-known – think Morgan Freeman or James Earl Jones.

“The right of publicity provides recourse for rights owners who would otherwise be very vulnerable to technology like this,” Faber said. “It fits here because a song is convincingly identifiable as Drake and the Weeknd.”

Whether a right of publicity lawsuit is legally viable against this kind of voice mimicry might be tested in court soon, albeit in a case dealing with decidedly more old-school tech.

Back in January, Rick Astley sued Yung Gravy over the rapper’s breakout 2022 hit that heavily borrowed from the singer’s iconic “Never Gonna Give You Up.” While Yung Gravy had licensed the underlying composition, Astley claimed Yung Gravy violated his right of publicity when he hired a singer who mimicked his distinctive voice.

That case has key differences from the situation with “Heart on My Sleeve,” like the allegation that Gravy falsely suggested to his listeners that Astley had actually endorsed his song. In the case of “Heart on My Sleeve,” the anonymous creator Ghostwriter omitted any reference to Drake and The Weeknd on streaming platforms; on TikTok, he directly stated that he, and not the two superstars, had created his song using AI.

But for Richard Busch of the law firm King & Ballow, a veteran music industry litigator who brought the lawsuit on behalf of Astley, the right of publicity and its protections for likeness still provide the most useful tool for artists and labels confronted with such a scenario in the future.

“If you are creating a song that sounds identical to, let’s say, Rihanna, regardless of what you say people are going to believe that it was Rihanna. I think there’s no way to get around that,” Busch said. “The strongest claim here would be the use of likeness.”

But do AI companies themselves break the law when they create programs that can so effectively mimic Drake and The Weeknd’s voices? That would seem to be the far larger looming crisis, and one without the same kind of relatively clear legal answers.

The fight ahead will likely be over how AI platforms are “trained” – the process whereby machines “learn” to spit out new creations by ingesting millions of existing works. From the point of view of many in the music industry, if that process is accomplished by feeding a platform copyrighted songs — in this case, presumably, recordings by Drake and The Weeknd — then those platforms and their owners are infringing copyrights on a mass scale.

In UMG’s statement Monday, the label said clearly that it believes such training to be a “violation of copyright law,” and the company previously warned that it “will not hesitate to take steps to protect our rights and those of our artists.” The RIAA has said the same, blasting AI companies for making “unauthorized copies of our members’ works” to train their machines.

While the training issue is legally novel and unresolved, it could be answered in court soon. A group of visual artists has filed a class action over the use of their copyrighted images to train AI platforms, and Getty Images has filed a similar case against AI companies that allegedly “scraped” its database for training materials. 

And after this week’s incident over “Heart on My Sleeve,” a similar lawsuit against AI platforms filed by artists or music companies gets more likely by the day.

National Association of Broadcasters president and CEO Curtis LeGeyt spoke out on the potential dangers of artificial intelligence on Monday at the NAB Show in Las Vegas. “This is an area where NAB will absolutely be active,” he asserted of AI, which is one of the buzziest topics this week at the annual convention. “It is just amazing how quickly the relevance of AI to our entire economy — but specifically, since we’re in this room, the broadcast industry — has gone from amorphous concept to real.”

LeGeyt warned of several concerns that he has for local broadcasters, the first being issues surrounding “big tech” taking broadcast content and not fairly compensating broadcasters for its use. “We have been fighting for legislation to put some guardrails on it,” LeGeyt said. “AI has the potential to put that on overdrive. We need to ensure that our stations, our content creators are going to be fairly compensated.”

He added that he worries for journalists. “We’re already under attack for any slip-up we might have with regard to misreporting on a story. Well, you’re gonna have to do a heck of a lot more diligence to ensure that whatever you are reporting on is real, fact-based information and not just some AI bot that happens to look like Joe Biden.” Finally, he warned of images and likenesses being misappropriated where AI is involved.

“I want to wave the caution flag on some of these areas,” he said. “I think this could be really damaging for local broadcast.”

During his talk, he also outlined what he sees as potential opportunities. “My own view is there are some real potentially hyperlocal benefits to AI,” he said, citing as examples translation services and the ability to speed up research at “resource-constrained local stations.” He asserted, “Investigative journalism is never going to be replaced by AI. Our role at local community events, philanthropic work, is never going to be replaced by AI. But to the degree that we can leverage AI to do some of the things that are time-consuming and take away your ability to be boots on the ground doing the things that only you can do well, I think that’s a positive.”

Also addressed during the session was the voluntary rollout of the next generation of digital television, known as ATSC 3.0, which may include capabilities such as free, live broadcasting to mobile devices. A change of this magnitude has a lot of moving parts and has a long way to go before its potential can be realized.

At NAB, FCC chairwoman Jessica Rosenworcel was on hand to announce the Future of Television Initiative, which she described as a public-private partnership among stakeholders to support a transition to ATSC 3.0. “With over 60 percent of Americans already in range of a Next Gen TV signal, we are excited to work closely with all stakeholders, including the FCC, to bring Next Gen TV and all of its benefits to all viewers,” said LeGeyt.

During his session, LeGeyt also addressed “fierce competition for the dashboard” as part of a discussion of connected cars. “It’s not enough for any one [broadcaster] to innovate. If we are all not rowing in the same direction as an industry, … we are going to lose this arms race,” he warned.

Citing competition from the likes of Spotify, he contended that the local content offered by broadcasters gives them a “competitive advantage.”

The NAB Show runs through Wednesday.

This article was originally published by The Hollywood Reporter.

A new song believed to feature AI-generated fake vocals from Drake and The Weeknd that went viral over the weekend has been pulled from most streaming platforms after their label, Universal Music Group, released a statement Monday (April 17) condemning “infringing content created with generative AI.”

Released by an anonymous TikTok user called Ghostwriter977 and credited as Ghostwriter on streaming platforms, where it racked up hundreds of thousands of streams, the track “Heart On My Sleeve” features uncannily similar voices to the two superstars — a trick that the creator says was accomplished by using artificial intelligence. It’s unclear if the entire song was created with AI, or just the soundalike vocals.

By Monday afternoon, the song had generated more than 600,000 spins on Spotify, and Ghostwriter977’s TikTok videos had been viewed more than 15 million times. A YouTube video had another 275,000 views, with an ominous comment from the creator below it: “This is just the beginning.”

Many music fans seemed impressed. One comment on TikTok with more than 75,000 likes said it was the “first AI song that has actually impressed me.” Another said Ghostwriter was “putting out better drake songs than drake himself.” A third said AI was “getting dangerously good.”

But the end could already be in sight. At the time of publishing on Monday evening, “Heart On My Sleeve” had recently been pulled from Spotify, following its earlier removal from Apple Music, Deezer and TIDAL.

Even if short-lived, the sensational success of “Heart On My Sleeve” will no doubt underscore growing concerns over the impact of AI on the music industry. Last week, UMG urged streaming platforms like Spotify to block AI companies from accessing the label’s songs to “train” their machines, and the RIAA has warned that doing so infringes copyrights on a mass scale. Last month, a large coalition of industry organizations warned that AI technology should not be used to “replace or erode” human artistry.

Representatives for Drake and The Weeknd declined to comment on Monday. But in a statement to Billboard, UMG said the viral postings “demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.”

“The training of generative AI using our artists’ music (which represents both a breach of our agreements and a violation of copyright law) as well as the availability of infringing content created with generative AI on DSPs, begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation,” a UMG spokesman said in a statement. “We’re encouraged by the engagement of our platform partners on these issues – as they recognize they need to be part of the solution.”

UMG declined to comment on whether it had sent formal takedown requests to streaming services and social media websites.

Drake is in his feelings.

On Friday (April 14), the chart-topping artist took to Instagram to voice his opinion about AI-generated versions of his voice, particularly a video that features him rapping Bronx artist Ice Spice’s “Munch.”

“This is the last straw,” he wrote on his story, along with a post about the AI clip. The pairing of Drake with Ice Spice is particularly interesting, given the rappers’ history. While Drake was an early advocate of Ice Spice, born Isis Gaston, he unfollowed her on Instagram, something Gaston had no explanation for in interviews. However, shortly after, he re-followed her.

Drake’s complaint comes after Universal Music Group asked streaming services including Spotify and Apple Music to prevent artificial intelligence companies from accessing their copyrighted songs. AI companies would use the music to “train” their machines, something that is becoming a cause for concern within the music industry.

In an email sent to Spotify, Apple Music and other streaming platforms, UMG said that it had become aware that certain AI services had been trained on copyrighted music “without obtaining the required consents” from those who own the songs.

“We will not hesitate to take steps to protect our rights and those of our artists,” UMG warned in the email, first obtained by the Financial Times. Billboard confirmed the details with sources on both sides. Although there isn’t clarity on what those steps would be or what streaming platforms can do to stop it, labels and artists alike seem aligned on the need for change.

UMG later issued a statement regarding the email sent to DSPs. “We have a moral and commercial responsibility to our artists to work to prevent the unauthorized use of their music and to stop platforms from ingesting content that violates the rights of artists and other creators. We expect our platform partners will want to prevent their services from being used in ways that harm artists,” it read.

Other AI covers making the rounds include Rihanna singing Beyoncé’s “Cuff It,” which sounded relatively believable, aside from a glitch during a melodic run.

While the implications of artificial intelligence poking its head into music can be scary for artists and labels alike, it’s hard not to smirk at Drizzy rapping, “A– too fat, can’t fit in no jeans.”

President Joe Biden said Tuesday it remains to be seen if artificial intelligence is dangerous, but that he believes technology companies must ensure their products are safe before releasing them to the public.

Biden met with his council of advisers on science and technology about the risks and opportunities that rapid advancements in artificial intelligence pose for individual users and national security.

“AI can help deal with some very difficult challenges like disease and climate change, but it also has to address the potential risks to our society, to our economy, to our national security,” Biden told the group, which includes academics as well as executives from Microsoft and Google.

Artificial intelligence burst to the forefront in the national and global conversation in recent months after the release of the popular ChatGPT AI chatbot, which helped spark a race among tech giants to unveil similar tools, while raising ethical and societal concerns about technology that can generate convincing prose or imagery that looks like it’s the work of humans.

The White House said the Democratic president was using the AI meeting to “discuss the importance of protecting rights and safety to ensure responsible innovation and appropriate safeguards” and to reiterate his call for Congress to pass legislation to protect children and curtail data collection by technology companies.

Italy last week temporarily blocked ChatGPT over data privacy concerns, and European Union lawmakers have been negotiating the passage of new rules to limit high-risk AI products across the 27-nation bloc.

The U.S. so far has taken a different approach. The Biden administration last year unveiled a set of far-reaching goals aimed at averting harms caused by the rise of AI systems, including guidelines for how to protect people’s personal data and limit surveillance.

The Blueprint for an AI Bill of Rights notably did not set out specific enforcement actions, but instead was intended as a call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world.

Biden’s council, known as PCAST, is composed of science, engineering, technology and medical experts and is co-chaired by the Cabinet-ranked director of the White House Office of Science and Technology Policy, Arati Prabhakar.

Asked if AI is dangerous, Biden said Tuesday, “It remains to be seen. Could be.”

In a new open letter signed by Elon Musk, Steve Wozniak, Andrew Yang and more on Wednesday (March 29), leaders in technology, academia and politics came together to call for a moratorium on training AI systems “more advanced than GPT-4” for “at least 6 months.”

The letter states that “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” including the increased spread of propaganda and fake news as well as automation leading to widespread job loss. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asks.

By drawing the line at AI models “more advanced than GPT-4,” the signees are likely pointing to generative artificial intelligence — a term encompassing a subset of AI that can create new content after being trained via the input of millions or even billions of pieces of data. While some companies license or create their own training data, a large number of AIs are trained using data sets scraped from the web that contain copyright-protected material, including songs, books, articles, images and more. This practice has sparked widespread debate over whether or not AI companies should be required to obtain consent or to compensate the rights holders, and whether the fast-evolving models will endanger the livelihoods of musicians, illustrators and other creatives.

Before late 2022, generative AI was little discussed outside of tech-savvy circles, but it has gained national attention over the last six months. Popular examples of generative AI today include image generators like DALL-E 2, Stable Diffusion and Midjourney, which use simple text prompts to conjure up realistic pictures. Chatbots (also called large language models, or “LLMs”) like ChatGPT are also considered generative, as are machines that can create new music at the touch of a button. Though generative AI models in music have yet to make as many headlines as chatbots and image generators, companies like Boomy, Soundful, Beatlab, Google’s Magenta, OpenAI and others are already building them, leading to fears that their output could one day threaten human-made music.

The letter urging the pause in AI training was signed by some of AI’s biggest executives. They notably include Stability AI CEO Emad Mostaque, Conjecture AI CEO Connor Leahy, Unanimous AI CEO and chief scientist Louis Rosenberg and Scale AI CEO Julien Billot. It was also signed by Pinterest co-founder Evan Sharp, Skype co-founder Jaan Tallinn and Ripple CEO Chris Larsen.

Other signees include several engineers and researchers at Microsoft, Google and Meta, though the list notably does not include any names from OpenAI, the firm behind the creation of GPT-4.

“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter continues. Rather, the industry must “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

The letter comes only a few weeks after several major organizations in the entertainment industry, including in music, came together to release a list of seven principles, detailing how they hope to protect and support “human creativity” in the wake of the AI boom. “Policymakers must consider the interests of human creators when crafting policy around AI,” the coalition wrote. “Creators live on the forefront of, and are building and inspiring, evolutions in technology and as such need a seat at the table.”

A wide coalition of music industry organizations has joined together to release a series of core principles regarding artificial intelligence — the first collective stance the entertainment business has taken on the topic. Announced during the panel “Welcome to the Machine: Art in the Age of A.I.,” held on Thursday (March 16) at South by Southwest (SXSW) and moderated by Billboard deputy editorial director Robert Levine, the principles reveal a growing sense of urgency among entertainment industry leaders to address the quickly evolving issue.

“Over the past few months, I think [generative artificial intelligence] has gone from a ‘someday’ issue to a today issue,” said Levine. “It’s coming much quicker than anyone thought.”

In response to the fast-approaching collision of generative AI and the entertainment business, the principles detail the need for using the new technology to “empower human expression” while also asserting the importance of representing “creators’ interests…in policymaking” regarding the technology. Principles geared toward the latter include ensuring that AI developers acquire licenses for artistic works used in the “development and training of AI models” — and keep records of which works are used — and that governments refrain from creating “copyright or other IP exemptions” for the technology.

Among the 40 different groups that have joined the coalition — dubbed the Human Artistry Campaign — are music industry leaders including the Recording Industry Association of America (RIAA), National Music Publishers’ Association (NMPA), American Association of Independent Music (A2IM), SoundExchange, ASCAP, BMI and more.

Read the full list of principles below; more information, including the full list of groups involved in the effort, is available from the Human Artistry Campaign.

Core Principles for Artificial Intelligence Applications in Support of Human Creativity and Accomplishments:

Technology has long empowered human expression, and AI will be no different.

For generations, various technologies have been used successfully to support human creativity. Take music, for example… From piano rolls to amplification to guitar pedals to synthesizers to drum machines to digital audio workstations, beat libraries and stems and beyond, musical creators have long used technology to express their visions through different voices, instruments, and devices. AI already is and will increasingly play that role as a tool to assist the creative process, allowing for a wider range of people to express themselves creatively.

Moreover, AI has many valuable uses outside of the creative process itself, including those that amplify fan connections, hone personalized recommendations, identify content quickly and accurately, assist with scheduling, automate and enhance efficient payment systems – and more. We embrace these technological advances.

Human-created works will continue to play an essential role in our lives.

Creative works shape our identity, values, and worldview. People relate most deeply to works that embody the lived experience, perceptions, and attitudes of others. Only humans can create and fully realize works written, recorded, created, or performed with such specific meaning. Art cannot exist independent of human culture.

Use of copyrighted works, and use of the voices and likenesses of professional performers, requires authorization, licensing, and compliance with all relevant state and federal laws.

We fully recognize the immense potential of AI to push the boundaries for knowledge and scientific progress. However, as with predecessor technologies, the use of copyrighted works requires permission from the copyright owner. AI must be subject to free-market licensing for the use of works in the development and training of AI models. Creators and copyright owners must retain exclusive control over determining how their content is used. AI developers must ensure any content used for training purposes is approved and licensed from the copyright owner, including content previously used by any pre-trained AIs they may adopt. Additionally, performers’ and athletes’ voices and likenesses must only be used with their consent and fair market compensation for specific uses.

Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.

AI must not receive exemptions from copyright law or other intellectual property laws and must comply with core principles of fair market competition and compensation. Creating special shortcuts or legal loopholes for AI would harm creative livelihoods, damage creators’ brands, and limit incentives to create and invest in new works.

Copyright should only protect the unique value of human intellectual creativity.

Copyright protection exists to help incentivize and reward human creativity, skill, labor, and judgment — not output solely created and generated by machines. Human creators, whether they use traditional tools or express their creativity using computers, are the foundation of the creative industries, and we must ensure that human creators are paid for their work.

Trustworthiness and transparency are essential to the success of AI and protection of creators.

Complete recordkeeping of copyrighted works, performances, and likenesses, including the way in which they were used to develop and train any AI system, is essential. Algorithmic transparency and clear identification of a work’s provenance are foundational to AI trustworthiness. Stakeholders should work collaboratively to develop standards for technologies that identify the input used to create AI-generated output. In addition to obtaining appropriate licenses, content generated solely by AI should be labeled describing all inputs and methodology used to create it — informing consumer choices, and protecting creators and rightsholders.

Creators’ interests must be represented in policymaking.

Policymakers must consider the interests of human creators when crafting policy around AI. Creators live on the forefront of, and are building and inspiring, evolutions in technology and as such need a seat at the table in any conversations regarding legislation, regulation, or government priorities regarding AI that would impact their creativity and the way it affects their industry and livelihood.