LONDON — The music business is calling on the U.K. government to robustly protect copyright and “safeguard against misuse” by technology companies in any future regulations governing the use of artificial intelligence (AI).
On Tuesday (Dec. 17), the British government launched a 10-week consultation on how copyright-protected content, such as music, can lawfully be used by developers to train generative AI models.
The proposals include introducing a new data mining exception to copyright law that would allow AI developers to use copyrighted songs for AI training, including for commercial purposes, but only in instances where rights holders have not reserved their rights. Such an opt-out mechanism, says the government proposal, gives creators and rights holders the ability to control, license and monetize the use of their content – or prevent their works being used by AI developers entirely.
The consultation also recommends new transparency requirements for AI developers around what content they have used to train their models and how it was obtained, as well as the labelling of AI-generated material.
Policymakers will additionally seek views from stakeholders on the protection of personality and image rights, and whether the current legal framework provides sufficient protection against AI-generated deepfake imitations.
“Currently, uncertainty about how copyright law applies to AI is holding back both sectors from reaching their full potential,” said the Department for Culture, Media and Sport (DCMS) in a statement announcing the consultation. “It can make it difficult for creators to control or seek payment for the use of their work, and creates legal risks for AI firms, stifling AI investment, innovation, and adoption.”
The government said that its proposed changes to copyright law will give clarity to AI developers over what content they are legally allowed to use when training generative AI models and “enhance” creators’ ability to be paid for the use of their work.
Before any new exceptions to copyright law can be introduced, further work would need to take place to ensure transparency standards and the mechanisms for rights holders to reserve their rights are “effective, accessible and widely adopted,” said DCMS.
“This government firmly believes that our musicians, writers, artists and other creatives should have the ability to know and control how their content is used by AI firms and be able to seek licensing deals and fair payment,” said Lisa Nandy, Secretary of State for Culture, Media and Sport, in a statement. “Achieving this, and ensuring legal certainty, will help our creative and AI sectors grow and innovate together in partnership.”
The start of the government’s long-awaited consultation on AI policy comes amid heightened lobbying from both the creative and technology industries. On Monday, a coalition of rights holders, including record labels, music publishers and artist groups, came together to call for copyright protection to be at the heart of any U.K. AI legislation.
The newly formed Creative Rights in AI Coalition, whose members include U.K. record labels trade body BPI, umbrella organization UK Music and the Music Publishers Association, wants policymakers to draw up AI laws that permit a “mutually beneficial, dynamic licensing market” built around “robust protections for copyright.”
The creative industries coalition said any future AI legislation must ensure accountability and compliance from AI developers and tech companies, who it said have thus far been exploiting copyright protected works “without permission, ignoring copyright protections and clear reservations of rights.”
The U.K. creative industries generated around £125 billion ($158 billion) for the country’s economy last year, according to government figures, with the music industry contributing a record £7.6 billion, up 13% year-on-year, of that total, according to UK Music research.
The U.K. is the world’s third-biggest recorded music market behind the U.S. and Japan with sales of $1.9 billion in 2023, according to IFPI. It is also the second-largest exporter of recorded music worldwide behind the U.S.
“Without proper control and remuneration for creators, investment in high-quality content will fall,” said the coalition, which also includes the Association of Independent Music (AIM) and British collecting societies PRS for Music and PPL, as well as trade groups representing photographers, illustrators, journalists, authors and filmmakers.
“Just as tech firms are content to pay for the huge quantity of electricity that powers their data centres, they must be content to pay for the high-quality copyright-protected works which are essential to train and ground accurate generative AI models.”
In a separate statement, BPI CEO Jo Twist said the organization was looking forward to working with the government on developing its AI policy but said it remains the BPI’s “firm view” that introducing a new exception to copyright for AI training “would weaken the U.K.’s copyright system and offer AI companies permission to take – for their own profit, and without authorisation or compensation – the product of U.K. musicians’ hard work, expertise, and investment.”
“It would amount to a wholly unnecessary subsidy, worth billions of pounds, to overseas tech corporations at the expense of homegrown creators,” said Twist in a statement. She went on to say that opt-out schemes in other markets similar to what is being proposed by the U.K. government have been shown to increase legal uncertainty, “are unworkable in practice, and are woefully ineffective” in protecting creators’ rights.
The government’s recommendation to introduce a new copyright exception for AI training is an idea that it has floated before – and one that drew strong pushback from the music industry. In 2021, the Intellectual Property Office (IPO) was heavily criticized by artists, labels and publishers for suggesting a new text and data mining (TDM) exception that would have allowed AI developers to freely use copyright-protected works for commercial purposes (albeit with certain restrictions).
Those proposals were quietly shelved by the government the following year, but U.K. legislation governing the use of AI has been slow to arrive. In contrast, the 27-member bloc European Union, which the United Kingdom officially left in 2020, passed its world-first Artificial Intelligence Act – requiring transparency and accountability from AI developers – in March.
Meanwhile, other major music markets, including the United States, Japan and China, are advancing their own attempts to regulate the nascent technology amid loud opposition from creators and rights holders over the unauthorized use of their work to train generative AI systems.
Earlier this year, the three major record companies — Universal Music Group, Sony Music Entertainment and Warner Music Group — filed lawsuits against AI music firms Suno and Udio alleging the widespread infringement of copyrighted sound recordings “at an almost unimaginable scale.” Sony Music and Warner Music have additionally issued public notices to AI companies warning them against scraping their copyrighted data.
More recently, in October, thousands of musicians, composers, actors and authors from across the creative industries – as well as all three major record labels – signed a statement opposing the unlicensed use of creative works for training generative AI. The number of signatories has since risen to more than 37,000 people, including ABBA’s Björn Ulvaeus, all five members of Radiohead and The Cure’s Robert Smith.
Merlin, which oversees digital licensing for the independent sector, has outlined its position on the use of music in training artificial intelligence in a new memo.
Like many organizations in the music industry, Merlin supports “AI products that aid human creativity, or provide new opportunities for artists to create and collaborate in developing new original works,” wrote the organization in a mission statement shared with Billboard. But it strongly opposes “any product, regardless of its purpose, that has been trained on Merlin members’ music without permission.”
Sony Music and Warner Music Group, among others, made similar announcements earlier this year, warning AI companies not to harvest their data for training purposes. But Merlin’s statement on Friday suggests that its members are even more vulnerable than the major labels.
“These are not multinational corporations,” Merlin notes. “They are often small businesses operating in support of artists who shape contemporary culture around the world, and who are trying to earn a living in an increasingly challenging environment. Unlicensed use of these artists’ work creates a genuine and imminent threat to artists’ livelihoods and the livelihoods of those who work to support them.”
Like Merlin, most — if not all — music industry rightsholders believe that AI companies should license their music if they want to use those catalogs of recordings to develop song generation technology. Statements from a number of AI companies, however, indicate that they aren’t interested in paying. They often argue that training their models falls under “fair use,” the U.S. legal doctrine that allows for the unlicensed use of copyrighted works in certain situations.
But Merlin hit back against this argument on Friday: “Taking someone else’s creative work — without permission, without compensation, and with the specific purpose of using that work to create new works that are substitutional for the original — is inherently not fair use,” the statement reads.
“AI companies and their investors would, we assume, look to copyright and other IP law to protect against any unauthorized uses of their technology,” Merlin continued. But ironically, “AI companies are rightly protective over their models and proprietary software, yet some seem to view other people’s intellectual property as ‘free data’ to feed their algorithm.”
Read Merlin’s full memo below.
Merlin is the independents’ digital licensing partner. Merlin’s primary function is to enable innovative and properly compensated uses of its members’ music. This is clearly demonstrated by the partnerships Merlin has in place with so many of the world’s leading digital services.
All of our partnerships have one thing in common: our partners value music. While our partnerships have evolved over the years, they respect the human artistry involved in creating music and the financial investment needed to nurture, distribute and market it. The rapid evolution of artificial intelligence (AI) does nothing to change that.
Merlin and its members have always embraced and adapted to technological change, while ensuring that the value of human creativity is respected. Artistic expression is a fundamental part of what makes us human. The ability to create, appreciate, and enjoy art, in all its forms, is foundational to the human experience. Music, in particular, brings people together, evokes emotions, and helps us express thoughts and feelings.
Merlin recognises the enormous power of AI and its benefits to the creative community and society as a whole; but, if AI is left unregulated, the impact on the creative industries and, by extension, global culture will be devastating.
Merlin believes that, when developed and implemented responsibly, AI technologies can be additive to the creative landscape. AI products that aid human creativity, or provide new opportunities for artists to create and collaborate in developing new original works, are products that Merlin supports. Merlin and its members are ready to partner with AI companies that want to be on the right side of history – those that are willing properly to compensate Merlin members for use of their repertoire and to include appropriate guardrails to protect Merlin members’ rights.
However, Merlin cannot support any product, regardless of its purpose, that has been trained on Merlin members’ music without permission.
Merlin’s members, and the independent labels they represent, number in the thousands. These are not multinational corporations. They are often small businesses operating in support of artists who shape contemporary culture around the world, and who are trying to earn a living in an increasingly challenging environment. Unlicensed use of these artists’ work creates a genuine and imminent threat to artists’ livelihoods and the livelihoods of those who work to support them.
It has been suggested that training of AI models on artists’ work without permission should somehow be considered “fair use”. We believe it is the exact opposite of fair, both morally and legally.
The legal test for fair use involves four criteria, relating to the purpose and character of the use, the nature of the copyright work, the amount used, and the effect upon the market or value of the copyright work. Unlicensed commercial AI models fail on all four. Any AI company that trains its models by scraping the internet for copyright-protected sound recordings is making unauthorized reproductions of entire copyright works. Invariably, these copies are used for commercial purposes, and the AI-generated sound recordings resulting from the models pose a significant threat to the market for Merlin artists’ copyrighted sound recordings by creating directly competitive digital music files. There is much talk about these uses being fair merely because the outputs are “transformative”, but even transformative uses need to take into account the impact on the original works, and the extent to which they are substitutional for the original. In the case of AI-generated music, the substitutional impact is obvious.
Taking someone else’s creative work – without permission, without compensation, and with the specific purpose of using that work to create new works that are substitutional for the original – is inherently not fair use.
In seeking to license their music, Merlin members and their artists are not leveraging their copyrights to gain an unfair advantage. They are doing their best to earn a living and to protect rights in their expressive works. This is no different to how AI companies and their investors would, we assume, look to copyright and other IP law to protect against any unauthorized uses of their technology. AI companies are rightly protective over their models and proprietary software, yet some seem to view other people’s intellectual property as “free data” to feed their algorithm.
It is Merlin’s position, and that of its members, that any and all uses of Merlin member repertoire for training, development or implementation of AI models and related purposes require explicit written authorization from Merlin or the applicable Merlin member. Merlin’s policy is clearly displayed on its website at https://merlinnetwork.org/policy-on-ai/.
If you are a responsible AI company that seeks to use independent music to train a model, or to offer a product or service that is additive to the music ecosystem and has intrinsic creative benefit to music creators, please contact us at ResponsibleAI@merlinnetwork.org.
12/10/2024
Between the majors suing Suno and Udio, the ELVIS Act protecting voices against deepfakes and “BBL Drizzy” setting legal precedent, it’s been a big year for AI music.
Music made by generative AI has been on the horizon as an issue for a couple of years, and the industry started paying close attention when the “Fake Drake” track hit streaming services in April 2023. The part of the issue that gets the most attention is, of course, the part that involves celebrities — especially Drake, who objected when his voice was spoofed by AI and then used AI software to spoof the voices of other rappers.
That’s just the tip of a particularly dangerous iceberg, though, according to a new study commissioned by global collective management trade organization CISAC and conducted by PMP Strategy. Vocal imitations are fun, but how much time can you really spend listening to Frank Sinatra sing Lil Jon? The bigger issue is what generative AI means for new music — first for passive listening, presumably, and eventually for the entire business.
Using quantitative and qualitative research, PMP concluded that a full $4 billion could be lost to composers and publishing rightsholders in 2028 — 24% of the revenue they collect through CISAC-member organizations. (This gets complicated: The study measures revenue that comes to them through collective management organizations, which includes performing rights and, in most cases, mechanical rights royalties.) This doesn’t even count the recording business, or revenue from synch licensing. By 2028, the study projects, generative AI music will be worth $16 billion and the services that create it will bring in $4 billion in revenue. One previous study, commissioned by SACEM and GEMA, reached somewhat similar conclusions — in that case, that 27% of revenue would be at risk.
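For readers who want to see how those figures hang together, here is a quick back-of-the-envelope check — a sketch only, using the numbers as reported above (the variable names are illustrative, and PMP’s underlying methodology is not reproduced here):

```python
# Sanity check of the study's 2028 projection as reported:
# a $4 billion loss said to equal 24% of CISAC-member collections.
projected_loss_usd = 4_000_000_000   # $4B lost to composers/publishers in 2028
share_of_collections = 0.24          # 24% of revenue collected via CISAC members

# If 24% of total collections equals $4B, the implied total is loss / share.
implied_total = projected_loss_usd / share_of_collections
print(f"Implied 2028 collections: ${implied_total / 1e9:.1f}B")  # about $16.7B
```

In other words, the study’s loss figure implies total collections on the order of $16–17 billion by 2028, which is consistent with the market sizes it cites.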
The music business is driven by new pop music, so there’s a tendency to focus on stars like Drake. But publishers and songwriters also depend on background music played in public — at bars and in stores — and streaming services have made a business out of utilitarian music, sounds to help listeners focus, relax or sleep. That might be where the impact of generative AI is felt first — the restaurant that plays AI music to avoid paying a performing rights organization, a playlist that avoids copyrighted music, a low-budget film that uses music generated by an algorithm. By 2028, the study predicts that generative AI could cut into 30% of digital revenue, 22% of TV and radio revenue and 22% of compositions played in public. Eventually, presumably, AI could generate hits as well.
To some extent, AI is inevitable.
“There’s no way we can or should stand against AI — it can be a wonderful tool,” said songwriter and ABBA frontman Björn Ulvaeus, who serves as president of CISAC, at a Nov. 3 online press conference to announce the results of the study. Many composers already use it as a tool, mostly for specific purposes. (I thought of using AI to write this column but I got nervous that it would do a poor job — and terrified that it would do a good one.) “Creators should be at the negotiating table,” Ulvaeus said. “The success of AI isn’t based on public content — it’s based on copyrighted works. We need to negotiate a fair deal.”
That’s only possible if generative AI companies are required to license the rights to ingest works, which by definition involves copying them. That seems to be the case already in the European Union, although the AI Act says rightsholders need to opt out in order to prevent the unauthorized use of their work to train AI software. In the U.S., until Congress turns its attention to AI, this is a matter for the courts, and in June the RIAA, on behalf of the major labels, sued the generative AI companies Suno and Udio for allegedly infringing their copyrights.
The study lays out a picture of how generative AI will develop between now and 2028, in both the music and audiovisual sectors — it will affect streaming revenue the most, but also change the market for music in TV and film. And although few people spend much time thinking about background music, it provides a living, or part of a living, for many musicians. If that business declines, will those musicians still be able to play on albums that demand their skills? Will studios that are booked for all kinds of music survive without the background music business?
The study predicts that the generative AI business will grow as the creative sector shrinks, as some of the money from music goes to software — presumably software trained on copyrighted works. “In an unchanged regulatory framework, creators will not benefit from the Gen AI revolution,” the study says. Instead, they will suffer “the loss of revenues due to the unauthorized use of their works” to train AI software and the “replacement of their traditional revenue streams due to the substitution effect of AI-generated outputs.”
Making sure composers and other creators are compensated fairly for the use of their works in training generative AI programs will not be easy. It would be hard for creators and other rightsholders to license a onetime right to use material for training purposes, after which an AI model can use it forever. The current thinking is that it makes more sense to license these rights, then require AI programs to operate with a certain level of transparency to track the works they reference in response to a given prompt. Then the owners of those works can be paid.
This is going to be hard. Getting it right means starting immediately — and the obvious first step is clarifying creators’ rights to be compensated when their work is used to train an AI.
Nvidia, the computer chip giant, has entered the AI music race by announcing its new model, Fugatto, on Tuesday (Nov. 26). The company calls Fugatto, short for Foundational Generative Audio Transformer Opus 1, a “Swiss Army knife for sound.”
Using text or audio prompts, Fugatto can generate new music at the click of a button and edit existing audio, including removing or adding instruments from a song or changing the accent and emotion in a voice, in seconds.
With Fugatto, Nvidia aims to take on today’s top AI music models, including Suno, Udio and many more. Though it is a late entrant in the race to create the best music AI model, Fugatto appears to have crisp audio quality and a number of capabilities that could change the music-making process for producers and composers.
According to the announcement on Nvidia’s blog, “One of the hardest parts of the effort was generating a blended dataset that contains millions of audio samples used for training,” which the company says it worked on for more than a year to get right. “The team employed a multifaceted strategy to generate data and instructions that considerably expanded the range of tasks the model could perform, while achieving more accurate performance and enabling new tasks without requiring additional data.” It is unclear whether or not this dataset included copyrighted material. Nvidia has not responded to Billboard’s request for comment.
Nvidia proposes a number of use cases for Fugatto, including generating a score for visual media; editing certain parts of a score; and altering a voice to have different accents, emotions and timbres. “Fugatto can make a trumpet bark or a saxophone meow. Whatever users can describe, the model can create,” says Rafael Valle, a manager of applied audio research at Nvidia.
“The history of music is also a history of technology,” says Ido Zmishlany, a producer/songwriter and co-founder of One Take Audio, a member of Nvidia Inception, its program for cutting-edge startups. “With AI we’re writing the next chapter of music. We have a new instrument, a new tool for making music — and that’s super exciting.”
Nvidia claims this is the first AI music model that showcases “emergent properties — capabilities that arise from the interaction of its various trained abilities — and the ability to combine free-form instructions.” Valle adds that Fugatto is “our first step toward a future where unsupervised multitask learning in audio synthesis and transformation emerges from data and model scale.”
So far, Nvidia has not provided a release date for Fugatto.
Senator Peter Welch (D-Vt.) introduced the Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act on Monday in the latest effort to shield songwriters, musicians and other creators from the unauthorized use of their works in training generative AI models.
If successful, the legislation would grant copyright holders access to training records, enabling them to verify if their creations were used — a process similar to methods combating internet piracy.
“This is simple: if your work is used to train A.I., there should be a way for you, the copyright holder, to determine that it’s been used by a training model, and you should get compensated if it was,” said Welch. “We need to give America’s musicians, artists, and creators a tool to find out when A.I. companies are using their work to train models without artists’ permission.”
Creative industry leaders have long voiced concerns about the opaque practices of AI companies regarding the use of copyrighted materials. Many of these startups and firms do not disclose their training methods, leaving creators unable to determine whether their works have been incorporated into AI systems. The TRAIN Act directly addresses this so-called “black box” problem, aiming to introduce transparency and accountability into the AI training process.
Welch’s bill is just the latest development in the battle between rights holders and generative AI. In May, Sony Music released a statement warning more than 700 AI companies not to scrape the company’s copyrighted data, while Warner Music released a similar statement in July. That same month in the U.S. Senate, an anti-AI deepfakes bill dubbed the No FAKES Act was introduced by a bipartisan group of senators. In October, thousands of musicians, composers, international organizations and labels — including all three majors — signed a statement opposing AI companies and developers using their work without a license for training generative AI systems.
During a Senate Judiciary Committee hearing earlier this month, U.S. Register of Copyrights Shira Perlmutter emphasized the importance of transparency to protect copyrighted materials, saying that without insight into how AI systems are trained, creators are left in the dark about potential misuse of their work, undermining their rights and earnings.
Sen. Welch has been active in promoting consumer protections and safety around emerging technologies, including AI. His previous initiatives include the AI CONSENT Act, which mandates that online platforms obtain informed consent from users before utilizing their data for AI training, and the Digital Platform Commission Act, which proposes the establishment of a federal regulatory agency for digital platforms.
The TRAIN Act left the station with immediate widespread support from creative organizations, including the RIAA, ASCAP, BMI, SESAC, SoundExchange and the American Federation of Musicians, among others.
Several music industry leaders praised the TRAIN Act for its potential to balance innovation with an eye on respecting creators’ rights. Mitch Glazier, RIAA chairman & CEO, highlighted its role in ensuring creators can pursue legal recourse when their works are used without permission. Todd Dupler, the Recording Academy’s chief advocacy and public policy officer, and Mike O’Neill, the CEO of BMI, echoed these sentiments, stressing the bill’s importance in preventing misuse and enabling creators to hold AI companies accountable.
David Israelite, president & CEO of the National Music Publishers’ Association, pointed to the TRAIN Act as a vital measure to close regulatory gaps and ensure transparency in AI practices, while John Josephson, chairman and CEO of SESAC Music Group, praised its dual approach of promoting responsible innovation while protecting creators.
Additional endorsements came from SoundExchange CEO Michael Huppe, who stressed the need for creators to know how their works are being utilized in AI systems; Elizabeth Matthews, CEO of ASCAP, who called for artists to be fairly compensated; and Ashley Irwin, president of the Society of Composers & Lyricists, who emphasized the bill’s role in safeguarding the rights of composers and songwriters.
Select Music Industry Reactions to the TRAIN Act:
Mitch Glazier, RIAA: “Senator Welch’s carefully calibrated bill will bring much needed transparency to AI, ensuring artists and rightsholders have fair access to the courts when their work is copied for training without authorization or consent. RIAA applauds Senator Welch’s leadership and urges the Senate to enact this important, narrow measure into law.”
David Israelite, NMPA: “We greatly appreciate Senator Welch’s leadership on addressing the complete lack of regulation and transparency surrounding songwriters’ and other creators’ works being used to train generative AI models. The TRAIN Act proposes an administrative subpoena process that enables rightsholders to hold AI companies accountable. The process necessitates precise record-keeping standards from AI developers and gives rightsholders the ability to see whether their copyrighted works have been used without authorization. We strongly support the bill which prioritizes creators who continue to be exploited by unjust AI practices.”
Elizabeth Matthews, ASCAP: “The future of America’s vibrant creative economy depends upon laws that protect the rights of human creators. By requiring transparency about when and how copyrighted works are used to train generative AI models, the TRAIN Act paves the way for creators to be fairly compensated for the use of their work. On behalf of ASCAP’s more than one million songwriter, composer and music publisher members, we applaud Senator Welch for his leadership.”
Mike O’Neill, BMI: “Some AI companies are using creators’ copyrighted works without their permission or compensation to ‘train’ their systems, but there is currently no way for creators to confirm that use or require companies to disclose it. The TRAIN Act will provide a legal avenue for music creators to compel these companies to disclose those actions, which will be a step in the right direction towards greater transparency and accountability. BMI thanks Senator Welch for introducing this important legislation.”
John Josephson, SESAC: “SESAC applauds the TRAIN Act, which clears an efficient path to court for songwriters whose work is used by AI developers without authorization or consent. Senator Welch’s narrow approach will promote responsible innovation and AI while protecting the creative community from unlawful scraping and infringement of their work.”
Michael Huppe, SoundExchange: ”As artificial intelligence companies continue to train their generative AI models on copyrighted works, it is imperative that music creators and copyright owners have the ability to know where and how their works are being used. The Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act would provide creators with an important and necessary tool as they fight to ensure their works are not exploited without the proper consent, credit, or compensation.”
Todd Dupler, The Recording Academy: “The TRAIN Act would empower creators with an important tool to ensure transparency and prevent the misuse of their copyrighted works. The Recording Academy® applauds Sen. Welch for his leadership and commitment to protecting human creators and creativity.”
The Artist Rights Symposium returns for a fourth year on Wednesday (Nov. 20) at a new location — American University’s Kogod School of Business. This year the day-long event will feature panels like “The Trouble with Tickets,” “Overview of Current Issues in Artificial Intelligence Litigation,” and “Name, Image and Likeness Rights in the Age of AI.” Plus, the symposium will feature a keynote with Digital Media Association (DiMA) president and CEO Graham Davies.
Founded by University of Georgia professor, musician and activist Dr. David C. Lowery, the event has been held at the university in Athens, Georgia, for the last three years. Now that the event has moved to Washington, D.C., the Artist Rights Symposium can take advantage of the wealth of music professionals in the city. This includes D.C.-based panelists like Davies, Stephen Parker (executive director, National Independent Venue Association), Ken Doroshow (chief legal officer, Recording Industry Association of America), Jalyce E. Mangum (attorney-advisor, U.S. Copyright Office), Jen Jacobsen (executive director, Artist Rights Alliance), Jeffrey Bennett (general counsel, SAG-AFTRA) and more.
The Artist Rights Symposium is supported by the Artist Rights Institute.
See the schedule of events below:
9:15-10:15 – THE TROUBLE WITH TICKETS: The Challenges of Ticket Resellers and Legislative Solutions
Kevin Erickson, Director, Future of Music Coalition, Washington, DC
Dr. David C. Lowery, Co-founder of Cracker and Camper Van Beethoven, University of Georgia Terry College of Business, Athens, Georgia
Stephen Parker, Executive Director, National Independent Venue Association, Washington, DC
Mala Sharma, President, Georgia Music Partners, Atlanta, Georgia
Moderator: Christian L. Castle, Esq., Director, Artist Rights Institute, Austin, Texas

10:15-10:30 – NIVA Speculative Ticketing Project Presentation by Kogod students

10:45-11:00 – OVERVIEW OF CURRENT ISSUES IN ARTIFICIAL INTELLIGENCE LITIGATION
Kevin Madigan, Vice President, Legal Policy and Copyright Counsel, Copyright Alliance

11:00-12:00 – SHOW ME THE CREATOR: Transparency Requirements for AI Technology
Danielle Coffey, President & CEO, News Media Alliance, Arlington, Virginia
Dahvi Cohen, Legislative Assistant, U.S. Congressman Adam Schiff, Washington, DC
Ken Doroshow, Chief Legal Officer, Recording Industry Association of America, Washington, DC
Moderator: Linda Bloss-Baum, Director of the Kogod School of Business’s Business & Entertainment Program

12:30-1:30 – KEYNOTE
Graham Davies, President and CEO of the Digital Media Association, Washington, DC

1:45-2:45 – CHICKEN AND EGG SANDWICH: Bad Song Metadata, Unmatched Funds, KYC and What You Can Do About It
Richard James Burgess, MBE, President & CEO, American Association of Independent Music, New York
Helienne Lindvall, President, European Composer & Songwriter Alliance, London, England
Abby North, President, North Music Group, Los Angeles
Anjula Singh, Chief Financial Officer and Chief Operating Officer, SoundExchange, Washington, DC
Moderator: Christian L. Castle, Esq., Director, Artist Rights Institute, Austin, Texas

3:15-3:30 – OVERVIEW OF INTERNATIONAL ARTIFICIAL INTELLIGENCE LEGISLATION
George York, Senior Vice President, International Policy, RIAA

3:30-4:30 – NAME, IMAGE AND LIKENESS RIGHTS IN THE AGE OF AI: Current Initiatives to Protect Creator Rights and Attribution
Jeffrey Bennett, General Counsel, SAG-AFTRA, Washington, DC
Jen Jacobsen, Executive Director, Artist Rights Alliance, Washington, DC
Jalyce E. Mangum, Attorney-Advisor, U.S. Copyright Office, Washington, DC
Moderator: John Simson, Program Director Emeritus, Business & Entertainment, Kogod School of Business, American University
SoundCloud announced the rollout of a number of new AI partnerships on Tuesday (Nov. 19), underscoring its intent to integrate emerging technology into the platform — as long as it is used ethically.
Now, SoundCloud users will have access to six new assistive AI tools: Tuney, Tuttii, AlBeatz, TwoShot, Starmony and ACE Studio. The company is also using Audible Magic and Pex to ensure that these new AI integrations are backed by strong content identification tools that provide rights holders with proper credit and compensation.
These new partners join a list of existing AI integrations — Fadr, Soundful and Voice-Swap — that SoundCloud has already worked into its platform. Now, any artist can use these tools and then easily share their creations to SoundCloud through a built-in “Upload to SoundCloud” option within each tool. Songs uploaded this way will be automatically tagged to show the tool used (e.g., “Made with Tuney”), and artists can edit their newly uploaded tracks directly from their SoundCloud profile page.
Additionally, SoundCloud has signed on to AI For Music’s “Principles for Music Creation with AI,” which was founded by Roland and Universal Music Group. Its principles include five points, like “we believe that human-created works must be respected and protected,” and “we believe music is central to humanity.”
SoundCloud Next Pro creators can access exclusive discounts and free trials for its nine partnered tools through SoundCloud for Artists.
“SoundCloud is paving the way for a future where AI unlocks creative potential and makes music creation accessible to millions, while upholding responsible and ethical practices,” said Eliah Seton, CEO of SoundCloud. “We’re proud to be the platform that supports creators at every level, fuels experimentation and empowers fandom.”
Learn more about the partnerships below:
Tuney: SoundCloud users can now use Tuney’s AI-powered tools to reinterpret original songs they have posted to SoundCloud (including private ones) without needing to master a digital audio workstation (DAW). Using the new “Upload to SoundCloud” button, users can then share their creations quickly and easily back to the platform. Among the tools available to SoundCloud users is Tuney’s Beat Swap feature, which can generate new remixes of a song by taking a vocal stem and filling in the rest of the blanks.
In a statement provided to Billboard, Tuney CEO/co-founder Antony Demekhin said of the collaboration: “At a time when the major music companies are fighting tech platforms that illegally train on copyrighted works, we see this integration as paving the path for the ethical application of generative tech where rightsholders, artists and fans all benefit from innovation.”
Tuttii: SoundCloud fans can now use this AI-powered app to remix and mash up songs to share on social media with greater ease than using a DAW.
AlBeatz: SoundCloud users can now generate and customize professional-grade beats to work off of in their own original creations.
TwoShot: Created to help music producers kick-start their creativity, TwoShot now offers SoundCloud users its massive sample library of AI-generated sounds. The company also offers an AI co-producer tool, called Aiva, that can talk through musical ideas with users and help them search TwoShot’s library.
Starmony: Tailored for singers and rappers, Starmony will now let SoundCloud users upload a vocal they’ve composed, and then the platform will provide professional-sounding production to fill in the instrumental elements of the song.
ACE Studio: With ACE Studio’s platform, musicians using SoundCloud can create their own AI voice models for use in the studio. For example, musicians can convert a melody written out in MIDI into a realistic-sounding voice. The platform also allows users to generate AI choirs of voices and edit vocals generated by Suno.
Pioneering producer and singer Imogen Heap has partnered with Jen, an ethical AI music creation platform, to launch two new models inspired by her musical stylings. The partnership was announced Thursday (Nov. 14) at the Web Summit conference in Lisbon, Portugal.
First, Heap is launching her own StyleFilter model, Jen’s patented tool that allows users to create original tracks that infuse the distinct musical style of an artist or producer into their new works. For Heap’s collaboration, the StyleFilter model was trained on her new singles “What Have You Done To Me” and “Last Night of an Empire.” Importantly, StyleFilter is said to do this while “maintaining transparency, protection and compensation” for Heap. Second, Heap and Jen have announced a new AI voice model trained on Heap’s distinct vocals.
Jen co-founder and CEO, Shara Senderoff, and Heap took the stage at Web Summit’s Centre Stage to demonstrate how StyleFilter works, transforming prompts into compositions that weave Heap’s style into a user’s original works. Watch their explanation below:
Over her decades-long career, Heap has been viewed as an innovator, pushing the boundaries of art and technology. Since the early days of her career, she has popularized the use of vocoders. Later, she developed her own products, like the Mi.Mu gloves, a wearable tool that allows her to record loops and edit vocals with small hand movements, and The Creative Passport, a service that gathers all of an artist’s information in one place, from bio and press photos to royalty accounting, set lists and more.
Last month, in an interview with The Guardian, Heap discussed her new AI assistant, called Mogen, which is trained on her interviews and speeches to act as essentially a living autobiography that can answer questions for fans in her persona. Later, she hopes to train Mogen on her musical improvisation so it can become a live collaborator at gigs.
Imogen Heap and Shara Senderoff at Web Summit
Jen is an AI music-making platform that puts transparency at the forefront. Its Jen-1 model, launched in June, is a text-to-music model trained on 40 different licensed catalogs (and then verified against 150 million songs). It is also backed by APG founder/CEO Mike Caren, who came on as a founding partner in fall 2023. As Senderoff explained in an August 2023 interview with Billboard, “Jen is spelled J-E-N because she’s designed to be your friend who goes into the studio with you. She’s a tool.”
Jen uses blockchain technology to ensure transparency and the ability to track its works after they are generated and put out in the world. Each of the works created with Heap’s StyleFilter will be authorized for use through Auracles — an upcoming non-profit platform, designed by Jen, that uses data provenance to give artists more access, control and permission over what is made using their StyleFilter models.
While other AI companies, such as Soundful Collabs, have created personalized AI music models trained on a specific producer’s or artist’s catalog, the team at Jen believes StyleFilter is different because “it can learn and apply the style of an artist by training on a single song, establishing a new level of creative precision and efficiency,” says a spokesperson for the company.
“Shara’s integrity shines an outstanding light at this pivotal moment in our human story,” says Heap. “The exponential curve of innovation in and with AI attracts opportunists primarily focused on filling their pockets in the gold rush or those racing at speed to stick their ‘technological flag’ in the sand to corner a marketplace. Alongside the clear innovation in products and new revenue streams for musicians at Jen, Shara’s inspiring strength and determination to get the ethical foundations right from the start are inspiring. An all-too-rare example of a service, contributing to a future where humans are empowered, valued and credited, within and for our collective global tools and knowledge.”
“At Jen, we are determined to create innovative products that invite artists to participate as AI reshapes the music industry, enabling their artistry to take new forms as technology evolves while ensuring they are respected and fairly compensated,” says Senderoff. “Our StyleFilter is a testament to this vision, introducing a groundbreaking way for users to collaborate with the musical essence of artists they might never have the chance to work with directly. Premiering this product with Imogen Heap, a pioneer at the intersection of music and technology, exemplifies our commitment to build with respect and reverence for those who paved the way. She’s also an incomparable human that I’m honored to call my friend.”
By the time Elvis Presley’s Comeback Special was taped in 1968, The King was not just on the ropes but nearly down for the count. A lengthy period in the wilderness starring in commercially successful but critically derided musicals throughout the 1960s had left his reputation in tatters as a new wave of musicians rose to prominence. There was hope, however, that a stellar performance at the special — for which a new song, “If I Can Dream,” was written — could help him win back the hearts of the American people.
This moment in time is where Elvis Evolution, an upcoming experiential installation in London, will begin for its audience. Set to debut at the recently opened Immerse LDN in May 2025, the show’s creators view this as the moment when Presley was at his most vulnerable and authentic, making it the perfect jumping-off point for an odyssey that will trace the arc of his musical journey — from his upbringing in rural Mississippi to Memphis’ iconic Sun Studios and Beale Street to the backlots of NBC Studios in Burbank, California, where the Comeback Special was shot.
To bring Presley’s musical journey to life, Elvis Evolution will utilize archival material and cutting-edge technology, including generative artificial intelligence, holograms and projections, alongside live music performances from a house band and themed set designs. The show comes from Layered Reality, a production company that fuses digital technology with live theater, and Academy Award-winning special effects company The Mill. In 2023, the former secured the rights from the Presley estate and Authentic Brands Group to license the icon’s image and likeness.
The announcement of Elvis Evolution came amid renewed interest in Presley’s life and music. In 2022, Baz Luhrmann’s jukebox epic Elvis told the story of the singer and his rocky relationship with manager Colonel Tom Parker. And Sofia Coppola’s 2023 movie Priscilla examined his first and only marriage from the perspective of his wife.
Contrary to those interpretations, Elvis Evolution’s director Jack Pirie — who shares a co-writing credit with playwright Jessica Siân — says the show will remain focused strictly on the music. “What I hope we can do with this show is to move away from the myth of what Elvis represented and the image of him in his later years in a white jumpsuit in Las Vegas,” he says. “We want to go back to who he was as a kid, and look at the music he was listening to and how that shaped him.”
Pirie says Elvis Evolution resulted from the success of other similarly tech-heavy experiences that have recently debuted in the U.K. In 2023, cutting-edge digital art venue Outernet in central London attracted more visitors (6.25 million) than the British Museum (5.83 million). The wildly popular Abba Voyage experience, which started in 2022 and features performing avatars of the Swedish pop group, just extended its run into May 2025.
Even Taylor Swift‘s Eras Tour concert film, which was released in cinemas to big grosses last year, was cited by Pirie as an example of music fans being willing to celebrate in “a non-traditional environment.”
“Elvis didn’t sit at home listening to music on his phone, he had to go out and seek and experience it,” says Andrew McGuinness, founder/CEO of Layered Reality, which previously produced an immersive version of Jeff Wayne’s War of The Worlds musical and The Gunpowder Plot starring Tom Felton. “The fact that live music runs through the DNA of the story makes it a great property to do it this way.”
Elvis Evolution initially caused controversy when it was revealed that the show would use generative artificial intelligence to help recreate Presley in hologram form. But McGuinness and Pirie tell Billboard that the technology is being used only to enhance authentic moments in Presley’s career. For example, they say the technology will help bring new perspectives and sightlines to the ‘68 Comeback Special, for which only limited camera angles exist. As with any powerful tool, says McGuinness, you need to be “bloody careful” with how AI is employed: “We’re not trying to confect something and we take the responsibility with the utmost importance,” he says.
Tickets went on sale for Elvis Evolution in October, and only a limited number remain available through the show’s opening weeks — specifically, May 10 to June 1. The 110-minute experience will have timed entry, with several performances set to take place each day. Tickets start at £75 ($97), while VIP packages are also available, including the “Burning Love” experience, which includes additional merchandise and VIP seating, and the “If I Can Dream” package, which features tickets to the show, commemorative merch and premium access to the show’s daily after-party.
The show’s venue, Immerse LDN, opened at the Royal Docks’ ExCel convention center in July 2024 and is currently hosting both the Formula 1 Exhibition and The Friends Experience: The One In London, both of which use multi-sensory technology and set design. The immersive venue, which will total 160,000 square feet once completed, is part of a £300 million ($387 million) investment in ExCel.
McGuinness and Pirie’s hope is that Elvis Evolution will be successful enough to tour globally, and they’re particularly excited about the prospect of taking it to some of the U.S. locations that figure strongly in Presley’s story, from Las Vegas to Memphis and beyond.
The show’s success could also create opportunities for similar experiences around other music icons; while McGuinness notes that the commercial demands and scale of such events make only a small group of artists “suitable,” he says discussions have already begun between Layered Reality and other artists’ estates.
Though Presley’s outsized legend makes him one of the few artists, living or dead, to be well-suited for such an elaborate and expensive production, McGuinness adds that one of the goals of the project was to strip away the iconography and get to the root of the person he was.
“There’s a humanity that can get lost with any musician or celebrity, and before I started this project, I was prone to seeing Elvis just as an ‘icon,’” he says. “But within this experience, you get to see him as a man, too.”