Music AI FAQs: Experts Answer the Most-Asked Questions About the Growing Sector

Written on September 10, 2024


From Ghostwriter’s “fake Drake” song to Metro Boomin’s “BBL Drizzy,” a lot has happened in a very short time when it comes to the evolution of AI’s use in music. And it’s much more prevalent than the headlines suggest. Every day, songwriters are using AI voices to better target pitch records to artists, producers are trying out AI beats and samples, film/TV licensing experts are using AI stem separation to help them clean up old audio, estates and catalog owners are using AI to better market older songs, and superfans are using AI to create next-level fan fiction and UGC about their favorite artists.

For those just starting out in the brave new world of AI music, and trying to understand all the buzzwords that come with it, Billboard contacted some of the sector’s leading experts to get answers to the top questions.


What are some of the most common ways AI is already being used by songwriters and producers?

TRINITY, music producer: As a producer and songwriter, I use AI and feel inspired by AI tools every day. For example, I love using Splice Create Mode. It allows me to search through the Splice sample catalog while coming up with ideas quickly, and then I export the results into my DAW, Studio One. It keeps the flow of my sessions going as I create. I heard we’ll soon be able to record vocal ideas into Create Mode, which will make it even more intuitive and fun. Also, the iZotope Ozone suite is great. It has mastering and mixing assistant AI tools built into its plug-ins that help producers and songwriters mix and master tracks and song ideas.
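Ozone’s assistant is proprietary and the article doesn’t describe its internals, but one core step such tools automate, measuring a mix’s loudness and normalizing it toward a delivery target, can be sketched with open-source Python libraries. A minimal illustration, assuming the soundfile and pyloudnorm packages and placeholder file names:

```python
import soundfile as sf
import pyloudnorm as pyln  # ITU-R BS.1770 loudness measurement

# load a finished mix (placeholder file name)
data, rate = sf.read("mix.wav")

# measure the track's integrated loudness
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(data)

# normalize toward -14 LUFS, a common streaming-platform target
mastered = pyln.normalize.loudness(data, loudness, -14.0)
sf.write("master.wav", mastered, rate)
```

A full mastering assistant also makes EQ, dynamics and limiting decisions; this sketch covers only the loudness-matching piece.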

I’ve also heard of other songwriters and producers using AI to get started on song ideas. When you feel blocked, AI tools like Jen, Melody Studio and Lemonaide can help you come up with new chord progressions. Also, Akai MPC AI and LALA AI are both great for stem splitting, which allows you to separate [out] any part of the music. For example, if I just want to solo and sample the drums in a record, I can do that now in minutes.
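Akai MPC AI and LALA AI are closed products, but the stem-splitting workflow TRINITY describes, isolating the drums from a finished record, can be reproduced with the open-source Spleeter library. A minimal sketch with a placeholder file name:

```python
from spleeter.separator import Separator

# the 4-stems model splits a track into vocals, drums, bass and other
separator = Separator("spleeter:4stems")

# writes drums.wav, vocals.wav, bass.wav and other.wav
# under output/song/, ready to solo or sample
separator.separate_to_file("song.mp3", "output/")
```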

AI is not meant to replace us as producers and songwriters. It’s meant to inspire and push our creativity. It’s all about your perspective and how you use it. The future is now; we should embrace it. Just think about how far we’ve come from flip phones to the phones we have now, which feel more limitless every day. I believe the foundation and heart of us as producers and songwriters will never get lost. We must master our craft to become the greatest producers and songwriters. AI in music creation is meant to assist and free [up] more mental space while I create. I think of AI as my J.A.R.V.I.S. and I’m Iron Man.

How can a user tell if a generative AI company is considered “ethical” or not?

Michael Pelczynski, chief strategy and impact officer, Voice-Swap: If you’re paying for services from a generative AI company, ask yourself, “Where is my money going?” If you’re an artist, producer or songwriter, the question becomes even more crucial, because as a customer the impact of your usage directly affects you and your rights as a creator. Not many companies in this space truly lead by example when it comes to ethical practices. Doing so requires effort, time and money. It’s more than just marketing yourself as ethical. To make AI use safer and more accessible for musicians, make sure the platform or company you choose compensates everyone involved, both for the final product and for the training sources.

Two of the most popular [ways to determine whether a company is ethical] are the Fairly Trained certification, which highlights companies committed to ethical AI training practices, and the BMAT x Voice-Swap technical certification, which sets new standards for the ethical and legal use of AI-generated voices.

When a generative AI company says it has “ethically” sourced the data it trained on, what does that usually mean? 

Alex Bestall, founder and CEO, Rightsify and Global Copyright Exchange (GCX): [Ethical datasets] require [an AI company to] license the works and get opt-ins from the rights holders and contributors… Beyond copyright, it is also important for vocalists whose likeness is used in a dataset to have a clear opt-in.

What are some examples of AI that can be useful to music-makers that are not generative?

Jessica Powell, CEO, AudioShake: There are loads of tools powered by AI that are not generative. Loop and sample suggestion tools are a great way to help producers and artists brainstorm the next steps in a track. Stem separation can open up a recording for synch licensing, immersive mixing or remixing. And metadata tagging can help prepare a song for synch-licensing opportunities, playlisting and other experiences that require an understanding of genre, BPM and other factors.
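Of Powell’s examples, metadata tagging is the simplest to illustrate. A minimal sketch using the open-source librosa library to estimate BPM and duration for a placeholder file (genre tagging would additionally require a trained classifier, which is beyond this sketch):

```python
import librosa

# load the track (placeholder file name)
y, sr = librosa.load("song.mp3")

# estimate global tempo from the onset envelope
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

# duration in seconds, useful for catalog metadata
duration = librosa.get_duration(y=y, sr=sr)

print(f"BPM: {float(tempo):.0f}, duration: {duration:.1f}s")
```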

In the last year, artists in various fields have filed several lawsuits against generative AI companies, primarily concerning the training process. What is the controversy about?

Shara Senderoff, co-founder, Futureverse and Raised in Space: The heart of the controversy lies in generative AI companies using copyrighted work to train their models without artists’ permission. Creators argue that this practice infringes on their intellectual property rights, as these AI models can produce content closely resembling their original works. This raises significant legal and ethical questions about creative ownership and the value of human artistry in the digital age. The creator community is incensed [by] seeing AI companies profit from their efforts without proper recognition or compensation.

Are there any tools out there today that can be used to detect generative AI use in music? Why are these tools important to have?

Amadea Choplin, COO, Pex: The more reliable tools available today use automated content recognition (ACR) and music recognition technology (MRT) to identify uses of existing AI-generated music. Pex can recognize new uses of existing AI tracks, detect impersonations of artists via voice identification and help determine when music is likely to be AI-generated. Other companies that can detect AI-generated music include Believe and Deezer; however, we have not tested them ourselves. We are living in the most content-dense period in human history, where any person with a smartphone can be a creator in an instant, and AI-powered technology is fueling this growth. Tools that operate at mass scale are critical to correctly identifying creators and ensuring they are properly compensated for their creations.
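Pex’s ACR and MRT systems are proprietary, but the underlying fingerprint-and-match idea is the same one behind the open-source Chromaprint/AcoustID stack. A minimal sketch using the pyacoustid library; the API key and file name are placeholders:

```python
import acoustid

API_KEY = "..."  # placeholder AcoustID application key

# condense the audio into a compact Chromaprint fingerprint
duration, fingerprint = acoustid.fingerprint_file("unknown_track.mp3")
print(f"{duration:.0f}s clip, {len(fingerprint)}-byte fingerprint")

# match() fingerprints the file and looks it up in the AcoustID database
for score, recording_id, title, artist in acoustid.match(API_KEY, "unknown_track.mp3"):
    print(f"{score:.2f}  {artist} - {title}  ({recording_id})")
```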

Romain Simiand, chief product officer, IRCAM Amplify: Most AI detection tools provide only one side of the coin. For example, tools such as aivoicedetector.com are primarily meant to detect speech deepfakes. IRCAM Amplify focuses primarily on the widely used prompt-based tools. Yet, because we know this approach is not bulletproof, we are currently supercharging our product to highlight voice clones and identify per-stem AI-generated content. Another interesting contender is resemble.ai; its approach seems similar, but the methodology it describes diverges greatly.

Finally, we have pex.com, which focuses on voice identification. I haven’t tested the tool, but this approach seems to require the original catalog to be made available, which is a potential problem.

AI recognition tools like the AI Generated Detector released by IRCAM Amplify and the others mentioned above help with the fair use and distribution of AI-generated content.

We think AI can be a creativity booster in the music sector, but it is just as important to be able to [automatically] recognize tracks that have been generated with AI and to identify deepfakes — videos and audio that are typically used maliciously or to spread false information.
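None of the vendors above disclose their detection methods, but the general shape of such a system can be sketched: summarize each clip as audio features, then train a binary classifier on labeled human-made and AI-generated examples. A toy illustration with librosa and scikit-learn; file names and labels are placeholders, and a real detector would need a large labeled corpus:

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path):
    """Summarize a clip as mean MFCCs, a standard timbre descriptor."""
    y, sr = librosa.load(path, duration=30.0)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)

# placeholder labeled clips: 1 = AI-generated, 0 = human-made
paths = ["ai_01.wav", "ai_02.wav", "human_01.wav", "human_02.wav"]
labels = [1, 1, 0, 0]

X = np.stack([clip_features(p) for p in paths])
clf = RandomForestClassifier(random_state=0).fit(X, labels)

# estimated probability that a new clip is AI-generated
new_x = clip_features("new_clip.wav").reshape(1, -1)
print(clf.predict_proba(new_x)[0, 1])
```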

In the United States, what laws are currently being proposed to protect artists from AI vocal deepfakes?

Morna Willens, chief policy officer, RIAA: Policymakers in the U.S. have been focused on guardrails for artificial intelligence that promote innovation while protecting all of us from unconsented use of our images and voices to create invasive deepfakes and voice clones. Across legislative efforts, First Amendment speech protections are expressly covered and provisions are in place to help remove damaging AI content that would violate these laws.

On the federal level, Reps. María Elvira Salazar (R-FL), Madeleine Dean (D-PA), Nathaniel Moran (R-TX), Joe Morelle (D-NY) and Rob Wittman (R-VA) introduced the No Artificial Intelligence Fake Replicas and Unauthorized Duplications (No AI FRAUD) Act to create a national framework that would safeguard Americans from having their voice and likeness used in nonconsensual AI-generated imitations.

Sens. Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN) and Thom Tillis (R-NC) released a discussion draft of a bill called the Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act with similar aims of protecting individuals from AI deepfakes and voice clones. The bill has not yet been formally introduced, but we’re hopeful that the final version will provide strong and comprehensive protections against exploitative AI content.

Most recently, Sens. Blackburn, Maria Cantwell (D-WA) and Martin Heinrich (D-NM) introduced the Content Origin Protection and Integrity From Edited and Deepfaked Media (COPIED) Act, offering federal transparency guidelines for authenticating and detecting AI-generated content while also holding violators accountable for harmful deepfakes.

In the states, existing “right of publicity” laws address some of the harms caused by unconsented deepfakes and voice clones, and policymakers are working to strengthen and update them. The landmark Ensuring Likeness Voice and Image Security (ELVIS) Act made Tennessee the first state to update its laws to address the threats posed by unconsented AI deepfakes and voice clones. Many states are similarly considering updates to local laws for the AI era.

RIAA has worked on behalf of artists, rights holders and the creative community to educate policymakers on the impact of AI — both its challenges and its opportunities. These efforts are a promising start, and we’ll continue to advocate for artists and the entire music ecosystem as technologies develop and new issues emerge.

What legal consequences could a user face for releasing a song that deepfakes another artist’s voice? Could that user be shielded from liability if the song is clearly meant to be parody?

Joseph Fishman, music law professor, Vanderbilt University: The most important area of law that the user would need to worry about is publicity rights, also known as name/image/likeness laws, or NIL. For now, the scope of publicity rights varies state by state, though Congress is working on enacting an additional federal version whose details are still up for grabs. Several states include voice as a protected aspect of the rights holder’s identity. Some companies in the past have gotten in legal trouble for mimicking a celebrity’s voice, but so far those cases have involved commercial advertisements. Whether one could get in similar trouble simply for using vocal mimicry in a new song, outside of the commercial context, is a different and largely untested question. This year, Tennessee became the first state to expand its publicity rights statute to cover that scenario expressly, and other jurisdictions may soon follow. We still don’t know whether that expansion would survive a First Amendment challenge.

If the song is an obvious parody, the user should be on safer ground. There’s pretty widespread agreement that using someone’s likeness for parody or other forms of criticism is protected speech under the First Amendment. Some state publicity rights statutes even include specific parody exemptions.
