A wide coalition of music industry organizations has joined together to release a series of core principles regarding artificial intelligence — the first collective stance the entertainment business has taken on the topic. Announced during the panel “Welcome to the Machine: Art in the Age of A.I.,” held on Thursday (March 16) at South by Southwest (SXSW) and moderated by Billboard deputy editorial director Robert Levine, the principles reveal a growing sense of urgency among entertainment industry leaders to address the quickly evolving issue.

“Over the past few months, I think [generative artificial intelligence] has gone from a ‘someday’ issue to a today issue,” said Levine. “It’s coming much quicker than anyone thought.”

In response to the fast-approaching collision of generative AI and the entertainment business, the principles detail the need for using the new technology to “empower human expression” while also asserting the importance of representing “creators’ interests…in policymaking” regarding the technology. Principles geared toward the latter include ensuring that AI developers acquire licenses for artistic works used in the “development and training of AI models” — and keep records of which works are used — and that governments refrain from creating “copyright or other IP exemptions” for the technology.

Among the 40 different groups that have joined the coalition — dubbed the Human Artistry Campaign — are music industry leaders including the Recording Industry Association of America (RIAA), National Music Publishers’ Association (NMPA), American Association of Independent Music (A2IM), SoundExchange, ASCAP, BMI and more.

Read the full list of principles below and get more information, including the full list of groups involved in the effort, here.

Core Principles for Artificial Intelligence Applications in Support of Human Creativity and Accomplishments:

Technology has long empowered human expression, and AI will be no different.

For generations, various technologies have been used successfully to support human creativity. Take music, for example… From piano rolls to amplification to guitar pedals to synthesizers to drum machines to digital audio workstations, beat libraries and stems and beyond, musical creators have long used technology to express their visions through different voices, instruments, and devices. AI already is and will increasingly play that role as a tool to assist the creative process, allowing for a wider range of people to express themselves creatively.

Moreover, AI has many valuable uses outside of the creative process itself, including those that amplify fan connections, hone personalized recommendations, identify content quickly and accurately, assist with scheduling, automate and enhance efficient payment systems – and more. We embrace these technological advances.

Human-created works will continue to play an essential role in our lives.

Creative works shape our identity, values, and worldview. People relate most deeply to works that embody the lived experience, perceptions, and attitudes of others. Only humans can create and fully realize works written, recorded, created, or performed with such specific meaning. Art cannot exist independent of human culture.

Use of copyrighted works, and use of the voices and likenesses of professional performers, requires authorization, licensing, and compliance with all relevant state and federal laws.

We fully recognize the immense potential of AI to push the boundaries for knowledge and scientific progress. However, as with predecessor technologies, the use of copyrighted works requires permission from the copyright owner. AI must be subject to free-market licensing for the use of works in the development and training of AI models. Creators and copyright owners must retain exclusive control over determining how their content is used. AI developers must ensure any content used for training purposes is approved and licensed from the copyright owner, including content previously used by any pre-trained AIs they may adopt. Additionally, performers’ and athletes’ voices and likenesses must only be used with their consent and fair market compensation for specific uses.

Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.

AI must not receive exemptions from copyright law or other intellectual property laws and must comply with core principles of fair market competition and compensation. Creating special shortcuts or legal loopholes for AI would harm creative livelihoods, damage creators’ brands, and limit incentives to create and invest in new works.

Copyright should only protect the unique value of human intellectual creativity.

Copyright protection exists to help incentivize and reward human creativity, skill, labor, and judgment, not output solely created and generated by machines. Human creators, whether they use traditional tools or express their creativity using computers, are the foundation of the creative industries, and we must ensure that human creators are paid for their work.

Trustworthiness and transparency are essential to the success of AI and protection of creators.

Complete recordkeeping of copyrighted works, performances, and likenesses, including the way in which they were used to develop and train any AI system, is essential. Algorithmic transparency and clear identification of a work’s provenance are foundational to AI trustworthiness. Stakeholders should work collaboratively to develop standards for technologies that identify the input used to create AI-generated output. In addition to obtaining appropriate licenses, content generated solely by AI should be labeled describing all inputs and methodology used to create it — informing consumer choices, and protecting creators and rightsholders.

Creators’ interests must be represented in policymaking.

Policymakers must consider the interests of human creators when crafting policy around AI. Creators live on the forefront of, and are building and inspiring, evolutions in technology, and as such they need a seat at the table in any conversation regarding legislation, regulation, or government priorities around AI that would impact their creativity and the way it affects their industry and livelihood.

A new policy report from the U.S. Copyright Office says that songs and other artistic works created with the assistance of artificial intelligence can sometimes be eligible for copyright registration, but only if the ultimate author remains a human being.

The report, released by the federal agency on Wednesday (March 15), comes amid growing interest in the future role that so-called generative AI tools, similar to the much-discussed ChatGPT, could play in the creation of music.

Copyright protection is strictly limited to content created by humans, leading to heated debate over the status of AI-generated works. In a closely watched case last month, the Copyright Office decided that a graphic novel featuring AI-generated images was eligible for protection, but that the individual images themselves couldn’t be protected.

In Wednesday’s report, the agency said that the use of AI tools did not automatically bar copyright registration, but that such use would be closely scrutinized and could not play a dominant role in the creative process.

“If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it,” the agency wrote. “For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the traditional elements of authorship are determined and executed by the technology — not the human user.”

The report listed examples of AI-aided works that might still be worthy of protection, like one that creatively combined AI-generated elements into something new, or an AI-generated work that an artist then heavily modified after the fact. And it stressed that other technological tools were still fair game.

“A visual artist who uses Adobe Photoshop to edit an image remains the author of the modified image, and a musical artist may use effects such as guitar pedals when creating a sound recording,” the report said. “In each case, what matters is the extent to which the human had creative control over the work’s expression and ‘actually formed’ the traditional elements of authorship.”

Under the rules laid out in the report, the Copyright Office said that anyone submitting such works must disclose which elements were created by AI and which were created by a human. The agency said that any AI-inclusive work that was previously registered without such a disclosure must be updated — and that failure to do so could result in the cancellation of the copyright registration.

Though aimed at providing guidance, Wednesday’s report avoided hard-and-fast rules. It stressed that analyzing copyright protection for AI-assisted works would be “necessarily a case-by-case inquiry,” and that the final outcome would always depend on individual circumstances, including “how the AI tool operates” and “how it was used to create the final work.”

And the report didn’t even touch on a potentially thornier legal question: whether the creators of AI platforms infringe the copyrights of the vast number of earlier works that are used to “train” the platforms to spit out new works. In October, the Recording Industry Association of America (RIAA) warned that such providers were violating copyrights en masse by using existing music to train their machines.

“To the extent these services, or their partners, are training their AI models using our members’ music, that use is unauthorized and infringes our members’ rights by making unauthorized copies of our members’ works,” the RIAA said at the time.

Though Wednesday’s report did not offer guidance on that question, the Copyright Office said it had plans to weigh in soon.

“[The Office] has launched an agency-wide initiative to delve into a wide range of these issues,” the agency wrote. “Among other things, the Office intends to publish a notice of inquiry later this year seeking public input on additional legal and policy topics, including how the law should apply to the use of copyrighted works in AI training and the resulting treatment of outputs.”

Read the entire report here.

In the recent article “What Happens To Songwriters When AI Can Generate Music,” Alex Mitchell offers a rosy view of a future of AI-composed music coexisting in perfect barbershop harmony with human creators — but there is a conflict of interest here, as Mitchell is the CEO of an app that does precisely that. It’s almost like cigarette companies in the 1920s saying cigarettes are good for you.

Yes, the honeymoon of new possibilities is sexy, but let’s not pretend this is benefiting the human artist as much as corporate clients who’d rather pull a slot machine lever to generate a jingle than hire a human.

While I agree there are parallels between the invention of the synthesizer and AI, there are stark differences, too. The debut of the theremin — the first electronic instrument — playing the part of a lead violin in an orchestra was scandalous and fear-evoking. Audiences hated the lack of nuance in its sinusoidal wave, and some claimed it was “the end of music.” That seems ludicrous and pearl-clutching now, and I worship the chapter of electrified instruments that followed (thank you, Sister Rosetta Tharpe and Chuck Berry), but in a way, they were right. It was the closing of a chapter, and the birth of something new.

Is new always better, though? Or is there a sweet spot ratio of machine to human? I often wonder this sitting in my half analog, half digital studio, as the stakes get ever higher from flirting with the event horizon of technology.

In this same article, Diaa El All (the CEO of another AI music-generation app) claims that drummers were pointlessly scared of the drum machine and sample banks replacing their jobs because it’s all just another fabulous tool. (Guess he hasn’t been to many shows where singers perform with just a laptop.) Since I have spent an indecent portion of my modeling money collecting vintage drum machines (cuz yes, they’re fabulous), I can attest that I do indeed hire fewer drummers. In fact, since I started using sample libraries, I hire fewer musicians altogether. While this is a great convenience for me, the average upright bassist who used to be able to support his family with his trade now has to remain childless or take two other jobs.

Should we halt progress to maintain a placebo usefulness for obsolete craftsmen? No, change and competition are good, if not inevitable. But let’s not be naive about the casualties.

The gun and the samurai come to mind. For centuries, samurai were part of an elite warrior class who rigorously trained in kendo (the way of the sword) and bushido (a moral code of honor and indifference to pain) from childhood. As a result, winning wars was a meritocracy of skill and strategy. Then a Chinese ship carrying Portuguese sailors showed up with guns.

When the feudal lord Oda Nobunaga saw the potential in these contraptions, he ordered hundreds be made for his troops. Suddenly a farm boy with no skill could take down an archer or swordsman who had trained for years. Once coordinated marching and reloading formations were developed, it was an entirely new power dynamic.

During the economic crunch of the Napoleonic Wars, a similar tidal shift occurred. Automated textile equipment allowed factory owners to replace loyal employees with machines and fewer, cheaper, less skilled workers to oversee them. The resulting jobless destitution sparked a region-wide rebellion of weavers and Luddites burning mills, stocking frames and lace-making machines, until the army executed them and held show trials to deter others from acts of “industrial sabotage.”

The poet Lord Byron opposed this new legislation, which made machine-breaking a capital crime — ironic considering his daughter, Ada Lovelace, would go on to pioneer computing alongside Charles Babbage. Oh, the tangled neural networks we weave.

Look what Netflix did to Blockbuster rentals. Or what Napster did to the recording artist. Even what the democratization of homemade porn streaming did to the porn industry. More recently, video games have usurped films. You cannot add something to an ecosystem without subtracting something else. It would be like smartphone companies telling fax machine manufacturers not to worry. Only this time, the fax machines are humans.

Later in the article, Mac Boucher (creative technologist and co-creator of non-fungible token project WarNymph along with his sister Grimes) adds another glowing review of bot- and button-based composition: “We will all become creators now.”

If everyone is a creator, is anyone really a creator?

An eerie vision comes to mind of a million TikTokers dressed as opera singers on stage, standing on the blueish corpses of an orchestra pit, singing over each other in a vainglorious cacophony, while not a single person sits in the audience. Just rows of empty seats reverberating the pink noise of digital narcissism back at them. Silent disco meets the Star Gate sequence’s death choir stack.

While this might sound like the bitter gatekeeping of a tape machine purist (only slightly), now might be a good time to admit that I was part of one of the early projects to incorporate AI-generated lyrics and imagery. My band, Uni and The Urchins, has a morbid fascination with futurism and the wild west of Web 3.0. Who doesn’t love robots?

But I do think that, in order to make art, the “obstacles” actually served as a filtration device. Think Campbell’s hero’s journey: the learning curve of mastering an instrument, the physical adventure of discovering new music at a record shop or befriending the cool older guy to get his Sharpie-graffitied mix CD, saving up to buy your first guitar, enduring ridicule, the irrational desire to pursue music against the odds. (James Brown didn’t own a pair of shoes until he was 8 years old, and now he is canonized as King.)

Meanwhile, in 2022, surveys show that many kids feel valueless unless they’re an influencer or “artist,” and content creation over craft has become criminally easy, flooding the markets with more karaoke, pantomime and metric-based mush rooted in no authentic movement. (I guess Twee capitalist-core is a culture, but not compared to the Vietnam War, slavery, the space race, the invention of LSD, the discovery of the subconscious, Indian gurus, the sexual revolution or the ’90s heroin epidemic all inspiring new genres.)

Not to sound like Ted Kaczynski’s manifesto, but technology is increasingly the hand inside the sock puppet, not the other way around.

Do I think AI will replace a lot of jobs? Yes, though not immediately; it’s still crude. Do I think this upending is a net loss? In the long term, no; it could incentivize us to invent entirely new skills to front-run it. (Remember when “learn to code” was an offensive meme?) In fact, I’m very eager to see how we co-evolve or eventually merge into a transhuman cyber Seraphim, once Artificial General Intelligence goes quantum.

But this will be a Faustian trade, have no illusions.

Charlotte Kemp Muhl is the bassist for NYC art-rock band UNI and The Urchins. She has directed all of UNI and The Urchins’ videos and mini-films and engineered, mixed and mastered their upcoming debut album Simulator (out Jan. 13, 2023, on Chimera Music) herself. UNI and The Urchins’ AI-written song/AI-made video for “Simulator” is out now.

If you think 100,000 songs a day going into the market is a big number, “you have no idea what’s coming next,” says Alex Mitchell, founder/CEO of Boomy, a music creation platform that can compose an instrumental at the click of an icon.

Boomy is one of many so-called “generative artificial intelligence” music companies — others include Soundful, BandLab’s SongStarter and Authentic Artists — founded to democratize songwriting and production even more than the synthesizer did in the 1970s, the drum machine in the ’80s and ’90s, digital audio workstations in the 2000s and sample and beat libraries in the 2010s.

In each of those cases, however, trained musicians were required to operate this technology in order to produce songs. The selling point of generative AI is that no musical knowledge or training is necessary. Anyone can potentially create a hit song with the help of computers that evolve with each artificially produced guitar lick or drumbeat.

Not surprisingly, the technology breakthrough has also generated considerable anxiety among professional musicians, producers, engineers and others in the recorded-music industry who worry that their livelihoods could potentially be threatened.

“In our pursuit of the next best technology, we don’t think enough about the impact [generative AI] could have on real people,” says Abe Batshon, CEO of BeatStars, a subscription-based platform that licenses beats. “Are we really helping musicians create, or are we just cutting out jobs for producers?”

Not so, say the entrepreneurs who work in the emerging business. From their perspective, generative AI tools are simply the next step in technology’s long legacy of shaping the way music is created and recorded.

“When the drum machine came out, drummers were scared it would take their jobs away,” says Diaa El All, founder/CEO of Soundful, another AI music-generation application that was tested by hit-makers such as Caroline Pennell, Madison Love and Matthew Koma at a recent songwriting camp in Los Angeles. “But then they saw what Prince and others were able to create with it.”

El All says the music that Soundful can instantly generate, based on user-set parameters, like beats per minute or genre, is simply meant to be a “jumping-off point” for writers to build songs. “The human element,” he says, “will never be replaced.”

BandLab CEO Meng Ru Kuok says that having tools to spark song creation makes a huge difference for young music-makers, who, so far, seem to be the biggest adopters of this technology. Meng claims his AI-powered SongStarter tool, which generates a simple musical loop over which creators can fashion a song, makes new BandLab users “80% more likely to actually share their music as opposed to writing from zero.” (Billboard and BandLab collaborated on Bringing BandLab to Billboard, a portal that highlights emerging artists.)

Other applications for generative AI include creating “entirely new formats for listening,” as Endel co-founder/CEO Oleg Stavitsky says. This includes personalized music for gaming, wellness and soundtracks. Lifescore modulates human-made scores in real time, which can reflect how a player is faring in a video game, for example; Endel generates soundscapes, based on user biometrics, to promote sleep, focus or other states (Lifescore also has a similar wellness application); and Tuney targets creators who need dynamic, personalized background music for videos or podcasts but do not have a budget for licensing.

These entrepreneurs contend that generative AI will empower the growth of the “creator economy,” which is already worth over $100 billion and counting, according to Influencer Marketing Hub. “We’re seeing the blur of the line between creator and consumer, audience and performer,” says Mitchell. “It’s a new creative class.”

In the future Mitchell and El All both seem to imagine, every person can have the ability to create songs, much like the average iPhone user already has the ability to capture high-quality photos or videos on the fly. It doesn’t mean everyone will be a professional, but it could become a similarly common pastime.

The public’s fascination with — and fear of — generative AI reached a new milestone this year with the introduction of DALL-E 2, a generator that instantaneously creates images based on text inputs and with a surprising level of precision.

Musician Holly Herndon, who has used AI tools in her songwriting and creative direction for years, says that in the next decade, it will be as easy to generate a great song as it is to generate an image. “The entertainment industries we are familiar with will change radically when media is so easy and abundant,” she says. “The impact is going to be dramatic and very alien to what we are used to.”

Mac Boucher, creative technologist and co-creator of non-fungible token project WarNymph along with his sister Grimes, agrees. “We will all become creators and be capable of creating anything.”

If these predictions are fulfilled, the music business, which is already grappling with oversaturation, will need to recalibrate. Instead of focusing on consumption and owning intellectual property, more companies may shift to artist services and the development of tools that aid song creation — similar to Downtown Music Holdings’ decision to sell off its 145,000-song catalog over the last two years and focus on serving the needs of independent talent.

Major music companies are also investing in and establishing relationships with AI startups. Hipgnosis, Reservoir, Concord and Primary Wave are among those that have worked with AI stem separation company Audioshake, while Warner Music Group has invested in Boomy, Authentic Artists and Lifescore.

The advancement of AI-generated music has understandably sparked a debate over its ethical and legal use. Currently, the U.S. Copyright Office will not register a work created solely by AI, but it will register works created with human input. However, what constitutes that input has yet to be clearly defined.

Answers to these questions are being worked out in court. In 2019, industry leader OpenAI issued a comment to the U.S. Patent and Trademark Office arguing that using copyrighted material to train an AI program should be considered fair use, although many copyright owners and some other AI companies disagree.

Now one of OpenAI’s projects, made in collaboration with Microsoft and GitHub, is battling a class-action suit over a similar issue. Copilot, an AI tool designed to generate computer code, has been accused of often replicating copyrighted code because it was trained on billions of lines of protected material written by human developers.

The executives interviewed for this story say they hire musicians to create training material for their programs and do not touch copyright-protected songs.

“I don’t think songwriters and producers are talking about [AI] enough,” says music attorney Karl Fowlkes. “This kind of feels like a dark, impending thing coming our way, and we need to sort out the legal questions.”

Fowlkes says the most important challenge to AI-generated music will come when these tools begin creating songs that emulate specific musicians, much like DALL-E 2 can generate images clearly inspired by copyrighted works from talents like Andy Warhol or Jean-Michel Basquiat.

Mitchell says that Boomy may cross that threshold in the next year. “I don’t think it would be crazy to say that, if we can line up the right framework to pay for the rights [to copyrighted music], to see something from us sooner than people might think on that front,” he says. “We’re looking at what it’s going to take to produce at the level of DALL-E 2 for music.”