State Champ Radio

by DJ Frosty


The American Society of Composers, Authors and Publishers (ASCAP) argued that AI companies need to license material from copyright owners to train their models and that “a new federal right of publicity… is necessary to address the unprecedented scale on which AI tools facilitate the improper use of a creator’s image, likeness, and voice” in a filing submitted to the Copyright Office on Wednesday (Dec. 6).
The Copyright Office announced that it was studying “the copyright issues raised by generative artificial intelligence” in August and solicited written comments from relevant players in the space. Initial written comments had to be submitted by October 30, while reply comments — which give organizations like ASCAP the chance to push back against assertions made by AI companies like Anthropic and OpenAI — were due December 6.

Generative AI models require training: They ingest large amounts of data to identify patterns. “AI training is a computational process of deconstructing existing works for the purpose of modeling mathematically how [they] work,” Google wrote in its reply comments for the Copyright Office. “By taking existing works apart, the algorithm develops a capacity to infer how new ones should be put together” — hence the “generative” part of this. 

ASCAP represents songwriters, composers, and music publishers, and its chief concern is that AI companies will be allowed to train models on its members’ works without coming to some sort of licensing arrangement ahead of time. “Numerous comments from AI industry members raise doubts about the technical or economic feasibility of licensing as a model for the authorized use of protected content,” ASCAP writes. “But armchair speculations about the efficiency of licensing do not justify a rampant disregard for creators’ rights.”

ASCAP adds that “numerous large-scale AI tools have already been developed exclusively on the basis of fully licensed or otherwise legally obtained materials” — pointing to Boomy, Stable Audio, Generative AI by Getty Images, and Adobe Firefly — “demonstrating that the development of generative AI technologies need not come at the expense of creators’ rights.”

ASCAP also calls for the implementation of a new federal right-of-publicity law, worried that voice-cloning technology, for example, can threaten artists’ livelihood. “Generative AI technology introduces unprecedented possibilities for the unauthorized use of a creator’s image, likeness, and voice,” ASCAP argues. “The existing patchwork of state laws were not written with this technology in view, and do not adequately protect creators.”

“Without allowing the artists and creators to control their voice and likeness,” ASCAP continues, “this technology will create both consumer confusion and serious financial harm to the original music creators.”

While Google usually takes a 15% cut of customer payments for app subscriptions in its Play Store on Android devices, Spotify obtained a deal that allowed it to pay a drastically reduced commission, according to The Verge.
The details of the business arrangement were divulged by Google head of global partnerships Don Harrison on Monday (Nov. 20) in testimony during the Epic Games vs. Google trial: Spotify paid no commission if users bought subscriptions through Spotify, and 4% if users selected Google as their payment processor.

Harrison said in court that Spotify landed this “bespoke” agreement because “if we don’t have Spotify working properly across Play services and core services, people will not buy Android phones.” His testimony also indicated that the deal entailed a $50 million investment by both Google and Spotify in a “success fund.”

In a statement to The Verge, a Google spokesperson said that “a small number of developers that invest more directly in Android and Play may have different service fees as part of a broader partnership that includes substantial financial investments and product integrations across different form factors. These key investment partnerships allow us to bring more users to Android and Play by continuously improving the experience for all users and create new opportunities for all developers.”

A rep for Spotify did not respond to Billboard’s request for comment.

Epic Games, which is known for furnishing the world with the popular game Fortnite, has been battling Google since way back in 2020 over the 30% fee the search giant charges app developers for purchases made on its Play Store on Android devices. Epic tried to circumvent Google’s system by putting its own payment system into the Fortnite app and charging a reduced price; Google hit back by yanking Fortnite off the Play Store.

Epic then sued Google. “Google… is using its size to do evil upon competitors, innovators, customers and users in a slew of markets it has grown to monopolize,” Epic wrote in its complaint. The New York Times reported that Epic’s CEO, Tim Sweeney, said in court on Monday (Nov. 20) that Google “exercises de facto control over the availability of apps on Android.”

Wilson White, a Google vp of public policy, told reporters earlier this month that “Epic wants all the benefits of Android and Google Play without having to pay for them,” according to The New York Times. “The lawsuit [from Epic] would upend a business model that has lowered prices and increased choices,” White argued.

Google had tried to avoid revealing the nature of its relationship with Spotify in court, The Verge reported earlier this month. “Disclosure of the Spotify deal would be very, very detrimental for the negotiation we’d be having with… other parties,” Google attorney Glenn Pomerantz told the judge overseeing the case.

YouTube will introduce the ability for labels and other music rights holders “to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice,” according to a blog post published on Tuesday (Nov. 14).

Access to the request system will initially be limited: “These removal requests will be available to labels or distributors who represent artists participating in YouTube’s early AI music experiments.” However, the blog post, written by vice presidents of product management Jennifer Flannery O’Connor and Emily Moxley, noted that YouTube will “continue to expand access to additional labels and distributors over the coming months.”

This marks the latest step by YouTube to try to assuage music industry fears about new AI-powered technologies — and also position itself as a leader in the space. 

In August, YouTube published its “principles for partnering with the music industry on AI technology.” Chief among them: “it must include appropriate protections and unlock opportunities for music partners who decide to participate,” wrote CEO Neal Mohan.

YouTube also partnered with a slew of artists from Universal Music Group on an “AI music incubator.” “Artists must play a central role in helping to shape the future of this technology,” the Colombian star Juanes said in a statement at the time. “I’m looking forward to working with Google and YouTube… to assure that AI develops responsibly as a tool to empower artists.”

In September, at the annual Made on YouTube event, the company announced a new suite of AI-powered video and audio tools for creators. Creators can type in an idea for a backdrop, for example, and a new feature dubbed “Dream Screen” will generate it for them. Similarly, AI can assist creators in finding the right songs for their videos.

In addition to giving labels the ability to request the takedown of unauthorized imitations, YouTube promised on Tuesday to roll out enhanced labels so that viewers know they are interacting with content that “is synthetic”: “We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools.” 

TikTok announced a similar feature in September. Of course, self-disclosure has its limits, especially since many creators reportedly already experiment with AI without admitting it.

According to YouTube, “creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties.”

A judge has overturned a $32.5 million judgment against Google in the long-running smart speaker patent battle between the tech giant and Sonos.
In an Oct. 6 decision, U.S. District Court Judge William Alsup ruled that the jury verdict from May that found Google had infringed one of Sonos’ smart speaker patents was invalid because the patents at issue in the case were “unenforceable.”

In a nutshell, Alsup claims that Sonos improperly linked its 2019 patent application, which was ultimately approved, with an earlier, rejected 2006 application for the same patents in an effort to show that its patents pre-dated Google’s products incorporating similar multi-room audio technology. The judge alleges the link is invalid because Sonos “deceptively” inserted new material into the 2019 application without alerting the patent examiner of the changes. He notes that when a continuation application for a patent — as was the case with the 2019 application, which was filed as a “continuation” of the one filed in 2006 — includes material not included in the original application, the two cannot rightly be connected.

“When new matter is added to a specification of a continuation application by way of amendment, the effective filing date should be the date of the amendment that added the new matter,” Alsup wrote. This effectively means that Sonos’ “priority date” for the patent would be August 2019, when the amended application was approved — not 2006.

Alsup additionally accuses Sonos of “an unreasonable, inexcusable, and prejudicial delay” in filing suit against Google. He states that in 2014, five years prior to Sonos’ 2019 patent application, Google had shared with Sonos “a plan for a product that would practice what would become [Sonos’] claimed invention” as part of an exploration of a potential collaboration. When that partnership failed to come to fruition, Alsup adds, Google began rolling out its own products that utilized the invention in 2015.

“Even so, Sonos waited until 2019 to pursue claims on the invention (and until 2020 to roll out the invention in its own product line),” he writes.

“This was not a case of an inventor leading the industry to something new,” Alsup continues. “This was a case of the industry leading with something new and, only then, an inventor coming out of the woodwork to say that he had come up with the idea first — wringing fresh claims to read on a competitor’s products from an ancient application.”

“Judge Alsup’s ruling invalidating the jury’s verdict is wrong on both the facts and law, and Sonos will appeal,” a Sonos spokesperson told Billboard in a statement. “The same is true of earlier rulings narrowing our case. While an unfortunate result, it does not change the fact that Google is a serial infringer of our patent portfolio, as the International Trade Commission has already ruled with respect to five other patents. In the end, we expect this to be a temporary setback in our efforts to hold Google financially accountable for misappropriating Sonos’s patented inventions.”

Google did not respond to a request for comment at publishing time.

Sonos first sued Google in January 2020, claiming the tech giant had infringed multiple patents for its smart speaker technology after gaining access to it through a 2013 partnership under which Sonos integrated Google Play Music into its products. Just two years after that partnership was reached, Sonos alleged that Google then “flooded the market” with cheaper competing products (under the now-defunct Chromecast Audio line) that willfully infringed its patented multi-room technology. Sonos additionally claimed that Google had since expanded its use of Sonos technology in more than a dozen other products, including the Google Home, Nest and Pixel lines.

The legal battle between the two tech companies has been protracted, with both sides going on the offensive at different points. In June 2020, Google filed suit against Sonos, alleging the smart speaker maker had actually infringed several of its own patents. Sonos subsequently filed two more lawsuits alleging that Google had infringed several additional patents it held.

Sonos filed one of those two cases with the U.S. International Trade Commission, which ruled in January 2022 that Google had infringed a total of five of Sonos’ audio technology patents and barred it from importing the infringing products from China. However, the commission also found that Google had successfully redesigned its products to avoid the Sonos patents and could continue selling those reworked versions in U.S. stores — an allowance Sonos had fought to prevent.

In August 2022, Google fired another volley with two additional lawsuits, claiming the smaller company used seven different patented Google technologies to instill the so-called “magic” in Sonos software.

The Warner Music Group announced former longtime Google executive Carletta Higginson as its new executive vp/chief digital officer today (Oct. 10). Higginson, who will join the company Oct. 16, replaces outgoing evp of business development/chief digital officer Oana Ruxandra, who announced her departure Oct. 4 after five years with the label group. Higginson is the […]

Universal Music Group is in the early stages of talks with Google about licensing artists’ voices for songs created by artificial intelligence, according to The Financial Times. Warner Music Group has also discussed this possibility, The Financial Times reported.

Advances in artificial-intelligence-driven technology have made it relatively easy for a producer sitting at home to create a song involving a convincing facsimile of a superstar’s voice — without that artist’s permission. Hip-hop super-fans have been using the technology to flesh out unfinished leaks of songs from their favorite rappers. 

One track in particular grabbed the industry’s attention in March: “Heart On My Sleeve,” which masqueraded as a new collaboration between Drake and the Weeknd. At the time, a Universal Music spokesperson issued a statement saying that “stakeholders in the music ecosystem” have to choose “which side of history… to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.” 

“In our conversations with the labels, we heard that the artists are really pissed about this stuff,” Geraldo Ramos, co-founder and CEO of the music technology company Moises, told Billboard recently. (Moises has developed its own AI-driven voice-cloning technology, along with the technology to detect whether a song clones someone else’s voice.) “How do you protect that artist if you’re a label?” added Matt Henninger, Moises’ vp of sales and business development.

The answer is probably licensing: Develop a system in which artists who are fine with having their voices cloned clear those rights — in exchange for some sort of compensation — while those acts who are uncomfortable with being replicated by technology can opt out. Just as there is a legal framework in place that allows producers to sample 1970s soul, for example, by clearing both the master and publishing rights, in theory there could be some sort of framework through which producers obtain permission to clone a superstar’s voice.

AI-driven technology could “enable fans to pay their heroes the ultimate compliment through a new level of user-driven content,” Warner CEO Robert Kyncl told financial analysts this week. (“There are some [artists] that may not like it,” he continued, “and that’s totally fine.”)

On the same investor call, Kyncl also singled out “one of the first official and professionally AI-generated songs featuring a deceased artist, which came through our ADA Latin division”: a new Pedro Capmany track featuring AI-generated vocals from his father Jose, who died in 2001. “After analyzing hundreds of hours of interviews, a cappellas, recorded songs, and live performances from Jose’s career, every nuance and pattern of his voice was modeled using AI and machine learning,” Kyncl explained.

After the music industry’s initial wave of alarm about AI, the conversation has shifted, according to Henninger. With widely accessible voice-cloning technology available, labels can’t really stop civilians from making fake songs accurately mimicking their artists’ vocals. But maybe there’s a way they can make money from all the replicants.

Henninger is starting to hear different questions around the music industry. “How can [AI] be additive?” he asks. “How can it help revenue? How can it build someone’s brand?”

Reps for Universal and Warner did not respond to requests for comment.

President Joe Biden said Friday that new commitments by Amazon, Google, Meta, Microsoft and other companies that are leading the development of artificial intelligence technology to meet a set of AI safeguards brokered by his White House are an important step toward managing the “enormous” promise and risks posed by the technology.

Biden announced that his administration has secured voluntary commitments from seven U.S. companies meant to ensure their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of commercial AI systems, though they don’t detail who will audit the technology or hold the companies accountable.

“We must be clear eyed and vigilant about the threats emerging technologies can pose,” Biden said, adding that the companies have a “fundamental obligation” to ensure their products are safe.

“Social media has shown us the harm that powerful technology can do without the right safeguards in place,” Biden added. “These commitments are a promising step, but we have a lot more work to do together.”

A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.

The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.

That testing will also examine the potential for societal harms, such as bias and discrimination, and more theoretical dangers about advanced AI systems that could gain control of physical systems or “self-replicate” by making copies of themselves.

The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish real images from AI-generated ones known as deepfakes.

They will also publicly report flaws and risks in their technology, including effects on fairness and bias, the White House said.

The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology. Company executives plan to gather with Biden at the White House on Friday as they pledge to follow the standards.

Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.

“A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” said Amba Kak, executive director of the AI Now Institute. “We need a much more wide-ranging public deliberation, and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models.”

Senate Majority Leader Chuck Schumer, D-N.Y., has said he will introduce legislation to regulate AI. He said in a statement that he will work closely with the Biden administration “and our bipartisan colleagues” to build upon the pledges made Friday.

A number of technology executives have called for regulation, and several went to the White House in May to speak with Biden, Vice President Kamala Harris and other officials.

Microsoft President Brad Smith said in a blog post Friday that his company is making some commitments that go beyond the White House pledge, including support for regulation that would create a “licensing regime for highly capable models.”

But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.

The White House pledge notes that the commitments mostly apply only to models that “are overall more powerful than the current industry frontier,” set by currently available models such as OpenAI’s GPT-4 and image generator DALL-E 2 and similar releases from Anthropic, Google and Amazon.

A number of countries have been looking at ways to regulate AI, including European Union lawmakers who have been negotiating sweeping AI rules for the 27-nation bloc that could restrict applications deemed to have the highest risks.

U.N. Secretary-General Antonio Guterres recently said the United Nations is “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year.

Guterres also said he welcomed calls from some countries for the creation of a new U.N. body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.

The White House said Friday that it has already consulted on the voluntary commitments with a number of countries.

The pledge is heavily focused on safety risks but doesn’t address other worries about the latest AI technology, including the effect on jobs and market competition, the environmental resources required to build the models, and copyright concerns about the writings, art and other human handiwork being used to teach AI systems how to produce human-like content.

Last week, OpenAI and The Associated Press announced a deal for the AI company to license AP’s archive of news stories. The amount it will pay for that content was not disclosed.

From ChatGPT writing code for software engineers to Bing’s search engine sliding in place of your bi-weekly Hinge binge, we’ve become obsessed with the capacity for artificial intelligence to replace us.

Within creative industries, this fixation manifests in generative AI. With models like DALL-E generating images from text prompts, the popularity of generative AI challenges how we understand the integrity of the creative process: When generative models are capable of materializing ideas, if not generating their own, where does that leave artists?

Google’s new text-based generative music AI, MusicLM, offers an interesting answer to this viral Terminator-meets-Ex Machina narrative. As a model that produces “high-fidelity music from text descriptions,” MusicLM embraces moments lost in translation that encourage creative exploration. It sets itself apart from other music generation models like Jukedeck and MuseNet by inviting users to verbalize their original ideas rather than toggle with existing music samples.

Describing how you feel is hard

AI in music is not new. But from recommending songs for Spotify’s Discover Weekly playlists to composing royalty-free music with Jukedeck, applications of AI in music have evaded the long-standing challenge of directly mapping words to music.

This is because, as a form of expression on its own, music resonates differently to each listener. The same way that different languages struggle to perfectly communicate nuances of respective cultures, it is difficult (if not impossible) to exhaustively capture all dimensions of music in words.

MusicLM takes on this challenge by generating audio clips from descriptions like “a calming violin melody backed by a distorted guitar riff,” even accounting for less tangible inputs like “hypnotic and trance-like.” It approaches this thorny question of music categorization with a refreshing sense of self-awareness. Rather than focusing on lofty notions of style, MusicLM grounds itself in more tangible attributes of music with tags such as “snappy” or “amateurish.” It broadly considers where an audio clip may come from (e.g., “YouTube tutorial”) and the general emotional responses it may conjure (e.g., “madly in love”), while integrating more widely accepted concepts of genre and compositional technique.

What you expect is (not) what you get

Piling onto this theoretical question of music classification is the more practical shortage of training data. Unlike its creative counterparts (e.g. DALL-E), there isn’t an abundance of text-to-audio captions readily available.

MusicLM was trained on ‘MusicCaps,’ a library of 5,521 music samples captioned by musicians. Bound by the very human limitation of capacity and the almost-philosophical matter of style, MusicCaps offers finite granularity in its semantic interpretation of musical characteristics. The result is occasional gaps between user inputs and generated outputs: the “happy, energetic” tune you asked for may not turn out as you expect.

However, when asked about this discrepancy, MusicLM researcher Chris Donahue and research software engineer Andrea Agostinelli celebrate the human element of the model. They describe primary applications such as “[exploring] ideas more efficiently [or overcoming] writer’s block,” quick to note that MusicLM does offer multiple interpretations of the same prompt — so if one generated track fails to meet your expectations, another might.

“This [disconnect] is a big research direction for us, there isn’t a single answer,” Andrea admits. Chris attributes this disconnect to the “abstract relationship between music and text,” insisting that “how we react to music is [even more] loosely defined.”

In a way, by fostering an exchange that welcomes moments lost in translation, MusicLM’s language-based structure positions the model as a sounding board: as you prompt the model with a vague idea, the approximations it generates help you figure out what you actually want to make.

Beauty is in breaking things

With their experience producing Chain Tripping (2019) — a Grammy-nominated album entirely made with MusicVAE (another music generative AI developed by Google) — the band YACHT chimes in on MusicLM’s future in music production. “As long as it can be broken apart a little bit and tinkered with, I think there’s great potential,” says frontwoman Claire L. Evans.

To YACHT, generative AI exists as a means to an end, rather than the end in itself. “You never make exactly what you set out to make,” says founding member Jona Bechtolt, describing the mechanics of a studio session. “It’s because there’s this imperfect conduit that is you,” Claire adds, attributing the alluring and evocative process of producing music to the serendipitous disconnect that occurs when artists put pen to paper.

The band describes how the misalignment of user inputs and generated work inspires creativity through iteration. “There is a discursive quality to [MusicLM]… it’s giving you feedback… I think it’s the surreal feeling of seeing something in the mirror, like a funhouse mirror,” says Claire. “A computer accent,” band member Rob Kieswetter jokes, referencing a documentary about the band’s experience making Chain Tripping.

However, in discussing the implications of this move to text-to-audio generation, Claire cautions against the rise of taxonomization in music: “imperfect semantic elements are great, it’s the precise ones that we should worry about… [labels] create boundaries to discovery and creation that don’t need to exist… everyone’s conditioned to think about music as this salad of hyper-specific genre references [that can be used] to conjure a new song.”

Nonetheless, both YACHT and the MusicLM team agree that MusicLM — as it currently is — holds promise. “Either way there’s going to be a whole new slew of artists fine-tuning this tool to their needs,” Rob contends.

Engineer Andrea recalls instances where creative tools weren’t popularized for their intended purposes: “the synthesizer eventually opened up a huge wave of new genres and ways of expression. [It unlocked] new ways to express music, even for people who are not ‘musicians.’” “Historically, it has been pretty difficult to predict how each piece of music technology will play out,” researcher Chris concludes.

Happy accidents, reinvention, and self-discovery

Back to the stubborn, unforgiving question: Will generative AI replace musicians? Perhaps not.

The relationship between artists and AI is not a linear one. While it’s appealing to prescribe an intricate and carefully intentional system of collaboration between artists and AI, as of right now, the process of using AI in producing art resembles more of a friendly game of trial and error.

In music, AI gives room for us to explore the latent spaces between what we describe and what we really mean. It materializes ideas in a way that helps shape creative direction. By outlining these acute moments lost in translation, tools like MusicLM set us up to produce what actually ends up making it to the stage… or your Discover Weekly.

Tiffany Ng is an art & tech writer based in NYC. Her work has been published in i-D Vice, Vogue, South China Morning Post, and Highsnobiety.

The U.S. Supreme Court said Monday that it would not take up a lawsuit claiming Google stole millions of song lyrics from the music database Genius.

Genius — a popular platform that lets users add and annotate lyrics — had asked the justices to revive allegations that Google improperly used the site’s carefully-transcribed content for its search results. The company argued that a ruling dismissing the case last year had been “unjust” and “absurd.”

But in an order dated Monday, the court denied Genius’s petition to hear the case, cementing Google’s victory. As is typical, the court did not issue a written ruling explaining the denial. Such petitions are always a long shot, as the Supreme Court takes less than 2% of the roughly 7,000 cases it receives each year.

Genius sued the tech giant in 2019, claiming Google had stolen the site’s carefully-transcribed content for its own “information boxes” that appear alongside search results — essentially free-riding on the “time, labor, systems and resources” that go into creating such a service. In a splashy twist, Genius said it had used a secret code buried within lyrics that spelled out REDHANDED to prove Google’s wrongdoing.

Though it sounds like a copyright case, Genius didn’t actually accuse Google of stealing any intellectual property. That’s because it doesn’t own any; songwriters and publishers own the rights to lyrics, and both Google and Genius pay for the same licenses to display them. Instead, Genius argued it had spent time and money transcribing and compiling “authoritative” versions of lyrics, and that Google had breached the site’s terms of service by “exploiting” them without permission.

In March 2022, that distinction proved fatal for Genius. The U.S. Court of Appeals for the Second Circuit dismissed the case, ruling that only the actual copyright owners — songwriters or publishers — could have filed such a case, not a site that merely transcribed the lyrics. In technical terms, the court said the case was “preempted” by federal copyright law, meaning that the accusations from Genius were so similar to a copyright claim that they could only have been filed that way.

In taking the case to the Supreme Court, Genius argued the ruling would be a disaster for websites that spend time and money to aggregate user-generated content online. Such companies should be allowed to protect that effort against clear copycats, the company said, even if they don’t hold the copyright. “Big-tech companies like Google don’t need any assists from an overly broad view of copyright preemption,” the company wrote.

But last month, the U.S. Solicitor General advised the Supreme Court to steer clear of the case. The government’s brief called Genius’s lawsuit a “poor vehicle” for reviewing the issues at stake and said the lower court did not appear to have done anything particularly novel when it dismissed the case against Google. Such recommendations are usually very influential on whether the justices decide to tackle a particular case.

Google has been ordered to pay Sonos $32.5 million for infringing one of its smart speaker patents, marking a significant development in a long-fought legal war between the two companies that’s spanned more than three years and multiple lawsuits.

Delivered in a San Francisco federal court on Friday (May 26), the jury’s verdict awarded Sonos $2.30 for each of the more than 14 million Google devices sold that incorporated the patented technology.

The jury found that Google had not infringed a second patent at issue in the case.

Sonos first sued Google in January 2020, claiming the tech giant had infringed multiple patents for its smart speaker technology after gaining access to it through a 2013 partnership under which Sonos integrated Google Play Music into its products. Just two years after that partnership was reached, Sonos alleged that Google then “flooded the market” with cheaper competing products (under the now-defunct Chromecast Audio line) that willfully infringed its patented multi-room technology. Sonos additionally claimed that Google had since expanded its use of Sonos technology in more than a dozen other products, including the Google Home, Nest and Pixel lines.

“We are deeply grateful for the jury’s time and diligence in upholding the validity of our patents and recognizing the value of Sonos’s invention of zone scenes,” said Sonos in a statement on the verdict. “This verdict re-affirms that Google is a serial infringer of our patent portfolio, as the International Trade Commission has already ruled with respect to five other Sonos patents. In all, we believe Google infringes more than 200 Sonos patents and today’s damages award, based on one important piece of our portfolio, demonstrates the exceptional value of our intellectual property. Our goal remains for Google to pay us a fair royalty for the Sonos inventions it has appropriated.”

In its own statement, a Google spokesperson said, “This is a narrow dispute about some very specific features that are not commonly used. Of the six patents Sonos originally asserted, only one was found to be infringed, and the rest were dismissed as invalid or not infringed. We have always developed technology independently and competed on the merit of our ideas. We are considering our next steps.”

The legal battle between the two tech companies has been protracted, with both sides going on the offensive at different points. In June 2020, Google filed suit against Sonos, alleging the smart speaker maker had actually infringed several of its own patents. Sonos subsequently filed two more lawsuits alleging that Google had infringed several additional patents it held.

Sonos filed one of those two cases with the U.S. International Trade Commission, which ruled in January 2022 that Google had infringed a total of five of Sonos’ audio technology patents and barred it from importing the infringing products from China. However, the commission also found that Google had successfully redesigned its products to avoid the Sonos patents and could continue selling those reworked versions in U.S. stores — an allowance Sonos had fought to prevent.

In August 2022, Google fired another volley with two additional lawsuits, claiming the smaller company used seven different patented Google technologies to instill the so-called “magic” in Sonos software.