A judge has overturned a $32.5 million judgment against Google in the tech giant’s long-running smart speaker patent battle with Sonos.
In an Oct. 6 decision, U.S. District Court Judge William Alsup ruled that the jury verdict from May that found Google had infringed one of Sonos’ smart speaker patents was invalid because the patents at issue in the case were “unenforceable.”
In a nutshell, Alsup claims that Sonos improperly linked its 2019 patent application, which was ultimately approved, with an earlier, rejected 2006 application for the same patents in an effort to show that its patents pre-dated Google’s products incorporating similar multi-room audio technology. The judge alleges the link is invalid because Sonos “deceptively” inserted new material into the 2019 application without alerting the patent examiner of the changes. He notes that when a continuation application for a patent — as was the case with the 2019 application, which was filed as a “continuation” of the one filed in 2006 — includes material not present in the original application, the two cannot rightly be connected.
“When new matter is added to a specification of a continuation application by way of amendment, the effective filing date should be the date of the amendment that added the new matter,” Alsup wrote. This effectively means that Sonos’ “priority date” for the patent would be August 2019, when the new matter was added by amendment — not 2006.
Alsup additionally accuses Sonos of “an unreasonable, inexcusable, and prejudicial delay” in filing suit against Google. He states that in 2014, five years prior to Sonos’ 2019 patent application, Google had shared with Sonos “a plan for a product that would practice what would become [Sonos’] claimed invention” as part of an exploration of a potential collaboration. When that partnership failed to come to fruition, Alsup adds, Google began rolling out its own products that utilized the invention in 2015.
“Even so, Sonos waited until 2019 to pursue claims on the invention (and until 2020 to roll out the invention in its own product line),” he writes.
“This was not a case of an inventor leading the industry to something new,” Alsup continues. “This was a case of the industry leading with something new and, only then, an inventor coming out of the woodwork to say that he had come up with the idea first — wringing fresh claims to read on a competitor’s products from an ancient application.”
“Judge Alsup’s ruling invalidating the jury’s verdict is wrong on both the facts and law, and Sonos will appeal,” a Sonos spokesperson told Billboard in a statement. “The same is true of earlier rulings narrowing our case. While an unfortunate result, it does not change the fact that Google is a serial infringer of our patent portfolio, as the International Trade Commission has already ruled with respect to five other patents. In the end, we expect this to be a temporary setback in our efforts to hold Google financially accountable for misappropriating Sonos’s patented inventions.”
Google did not respond to a request for comment at publishing time.
Sonos first sued Google in January 2020, claiming the tech giant had infringed multiple patents for its smart speaker technology after gaining access to it through a 2013 partnership under which Sonos integrated Google Play Music into its products. Just two years after that partnership was reached, Sonos alleged that Google then “flooded the market” with cheaper competing products (under the now-defunct Chromecast Audio line) that willfully infringed its patented multi-room technology. Sonos additionally claimed that Google had since expanded its use of Sonos technology in more than a dozen other products, including the Google Home, Nest and Pixel lines.
The legal battle between the two tech companies has been protracted, with both sides going on the offensive at different points. In June 2020, Google filed suit against Sonos, alleging the smart speaker maker had actually infringed several of its own patents. Sonos subsequently filed two more lawsuits alleging that Google had infringed several additional patents it held.
Sonos filed one of those two cases with the U.S. International Trade Commission, which ruled in January 2022 that Google had infringed a total of five of Sonos’ audio technology patents and barred it from importing the infringing products from China. However, the commission also found that Google had successfully redesigned its products to avoid the Sonos patents and could continue selling those reworked versions in U.S. stores — an allowance Sonos had fought to prevent.
In August 2022, Google fired another volley with two additional lawsuits, claiming the smaller company used seven different patented Google technologies to instill the so-called “magic” in Sonos software.
The Warner Music Group announced former longtime Google executive Carletta Higginson as its new executive vp/chief digital officer today (Oct. 10). Higginson, who will join the company Oct. 16, replaces outgoing evp of business development/chief digital officer Oana Ruxandra, who announced her departure Oct. 4 after five years with the label group. Higginson is the […]
Universal Music Group is in the early stages of talks with Google about licensing artists’ voices for songs created by artificial intelligence, according to The Financial Times. Warner Music Group has also discussed this possibility, The Financial Times reported.
Advances in artificial-intelligence-driven technology have made it relatively easy for a producer sitting at home to create a song involving a convincing facsimile of a superstar’s voice — without that artist’s permission. Hip-hop super-fans have been using the technology to flesh out unfinished leaks of songs from their favorite rappers.
One track in particular grabbed the industry’s attention in March: “Heart On My Sleeve,” which masqueraded as a new collaboration between Drake and the Weeknd. At the time, a Universal Music spokesperson issued a statement saying that “stakeholders in the music ecosystem” have to choose “which side of history… to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.”
“In our conversations with the labels, we heard that the artists are really pissed about this stuff,” Geraldo Ramos, co-founder and CEO of the music technology company Moises, told Billboard recently. (Moises has developed its own AI-driven voice-cloning technology, along with the technology to detect whether a song clones someone else’s voice.) “How do you protect that artist if you’re a label?” added Matt Henninger, Moises’ vp of sales and business development.
The answer is probably licensing: Develop a system in which artists who are fine with having their voices cloned clear those rights — in exchange for some sort of compensation — while those acts who are uncomfortable with being replicated by technology can opt out. Just as there is a legal framework in place that allows producers to sample 1970s soul, for example, by clearing both the master and publishing rights, in theory there could be some sort of framework through which producers obtain permission to clone a superstar’s voice.
AI-driven technology could “enable fans to pay their heroes the ultimate compliment through a new level of user-driven content,” Warner CEO Robert Kyncl told financial analysts this week. (“There are some [artists] that may not like it,” he continued, “and that’s totally fine.”)
On the same investor call, Kyncl also singled out “one of the first official and professionally AI-generated songs featuring a deceased artist, which came through our ADA Latin division”: a new Pedro Capmany track featuring AI-generated vocals from his father Jose, who died in 2001. “After analyzing hundreds of hours of interviews, a cappellas, recorded songs, and live performances from Jose’s career, every nuance and pattern of his voice was modeled using AI and machine learning,” Kyncl explained.
After the music industry’s initial wave of alarm about AI, the conversation has shifted, according to Henninger. With widely accessible voice-cloning technology available, labels can’t really stop civilians from making fake songs accurately mimicking their artists’ vocals. But maybe there’s a way they can make money from all the replicants.
Henninger is starting to hear different questions around the music industry. “How can [AI] be additive?” he asks. “How can it help revenue? How can it build someone’s brand?”
Reps for Universal and Warner did not respond to requests for comment.
President Joe Biden said Friday that new commitments by Amazon, Google, Meta, Microsoft and other companies that are leading the development of artificial intelligence technology to meet a set of AI safeguards brokered by his White House are an important step toward managing the “enormous” promise and risks posed by the technology.
Biden announced that his administration has secured voluntary commitments from seven U.S. companies meant to ensure their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of commercial AI systems, though they don’t detail who will audit the technology or hold the companies accountable.
“We must be clear eyed and vigilant about the threats emerging technologies can pose,” Biden said, adding that the companies have a “fundamental obligation” to ensure their products are safe.
“Social media has shown us the harm that powerful technology can do without the right safeguards in place,” Biden added. “These commitments are a promising step, but we have a lot more work to do together.”
A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.
The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.
That testing will also examine the potential for societal harms, such as bias and discrimination, and more theoretical dangers about advanced AI systems that could gain control of physical systems or “self-replicate” by making copies of themselves.
The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish real images from AI-generated ones known as deepfakes.
They will also publicly report flaws and risks in their technology, including effects on fairness and bias, the White House said.
The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology. Company executives plan to gather with Biden at the White House on Friday as they pledge to follow the standards.
Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.
“A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” said Amba Kak, executive director of the AI Now Institute. “We need a much more wide-ranging public deliberation, and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models.”
Senate Majority Leader Chuck Schumer, D-N.Y., has said he will introduce legislation to regulate AI. He said in a statement that he will work closely with the Biden administration “and our bipartisan colleagues” to build upon the pledges made Friday.
A number of technology executives have called for regulation, and several went to the White House in May to speak with Biden, Vice President Kamala Harris and other officials.
Microsoft President Brad Smith said in a blog post Friday that his company is making some commitments that go beyond the White House pledge, including support for regulation that would create a “licensing regime for highly capable models.”
But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.
The White House pledge notes that it applies mostly to models that “are overall more powerful than the current industry frontier,” a bar set by currently available models such as OpenAI’s GPT-4, the image generator DALL-E 2 and similar releases from Anthropic, Google and Amazon.
A number of countries have been looking at ways to regulate AI, including European Union lawmakers who have been negotiating sweeping AI rules for the 27-nation bloc that could restrict applications deemed to have the highest risks.
U.N. Secretary-General Antonio Guterres recently said the United Nations is “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year.
Guterres also said he welcomed calls from some countries for the creation of a new U.N. body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
The White House said Friday that it has already consulted on the voluntary commitments with a number of countries.
The pledge is heavily focused on safety risks but doesn’t address other worries about the latest AI technology, including the effect on jobs and market competition, the environmental resources required to build the models, and copyright concerns about the writings, art and other human handiwork being used to teach AI systems how to produce human-like content.
Last week, OpenAI and The Associated Press announced a deal for the AI company to license AP’s archive of news stories. The amount it will pay for that content was not disclosed.
From ChatGPT writing code for software engineers to Bing’s search engine sliding in place of your bi-weekly Hinge binge, we’ve become obsessed with the capacity for artificial intelligence to replace us.
Within creative industries, this fixation manifests in generative AI. With models like DALL-E generating images from text prompts, the popularity of generative AI challenges how we understand the integrity of the creative process: When generative models are capable of materializing ideas, if not generating their own, where does that leave artists?
Google’s new text-based generative music AI, MusicLM, offers an interesting answer to this viral Terminator-meets-Ex Machina narrative. As a model that produces “high-fidelity music from text descriptions,” MusicLM embraces moments lost in translation that encourage creative exploration. It sets itself apart from other music generation models like Jukedeck and MuseNet by inviting users to verbalize their original ideas rather than toggle with existing music samples.
Describing how you feel is hard
AI in music is not new. But from recommending songs for Spotify’s Discover Weekly playlists to composing royalty-free music with Jukedeck, applications of AI in music have evaded the long-standing challenge of directly mapping words to music.
This is because, as a form of expression in its own right, music resonates differently with each listener. Just as different languages struggle to perfectly communicate the nuances of their respective cultures, it is difficult (if not impossible) to exhaustively capture every dimension of music in words.
MusicLM takes on this challenge by generating audio clips from descriptions like “a calming violin melody backed by a distorted guitar riff,” even accounting for less tangible inputs like “hypnotic and trance-like.” It approaches this thorny question of music categorization with a refreshing sense of self-awareness. Rather than focusing on lofty notions of style, MusicLM grounds itself in more tangible attributes of music with tags such as “snappy” or “amateurish.” It broadly considers where an audio clip may come from (e.g., “YouTube tutorial”) and the general emotional responses it may conjure (e.g., “madly in love”), while integrating more widely accepted concepts of genre and compositional technique.
What you expect is (not) what you get
Piling onto this theoretical question of music classification is the more practical shortage of training data. Unlike its creative counterparts (e.g., DALL-E), there isn’t an abundance of text-to-audio captions readily available.
MusicLM was trained on “MusicCaps,” a library of 5,521 music samples captioned by musicians. Bound by the very human limitation of capacity and the almost-philosophical matter of style, MusicCaps offers finite granularity in its semantic interpretation of musical characteristics. The result is occasional gaps between user inputs and generated outputs: the “happy, energetic” tune you asked for may not turn out as you expect.
However, when asked about this discrepancy, MusicLM researcher Chris Donahue and research software engineer Andrea Agostinelli celebrate the human element of the model. They describe primary applications such as “[exploring] ideas more efficiently [or overcoming] writer’s block,” and are quick to note that MusicLM offers multiple interpretations of the same prompt — so if one generated track fails to meet your expectations, another might.
“This [disconnect] is a big research direction for us, there isn’t a single answer,” Andrea admits. Chris attributes this disconnect to the “abstract relationship between music and text,” insisting that “how we react to music is [even more] loosely defined.”
In a way — by fostering an exchange that welcomes moments lost in translation — MusicLM’s language-based structure positions the model as a sounding board: as you prompt the model with a vague idea, the approximations it generates help you figure out what you actually want to make.
Beauty is in breaking things
With their experience producing Chain Tripping (2019) — a Grammy-nominated album entirely made with MusicVAE (another music generative AI developed by Google) — the band YACHT chimes in on MusicLM’s future in music production. “As long as it can be broken apart a little bit and tinkered with, I think there’s great potential,” says frontwoman Claire L. Evans.
To YACHT, generative AI exists as a means to an end, rather than the end in itself. “You never make exactly what you set out to make,” says founding member Jona Bechtolt, describing the mechanics of a studio session. “It’s because there’s this imperfect conduit that is you,” Claire adds, attributing the alluring and evocative process of producing music to the serendipitous disconnect that occurs when artists put pen to paper.
The band describes how the misalignment of user inputs and generated work inspires creativity through iteration. “There is a discursive quality to [MusicLM]… it’s giving you feedback… I think it’s the surreal feeling of seeing something in the mirror, like a funhouse mirror,” says Claire. “A computer accent,” band member Rob Kieswetter jokes, referencing a documentary about the band’s experience making Chain Tripping.
However, in discussing the implications of this move to text-to-audio generation, Claire cautions against the rise of taxonomization in music: “imperfect semantic elements are great, it’s the precise ones that we should worry about… [labels] create boundaries to discovery and creation that don’t need to exist… everyone’s conditioned to think about music as this salad of hyper-specific genre references [that can be used] to conjure a new song.”
Nonetheless, both YACHT and the MusicLM team agree that MusicLM — as it currently is — holds promise. “Either way there’s going to be a whole new slew of artists fine-tuning this tool to their needs,” Rob contends.
Andrea recalls instances where creative tools weren’t popularized for their intended purposes: “the synthesizer eventually opened up a huge wave of new genres and ways of expression. [It unlocked] new ways to express music, even for people who are not ‘musicians.’” “Historically, it has been pretty difficult to predict how each piece of music technology will play out,” Chris concludes.
Happy accidents, reinvention, and self-discovery
Back to the stubborn, unforgiving question: Will generative AI replace musicians? Perhaps not.
The relationship between artists and AI is not a linear one. While it’s appealing to prescribe an intricate, carefully intentioned system of collaboration between artists and AI, right now the process of using AI to produce art resembles a friendly game of trial and error.
In music, AI gives us room to explore the latent spaces between what we describe and what we really mean. It materializes ideas in a way that helps shape creative direction. By outlining these acute moments lost in translation, tools like MusicLM set us up to produce what actually ends up making it to the stage… or your Discover Weekly.
Tiffany Ng is an art & tech writer based in NYC. Her work has been published in i-D Vice, Vogue, South China Morning Post, and Highsnobiety.
The U.S. Supreme Court said Monday that it would not take up a lawsuit claiming Google stole millions of song lyrics from the music database Genius.
Genius — a popular platform that lets users add and annotate lyrics — had asked the justices to revive allegations that Google improperly used the site’s carefully-transcribed content for its search results. The company argued that a ruling dismissing the case last year had been “unjust” and “absurd.”
But in an order dated Monday, the court denied Genius’s petition to hear the case, cementing Google’s victory. As is typical, the court did not issue a written ruling explaining the denial. Such petitions are always a long shot, as the Supreme Court takes less than 2% of the 7,000 cases it receives each year.
Genius sued the tech giant in 2019, claiming Google had stolen the site’s carefully-transcribed content for its own “information boxes” that appear alongside search results — essentially free-riding on the “time, labor, systems and resources” that go into creating such a service. In a splashy twist, Genius said it had used a secret code buried within lyrics that spelled out REDHANDED to prove Google’s wrongdoing.
Though it sounds like a copyright case, Genius didn’t actually accuse Google of stealing any intellectual property. That’s because it doesn’t own any; songwriters and publishers own the rights to lyrics, and both Google and Genius pay for the same licenses to display them. Instead, Genius argued it had spent time and money transcribing and compiling “authoritative” versions of lyrics, and that Google had breached the site’s terms of service by “exploiting” them without permission.
In March 2022, that distinction proved fatal for Genius. The U.S. Court of Appeals for the Second Circuit dismissed the case, ruling that only the actual copyright owners — songwriters or publishers — could have filed such a case, not a site that merely transcribed the lyrics. In technical terms, the court said the case was “preempted” by federal copyright law, meaning that the accusations from Genius were so similar to a copyright claim that they could only have been filed that way.
In taking the case to the Supreme Court, Genius argued the ruling would be a disaster for websites that spend time and money to aggregate user-generated content online. Such companies should be allowed to protect that effort against clear copycats, the company said, even if they don’t hold the copyright. “Big-tech companies like Google don’t need any assists from an overly broad view of copyright preemption,” the company wrote.
But last month, the U.S. Solicitor General advised the Supreme Court to steer clear of the case. It said Genius’s lawsuit was a “poor vehicle” for reviewing the issues in the case, and that the lower court did not appear to have done anything particularly novel when it dismissed the case against Google. Such recommendations are usually very influential on whether the justices decide to tackle a particular case.
Google has been ordered to pay Sonos $32.5 million for infringing one of its smart speaker patents, marking a significant development in a long-fought legal war between the two companies that’s spanned more than three years and multiple lawsuits.
In a verdict returned in San Francisco federal court on Friday (May 26), the jury awarded Sonos $2.30 for each of the more than 14 million Google devices sold incorporating the patented technology.
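As a back-of-the-envelope check, the reported per-device royalty and total award are consistent with each other; the short, purely illustrative sketch below (using only the figures reported in the verdict) shows the award implies a device count just over 14 million:

```python
# Back-of-the-envelope check of the reported damages figures.
ROYALTY_PER_DEVICE = 2.30    # dollars per infringing device, per the verdict
TOTAL_AWARD = 32_500_000     # total damages award in dollars

# Dividing the total award by the per-device royalty gives the implied
# number of infringing devices sold.
implied_devices = TOTAL_AWARD / ROYALTY_PER_DEVICE
print(f"Implied device count: {implied_devices:,.0f}")  # roughly 14.1 million
```

The result lands just above the "more than 14 million" device figure cited in the verdict.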
The jury found that Google had not infringed a second patent at issue in the case.
Sonos first sued Google in January 2020, claiming the tech giant had infringed multiple patents for its smart speaker technology after gaining access to it through a 2013 partnership under which Sonos integrated Google Play Music into its products. Just two years after that partnership was reached, Sonos alleged that Google then “flooded the market” with cheaper competing products (under the now-defunct Chromecast Audio line) that willfully infringed its patented multi-room technology. Sonos additionally claimed that Google had since expanded its use of Sonos technology in more than a dozen other products, including the Google Home, Nest and Pixel lines.
“We are deeply grateful for the jury’s time and diligence in upholding the validity of our patents and recognizing the value of Sonos’s invention of zone scenes,” said Sonos in a statement on the verdict. “This verdict re-affirms that Google is a serial infringer of our patent portfolio, as the International Trade Commission has already ruled with respect to five other Sonos patents. In all, we believe Google infringes more than 200 Sonos patents and today’s damages award, based on one important piece of our portfolio, demonstrates the exceptional value of our intellectual property. Our goal remains for Google to pay us a fair royalty for the Sonos inventions it has appropriated.”
In its own statement, a Google spokesperson said, “This is a narrow dispute about some very specific features that are not commonly used. Of the six patents Sonos originally asserted, only one was found to be infringed, and the rest were dismissed as invalid or not infringed. We have always developed technology independently and competed on the merit of our ideas. We are considering our next steps.”
The legal battle between the two tech companies has been protracted, with both sides going on the offensive at different points. In June 2020, Google filed suit against Sonos, alleging the smart speaker maker had actually infringed several of its own patents. Sonos subsequently filed two more lawsuits alleging that Google had infringed several additional patents it held.
Sonos filed one of those two cases with the U.S. International Trade Commission, which ruled in January 2022 that Google had infringed a total of five of Sonos’ audio technology patents and barred it from importing the infringing products from China. However, the commission also found that Google had successfully redesigned its products to avoid the Sonos patents and could continue selling those reworked versions in U.S. stores — an allowance Sonos had fought to prevent.
In August 2022, Google fired another volley with two additional lawsuits, claiming the smaller company used seven different patented Google technologies to instill the so-called “magic” in Sonos software.
The U.S. Department of Justice is urging the U.S. Supreme Court to avoid a case alleging Google stole millions of song lyrics from the music database Genius, calling it a “poor vehicle” for a high court showdown.
Genius — a platform that lets users add and annotate lyrics — wants the justices to revive its lawsuit, which claims that Google improperly used the site’s carefully-transcribed content for its search results, after the case was dismissed by a lower court last year.
But in a brief filed Tuesday (May 23), the U.S. Solicitor General told the Supreme Court to steer clear. It said the case was a “poor vehicle” for reviewing the issues in the case, and that the lower court did not appear to have done anything particularly novel when it dismissed the case against Google.
“In the view of the United States, the petition for a writ of certiorari should be denied,” the government wrote.
Genius sued the tech giant in 2019, claiming Google had stolen the site’s carefully-transcribed content for its own “information boxes” in search results, essentially free-riding on the “time, labor, systems and resources” that go into creating such a service. In a splashy twist, Genius said it had used a secret code buried within lyrics that spelled out REDHANDED to prove Google’s wrongdoing.
Though it sounds like a copyright case, Genius didn’t actually accuse Google of stealing any intellectual property. That’s because it doesn’t own any; songwriters and publishers own the rights to lyrics, and both Google and Genius pay for the same licenses to display them. Instead, Genius argued it had spent time and money transcribing and compiling “authoritative” versions of lyrics, and that Google had breached the site’s terms of service by “exploiting” them without permission.
But in March, that distinction proved fatal for Genius. The U.S. Court of Appeals for the Second Circuit dismissed the case, ruling that only the actual copyright owners — songwriters or publishers — could have filed such a case, not a site that merely transcribed the lyrics. In technical terms, the court said the case was “preempted” by federal copyright law, meaning that the accusations from Genius were so similar to a copyright claim that they could only have been filed that way.
In taking the case to the Supreme Court, Genius argued the ruling would be a disaster for websites that spend time and money to aggregate user-generated content online. Such companies should be allowed to protect that effort against clear copycats, the company said, even if they don’t hold the copyright. “Big-tech companies like Google don’t need any assists from an overly broad view of copyright preemption,” the company wrote.
Such petitions are always a long shot since the Supreme Court takes less than 2% of the 7,000 cases it receives each year. But in December, the justices asked the DOJ to weigh in on whether it should take the Genius case.
In Tuesday’s filing, the DOJ said firmly that it should not — arguing, among other things, that the lower court’s ruling for Google had been largely correct. Though the agency had quibbles with some of the lower court’s analysis, it said Genius was essentially using contract law to claim the same rights as a copyright owner — the exact scenario in which such claims can be “preempted” by federal law.
“In substance, petitioner asserts a right to prevent commercial copying of its lyric transcriptions by all persons who gain access to them, without regard to any express manifestation of consent by website visitors,” the agency wrote.
The Supreme Court will now decide whether or not to hear the case; a decision on that question should arrive in the next several months. A spokesperson for Genius did not immediately return a request for comment on the DOJ’s filing.
Sounding alarms about artificial intelligence has become a popular pastime in the ChatGPT era, taken up by high-profile figures as varied as industrialist Elon Musk, leftist intellectual Noam Chomsky and the 99-year-old retired statesman Henry Kissinger.
But it’s the concerns of insiders in the AI research community that are attracting particular attention. Geoffrey Hinton, a pioneering researcher and the so-called “Godfather of AI,” quit his role at Google so he could more freely speak about the dangers of the technology he helped create.
Over his decades-long career, Hinton’s pioneering work on deep learning and neural networks helped lay the foundation for much of the AI technology we see today.
There has been a spasm of AI introductions in recent months. San Francisco-based startup OpenAI, the Microsoft-backed company behind ChatGPT, rolled out its latest artificial intelligence model, GPT-4, in March. Other tech giants have invested in competing tools — including Google’s “Bard.”
Some of the dangers of AI chatbots are “quite scary,” Hinton told the BBC. “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”
In an interview with MIT Technology Review, Hinton also pointed to “bad actors” that may use AI in ways that could have detrimental impacts on society — such as manipulating elections or instigating violence.
Hinton, 75, says he retired from Google so that he could speak openly about the potential risks as someone who no longer works for the tech giant.
“I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he told MIT Technology Review. “As long as I’m paid by Google, I can’t do that.”
Since announcing his departure, Hinton has maintained that Google has “acted very responsibly” regarding AI. He told MIT Technology Review that there’s also “a lot of good things about Google” that he would want to talk about — but those comments would be “much more credible if I’m not at Google anymore.”
Google confirmed that Hinton had retired from his role after 10 years overseeing the Google Research team in Toronto.
Hinton declined further comment Tuesday but said he would talk more about it at a conference Wednesday.
At the heart of the debate on the state of AI is whether the primary dangers are in the future or present. On one side are hypothetical scenarios of existential risk caused by computers that supersede human intelligence. On the other are concerns about automated technology that’s already getting widely deployed by businesses and governments and can cause real-world harms.
“For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn’t only include AI experts and developers,” said Alondra Nelson, who until February led the White House Office of Science and Technology Policy and its push to craft guidelines around the responsible use of AI tools.
“AI is no longer abstract, and we have this kind of opening, I think, to have a new conversation about what we want a democratic future and a non-exploitative future with technology to look like,” Nelson said in an interview last month.
A number of AI researchers have long expressed concerns about racial, gender and other forms of bias in AI systems, including text-based large language models that are trained on huge troves of human writing and can amplify discrimination that exists in society.
“We need to take a step back and really think about whose needs are being put front and center in the discussion about risks,” said Sarah Myers West, managing director of the nonprofit AI Now Institute. “The harms that are being enacted by AI systems today are really not evenly distributed. It’s very much exacerbating existing patterns of inequality.”
Hinton was one of three AI pioneers who in 2019 won the Turing Award, an honor that has become known as the tech industry’s version of the Nobel Prize. The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.
Bengio, a professor at the University of Montreal, signed a petition in late March calling for tech companies to agree to a six-month pause on developing powerful AI systems, while LeCun, a top AI scientist at Facebook parent Meta, has taken a more optimistic approach.
ChatGPT, the OpenAI-developed chatbot that has some folks in the education sector spooked, is not going away anytime soon. Two of tech’s biggest companies, Google and Microsoft, are also jumping into the game.
Today, at a surprise event at the company’s Redmond headquarters, Microsoft announced a ChatGPT-powered version of its less-than-popular search engine Bing, which is available now as a limited preview on desktop.
Visiting Bing.com, you will be presented with some example searches that you can try out. The Bing search will show traditional results on the left and a chat window on the right with AI-generated answers.
Since it’s in preview mode, you will not be able to ask follow-up questions or get clarification of the results. If you’re interested, you can join a waitlist by going here.
If you sign in with your Microsoft account, download the Bing app, and set Microsoft defaults on your PC, you will get priority access, the tech giant announced. An email will notify you when you can use the new chat feature.
Microsoft also announced a new version of its web browser Edge that will feature OpenAI integration, allowing the browser to summarize PDFs, generate code, and compose posts on social media.
Google Is Coming With Bard
Microsoft’s announcement comes one day after Google revealed it is working on a ChatGPT rival called Bard. Unlike Microsoft’s, Google’s “experimental conversational AI service” will initially be tested only by a limited group, The Verge reports.
“Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills,” Google CEO Sundar Pichai said in a blog post.
It’s quickly looking like ChatGPT and other AI chatbots will soon become the norm.