When I heard last week that NBC had cloned the voice of legendary play-by-play sports announcer Al Michaels to deliver recaps from the Paris Olympics to subscribers of the network’s Peacock streaming service, Michaels’ famous call during the 1980 Olympics — when the US hockey team upset the Soviet Union and he said, “Do you believe in miracles? Yes!” — wasn’t actually the first thing that came to mind.

What was? That AI is going to replace a lot of human talent as content creators and other businesses look to generative AI technology to save time and money and boost profits.


To be sure, Michaels, who has called plays for Monday Night Football and Sunday Night Football for an impressive 36 years, agreed to the licensing deal. And he’s being paid an undisclosed amount to have a “high-quality” AI voice replica deliver daily personalized summaries as part of a first-of-its-kind feature NBC is calling “Your Daily Olympic Recap on Peacock.”

Pulling from 5,000 hours of live Olympics coverage, NBC said, the feature will deliver nearly 7 million unique packages of around 10 minutes’ worth of highlights for each subscriber. Michaels’ voice will start with recaps of opening day coverage on July 27, before moving on to the 40 Olympic events to be held each day. 

NBC’s large language model, or LLM, will analyze “subtitles and metadata to summarize clips from NBC’s coverage, and then adapt those summaries to fit Michaels’s signature style. The resulting text is then fed to a voice AI model — based on Michaels’s previous NBC appearances — that was trained to learn the unique pronunciations and intonations of certain words and phrases,” Vanity Fair reported.
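To picture the flow Vanity Fair describes, here is a minimal, purely illustrative sketch in Python: summarize a clip from its subtitles and metadata, restyle the text in a commentator's voice, then hand the result to a voice model. The function names, data shapes and stubbed-out model calls are my own assumptions for illustration, not NBC's actual system, which runs proprietary LLM and voice models behind each step.

```python
# Illustrative sketch only: each function is a stand-in for a model NBC would actually call.
from dataclasses import dataclass

@dataclass
class Clip:
    event: str       # e.g. "women's 100m final"
    subtitles: str   # transcript of the live commentary
    metadata: dict   # sport, athletes, result, timestamps

def summarize(clip: Clip) -> str:
    """Stand-in for an LLM call that condenses subtitles and metadata into a short recap."""
    winner = clip.metadata.get("winner", "the favorite")
    return f"In the {clip.event}, {winner} took gold."

def restyle(summary: str, style: str = "Al Michaels") -> str:
    """Stand-in for a second pass that rewrites the recap in the commentator's signature style."""
    return f"[{style} style] {summary}"

def synthesize(text: str) -> bytes:
    """Stand-in for a voice model trained on past broadcasts; returns placeholder 'audio'."""
    return text.encode("utf-8")

def daily_recap(clips: list[Clip], fan_interests: set[str]) -> list[bytes]:
    """Personalize by keeping only the sports a given subscriber follows, then narrate each."""
    picked = [c for c in clips if c.metadata.get("sport") in fan_interests]
    return [synthesize(restyle(summarize(c))) for c in picked]

if __name__ == "__main__":
    clips = [Clip("women's 100m final", "...", {"sport": "athletics", "winner": "the sprinter"})]
    print(daily_recap(clips, {"athletics"}))
```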

“It was astonishing. It was amazing,” Michaels told the publication, noting that he was at first “skeptical” that NBC’s LLM could capture his unique vocal stylings, including the hints of his charming Brooklyn accent. “It was a little bit frightening. … It was not only close, it was almost 2% off perfect. I’m thinking, Whoa.”

Whoa indeed. Though NBC said it’s hired more than 150 commentators to help cover the 32 sports and 300-plus medal events, only artificial intelligence made it possible to deliver individualized summaries at scale, John Jelley, senior vice president of product and user experience at Peacock, told Vanity Fair. “It would be impossible to deliver a personalized experience with a legendary sportscaster to millions of fans without it.” 

Being able to deliver personalized packaged content at scale does seem like a good use of AI. 

And as a fan of Michaels, I’m happy that he gets, as he told Vanity Fair, to stay “somewhat attached to the Olympic Games, which I’ve always loved.” 

But I’m also a fan of other sports commentators and analysts. And while I applaud NBC for hiring so many commentators, could this also have been an opportunity to work with even more sports experts to assemble the context-rich daily summaries users crave and then just use artificial intelligence to help deliver them? I ask because it seems we’re now relying on NBC’s LLM to decide the top takeaways. 

NBC said “a team of NBCU editors will review all content, including audio and clips, for quality assurance and accuracy before recaps are made available to users.” Still. 

The network isn’t the first to deploy a voice clone. In 2023, “the popular sports podcaster Bill Simmons revealed that his employer, Spotify, was developing AI to re-create its hosts’ voices for advertisements,” Vanity Fair noted. Universal Music Group signed a deal earlier this month with AI music startup SoundLabs to enable artists to create voice clones. And in 2022, actor James Earl Jones licensed the AI rights to his voice for one of his most iconic characters, Darth Vader.

This is all a reminder that how AI is deployed is up to us humans. The 79-year-old Michaels, after hearing his cloned voice, told Vanity Fair that his career as a play-by-play sports announcer may be “rendered obsolete” by AI. “I just sat there and thought, In the next life, I’m going to need a new profession.”

I hope not. Because while an artificial intelligence creation may sound like Michaels, it can’t think like Michaels. Will the tech ever be cognizant enough to react in the moment and deliver an emotional summation like, Do you believe in miracles? I guess we’ll have to wait and see.

Here are the other doings in AI worth your attention.

OpenAI CTO says ‘repetitive’ jobs may be better handled by AI

Speaking of how jobs might be affected by AI, OpenAI Chief Technology Officer Mira Murati said in a June 19 panel at Dartmouth Engineering that no one truly knows how many jobs will be lost and how many new jobs will be created due to chatbots like her company’s ChatGPT. (FYI, a desktop app of ChatGPT for MacOS became available last week.)

But what we do know is that some jobs will definitely be disrupted, she said during a nearly hour-long discussion you can watch on YouTube. The jobs discussion starts just past the 23-minute mark.

“Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place if the content that comes out of it is not very high quality,” she said. AI can handle “cognitive work” that is “repetitive” and leave us humans more time for work that will “expand our intelligence and creativity and imagination.” 

When asked how AI can help people do their jobs, Murati said it can provide the “first draft” for any work. “I think basically the first part of anything that you’re trying to do, whether it is creating new designs, whether it’s coding, or writing an essay, or writing an email… It’s so much faster. It lowers the barrier to doing something and you can kind of focus on the part that’s a bit more creative and more difficult, especially in coding. You can outsource a lot of the tedious work.”

That tedious work may include, she said, handling customer service, writing, and code analysis. “Say if you’re preparing a paper, the research part of the work can be done much faster and in a rigorous way.”

AI, she added, is a tool that can help democratize people’s ability to be creative. “Right now if you think about how humans consider creativity, we see that it’s sort of this very special thing that’s only accessible to very few talented people out there. And these tools actually make it lower the barrier for anyone to think of themselves as creative and expand their creativity.”

I’m not sure if creativity is only “accessible” to the few. But I think opportunities certainly are. 

In any case, Murati’s comments caused a bit of a stir, with CNBC and Fortune calling out the warning that AI may “kill” some creative jobs or cause them to “disappear.” OpenAI told me that Murati provided further context about her comments in a post on X. 

“We must be honest and acknowledge that AI will automate certain tasks. Just like spreadsheets changed things for accountants and bookkeepers, AI tools can do things like writing online ads or making generic images and templates,” Murati wrote on June 22.

“But it’s important to recognize the difference between temporary creative tasks and the kind that add lasting meaning and value to society,” she added. “With AI tools taking on more repetitive or mechanistic aspects of the creative process, like generating SEO metadata, we can free up human creators to focus on higher-level creative thinking and choices. This lets artists stay in control of their vision and focus their energy on the most important parts of their work.” 

I agree — it is important to recognize the difference between temporary creative tasks and the kind that add lasting meaning and value to society. So we’ll need to watch to see that employers also recognize the difference and, as they eliminate repetitive work assignments, come up with new jobs that put a premium on more creative work.

Murati’s vision of the AI future also assumes AIs won’t hallucinate — that is, make up things that sound true but aren’t — so you can actually trust that the research part of your work is fact-based. The happy version of an AI future also needs to ensure that the content being delivered by AI is licensed from the original copyright holders, something we can’t assume today since AI makers aren’t sharing what’s in their training data. OpenAI has been signing licensing deals with some media companies even as it’s being sued by others.

Last, that optimistic view of a future enhanced by AI also assumes we humans will use these new tools to produce work that is not, to use the latest AI slang, slop. The Guardian and New York Times describe slop as the careless and/or unwanted AI-generated content that’s now proliferating on the internet.  

Record labels sue music generators for ingesting popular recordings

Al Michaels aside, imitation is not the sincerest form of flattery when it comes to artificial intelligence, copyright training data, and music.

Major music labels Sony, Universal and Warner Records, working with the Recording Industry Association of America, sued AI song generators Suno Inc. and Uncharted Labs, developer of Udio AI, for scraping their copyrighted sound recordings without permission or compensation. Suno’s pitch is that you can “make a song about anything,” while Udio “became popular when the US producer Metro Boomin used it to make BBL Drizzy, a viral parody of the diss tracks between Kendrick Lamar and Drake,” The Guardian reported.

The world’s largest record labels and the RIAA, in two suits, say the AI song generators trained their models by copying “iconic music” from major artists including Mariah Carey. The suit against Udio contains an exhibit showing how Carey’s “All I Want for Christmas Is You” was copied without permission.

“AI companies, like all other enterprises, must abide by the laws that protect human creativity and ingenuity. There is nothing that exempts AI technology from copyright law or that excuses AI companies from playing by the rules,” they wrote in their complaint. “Building and operating [these services] requires at the outset copying and ingesting massive amounts of data to ‘train’ a software ‘model’ to generate outputs. For [these services], this process involved copying decades worth of the world’s most popular sound recordings and then ingesting those copies [to] generate outputs that imitate the qualities of genuine human sound recordings.” 


TechCrunch offers up a shorter summary: “These AIs don’t ‘generate’ so much as match the user’s prompt to patterns from their training data and then attempt to complete that pattern. In a way, all these models do is perform covers or mashups of the songs they ingested.”

In a blog post called “AI and the Future of Music,” Udio said it stands behind its technology. “We are completely uninterested in reproducing content in our training set, and in fact, have implemented and continue to refine state-of-the-art filters to ensure our model does not reproduce copyrighted works or artists’ voices.” Suno CEO Mikey Shulman, in an emailed statement to The New York Times, said, “Our technology is transformative; it is designed to generate new outputs, not memorize and regurgitate pre-existing content.”

While this all plays out, two music-industry leaders announced AI:OK, a new service aimed at “identifying ethically aligned music products and services in the age of AI,” CNET’s Gael Cooper reported. Technology strategists and musicians Martin Clancy and David Hughes founded the group; Clancy also founded the Irish band In Tua Nua. AI:OK plans to issue a “trustmark” to show that the music receiving the mark was created responsibly where AI is concerned. The criteria for issuing the trustmark will be determined by a music-advisory council. AI:OK says its early supporters include the RIAA.

Toys R Us ad created with Sora gets mixed reviews

Sticking to the theme of how AI might affect industries and disrupt creative roles, we now turn to a new “brand film” created by Toys R Us, which it says is the first made almost completely with OpenAI’s Sora. 

If you don’t know, Sora is an experimental photorealistic text-to-video generator that was introduced with a lot of fanfare in February because of how it can, according to OpenAI, “create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.” How realistic? So much so that Wall Street Journal tech reviewer Joanna Stern, speaking with OpenAI CTO Mira Murati, said the results “are good enough to freak us out” and question “what is real anymore.” OpenAI has said Sora will be released publicly later this year.

The Origin of Toys R Us promo video, which premiered at the Cannes Film Festival, introduces audiences to founder Charles Lazarus and the brand’s iconic mascot, Geoffrey the giraffe. The 66-second video was created in “just a few weeks,” the company said.

“Charles Lazarus was a visionary ahead of his time and we wanted to honor his legacy with a spot using the most cutting-edge technology available,” Kim Miller Olko, Toys R Us global chief marketing officer and president of Toys R Us Studios, said in a press release. Though Toys R Us closed its stores after the company went bankrupt in 2017, it now has shops within every Macy’s.

Now, I’m no marketing expert, but premiering a short brand film at one of the premier film festivals in the world — where your audience is a lot of very human filmmakers who believe in the craft — may not have been the best call, judging by the online reaction, which NBC News and others noted was “mixed.” Some praised the cutting-edge aspect of what Sora can do, while Forbes said the images of the young Lazarus weren’t consistent and that he sort of “shapeshifts” throughout the video.  

“This commercial is like, ‘Toys R Us started with the dream of a little boy who wanted to share his imagination with the world,'” TV writer and comedian Mike Drucker posted on X. “‘And to show how, we fired our artists and dried Lake Superior using a server farm to generate what that would look like in Stephen King’s nightmares.'”

In response to critics, Olko said about a dozen people worked on the film, which was created in partnership with production company Native Foreign. “It was a test,” Olko told NBC News. “I think it was successful. I think there was a lot of learnings. If the opportunity arises again, and it’s the right fit, we use it but it’s one of many different things that we would do.” 

She also noted that Geoffrey the giraffe was always an animated character in the company’s prior TV ads. “We weren’t going to hire a giraffe, you know what I mean?”

We’ll just leave it there.

Picture Apple’s Genmoji as an AI prompt training tool

Back in 1984, when Apple introduced the Macintosh, the software included a little 15-tile game called The Puzzle as a desk accessory. Users moved the tiles around with the mouse to put the numbers in sequence. Apple software engineer Andy Hertzfeld said The Puzzle was intended to inject a bit of fun into the Mac at a time when the most popular PCs were sold by IBM.

But the other benefit of The Puzzle was that it got people comfortable and skilled at using the new input device, the mouse. 

So, with Apple introducing Genmoji, a new emoji maker that will ship with its updated iOS 18 software for the iPhone this fall, I have no doubt Apple users will have fun and try to outdo themselves by dreaming up original emoji that are cool (💀), creative (💡), clever (🤓), bizarre (🪱), obscure (🤺), silly or tacky (💩), or some play on all that.

But I also couldn’t help but think that in addition to letting you create original emoji, it might have a hidden benefit: Could Genmoji be a simple, nonthreatening way to get people who haven’t played around with gen AI to become comfortable writing “prompts,” the term used to describe what we ask of AI-powered chatbots?

Apple didn’t provide me with any more insight beyond what it shared in its Genmoji demonstration during the keynote presentation at its Worldwide Developers Conference. But I’ve put together a story detailing a bit about Apple’s emoji efforts with Memoji, which lets you customize some emoji, and whether my theory about helping users “talk,” with prompts, to the emoji creator makes sense. You can read it here.

I will note that Caleb Sponheim, a user experience specialist with the Nielsen Norman Group and a former computational neuroscientist, told me my idea isn’t that crazy. It could be a way for “Apple to kind of create a low-stakes, low-cost environment for people to have a little fun,” he said. But, he added, it all depends on how capable and user friendly the tool is.

“If we are taking Apple’s presentation at face value [and that] it will work at least as well as they’re saying, then it will be an entrance point for people to experience and start working with generative AI in an image generation context,” Sponheim said.

But: “It’s up to Apple and their implementation about how usable it is and how enjoyable of an experience it is,” he added. “Because if it’s not, if it’s challenging or frustrating, it will turn into another iMessage feature that gets hidden behind the keyboard.” 

College students are embracing gen AI, and so are travelers  

There are two recent surveys I thought worth sharing.

The first comes from Pearson, an education company that publishes textbooks and e-textbooks and also offers a variety of learning tools. 

After studying 2 million interactions by 70,000 college students who are using its AI Study Tools at more than 1,000 higher-ed institutions, Pearson said in a press release that the students believe AI is helping them get better grades.

And no, it’s not because they’re using AI to write papers or do their homework.

Pearson said students who use the AI tools are spending more time engaging with the textbook content. They’re also asking “for help on complex topics, suggesting a desire to build a true understanding of difficult concepts. The biology and chemistry topics students most frequently queried include: cells and cell structures, chemical reactions, combustion, osmosis and diffusion, and molecular shapes.”

In a separate survey of 800 college students in the US, Pearson found that 51% of spring semester students said gen AI helped them get better grades, an increase of four percentage points from fall 2023. And 44% of students said they’re looking for tools that help “walk them through problems.”

The second survey is from MoneyLion, maker of a money app, which says that 70% of Americans are already using, or plan to use, AI to help them with travel. In its 2024 travel and AI trends report, the company found that “younger adults (aged 18-34) are embracing AI for travel planning, with 29% already using AI tools, compared to 15% of those over 65. Over 80% of younger adults are also using AI or intend to for financial decisions, research and guidance.”

I’ll add that the survey found that 62% of Americans “express concerns” about data breaches when using AI for financial advice.    


