Google Cloud just wrapped its Next ‘25 event in Las Vegas, unveiling a jaw-dropping 229 announcements spanning everything from advanced AI models to new ways of connecting your favorite tools with Google’s agentic ecosystem.
You’d think that would be enough to grab headlines on its own, but Google also teamed up with The Sphere to reimagine The Wizard of Oz using AI. And that might just be the most mind-bending demo of them all.
To unpack what happened at Next ‘25, I spoke with Marketing AI Institute founder and CEO Paul Roetzer, who attended the event, on Episode 144 of The Artificial Intelligence Show.
A Quick Snapshot of Google Cloud Next ‘25
While Google unveiled far too many new products and updates to cover in detail, here’s a sampler of what dominated the conversation at Next ’25:
- Gemini 2.5 Pro. Google’s latest heavyweight AI model, now in public preview. It claims advanced reasoning and coding capabilities that outpace earlier generations, and it currently ranks #1 on the Chatbot Arena leaderboard, according to Google’s stats.
- Gemini 2.5 Flash. A speedier, more cost-efficient variant of Gemini that Google says still packs enough punch for many tasks.
- Generative media. Major upgrades to Google’s text-to-image, text-to-audio, and text-to-video models (Imagen 3, Chirp 3, Veo 2, and a new text-to-music model called Lyria). The focus? High-quality outputs and faster, more precise editing capabilities, whether you’re generating images or entire video scenes.
- AI infrastructure. Google flexed its muscle in massive-scale AI training and inference. Think new GPUs, next-gen TPUs, ultra-fast networking, and storage: basically, an industrial-strength AI backbone.
- Agentspace. Google made big updates to its “AI control center,” which integrates your everyday work apps and data with its most powerful models and newly improved AI agents. The idea: let you build, customize, and manage AI-driven workflows, all within a single ecosystem.
Sounds like enough news to last the year, right? Well, Google had one more trick up its sleeve…
Google AI Brings 1939’s Wizard of Oz into the Future at The Sphere
Imagine stepping into one of the most advanced venues on the planet, only to witness the 1939 Wizard of Oz getting an ultra-HD, 360-degree AI makeover, complete with out-of-this-world visuals, seats that rumble with thunder, and an immersive environment that puts you right in Dorothy’s ruby slippers.
That’s exactly what happened on the first night of Google Cloud Next 25. Roetzer was there, and according to him, it was nothing short of “crazy.”
The Sphere is a massive spherical 360-degree event space where Google previewed its work using AI to turn The Wizard of Oz into a fully modern, immersive experience.
So how do you bring a classic film, shot in a small, rectangular frame, onto a 160,000-square-foot dome without it looking horribly stretched and blurry?
According to Roetzer, Google took multiple AI models (including versions of Veo and Imagen) and, with a large team of human professionals, pioneered three key techniques:
- Super resolution. Sharpening those old-school frames into ultra-high-definition imagery that can fill The Sphere’s enormous display.
- Outpainting. Filling in the gaps between scenes to seamlessly expand the original frame.
- Performance generation. Compositing characters and details that simply never existed in the original film, so the viewer sees a continuous, immersive scene in 360 degrees.
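To make the outpainting idea concrete, here’s a toy sketch of its first step: placing the original frame at the center of a larger canvas and marking the border region a generative model would be asked to fill. This is purely illustrative (the `expand_canvas` helper and the 2D-list “image” are invented for this example) and bears no resemblance to Google’s actual pipeline, which relies on generative models and a large human team.

```python
def expand_canvas(frame, pad, fill=0):
    """Place a 2D frame (list of rows) at the center of a larger canvas.

    Returns (canvas, mask). The mask is True in the border region a
    generative model would be asked to outpaint, and False where the
    original pixels are preserved untouched.
    """
    h, w = len(frame), len(frame[0])
    height, width = h + 2 * pad, w + 2 * pad
    canvas = [[fill] * width for _ in range(height)]
    mask = [[True] * width for _ in range(height)]
    for y in range(h):
        for x in range(w):
            canvas[pad + y][pad + x] = frame[y][x]  # keep original pixel
            mask[pad + y][pad + x] = False          # don't regenerate it
    return canvas, mask

# Toy 3x4 "frame" expanded by 1 pixel on every side.
frame = [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
canvas, mask = expand_canvas(frame, pad=1)
print(len(canvas), len(canvas[0]))  # 5 6
```

In a real system, the canvas and mask would be handed to an image model that synthesizes plausible new content only where the mask allows, which is why the original footage survives intact at the center.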
The end result is a mesmerizing preview of what’s possible when you combine cutting-edge AI with the next generation of immersive experiences. The full reimagined Oz film debuts at The Sphere in late August. If you’re planning a trip to Vegas, you might want to add this to your list.
“The thing I took away from it was the human-machine collaboration,” says Roetzer.
This wasn’t a matter of handing the entire film to Gemini and having it figure all of this out. Dozens of the top minds within Google DeepMind and Google Cloud worked on this, pushing the boundaries of the models and creating entirely new techniques to make it possible.
Says Roetzer:
“They interviewed one guy from Google DeepMind and said, ‘Hey, when this project started [two years ago], what did you think was impossible?’ And he answered: ‘Everything. There was nothing we were doing that the models at that moment could actually achieve.’”
Agentspace: The Real Star of Google Cloud Next ‘25
The Wizard of Oz spectacle aside, the real star of Next ‘25 for many attendees was Agentspace, Google’s hub for building, training, and orchestrating AI agents.
“It’s a single space that lets you, in a no-code environment, build agents to do whatever you want to do,” says Roetzer. “And it connects to third-party software and data. So it basically becomes a platform where you live and do everything you need to do [with AI].”
That means everyday users can use Agentspace to build their own AI agents for tasks like:
- Researching any topic (with a “Deep Research” mode that synthesizes sources and details)
- Transforming lengthy documents into presentations
- Generating an audio overview of your reports or strategic plans
- Creating an “agent gallery” where your entire team can discover and deploy new AI helpers
Basically, it’s the holy grail of letting AI handle busywork so you can stay focused on higher-level tasks.
“The vision for it is powerful,” says Roetzer. “You could see how it becomes like a control panel, basically, for a knowledge worker to just have all the tools they need right there.”
The one catch? Agentspace isn’t widely available yet. Google is letting potential users request access, but a timeline for general availability is still unclear.
Sundar Pichai’s Big AI Prediction
In a session at Next ‘25, Alphabet and Google CEO Sundar Pichai made a statement that got folks buzzing.
He expects the pace of model advancements to continue for at least the next 12 to 18 months, says Roetzer, with major new models every three to four months.
Let that sink in: every few months, we may see leaps in capabilities, new specialized models, and expansions of existing AI frameworks, on top of everything Google just announced.
“That’s just crazy to think about,” says Roetzer.
If you’re feeling overwhelmed, you’re not alone. But you’re also about to witness an era of unprecedented AI growth, especially from Google’s ecosystem, if Pichai’s prediction holds.
mike@marketingaiinstitute.com (Mike Kaput)