Disney’s Olaf Robot Trained in Two Days With an RTX 4090

By alex2404

The training run took two days. That’s how long it took an Nvidia RTX 4090 GPU to teach a 35-inch-tall, 33-pound robot to move like a beloved animated snowman — running 100,000 virtual copies of the physical machine through a simulation, rewarding each one for matching screen-accurate movement.
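Disney hasn't published its reward function, but the shape of the idea is standard motion-imitation reinforcement learning: every simulated copy is scored on how closely it tracks a reference animation, frame by frame. Here is a minimal sketch in Python/NumPy of that scoring step; the function name, weights, and constants are illustrative assumptions, not Disney's code.

```python
import numpy as np

NUM_ENVS = 100_000   # parallel simulated copies of the robot
NUM_JOINTS = 25      # Olaf reportedly carries 25 actuators

def imitation_reward(sim_q, sim_qd, ref_q, ref_qd, w_pose=0.7, w_vel=0.3):
    """Score each simulated robot on how closely it matches the
    animator's reference motion at the current frame.

    sim_q, sim_qd : (NUM_ENVS, NUM_JOINTS) joint positions / velocities
    ref_q, ref_qd : (NUM_JOINTS,) reference pose / velocity for this frame
    """
    # Exponentiated tracking errors keep rewards in (0, 1]: a perfect
    # match scores 1, and large deviations decay smoothly toward 0.
    pose_err = np.sum((sim_q - ref_q) ** 2, axis=-1)
    vel_err = np.sum((sim_qd - ref_qd) ** 2, axis=-1)
    return w_pose * np.exp(-2.0 * pose_err) + w_vel * np.exp(-0.1 * vel_err)

# One reward per simulated robot from a single vectorized call; in the
# real pipeline the same math would run as GPU tensors inside the simulator.
rng = np.random.default_rng(0)
sim_q = rng.normal(scale=0.1, size=(NUM_ENVS, NUM_JOINTS))
sim_qd = rng.normal(scale=0.1, size=(NUM_ENVS, NUM_JOINTS))
rewards = imitation_reward(sim_q, sim_qd,
                           ref_q=np.zeros(NUM_JOINTS),
                           ref_qd=np.zeros(NUM_JOINTS))
print(rewards.shape)  # (100000,)
```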

The robot is Olaf. He arrives at Disneyland Paris on March 26th and at Hong Kong Disneyland this summer. And according to the announcement, Disney Imagineering believes the technique used to build him fundamentally changes how the company constructs characters.

“This absolutely is the future of how we’re building robot characters,” says Kyle Laughlin, Disney Imagineering’s SVP of R&D. Reinforcement learning, he says, is the “true unlock” that could let the company populate entire themed lands with interactive robot characters, now that complete builds take months rather than years.

What the robot actually is

Olaf is not artificially intelligent. The distinction matters. The figure carries 25 actuators and three computers — including an Nvidia Jetson Orin NX and a Raspberry Pi — but it does not generate its own speech or perceive the world around it. It plays prerecorded lines from voice actor Josh Gad while executing trained animations. Blinking is autonomous. Everything else requires a human operator running a Steam Deck gaming handheld: one joystick steers the walk, the other directs where Olaf appears to look, and a touchpad swipe cycles through banks of preloaded conversation responses.
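Disney's control stack isn't public, but the described mapping is easy to picture in code. Below is a minimal sketch of how a handheld's inputs might translate into a command for the figure; RobotCommand, RESPONSE_BANK, and map_gamepad are hypothetical names, and the clip filenames are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RobotCommand:
    walk_x: float       # forward/back input from the left stick, in [-1, 1]
    walk_y: float       # lateral/turn input from the left stick, in [-1, 1]
    gaze_yaw: float     # where the character appears to look (right stick)
    gaze_pitch: float
    armed_line: Optional[str] = None  # preloaded clip queued for playback

# Hypothetical response bank; the article only quotes "Of course!" and "Sure!"
RESPONSE_BANK = ["of_course.wav", "sure.wav", "warm_hugs.wav"]

def map_gamepad(left_stick, right_stick, swipe_delta, bank_index):
    """Translate raw handheld inputs into one robot command.

    left_stick, right_stick : (x, y) tuples in [-1, 1]
    swipe_delta             : +1 / -1 on a touchpad swipe, else 0
    bank_index              : currently selected response slot
    """
    # A swipe cycles through the banks of preloaded responses.
    bank_index = (bank_index + swipe_delta) % len(RESPONSE_BANK)
    cmd = RobotCommand(
        walk_x=left_stick[1],
        walk_y=left_stick[0],
        gaze_yaw=right_stick[0],
        gaze_pitch=right_stick[1],
        armed_line=RESPONSE_BANK[bank_index],
    )
    return cmd, bank_index
```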

In an early demo, responses were brief — “Of course!” or “Sure!” — not enough to sustain a real back-and-forth. But the movement held attention anyway. Moritz Bächer, director of Disney Research, offers an eight-word explanation for why: “The eyes go first, and the body follows.” The brain reads eye movement as evidence of intent, and assumes a living presence behind it.
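That principle translates directly into control code. A minimal sketch, assuming simple first-order lags: the eyes track a gaze target with a short time constant, the head with a much longer one, so the eyes visibly lead every turn. The time constants are illustrative guesses, not Disney's values.

```python
import math

def follow(current, target, dt, time_constant):
    """First-order lag: nudge `current` toward `target`; a larger
    time constant means a slower response."""
    alpha = 1.0 - math.exp(-dt / time_constant)
    return current + alpha * (target - current)

def update_gaze(eye_yaw, head_yaw, target_yaw, dt):
    # The eyes react almost immediately; the head trails noticeably,
    # which reads as intent rather than machinery.
    eye_yaw = follow(eye_yaw, target_yaw, dt, time_constant=0.05)
    head_yaw = follow(head_yaw, target_yaw, dt, time_constant=0.6)
    return eye_yaw, head_yaw

eye, head = 0.0, 0.0
for _ in range(10):  # one second at 10 Hz
    eye, head = update_gaze(eye, head, target_yaw=30.0, dt=0.1)
print(round(eye, 1), round(head, 1))  # eyes settled at ~30; head still catching up
```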

The costume helps. A four-way-stretch fabric built over foam snowballs sparkles as light passes through. Olaf’s carrot, stick arms, and coal buttons are all magnetic — detachable on purpose, either for maintenance or as a physical gag.

The engineering problem nobody expected

Robots don’t usually have large, heavy heads balanced on narrow necks. Olaf does, because the character demands it. That geometry concentrates mechanical stress on a single joint, making it prone to overheating. The team addressed the problem partly through Kamino, a simulation tool Disney Research is now releasing publicly. It includes a thermal dynamics model designed to keep the joints of “extremely complex mechanical assemblies” from overheating prematurely during training runs.
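Kamino's actual thermal model isn't described in the article, but actuator heating is commonly approximated with a first-order model: winding heat grows with torque squared, and the joint cools toward ambient. A minimal sketch of that idea used as a soft training constraint; every constant and function name here is an assumption, not Kamino's API.

```python
AMBIENT_C = 25.0

def step_joint_temp(temp_c, torque_nm, dt, heat_coeff=0.8, cool_rate=0.05):
    """Advance one joint's temperature by one timestep.

    Heating grows with torque**2 (I^2 R winding losses); cooling is
    proportional to the gap above ambient (Newton's law of cooling).
    """
    heating = heat_coeff * torque_nm ** 2
    cooling = cool_rate * (temp_c - AMBIENT_C)
    return temp_c + dt * (heating - cooling)

def thermal_penalty(temp_c, limit_c=80.0, weight=1.0):
    """Zero below the limit, quadratic above it -- a soft constraint
    the policy can learn to respect during training."""
    return weight * max(0.0, temp_c - limit_c) ** 2
```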

The release is part of a broader shift. Disney — historically protective of its intellectual property — has chosen to open its robotics research. Last March it partnered with Nvidia and Google DeepMind to release the Newton Physics Engine as an open-source project under the Linux Foundation. Kamino follows the same path.

Laughlin draws a line between this project and earlier work. Disney Imagineering’s Star Wars droids were, in his words, “robots being robots.” Olaf is different: “This is our first animated character that we brought to life.”


This article is a curated summary based on third-party sources.
