A new robotic Olaf from Walt Disney Animation Studios’ hit franchise Frozen will soon delight guests at Disneyland Paris and Hong Kong Disneyland. This comes as Hong Kong Disneyland has already opened its World of Frozen, and Disneyland Paris will open its own World of Frozen in 2026.
The incredible new roaming Olaf was revealed in the newest episode of We Call it Imagineering. This series from Walt Disney Imagineering takes viewers inside the efforts of Disney’s global Research & Development team, which is driving the next wave of Disney innovation. The group kicks off the development process and is where Imagineers blend creativity and technology to spark ideas for innovative new Disney experiences of the future. This team has worked on some of the most remarkable advancements seen in Disney parks, including interactive attractions, innovative projection systems, and advanced robotic characters.
Throughout the episode, fans will get to see not only this incredible Olaf, but also an aquatic robot prototype, the introduction of the next generation of Audio-Animatronics figures, and an exclusive discussion between NVIDIA CEO Jensen Huang and Bruce Vaughn. This is an exciting time for Disney fans, with incredible new technology creating storytelling experiences like never before!
Below is the newest episode of We Call it Imagineering. In this episode, fans can see the introduction of this new, innovative robotic Olaf character. Check it out here:
Disney also shared an interview with Kyle Laughlin, SVP, Walt Disney Imagineering Research & Development, in which he talks about some of the latest exciting innovations coming out of Walt Disney Imagineering.
Walt Disney himself constantly strove for technological innovations that would service storytelling. How does WDI Research & Development today blend creativity with technological innovation to advance Disney storytelling?
Like everything at Disney, we always start with the story. We think about how we want the guest to feel and what kind of emotional experience we want them to have. Our number one priority is always to build storytelling technology that empowers our Imagineers to truly breathe life into our characters. The technology should be advanced enough to be invisible to our guests, so they aren’t thinking about all the work and tech that went into making something real; they’re just focusing on having an emotional experience with the characters they care about.
The BDX droids were a significant step forward in advanced robotic character technology — what did you learn while creating them about what it takes to make a robot not just a robot, but a character?
Disney’s roots are in animation — using motion to create the illusion of life. Walt did it with his early hand-drawn animated films, Pixar and Disney Animation Studios use it for computer-generated animation, and at Imagineering we bring physical characters to life through motion and emotion.
A key technology in our platform is deep reinforcement learning that enables robotic characters to learn to imitate artist-provided motion in simulation. This has enabled us to iterate extremely quickly between mechanical design and animation. Because we can iterate so rapidly and bridge the gap between art and science with physical AI, we can achieve this unmatched level of believability.
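To make that idea a little more concrete, here is a minimal, hypothetical sketch of the kind of motion-imitation reward used in this style of deep reinforcement learning: the policy is scored on how closely the simulated pose matches an artist-provided animation keyframe. The joint values, weights, and function names below are illustrative assumptions for this article, not Disney’s actual code.

```python
# Illustrative sketch only: a generic motion-imitation reward of the kind used
# in deep reinforcement learning for character animation. All joint values,
# weights, and names here are invented for illustration.
import numpy as np

def imitation_reward(sim_joint_angles, target_joint_angles, scale=5.0):
    """Score how closely the simulated pose matches the artist's keyframe.

    Returns a value near 1.0 when the simulated robot reproduces the
    artist-provided pose, falling toward 0.0 as the error grows.
    """
    error = np.mean((sim_joint_angles - target_joint_angles) ** 2)
    return float(np.exp(-scale * error))

# Example: an artist keyframe (in radians) vs. the pose the simulated robot reached.
artist_keyframe = np.array([0.10, -0.25, 0.40, 0.05])   # hypothetical hip, knee, ankle, head
simulated_pose  = np.array([0.12, -0.22, 0.35, 0.07])

print(imitation_reward(simulated_pose, artist_keyframe))  # close to 1.0 -> good imitation
```

In practice, a reward like this is typically combined with terms for balance, smoothness, and energy use, and the policy is trained over many simulated steps before ever running on hardware.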
Tell us about the work you’re doing with NVIDIA and Google DeepMind to create the simulation framework Newton. How does that allow robots to utilize reinforcement learning?
It takes humans years to perfect our motor skills for walking, and it takes additional years of practice to perform acrobatic motions that only a few of us can master. Deep reinforcement learning is a technology that helps robots acquire such skills in a shorter amount of time — for example, a BDX Droid or an Olaf learning to walk in a stylized way in the physical world. To make this technology scale well, we need fast and parallel simulation.
Together with our partners at NVIDIA and Google DeepMind, we are working on a new open-source simulation framework called Newton. At the core of Newton are building blocks that enable the rapid development of Graphics Processing Unit (GPU)-accelerated simulators. One of our contributions to this framework is a simulator called Kamino, which will unlock the potential of reinforcement learning for robotic systems of unmatched complexity. This will help us take a huge step forward in our mission to build robotic characters that perform in-character, just as we know them from the films.
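For a rough sense of why fast, parallel simulation matters for this kind of learning, here is a toy sketch of stepping thousands of simulated environments at once with a single batched array operation. This is not the Newton or Kamino API — the physics is a deliberately simple point-mass update, and all names and numbers are illustrative assumptions — but it shows the batching pattern that GPU-accelerated simulators exploit so a reinforcement learning policy can gather experience quickly.

```python
# Conceptual sketch: stepping many simulated environments in parallel.
# Toy point-mass physics only; not the Newton or Kamino simulators.
import numpy as np

NUM_ENVS = 4096          # thousands of environments advanced in one batch
STATE_DIM = 6            # toy state: 3D position + 3D velocity
DT = 1.0 / 60.0          # simulation timestep in seconds

states  = np.zeros((NUM_ENVS, STATE_DIM), dtype=np.float32)
actions = np.random.uniform(-1.0, 1.0, size=(NUM_ENVS, 3)).astype(np.float32)

def step_batch(states, actions):
    """Advance every environment one timestep with a single vectorized update."""
    pos, vel = states[:, :3], states[:, 3:]
    vel = vel + actions * DT                  # treat actions as accelerations
    pos = pos + vel * DT
    return np.concatenate([pos, vel], axis=1)

states = step_batch(states, actions)
print(states.shape)  # (4096, 6): one step of experience for every environment at once
```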
Talk to us about the amazing Olaf robotic character. How does the robot utilize the breakthroughs made for the BDX droids, but then go beyond those innovations to create a character who isn’t meant to appear as a robot?
The BDX Droids and our latest Olaf bring story and emotion to life through their movements and interactions with guests. However, the BDX Droids in the films are literally robotic characters; Olaf is an animated character that is far more challenging to bring to life in the physical world. To get Olaf to move as authentically as possible, we’ve made fundamental additions to our reinforcement learning framework to boost the believability of the character by enabling motion at the limit of the hardware. Olaf has a “snow” costume which deforms and moves differently from the BDX Droids’ hard shells, and he has the ability to fully articulate his mouth, eyes, and (removable) carrot nose. And, most of all, Olaf can speak and engage in conversations.
What unique challenges are presented when creating robotic figures based on an animated character? How did you innovate to meet those challenges?
When bringing an animated character to life through advanced robotics, the most important thing is that we stay true to the vision of the character created by the filmmakers. Translating these characters from the screen to the real world requires pushing the boundaries of not only technology, but also creativity. We want to bring these characters to life in ways our guests have imagined, but never seen before. Our latest Olaf is a fantastic example of representing an animated character as authentically as possible in the physical world — a challenging task because animated characters most often move in non-physical ways. For example, to make Olaf’s snowball feet move along his body, we paired state-of-the-art deep reinforcement learning with an artistic interface and advances in mechanical design.
How do you see the breakthrough you made on Olaf being applied for future innovations?
What’s so exciting is that we’re just getting started. The BDX Droids, self-balancing H.E.R.B.I.E., and now our latest Olaf all represent increasing levels of performance and innovation in bringing our Disney characters to life through advanced robotics. The speed at which we’re able to create new characters — and get them in front of our guests — is unprecedented. We are working to scale even bigger, to bring more emotive, expressive, and surprising characters to guests at our parks and ships around the world.
What do you think of this new robotic Olaf? Would you like to meet him? Share your thoughts and opinions in the comments below!
Mr. Daps is the Founder and Editor in Chief of Daps Magic! Find Mr. Daps regularly at Disneyland with his trilby and bow tie taking pictures and simply enjoying the Happiest Place on Earth. He is the weekly host of Geeks Corner and your reporter for all that Disney And Positive Stuff. Find videos of all of that on the Daps Magic YouTube channel. Mr. Daps is also a Brand Ambassador with Her Universe! Find Mr. Daps on Twitter, Instagram, and Threads! If you see Mr. Daps in the parks, be sure to say hi!