Accelerating Creation, Powered by Roblox’s Cube Foundation Model

Moving Beyond 3D Creation

  • Roblox is announcing 4D generation, powered by our Cube Foundation Model, which adds the dimension of interactivity so generated 3D objects are functional and behave the way players expect.
  • 4D generation will unlock new types of gameplay and player engagement, while also giving creators a powerful new feature within Roblox Studio.
  • The technology behind 4D generation and Cube will eventually allow any creator to generate full scenes, including assets, environments, code, animations, and more, using natural language prompts.

We built the Cube Foundation Model to power creation tools like 3D object generation and, eventually, more complex tasks like full scene generation. Today, we’re announcing the next step on that path: the beta release of 4D generation. This technology adds the crucial element of functionality to Cube, moving beyond static 3D objects so creators and players can generate objects that actually work. Once a creator enables 4D generation in an experience, players can use a simple text prompt to generate a functional car, get in it, and drive it around. The system uses rulesets called schemas to deconstruct specific objects into parts, then adds behaviors that bring them to life. With 4D generation, creators can unlock new types of gameplay and let players bring their own creativity into their experiences.

How Developers Are Using 4D Generation

Developer Laksh has been testing Cube and 4D generation in his game Wish Master, where players can wish for anything and see it materialize in-game. Players have generated cars that drive, planes that fly, and even flying dragons. “About six or seven months ago, I was experimenting with Studio’s AI Assistant and found it really impressive,” Laksh explained. “I thought it would be amazing to create something similar for players to use in-game.”

Giving players the freedom to create anything they wished for posed some challenges. “Players would request things that didn’t exist or phrase their wishes in ways we hadn’t anticipated,” Laksh said. “We’ve been continuously improving the system to better interpret what players actually want. The player reactions have been amazing. Since they have the freedom to build, they experiment with all sorts of things and genuinely enjoy the experience.”

Today, players can use Wish Master’s Basic, Pro, and 4D generation options to create all manner of objects. Laksh and the team have big plans for Wish Master, including a new AI model for outfit generation, a build mode, and a player-versus-player mode. During early access, players generated over 160,000 objects using 4D generation. Laksh said the team began to see a trend: “Players who engage in 4D generation have shown a 64% increase in play time in Wish Master on average.”

How 4D Generation Works

During the beta, we’re focusing on in-experience creation. We launched with two schemas to test the system: Car-5, a five-part, multi-mesh car composed of a body and four wheels, and Body-1, which can generate any single-mesh object. We’re actively working toward our end vision of an open-vocabulary schema system that will allow the creation of any schema, and we expect to soon add schemas covering thousands of real-world objects.
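
To give a sense of what a schema captures, the sketch below shows one way the two launch schemas could be written down as data. The Luau table, field names, and the part name used for Body-1 are illustrative simplifications, not the internal schema format.

```lua
-- Illustrative only: a simplified, hypothetical schema descriptor.
-- Field names are examples, not the internal Cube schema format.
local Schemas = {
	["Car-5"] = {
		-- Five separate meshes so the wheels can articulate independently of the body.
		requiredParts = {
			"body",
			"front left wheel", "front right wheel",
			"rear left wheel", "rear right wheel",
		},
		behavior = "DriveableCar", -- behavior scripts attached after generation
	},
	["Body-1"] = {
		-- A single mesh that can represent any standalone object (part name illustrative).
		requiredParts = { "body" },
		behavior = nil,
	},
}

return Schemas
```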

On the back end, schemas define the required mesh outputs to make a model functional. For example, the Car-5 schema ensures that generated cars have five separate MeshParts, named body, front left wheel, front right wheel, rear left wheel, and rear right wheel. Now, instead of a single mesh with no articulation, users can generate a car model with wheels that spin and turn based on scripts. 
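
As an illustration of what that structure enables on the scripting side, the Luau sketch below (simplified example code, not the shipped 4D generation pipeline) validates a generated Model against the Car-5 part names and connects each wheel to the body with a motorized HingeConstraint so scripts can spin it:

```lua
-- Illustrative sketch: validate a generated car Model against Car-5 and
-- give each wheel a motorized hinge so scripts can spin it.
-- Assumes the generated parts are unanchored so physics can move them.
local REQUIRED_PARTS = {
	"body",
	"front left wheel", "front right wheel",
	"rear left wheel", "rear right wheel",
}

-- Returns the named MeshParts if the model satisfies Car-5, otherwise nil.
local function validateCar5(model: Model): {[string]: MeshPart}?
	local parts = {}
	for _, name in ipairs(REQUIRED_PARTS) do
		local part = model:FindFirstChild(name)
		if not (part and part:IsA("MeshPart")) then
			return nil -- schema not satisfied
		end
		parts[name] = part
	end
	return parts
end

-- Connects one wheel to the body with a motorized hinge aligned to the axle.
local function attachWheel(body: MeshPart, wheel: MeshPart)
	local bodyAttachment = Instance.new("Attachment")
	bodyAttachment.Parent = body
	bodyAttachment.WorldCFrame = wheel.CFrame -- place the hinge at the wheel's center
	bodyAttachment.WorldAxis = body.CFrame.RightVector -- spin about the left-right axis

	local wheelAttachment = Instance.new("Attachment")
	wheelAttachment.Parent = wheel
	wheelAttachment.WorldAxis = body.CFrame.RightVector

	local hinge = Instance.new("HingeConstraint")
	hinge.Attachment0 = bodyAttachment
	hinge.Attachment1 = wheelAttachment
	hinge.ActuatorType = Enum.ActuatorType.Motor
	hinge.MotorMaxTorque = 10000
	hinge.Parent = body
end

-- Validate and rig a generated model (illustrative usage).
local function rigGeneratedCar(generatedModel: Model): boolean
	local parts = validateCar5(generatedModel)
	if not parts then
		return false
	end
	for _, name in ipairs(REQUIRED_PARTS) do
		if name ~= "body" then
			attachWheel(parts["body"], parts[name])
		end
	end
	return true
end
```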

After geometry is generated with a schema, the next step is to add functionality. Because users generate objects on the fly, the scripts that attach behavior must adapt to a range of shapes and sizes. 4D generation therefore includes a step that retargets scripts to the unique dimensions of the generated object so that its parts work as expected.
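
What that retargeting step amounts to can be pictured with a simplified sketch: instead of hard-coding values, the drive logic measures the generated car at runtime and derives its tuning from the actual wheel size and mass. The helper and constants below are illustrative, not the production retargeting logic.

```lua
-- Illustrative retargeting sketch: derive drive parameters from the generated
-- geometry instead of assuming a fixed size.
local function retargetDriveParameters(parts: {[string]: MeshPart})
	local body = parts["body"]
	local wheel = parts["front left wheel"]

	-- The wheel radius is whatever the prompt produced, so measure it
	-- (assumes the axle runs along the wheel's X axis).
	local wheelRadius = math.min(wheel.Size.Y, wheel.Size.Z) / 2

	-- Scale speed and torque with wheel size and the body's mass so a tiny kart
	-- and a huge truck both feel drivable.
	local targetSpeed = 60 -- studs per second (example value)
	local maxAngularVelocity = targetSpeed / wheelRadius
	local motorMaxTorque = body.AssemblyMass * wheelRadius * 200

	return {
		maxAngularVelocity = maxAngularVelocity,
		motorMaxTorque = motorMaxTorque,
	}
end
```

Each time a player generates a new car, values like these would be fed into the wheel motors from the previous sketch, so the same behavior script works whether the prompt produced a go-kart or a monster truck.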

What’s Next

We envision a future where creators and users will be able to generate any type of 4D object and behavior they want, based on any schema. We’re excited to put this technology in users’ hands very soon.

We’re exploring other avenues to evolve creation, including our most ambitious research project: real-time dreaming. We think there’s an enormous opportunity for world models to enable new types of experiences on Roblox. Roblox CEO David Baszucki recently shared an early demonstration of our real-time dreaming research on Roblox Today. 

The next frontier of creation on Roblox is the continued AI-driven evolution of our platform, which will allow creators to generate immersive environments, iterate, debug, and collaborate with their teams, all through natural language prompts. If someone can dream it, they should be able to bring it to life.