Last year at RDC, we announced an ambitious project to power the creation of immersive 3D objects and scenes in Roblox. Today, we are excited to launch Cube 3D, a 1.8B-parameter foundation model for 3D creation trained on 1.5M 3D assets, along with the beta of its first capability: text-to-3D mesh generation, also known as the Mesh Generation API. Roblox Cube, our core generative AI system, will underpin many of the AI tools we develop in the years to come, including future scene-generation tools. Learn more about the technical details of our model (GitHub) (Technical Report).
Expanding Creative Possibilities with 3D Generation
Since launching Assistant to everyone using Studio, we’ve seen creators make experiences more efficiently with personalized help, script and texture generation, and more. Now we’re ready to take things a step further with 3D generation. Using our Cube 3D model, we’re providing new tools to make 3D creation more efficient. Think of generative AI as a copilot that helps you quickly iterate on prototypes, concept art, and props, so you can focus on the creative, meaningful side of world-building. The creative decisions are always yours; generative AI is simply there when you want it.
Beyond aiding your own creation, the Mesh Generation API also lets your users create personalized items in-experience in real time, unlocking higher engagement. In feedback sessions with creators, novel use cases included letting a user customize a room in their own style or generate a custom reward at the end of a quest. In-experience generative AI opens the door to a new wave of creativity for millions of Roblox users, making gameplay even more personalized and interactive.
Currently, there are two ways to leverage 3D generation:
- In-Experience APIs: In experiences where the Mesh Generation API is enabled, users can quickly generate unique in-experience items and props with a simple text prompt. For example, they can describe an object like “brown leather moto jacket” and see it come to life.
- Assistant in Studio: With Assistant in Studio, you can create a mesh-and-texture object in seconds with the “/generate” prompt. For example, if you are building a race track scene, you could type “/generate a motorcycle” and have the object created in seconds, ready to be placed into your experience.
The Mesh Generation API takes text input and generates a single 3D mesh and texture object. You and your users can bring 3D objects to life in seconds with just a short text prompt.
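To give a concrete sense of how an experience might call this, here is a minimal server-side sketch. It is illustrative only and assumes the GenerationService interface described in the Creator Hub documentation (GenerateMeshAsync and LoadGeneratedMeshAsync); the option fields and return values shown here are assumptions, so treat the linked docs as the source of truth.

```lua
-- Server Script sketch: generate a MeshPart from a text prompt for a player.
-- Illustrative only; option fields and return values are assumptions, so
-- check the Mesh Generation API docs for the exact GenerationService usage.
local GenerationService = game:GetService("GenerationService")
local Players = game:GetService("Players")

local function generateProp(player: Player, prompt: string): MeshPart?
	local ok, result = pcall(function()
		-- Assumed: GenerateMeshAsync returns a generation id that can then be
		-- turned into a MeshPart with LoadGeneratedMeshAsync.
		local generationId = GenerationService:GenerateMeshAsync(
			{ Prompt = prompt }, -- e.g. "brown leather moto jacket"
			player,
			{} -- extra generation options (assumed optional)
		)
		return GenerationService:LoadGeneratedMeshAsync(generationId)
	end)

	if ok and result then
		result.Parent = workspace -- place the generated prop in the world
		return result
	end

	warn("Mesh generation failed:", result)
	return nil
end

Players.PlayerAdded:Connect(function(player)
	generateProp(player, "pink sunglasses")
end)
```

Because generation is asynchronous and can fail (for example, if a prompt is moderated or a rate limit is hit), wrapping the calls in pcall keeps the experience responsive.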
The demo below illustrates how you can allow your users to generate a single object like pink sunglasses within an experience:
We built this Cube 3D Play Experience for you to test out generative creation. In this experience, all the assets except the buildings are built using Cube 3D.
We also created a Mesh Generation Template for creators as a reference API implementation. You can go to the experience details page and click the “…” menu to edit it in Studio.
How to Enable the Mesh Generation API
For the full API documentation, see: Mesh Generation API | Documentation - Roblox Creator Hub
Enabling Mesh Generation API in-experience:
To enable this feature, you need to turn on the Editable Mesh and Editable Image APIs: in Studio, go to File > Game Settings > Security and enable the corresponding permissions.
Enabling Mesh Generation API for Assistant in Studio:
Important Safety Measures
We facilitate creation with safety at the forefront. All 3D generated outputs are proactively moderated by Roblox’s AI safety systems to ensure the content complies with our Community Standards. Our safety tools can surface any policy violations quickly and help determine what is safe and appropriate to publish in an experience.
For 3D object generation in experiences, developers will not be held responsible for potentially violative output as long as they have not actively attempted to violate our policies; this includes keeping generated output visible only to the user who generated it. When using Assistant, creators remain responsible for all of their creations, including 3D generated output, and for how they are used in their experience. To avoid consequences for violating Roblox’s policies, creators should ensure all experience components align with our Community Standards.
Open-Sourcing the Cube 3D Foundational Model
We’re open-sourcing Cube 3D for research and academic use so anyone in the industry can experiment with it, fine-tune it, or train it on their own data to suit their needs. We are making the model weights and inference code available under an OpenRAIL license. This resource is accessible outside of Roblox through GitHub / Hugging Face.
What’s Next
Later this year, Cube 3D will accept not only text inputs but also images. It will also extend to scene generation and understanding, going beyond placing objects into a scene to understanding the context and relationships between objects. With this understanding of objects and scenes, we’ll be able to serve users the experiences they’re most interested in and augment scenes by adding objects in a way that is consistent with the rest of the scene. For example, in an experience with a forest scene, you could ask to replace all the lush green leaves on the trees with fall foliage to indicate the changing season. These generative tools react to your requests, helping you rapidly create, adapt, and scale your experiences.
Share Your Feedback
We’re excited to see what you and your users create using this beta for the Mesh Generation API. We expect the output of the Cube 3D model to get better over time as we continually work to improve output quality and add more ways to control mesh resolution.
Please use this form to share your planned generative AI use cases, which additional features you’d find most beneficial, and any additional thoughts that could help shape the future of 3D generation. Your feedback will directly contribute to refining and enhancing our models.
FAQs
Do 3D generated assets persist? Will they end up in a user’s inventory?
- Currently, assets generated in-experience do not persist outside of the experience, are not added to a user’s inventory and will not be listed in Marketplace. However, assets generated in Studio via Assistant are added to a creator’s inventory and accessible via Studio’s Toolbox.
Are there any limits on the generative creation capabilities of the model?
- We expect the Cube 3D model’s capabilities to improve over time; however, at launch there are some limitations we want to acknowledge:
- Developers and their users will only be able to generate single objects. We are working on making both part and scene generation available next.
- Generated objects may not come out at the desired scale and will require manual resizing to reach the desired size (see the sketch below).
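Resizing a generated object is straightforward with the standard MeshPart.Size property. The minimal sketch below (the helper name is hypothetical) uniformly scales a part so its largest dimension matches a target size in studs, keeping the mesh proportions intact.

```lua
-- Rescale a MeshPart so its largest dimension matches a target size in studs.
-- Uniform scaling preserves the mesh's proportions.
local function scaleToFit(meshPart: MeshPart, targetSize: number)
	local size = meshPart.Size
	local largest = math.max(size.X, size.Y, size.Z)
	if largest > 0 then
		meshPart.Size = size * (targetSize / largest)
	end
end

-- Example: make the generated prop roughly 4 studs across its longest axis.
-- scaleToFit(generatedMeshPart, 4)
```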
Is there any cost associated with using 3D generative creation APIs?
- There is no cost. However, during this beta release, each experience is limited to 5 generations per minute. When this limit is reached, an error is returned and the developer can surface a customized message to the user (see the sketch below). Usage limits are intentionally low while the beta gets started; in the future, this rate limit will be adjusted to account for concurrent users.
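One way to honor the limit gracefully is to wrap the generation call and substitute your own message when it fails. This sketch reuses the assumed GenerationService calls from the earlier example; the helper and friendly message are hypothetical, and since the exact error returned on rate limiting is not specified here, any failure is treated as a try-again-later condition.

```lua
-- Sketch: surface a developer-chosen message when generation fails
-- (for example, when the per-experience rate limit is reached).
local GenerationService = game:GetService("GenerationService")

local FRIENDLY_MESSAGE = "The workshop is busy right now. Please try again in a minute."

local function tryGenerate(player: Player, prompt: string): (MeshPart?, string?)
	local ok, result = pcall(function()
		local generationId = GenerationService:GenerateMeshAsync({ Prompt = prompt }, player, {})
		return GenerationService:LoadGeneratedMeshAsync(generationId)
	end)
	if ok and result then
		return result, nil
	end
	-- Rate limit or other failure: hand back the custom message to display.
	return nil, FRIENDLY_MESSAGE
end
```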
Where does the Cube 3D model training data come from?
- The model for 3D generation is trained on 1.5M 3D assets drawn from a combination of licensed and publicly available datasets, as well as free assets available in our Creator Store. It does not include experience data.
How is Roblox ensuring that developer data is protected from cloning / IP infringement?
- We have a responsibility to build tools that reduce the barrier to creation while also respecting the intellectual property rights of all creators. The creator data used to train the Cube 3D model is part of our AI data sharing program, which allows creators to control their preferences with respect to how their data is used. We also implement output filters to reduce the likelihood that Cube 3D generates content that is similar to copyrighted works. Finally, Rights Manager allows creators to report content they believe is copyright infringing and file a removal request. We are continually refining our approach to minimize any IP-infringing content that may be generated from our AI models.
How might artists and experience developers benefit from Roblox’s generative AI tools?
- At Roblox, we aim to provide artists and content creators with new tools that expand and augment their capabilities. For example, the mesh generation tools introduced today may help creators rapidly iterate on ideas and explore more concepts (i.e., generate initial versions of assets for concepting and ideation) or quickly populate experiences with set dressing and background objects, allowing them to focus their skills on the most important assets for their experiences.
- We also believe in using AI tools to expand access to a wider community of potential creators who have unique ideas for experiences and stories to tell. Overall, we believe AI-based tools will give both novice creators and highly skilled artists new opportunities to focus on their creative vision. The unique perspective, storytelling ability, and sense of aesthetics of our creators are essential for crafting the most compelling and engaging content on the Roblox platform.