Hi Creators,
We’re excited to announce more updates to Mesh Generation and introduce new Screenshot and MCP tools!
More Control over Mesh Generation
This update lets you generate a textured 3D mesh from a text prompt in seconds using the GenerateModelAsync API in Studio. Whether you need a specific prop for a scene or a placeholder asset for a new game mechanic, Assistant can now help you create and iterate faster.
How to Generate
There are two ways to start creating:
- Direct Commands: Use the /generate command followed by your prompt (e.g., /generate a weathered stone well).
- Conversational Requests: Ask Assistant directly, for example, “make a futuristic crate.”
Improved Workflows
We’ve improved workflow efficiency by adding:
- Batch Creation: Users can now generate multiple meshes simultaneously, ideal for bulk creation.
- Reusability: A single generated asset can be instantly applied to multiple locations with the click of a button.
- Multitasking Support: Users can continue with other tasks while Assistant is generating a mesh, minimizing downtime.

Generate an oak tree, birch tree, and pine tree, then make a forest out of them
Granular Control
This update introduces more precise control over the generated meshes:
- Bounding Boxes: Use a Part in your workspace as a bounding box. Select a part before prompting, and Assistant will use its size and location as input, ensuring the generated mesh fits within the defined space.
- Max Triangle Count: Define a maximum triangle count for the returned model. Lower values result in more faceted, low-poly generations. This is an optional input with a default of 10,000.
- For smaller, simpler objects, or if you are going for a low-poly aesthetic, we recommend a lower triangle count (100s to 1K).
- For larger or more detailed objects, you can use the default or explore reducing it based on your overall scene complexity. This doc has helpful guidelines for optimizing for performance.
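The triangle-budget guidance above can be sketched as a small helper. This is a minimal illustration, not part of any Roblox API; the size categories and exact numbers are assumptions drawn from the rule of thumb stated above.

```python
# Hypothetical helper (not a Roblox API): picks a max triangle count
# following the guidance above. Size categories and values are
# illustrative assumptions.
DEFAULT_MAX_TRIANGLES = 10_000  # the optional input's default, per the docs above

def suggest_max_triangles(size_category: str, low_poly: bool = False) -> int:
    """Return a triangle budget for a mesh-generation request."""
    if low_poly or size_category == "small":
        return 500  # low hundreds to ~1K for simple or low-poly props
    if size_category == "medium":
        return 5_000  # leave headroom below the default
    return DEFAULT_MAX_TRIANGLES  # large/detailed objects: use the default

print(suggest_max_triangles("small"))
print(suggest_max_triangles("large"))
```

In practice you would tune these numbers against your overall scene complexity rather than hard-coding categories.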

Generate a tent and a campfire with bounding boxes (video sped up for demo purposes)
New Tools in Assistant and MCP Server
We’re continuing to expand the capabilities available through Assistant and the Studio MCP Server. This update adds the following tools, available both in Studio and to external AI clients like Claude, Cursor, and others:
- insert_from_creator_store: Search for and insert models from the Creator Store directly into your experience. This lets AI agents pull in community-created assets, plugins, and models without manual browsing.
- generate_mesh: Generate a textured 3D mesh from a text prompt using AI. The same mesh generation capabilities available in Assistant are now accessible through any connected MCP client.
- generate_material: Generate custom material variants from a text prompt.
These tools are available today through Assistant and the built-in Studio MCP Server.
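Under the hood, an MCP client invokes tools like these with a JSON-RPC 2.0 `tools/call` request, as defined by the Model Context Protocol specification. The sketch below shows that message shape for `generate_mesh`; the argument names (`prompt`, `max_triangles`) are assumptions for illustration, so check the Studio MCP Server's published tool schema for the actual parameters.

```python
import json

# Sketch of the JSON-RPC 2.0 request an MCP client sends to invoke a tool.
# "tools/call" and the params shape come from the MCP spec; the argument
# names here ("prompt", "max_triangles") are illustrative assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "generate_mesh",
        "arguments": {
            "prompt": "a weathered stone well",
            "max_triangles": 2000,
        },
    },
}

print(json.dumps(request, indent=2))
```

Clients like Claude or Cursor construct this message for you; the point is that any MCP-capable client can reach the same tools without Studio-specific glue code.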
Screenshot Tool
We’re adding a Screenshot tool to the MCP Server, also available for creators using their own API keys in Assistant. Combined with the playtest automation and virtual input tools from our last update, AI agents can now see what’s happening in your experience, not just read code and logs.
- screen_capture: Captures the current Studio viewport in play mode and returns the image data. Agents can use this to verify visual changes, check scene layout, or inform their next steps.
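Per the MCP specification, image content in a tool result is returned as base64-encoded data with a MIME type. Here is a minimal sketch of handling a screen_capture result item on the client side; the sample payload is fabricated for illustration.

```python
import base64

# MCP image content items look like:
#   {"type": "image", "data": "<base64>", "mimeType": "image/png"}
# This decodes such an item back to raw image bytes.
def decode_image_content(item: dict) -> bytes:
    if item.get("type") != "image":
        raise ValueError("expected an image content item")
    return base64.b64decode(item["data"])

# Fabricated sample payload standing in for a real screen_capture result.
sample = {
    "type": "image",
    "mimeType": "image/png",
    "data": base64.b64encode(b"\x89PNG...fake...").decode("ascii"),
}

png_bytes = decode_image_content(sample)
print(len(png_bytes), "bytes")
```

An agent would typically feed the decoded image back into its vision model to verify a change before taking its next action.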
OpenGameEval Update
Earlier this year we open-sourced OpenGameEval, a benchmark for evaluating how well AI models perform on real game development tasks in Roblox. We’re now expanding it with debug-focused evals.
- 30 new evaluations built from 15 base scenarios, each with 1–3 injected bug variants.
- Tests a model’s ability to identify and resolve issues within an existing game, a core skill for AI-assisted development workflows.
- Available now in the OpenGameEval repository.
What’s Next
We are actively developing new tools to generate and iterate on 3D assets, including advanced procedural object generation and editing. We are also improving our prompt processing to ensure a more accurate understanding of your requirements before beginning the generation process.
Multiple chats and chat history are also coming soon, making it easier to keep workstreams organized, preserve context across tasks, and pick up where you left off.
Stay tuned for more updates. Please let us know if you run into any issues or have suggestions for new features you’d find useful.
Happy creating!
