Hi Creators,
At this year’s RDC, we’re announcing major updates to Assistant. Earlier this year, we announced the open-source Studio MCP server, which lets you connect your preferred LLMs to Studio through the Model Context Protocol (MCP). Now, we’re building MCP support directly into Studio, with Studio natively acting as an MCP server and Assistant as an MCP client. This deep integration allows Assistant to better understand your experience and perform much more complex tasks, including, soon, orchestrating tasks with third-party tools.
Our goal is to boost your productivity by enabling Assistant to handle more complex, multi-step tasks from a single prompt. This is a step towards helping you spend less time on monotonous work and more time focusing on the fun part of creation. Skilled creators can offload repetitive workflows and bring their biggest ideas to life faster. New creators can use Assistant as a powerful learning partner that explains code, finds relevant documentation, and builds prototypes to help them quickly get familiar with Luau and Studio.
With this change, the model powering Assistant is becoming much more intelligent and aware of what’s in your experience through its data model. Assistant can now:
- Perform multi-step actions: Assistant can now perform multiple actions from a single prompt, turning a complex request into a simple, automated process. For example, when you ask it to build a starting point for a capture the flag game, it first breaks down your request into a sequence of smaller tasks, then works through the plan by automatically inserting assets, adding scripts, and building UI. It may ask for your preferences or feedback along the way to guide the outcome. This same power can be used to make modifications to your experience, such as refactoring scripts or restyling your UI elements.
- Better understand the context of your experience: Assistant can now search through your experience’s data model dynamically and, if needed, over multiple passes. This enables Assistant to make more accurate changes to your place and also to answer more complex questions about how an experience works.
You know what’s best for your creations, and we’re always building Assistant to make sure you stay in the driver’s seat.
More context-aware, complex responses and actions
Here are a few examples of what Assistant’s new capabilities enable. You can ask Assistant to add features to an experience, modify it, or build simple experiences from scratch.
Adding a feature to an experience: “Add a treasure counter to this game.”
Here we ask Assistant to add a treasure counter UI to the platformer template. It first looks through the game to understand how treasures are placed and counted, then adds a UI element and a script that counts the amount of treasure collected. With this generated script as a reference, you can add your own UI elements to track other collectibles or items relevant to gameplay.
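For illustration, here’s a minimal sketch of the kind of client script Assistant might generate for a treasure counter. It assumes treasures are tagged “Treasure” with CollectionService and are removed when collected; the names and structure are illustrative, and the actual generated code will depend on how your experience tracks pickups.

```lua
-- Minimal sketch: a client-side treasure counter (e.g. a LocalScript in StarterPlayerScripts).
-- Assumes each treasure part is tagged "Treasure" via CollectionService and is
-- destroyed when a player collects it. Illustrative only, not Assistant's exact output.

local CollectionService = game:GetService("CollectionService")
local Players = game:GetService("Players")

local player = Players.LocalPlayer
local playerGui = player:WaitForChild("PlayerGui")

-- Build a simple counter label in the top-left corner.
local screenGui = Instance.new("ScreenGui")
screenGui.Name = "TreasureCounter"
screenGui.Parent = playerGui

local label = Instance.new("TextLabel")
label.Size = UDim2.fromOffset(200, 40)
label.Position = UDim2.fromOffset(20, 20)
label.BackgroundTransparency = 0.3
label.TextScaled = true
label.Parent = screenGui

local total = #CollectionService:GetTagged("Treasure")
local collected = 0

local function updateLabel()
	label.Text = string.format("Treasure: %d / %d", collected, total)
end

-- Count a treasure as collected when its tagged part is removed from the game.
CollectionService:GetInstanceRemovedSignal("Treasure"):Connect(function()
	collected += 1
	updateLabel()
end)

updateLabel()
```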
Modifying an experience: “Make the leaderboard for this game cartoony.”
Another example is asking Assistant to change elements in an experience. Here, we asked it to restyle a leaderboard from Roblox’s default look to a more cartoony one. Assistant searched the data model to find the relevant scripts and UI elements, then updated the leaderboard’s UI to match.
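As a rough sketch of what such a restyle might involve, the snippet below applies rounded corners, outlines, and a playful font to an existing leaderboard GUI. The GUI name (LeaderboardGui) and hierarchy are assumptions for illustration; Assistant would target whatever it actually finds in your data model.

```lua
-- Minimal sketch of a "cartoony" restyle pass over an existing leaderboard ScreenGui.
-- "LeaderboardGui" is a hypothetical name; your experience's UI will differ.

local Players = game:GetService("Players")

local playerGui = Players.LocalPlayer:WaitForChild("PlayerGui")
local leaderboard = playerGui:WaitForChild("LeaderboardGui") -- hypothetical name

for _, descendant in leaderboard:GetDescendants() do
	if descendant:IsA("Frame") or descendant:IsA("TextLabel") then
		-- Rounded corners and a thick outline read as "cartoony".
		local corner = Instance.new("UICorner")
		corner.CornerRadius = UDim.new(0, 12)
		corner.Parent = descendant

		local stroke = Instance.new("UIStroke")
		stroke.Thickness = 3
		stroke.Color = Color3.fromRGB(40, 40, 40)
		stroke.Parent = descendant
	end

	if descendant:IsA("TextLabel") then
		descendant.Font = Enum.Font.FredokaOne -- playful rounded font
		descendant.TextColor3 = Color3.fromRGB(255, 255, 255)
	end

	if descendant:IsA("Frame") then
		descendant.BackgroundColor3 = Color3.fromRGB(255, 170, 60)
	end
end
```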
Accessing the new Assistant
As we move Studio over to the new UI, you can find the ‘Assistant’ icon in Studio’s mezzanine bar at the top of the screen.
We’ve started rolling out these updates and will gradually release them to everyone. We want to be thoughtful with this launch and make sure there are no issues before it’s broadly available.
Coming Soon
Later this year, we’ll introduce support for connecting Assistant to third-party MCP servers, allowing it to complete tasks that go beyond Studio. This will let you bring powerful third-party tools for UI design, skybox generation, and 3D creation directly into your workflow via natural language prompts. For example, you could design a UI in Figma and have Assistant generate a corresponding GUI in Studio, or generate and import skyboxes from Blockade Labs.
We’ll keep you posted on progress and we’re aiming to launch these new MCP capabilities by the end of the year!