Thanks for this feedback. We’ll look at expanding SocialService to allow inviting a group of players, though it’s not something we’re actively working on. We appreciate the suggestion! For social interaction more broadly, we’re also thinking about how to enable durable groups and the ability to join communities. We’ll be moving the ‘collaboration’ use cases for groups to Creator Hub over time, but these changes will not replace the social use cases for groups.
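For context, here is a minimal sketch of what SocialService supports today: a single invite prompt driven by the platform’s friend-picker UI, rather than a programmatic group invite (this is the existing API, not the requested feature; Roblox services are only available inside an experience, so this runs in a LocalScript):

```lua
-- LocalScript: today's invite flow opens one platform picker at a time.
-- A programmatic "invite this group of players" variant, as requested
-- above, does not exist yet.
local SocialService = game:GetService("SocialService")
local Players = game:GetService("Players")

local localPlayer = Players.LocalPlayer

-- Check whether this player is allowed to send invites at all;
-- wrapped in pcall because the Async call can error.
local ok, canInvite = pcall(function()
	return SocialService:CanSendGameInviteAsync(localPlayer)
end)

if ok and canInvite then
	-- Opens the platform invite UI for the local player.
	SocialService:PromptGameInvite(localPlayer)
end
```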
Based on community feedback, many creators avoid adopting newer features because those features perform poorly on low-end devices. For now, we are focusing on improving mobile performance, fidelity, and scalability so that our higher-end features are available to more players. As we close this gap, we will transition to adding more features at the high end, like some of those you mention.
Many of our creators are already making amazing, high-fidelity experiences, like Frontlines. We want to see more experiences that approach this level of quality, and we want to continue adding functionality to the platform that makes it easier and easier to build large, realistic worlds. Two years ago at RDC, David showed an image of a 50,000-person stadium as an example of the type of experience we’d like to support, and we haven’t lost sight of that goal. We want to enable limitless scalability and fidelity long-term.
In terms of specifics, this year we will be working on significantly improving performance, especially on lower-end devices (better client frame rates, improved memory efficiency, etc.) so that higher-fidelity experiences can reach everyone on Roblox. We’re also focused on avatar performance (such as layered clothing performance) and on pushing the limits of larger experiences. If you haven’t turned on streaming, I encourage you to do so. We will continue to improve this system, and long term it is what will get us to truly massive worlds.
Yes. We are in final performance testing and plan to release this feature soon!
Yes, that is directionally where we want to go in the long run. We’d like to close the loop with reporters when their report is reviewed and closed, and provide status information. We plan to share as much information as possible on the reasons behind our decisions. We have not solidified our timeline for this yet but will share when we do.
As part of our new fluid dynamics system, we are launching hydrodynamics later this year. Similar to our aerodynamics launch last year, this will improve the behavior of objects in water and allow for more lifelike simulation and interaction.
Thank you for sharing your excitement about the new Studio UI. We are planning to release the first Studio UI and ribbon update in beta in early Q2. We have no plans to release a multi-view viewport in Studio, but the docking improvements we released last November allow for more flexibility in the UI layout.
This is still coming, but we are taking a step back to look at this in the context of what we call our “next-generation programming model.” We’d like to take everything we’ve learned to improve the programming model so that simple things (like data binding) are trivial and more advanced things (like streaming) are easy, seamless, or free. As we do that, we’ll enable physics scripts, but also other types of scripts that need to run at different rates, like audio scripts. Obviously this is important to get right, and we are still working out all the details.
Given this, we don’t have an exact timeline, but we are working on it!
Thanks for your question. Ads are a big area of investment for us, and we want to build a product that works really well for advertisers, publishers and users. This year, we are providing more tools to advertisers (including creators) so you can improve the performance of your campaigns.
In particular, we are investing in better ad targeting for eligible audiences so you can better direct your campaigns, improving placements of sponsored ads on the Home page, and testing the video format for Immersive Ads.
At this point, in-game UGC is limited to geometry and does not support code. So if I’m understanding your question correctly, this scenario shouldn’t be possible or at least would be out of policy (injecting Lua code with API calls into the experience).
We are actively working on enabling voice (and voice moderation) for languages beyond English. We expect to deliver the first set of these languages and thus expansions into new countries in the first half of this year.
Since we previewed decorators a few years back, we’ve taken a step back to think about how we can support these in a more general framework. We are currently gathering feedback and working on a plan for a unified, scalable, and flexible system that works for everything from smaller objects like grass to larger ones like forests. Given other priorities this year, we don’t anticipate these improvements arriving until next year.
Thank you very much for the feedback - we hear you loud and clear. We are committed to communicating more frequently. First, we aim to publish a vision and strategy post about discovery products (Home, Search, Discover, Matchmaking, and Notifications) in February, which will kick off a series of related deep-dive blog posts and dialogues with our developer community. Second, we are committed to offering more Search & Discovery focused AMAs, similar to this session.
We’re making improvements to the Marketplace, both to its UI and to search and discovery, for an improved shopping experience. As we make these updates, we will communicate with creators so you can better understand how items are ranked.
Thank you for the question. We’re looking at further customization options and how to build a much more robust procedural placement/set dressing system, but we have no plans to announce at this time.