I have no doubt that you can make large open worlds at this point on ROBLOX, assuming you use parts.
Support for smooth terrain has become minimal, and it is still extremely difficult and unoptimized to use on a large scale. Quite frankly, though, if you’re trying to make a realistic map, it’s the only viable option.
The memory usage for streaming when it comes to terrain is outrageous, and I’ve made posts on it before.
I would love to be able to customize the render distance. Specifically, developers should be able to set an allowed range, and players should get a setting to pick their own render distance within that range.
I really want to congratulate everyone on the StreamingEnabled team. I was losing hope for it a few years ago, but they really turned it around with an insane number of great features.
With that said, I would like to request something a little different…
Streaming still causes a few problems on low-end devices because of the streaming speed; sometimes content loads too slowly, and sometimes too much arrives at once. So I would request something that controls the rate at which new instances are replicated to the client.
Let me give you an example. With a really detailed map, streaming works perfectly, right? Almost. There is still a loading problem: if you teleport a player right into the middle of a different region, their device takes a massive FPS drop because of the huge number of parts being loaded at once. So my request would be a way to control replication speed, making instances replicate or render gradually.
This would be even better because it would give other instances time to be unloaded before the new ones load in!
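In the meantime, one partial mitigation for the teleport case is to pre-stream the destination before moving the character, using the existing `Player:RequestStreamAroundAsync()` API. A minimal server-side sketch (the function name and teleport flow are assumptions for illustration):

```lua
-- Server-side sketch: pre-stream the destination before teleporting a
-- character, so the client is not hit with every instance at once on arrival.
local function teleportWithPrestream(player: Player, destinationCFrame: CFrame)
	-- Ask the engine to stream content around the destination first.
	-- The second argument is a timeout in seconds; the call yields.
	pcall(function()
		player:RequestStreamAroundAsync(destinationCFrame.Position, 5)
	end)

	local character = player.Character
	if character then
		character:PivotTo(destinationCFrame)
	end
end
```

This doesn’t throttle replication speed as requested, but it at least moves the heavy load to before the camera is looking at the new region.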
Anyway, cheers to everyone on the Streaming team! I love you guys.
This is my biggest request as well! An API to create and stream regions at will would be awesome. Imagine using something like Region3 to define an area and then just calling a method to say, “hey, I need this specific region streamed in/out right now!”
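To make the request concrete, here is a sketch of what such an API could look like. To be clear, none of these methods exist today; `StreamRegionAsync` and `UnstreamRegion` are invented names purely for illustration:

```lua
-- Hypothetical API sketch -- these methods do NOT exist in the engine today.
local region = Region3.new(Vector3.new(0, 0, 0), Vector3.new(512, 256, 512))

-- Stream everything inside the region in for a specific player, yielding
-- until it has replicated:
workspace:StreamRegionAsync(region, player)

-- ...later, when the region is no longer needed, release it so the engine
-- can reclaim the memory:
workspace:UnstreamRegion(region, player)
```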
Streaming is great and all, but right now I don’t believe it’s fully ready, and I still don’t think I’ll be using it for a while. (One of my places CANNOT be joined because StreamingEnabled would crash the server almost instantly.)
I believe that for streaming to become something actually reliable and good, Roblox must stop treating it as some sort of magic feature where all we have to do is change a few values and a checkbox and let Roblox’s servers magically handle the rest. Personally, I would want to implement my own streaming solution that does not rely on the server streaming assets in and out 24/7. Instead, I would preload everything to local disk space when the player joins, and then stream in and out whatever assets the game requires, when needed, straight from the player’s device. I believe this could speed up any sort of streaming WAY more, since it would not be bound to whatever the player’s internet speed is (usually around the 3–50 MB/s range) and would instead use disk space that at WORST reads and writes around 200 MB/s. Most devices today (even budget/cheap ones) come with reasonably fast storage that could easily read and write large amounts of assets very quickly. I also understand that this idea has the flaw of one single VERY LONG loading time, so here are a few proposals that could go a decent way toward solving that problem:
Give us access to the player’s local storage. It does not have to be some crazy amount of space, just a dynamic partition dedicated to Roblox games for storing assets locally, and it does not have to be permanently occupied. Personally, I think some sort of “priority” system would help here: assets with higher priority would be the first added to local space, and everything with lower priority would follow a bit later. If there is not enough dedicated space to hold some of the lower-priority assets, Roblox could fall back to the regular streaming methods we have currently. As for security concerns, like players or exploiters stealing assets from these files: they can already do that with third-party software and scripts while simply having the game open, so yeah…
Let us make and use our own LODs, along with control over when and how those LODs are used. Right now, the only way to really use custom LODs is to constantly :Clone() and :Destroy() (or Debris:AddItem()) those instances. I know the result is roughly the same thing, but I don’t think Clone() and Destroy() were ever designed to handle actual, real LODs.
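For context, the Clone()/Destroy() workaround mentioned above looks something like this. It’s a client-side sketch under a few assumptions: the detailed model is named `Statue` in the workspace, a pre-made low-poly version named `StatueLOD` sits in ReplicatedStorage, and 256 studs is just a placeholder threshold:

```lua
-- Client-side sketch of the Clone()/Destroy() LOD workaround.
local RunService = game:GetService("RunService")
local ReplicatedStorage = game:GetService("ReplicatedStorage")

local LOD_DISTANCE = 256 -- studs; assumed threshold, tune per experience
local highDetail = workspace:WaitForChild("Statue") -- assumed detailed model
local lowTemplate = ReplicatedStorage:WaitForChild("StatueLOD") -- assumed low-poly version

local activeLow: Model? = nil

RunService.Heartbeat:Connect(function()
	local camera = workspace.CurrentCamera
	if not camera then return end
	local far = (highDetail:GetPivot().Position - camera.CFrame.Position).Magnitude > LOD_DISTANCE

	if far and not activeLow then
		-- Swap in the low-detail clone and stash the original locally
		-- (rather than Destroy) so it can be restored later.
		activeLow = lowTemplate:Clone()
		activeLow:PivotTo(highDetail:GetPivot())
		activeLow.Parent = workspace
		highDetail.Parent = ReplicatedStorage
	elseif not far and activeLow then
		highDetail.Parent = workspace
		activeLow:Destroy()
		activeLow = nil
	end
end)
```

Which illustrates the point: this is all manual bookkeeping per model, with none of the engine-side batching a real LOD system would provide.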
Let us preload assets to local space from other places. As an example of what I mean, say you have a game with two places: a starting place where you manage and create your characters, and a second place where most of the game actually happens. My idea is the ability to slowly load the second place’s assets (from high priority to low priority) from the first place while the player is going through menus, managing and creating their characters. Once they’ve finally selected a character to play as, they could join the second place without an ultra-long loading screen, since at least a decent portion of the assets were already loaded in the first place.
Give us access to more lighting properties, like the render distances of various assets, quality levels of shaders, etc. For example, it would be neat if I could render shadows much further away, with the distant shadows at a lower resolution, rather than just having no shadows far away. The same could apply to lighting: rather than distant lights not rendering at all, we could render them at a lower resolution.
Combine the lighting technologies into one bigger system rather than having them all be separate things that do slightly different jobs. Right now you either use Future lighting for a few important lights but waste performance on other lights that clearly don’t need that level of quality, or you use ShadowMap or Voxel instead and accept degraded quality everywhere. What if you could instead select which lighting methods individual instances and lights use? Maybe ShadowMap or Future could be some sort of instance, similar to PostEffects, like a “ShaderEffect” instance? With this sort of thing, it would also be neat to be able to remove all lighting features entirely and end up with a flat-color world. At first glance a “flat-color world” may sound boring and flat, but there are people who would want to make games with that sort of visual direction.
Let us also use our own textures and decals at different resolutions, working similarly to LODs but, of course, for textures. Right now we can’t really do that in any meaningful way to save video memory.
This might be quite far-fetched, but what about implementing technologies like FSR (FidelityFX Super Resolution), DLSS 2.0/3.0 (Deep Learning Super Sampling) and XeSS (Xe Super Sampling), or even some other alternative based on temporal anti-aliasing? While DLSS is less widely available (XeSS can actually run on non-Intel GPUs, just usually not as efficiently), technologies like FSR work on most GPUs and can easily improve performance while trying to retain a look close to how the game appears without upscaling.
I know these are a lot of big things to ask for, but I believe these sorts of features would greatly help developers and players have a better experience in each game while keeping said game performant, optimized, and maybe even GIGANTIC!
I would like to use streaming in my game, but I need the ability to stream to a location that is unknown to every client except the one being streamed to.
Currently, you can set the player’s replication focus, but that property can be read by all players. I would like a non-replicating property, an exclusively replicating property, or even a method to privately set the streaming focus.
Using the aggressive streaming memory mode, there are still issues where models continue to be streamed in on the client when they are well beyond the streaming radius and should not be shown. The screenshot is from a low-end device settings test where the streaming radius is set to only 128 studs. The structure of the level streams just fine, but the robot models are still streamed in far away when they should not be visible yet. On a PC with lots of spare RAM and video memory it’s not an issue, since the stream distance can be very high for that user. But for players on phones and tablets, where resources are much more limited, this odd behavior seems to defeat the purpose of streaming: it spends extra resources and bandwidth showing those robots in the distance even though they should not be visible until the player walks over there. None of those robots are set in any way to be visible everywhere; they just use StreamingEnabled’s defaults. I’ve never filed a bug report on it because I wasn’t sure whether this is the intended behavior of the current streaming model.
It would be nice to add streaming to Studio. In large games, Studio struggles to load the map, and it’s impossible for users on low-end machines to even enter the project.
Overall I feel it’s a cool addition, pretty nice for big games, and there’s time to keep improving this good feature.
The recent updates to Streaming have been great, and I feel it is currently in a pretty good state overall - but more explicit control over things both inside and outside of the streaming range would probably be my biggest request.
I’d like to share one of my current use cases of streaming that is a bit outside of the current intended usage, which I would love to see be officially supported (or at the least not harmed by future updates):
One of my game worlds is a large map with floating islands spread across it. This means there is a lot of empty space, but if I wanted to keep distant islands loaded under normal circumstances, I would need a massive streaming radius, which is detrimental if your map also contains high-density areas.
My current solution is a modest streaming radius with distant islands loaded in via RequestStreamAroundAsync(). Islands are divided into a few points each, which get requested from closest to furthest when the player first enters the islands portion of the world. This solution is actually working quite well at the moment (we target higher-end devices for our game, but even on relatively low-end hardware the distant scenery tends to stay loaded in). I am aware that these distant islands could get streamed out if memory called for it, which is fine.
So my request would be a more official, less hacky way of handling situations like the one I’ve laid out here. If the islands were only parts and models, I suppose I could use the persistent-per-player options to build a similar system, but they are made out of terrain as well, so RequestStreamAroundAsync() is currently our best option for getting the terrain to load in and stay at a decent quality level. Like others have said, manual regions for streaming in and out seem like the best solution to me.
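For anyone wanting to replicate the approach described above, a minimal server-side sketch follows. It assumes a `StreamPoints` folder of invisible anchor parts in the workspace marking each island’s request points; names and the timeout are placeholders:

```lua
-- Server-side sketch: request distant islands closest-to-furthest via
-- RequestStreamAroundAsync(), as described in the post above.
local streamPoints = workspace:WaitForChild("StreamPoints"):GetChildren()

local function streamIslandsFor(player: Player)
	local character = player.Character
	if not character then return end
	local origin = character:GetPivot().Position

	-- Sort points closest-to-furthest so nearby scenery arrives first.
	table.sort(streamPoints, function(a, b)
		return (a.Position - origin).Magnitude < (b.Position - origin).Magnitude
	end)

	for _, point in streamPoints do
		-- Each call yields up to the timeout; wrap in pcall since the
		-- request is best-effort and can fail without it being fatal.
		pcall(function()
			player:RequestStreamAroundAsync(point.Position, 2)
		end)
	end
end
```

Note that, as the post says, this is only a hint: the engine is still free to stream these areas back out under memory pressure.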
I would also ask that aggressive/opportunistic stream-out behavior not be forced on us until after such manual controls are given. If it were, I would have no way to achieve what I’m after short of an insane streaming radius (which, as others have stated, would mean spacing out all interiors and other zones to a huge degree for efficiency).
It would be good if CollectionService could pass information through with the tagged part. There are situations where I have to wait for a part to be streamed to the client, but that part also has some attached information I want to pass along (like its current state, as an example).
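One way to approximate this today is to pair CollectionService tags with attributes, since attributes replicate together with the instance. A client-side sketch, where the tag name `Door` and attribute name `State` are assumptions for illustration:

```lua
-- Client-side sketch: react to tagged parts as they stream in, and read
-- state shipped alongside them as attributes.
local CollectionService = game:GetService("CollectionService")

local function onDoorStreamedIn(door: Instance)
	-- Attributes replicate with the instance, so whatever state the server
	-- set via SetAttribute() is already readable here.
	print(door:GetFullName(), "state:", door:GetAttribute("State"))

	door:GetAttributeChangedSignal("State"):Connect(function()
		print(door:GetFullName(), "new state:", door:GetAttribute("State"))
	end)
end

-- Handle parts that are already streamed in, then future arrivals.
for _, door in CollectionService:GetTagged("Door") do
	onDoorStreamedIn(door)
end
CollectionService:GetInstanceAddedSignal("Door"):Connect(onDoorStreamedIn)
```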
A comprehensive tutorial is necessary, since Roblox has provided so many updates to streaming.
If Studio is aiming to become a low-code engine, you might need to spend more time finding a clever way to divide content for streaming automatically in most situations by default, like UE5’s Nanite, so that most programmers don’t need to think about rendering details.
If you are going to focus on streaming or LOD because of rendering performance issues, it might be a better choice to expose the render pipeline.
Recent additions to streaming have been amazing for creating experiences that run on lower-end devices. Through these new changes, my mobile players have gained 10–15 FPS in-game, which makes a massive difference in keeping them playing.
Something I wish was talked about more is how Roblox renders objects. Many engines don’t render objects that cannot be seen, or don’t load them until the player actually can see them. What I mean is: if I have a hallway with massive amounts of blocks, builds, etc. behind its walls, the game still renders them whether I can see them or not. This is a major issue for games based on sections; I don’t want my new players to get lower FPS just because the direction they happen to look in contains 70% of the map’s part count.
Surely the newly added features have helped reduce this, but only if your radius is set low enough, which can be a real deal-breaker when you combine it with guns and opening/closing doors that need to be loaded at all times for the best experience.
It’s possible to load/unload sections through custom chunking, but this also leads to massive FPS drops when you enter new areas. It isn’t that big of an issue, but it sometimes just doesn’t feel nice in-game. The only way to solve it is to mesh more and more things, but I cannot mesh entire hallway sections because, you know, they’re hallways, and I need to be able to edit them in Studio, not as a mesh.
Streaming layers would be interesting to see. For instance, having terrain on Layer1 which has a larger rendering range, buildings on Layer2 which has a smaller rendering range, and then finally furniture on Layer3 which has the smallest rendering range. Could be interesting to see and would remove the need to make a custom streaming system for furniture or other objects that don’t need to render at the range the rest of the map does. Furniture is a great example because players won’t be expecting to see couches, lamps, desks, etc. from a very far vantage point.
The changes are really nice. Porting my game has been relatively frictionless so far.
Here are some pain points which still exist:
I notice performance hitches, probably due to the density of objects in my game. Streaming should take into account the density of objects before sending, or perhaps distribute the creation of atomic models over multiple frames.
I think Folders should have the same streaming-related properties as Models. It’s common for builders to use Folders at the top levels to avoid selecting / dragging the whole Model around in Studio. Alternatively, there could exist a property on Models so they can be treated like Folders wrt Studio’s selection tools.
I’d like an option to prioritize unloading of minor details (bushes, debris, rocks, fences, campfires, particle-emitting parts) before important distant details (buildings, island terrain).
I’d like an API to check whether a Model is streamed in or streamed out for a particular player on the server. It will help me optimize RemoteEvents. Example: Player5’s character fires off a distant attack but said character is not streamed in for Player1, so using this API I could check & skip firing the “attack used” RemoteEvent to Player1 altogether.
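Absent such an API, one rough approximation is to filter the RemoteEvent by distance, on the theory that players far outside the streaming radius probably don’t have the attacker streamed in. A server-side sketch; `AttackUsed` and the radius constant are assumptions, and this is only a heuristic, not a real streamed-in check:

```lua
-- Server-side sketch: skip firing a cosmetic remote to players who are
-- far enough away that the event's source is unlikely to be streamed in.
local Players = game:GetService("Players")
local ReplicatedStorage = game:GetService("ReplicatedStorage")

local attackUsedRemote = ReplicatedStorage:WaitForChild("AttackUsed") -- assumed RemoteEvent
local STREAM_RADIUS = 1024 -- should roughly match your StreamingTargetRadius

local function fireAttackUsed(attackPosition: Vector3, attacker: Player)
	for _, player in Players:GetPlayers() do
		local character = player.Character
		if player ~= attacker and character then
			local distance = (character:GetPivot().Position - attackPosition).Magnitude
			if distance <= STREAM_RADIUS then
				attackUsedRemote:FireClient(player, attackPosition)
			end
		end
	end
end
```

A real per-player streamed-in query would of course be more accurate, since stream-out is driven by memory as well as distance.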
Sometimes while streaming, the LowMemory option is actually more performant than the newer Opportunistic option, because it takes more resources to stream things in over and over again than it does to keep them in memory.
I suggest a dynamic stream out behavior that somewhat combines both modes to have the best of both worlds.