The overall outcome is that I want to make a “render” system, pretty much a chunk system but without the chunks.
I’ve begun to make one, but I feel like I could make good use of searching algorithms, as I mentioned many months ago: Types of searching algorithms
@sircfenner made a very good visualisation showing how these algorithms work. I thought I could implement this in a similar way, using spatial partitioning so that only the parts near you get loaded.
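To illustrate the spatial partitioning idea, here is a rough sketch (in Python for clarity; the real thing would be Luau) of a uniform grid: parts are bucketed by integer cell coordinates, so a "what's near the player" query only inspects nearby cells instead of every part in the map. The cell size and the part/position shapes are just assumptions for the example.

```python
import math
from collections import defaultdict

CELL_SIZE = 64  # assumed cell width in studs; tune for your map


def cell_of(position):
    """Map an (x, y, z) position to its integer grid cell."""
    x, y, z = position
    return (math.floor(x / CELL_SIZE),
            math.floor(y / CELL_SIZE),
            math.floor(z / CELL_SIZE))


def build_grid(parts):
    """parts: list of (name, position) tuples -> dict of cell -> names."""
    grid = defaultdict(list)
    for name, pos in parts:
        grid[cell_of(pos)].append(name)
    return grid


def parts_near(grid, position, radius):
    """Collect parts from all cells overlapping a cube of the given radius."""
    cx, cy, cz = cell_of(position)
    reach = math.ceil(radius / CELL_SIZE)
    found = []
    for dx in range(-reach, reach + 1):
        for dy in range(-reach, reach + 1):
            for dz in range(-reach, reach + 1):
                found.extend(grid.get((cx + dx, cy + dy, cz + dz), ()))
    return found
```

The win is that the cost of a query depends on the radius, not on the total number of parts in the workspace.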
My thought was that you could have a key in an array (or something similar) representing the distance between the character and each part. When the player has moved far enough, it can loop outwards from the nearest part, up to whatever maximum loading distance you set.
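That "loop from the nearest part outwards" idea could look something like this (a Python sketch, not Roblox code; the names and data shapes are illustrative). The point is that once the parts are sorted by distance, you can stop as soon as one part is past the maximum loading distance, because every later part is even farther away:

```python
import math


def distance(a, b):
    """Euclidean distance between two (x, y, z) points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))


def visible_parts(parts, player_pos, max_distance):
    """parts: list of (name, position). Walk outwards from the nearest
    part and stop at the first one beyond max_distance."""
    ordered = sorted(parts, key=lambda p: distance(p[1], player_pos))
    result = []
    for name, pos in ordered:
        if distance(pos, player_pos) > max_distance:
            break  # everything after this is farther still
        result.append(name)
    return result
```

The sort costs O(n log n) each time it runs, so whether this beats a plain scan depends on how often the player triggers a re-check versus how many parts you have.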
I’m not sure whether this is possible, or whether it would actually be performant in practice, but I’d like to know your thoughts.
My current “rendering”/chunk system works as follows:

- When the game starts running, it loops through all descendants of workspace and records each BasePart’s current Transparency; all BaseParts are put into an array.
- A loop checks whether the player has moved far enough. If they have, it goes through the array of BaseParts to see whether the player is close enough to each part. If it is close enough, the part returns to its original Transparency; if not, its Transparency is set to 1.
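Here is a minimal sketch of that system’s logic (again in Python rather than Luau, with made-up thresholds and a plain dict standing in for the BaseParts): snapshot each part’s original Transparency at startup, then on each re-check restore nearby parts and hide the rest.

```python
import math

MOVE_THRESHOLD = 16     # assumed: how far the player must move before re-checking
RENDER_DISTANCE = 128   # assumed: parts farther than this are hidden


def snapshot_transparency(parts):
    """Record each part's original Transparency once, at startup.
    parts: dict of name -> {"position": (x, y, z), "transparency": float}."""
    return {name: info["transparency"] for name, info in parts.items()}


def update_render(parts, originals, player_pos):
    """Restore parts within RENDER_DISTANCE to their stored Transparency;
    set everything else fully transparent (1)."""
    for name, info in parts.items():
        if math.dist(info["position"], player_pos) <= RENDER_DISTANCE:
            info["transparency"] = originals[name]
        else:
            info["transparency"] = 1
```

In Roblox terms the snapshot would run over `workspace:GetDescendants()` filtered to BaseParts, and `update_render` would be gated on the player having moved more than `MOVE_THRESHOLD` since the last check.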