As a Roblox developer, it is currently impossible to know how much space is being occupied by a MemoryStore.
If Roblox is able to address this issue, it would improve my development experience because MemoryStore has a very small memory size quota, and I would like to know in advance how much space my data is taking up so I can avoid the quota overflow error (Code: 5).
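In the meantime, the best I can do is count the bytes myself. Below is a rough sketch of that workaround; the quota formula and the JSON-length approximation are my own assumptions, not anything official, and it only sees writes made from one server:

```lua
-- Rough, unofficial accounting for MemoryStore writes (workaround sketch).
-- Assumption: quota is roughly 64 KB + 1 KB per concurrent user, and the
-- JSON-encoded length approximates what a value costs against the quota.
local MemoryStoreService = game:GetService("MemoryStoreService")
local HttpService = game:GetService("HttpService")
local Players = game:GetService("Players")

local map = MemoryStoreService:GetSortedMap("ServerRegistry")
local bytesWritten = 0 -- only tracks writes from this server; ignores expirations

local function estimatedQuotaBytes()
	-- Assumed formula; also only counts players in this server, not the whole experience.
	return 64 * 1024 + #Players:GetPlayers() * 1024
end

local function trackedSet(key, value, expirationSeconds)
	local approxSize = #HttpService:JSONEncode(value) + #key
	if bytesWritten + approxSize > estimatedQuotaBytes() then
		warn("Skipping write; local estimate says we are near the MemoryStore quota")
		return false
	end
	local ok, err = pcall(function()
		map:SetAsync(key, value, expirationSeconds)
	end)
	if ok then
		bytesWritten += approxSize
	else
		warn("SetAsync failed:", err)
	end
	return ok
end
```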
That would be pretty useful for a game that uses this feature often, letting the developer log how much data they can still budget before running over their limit. I'm making a server system which uses MemoryStoreService and I can see how useful this feature would be.
Roblox lacks usage tracking for resources like MemoryStore and DataStore. It's impossible to know how close you are until you hit the limits and have to throw away work. That's an unacceptable expectation to work under with a system like this, especially when you intend to use these resources for more than one feature.
@PeZsmistic @Dogest23 @rogeriodec_games
Hi All, I support the Memory Store system at Roblox. I am happy to hear you are interested in using Memory Stores and I’d like to add functionality to support your needs. To better understand the pain points you are facing, I have a few questions:
Are you looking for a new API call that can tell you storage used by a specific data structure in real time, or are you looking for some kind of dashboard to track usage over time?
Would you care about specific data structures, or your total Memory Store usage across the universe?
One tricky aspect of Memory Stores is that your available data storage changes with the number of players in your game, which means it is quite dynamic. If your player count is going up rapidly, whatever storage limit you query may actually be higher by the time you act on it.
Is your trouble with Memory Store limits exclusive to storage size, or also related to API Request Limits?
Hey there, I know I'm not one of the people mentioned, but I believe I can give a bit of input here, as I recently had to create a custom budgeting API for my upcoming project, which has MemoryStoreService as a main gameplay element. I am currently networking all servers together into one big server via MemoryStoreService.
In my case it was mainly the global limits that I was running into; however, if possible, both should be exposed, as a more powerful and flexible API is key to success.
While this certainly makes sense, I personally don't believe it warrants the outright exclusion of this feature; a warning in the API's documentation stating that the reported storage limits may not be entirely accurate could cover it.
More often than not, for me, it's actually request-limit issues I'm running into rather than data size. However, as I previously mentioned, we should ideally be able to track any limit, for flexibility across all use cases.
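For what it's worth, my homemade budgeting layer boils down to something like the sketch below. The per-minute figure is my own assumption about the current request limit, hard-coded because there's no way to query it:

```lua
-- Sketch of a homemade request budget for MemoryStore calls.
-- Assumption: roughly 1000 + 100 * (concurrent players) request units per minute;
-- there is no official API to query this, so the figure is guesswork.
local Players = game:GetService("Players")

local windowStart = os.clock()
local requestsThisWindow = 0

local function requestsPerMinute()
	return 1000 + 100 * #Players:GetPlayers()
end

-- Call before every MemoryStore request; returns false if we should back off.
local function tryConsumeRequest()
	if os.clock() - windowStart >= 60 then
		windowStart = os.clock()
		requestsThisWindow = 0
	end
	if requestsThisWindow >= requestsPerMinute() then
		return false
	end
	requestsThisWindow += 1
	return true
end
```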
All of your questions are answered by a single idea:
Developers need to be able to build systems that do not self-destruct.
Currently, developers cannot write storage systems that know when to slow down. A good system incorporates backoffs and slowdowns to avoid hitting the limit and dropping requests, but we cannot write that code because there is no way to see what the limit is or how close we are to it.
The issue is made even worse by the fact that when developers' blindfolded systems inevitably implode, the service fails in the worst way possible. It often does not error; it just busy-yields forever and kills all threads. It is catastrophic, and developers not only have no way to prevent it, they have no way to debug it.
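For illustration, the most we can do today is a blind retry-with-backoff wrapper like the sketch below. It only reacts after a failure, which is exactly the problem; the function name and retry counts here are arbitrary, not anything official:

```lua
-- Blind exponential backoff around a MemoryStore call; reactive, not preventive,
-- because there is no way to ask how close we are to the limit beforehand.
local function callWithBackoff(maxAttempts, fn, ...)
	local delaySeconds = 1
	for attempt = 1, maxAttempts do
		local results = table.pack(pcall(fn, ...))
		if results[1] then
			return table.unpack(results, 2, results.n)
		end
		warn(("MemoryStore call failed (attempt %d): %s"):format(attempt, tostring(results[2])))
		if attempt < maxAttempts then
			task.wait(delaySeconds)
			delaySeconds *= 2 -- back off harder each time
		end
	end
	error("MemoryStore call failed after " .. maxAttempts .. " attempts")
end

-- Usage, assuming `map` is a MemoryStoreSortedMap obtained elsewhere:
-- local value = callWithBackoff(5, function()
--     return map:GetAsync("some-key")
-- end)
```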
@MrChickenRocket
We are putting a focus this year on improving the quota limits system for Memory Stores and Data Stores. Can you tell me what kind of "hidden" limits you are running into for Data Stores? The limits in general have two goals:
Ensure equitable access across all Universes trying to use the service
Prevent service instability or failures due to overall extreme usage or usage patterns triggering hotspots that could fail a single node.
We will do our best to be more transparent with the limits going forward, but some are meant more for service reliability than for Creators to code against, if that makes sense. For example, if a single server can handle X requests, we don't want a Creator aiming to send exactly X requests to one server to max it out.
Hi,
I understand your concern! Perhaps another way to look at it is giving us better tools to make good design decisions. We are not always looking to max things out, although I can see why you might think that's our concern here.
E.g., say we're dealing with a case where we have a leaderboard.
We don’t want to use all of our datastore budget for refreshing this leaderboard, we have other things we need to use that budget for. So the code might be something like: “if we have over half the budget, allow for trivial refreshes every 60 seconds, but if not, just wait around.”
It's also important because, not to be impolite about it, things get weird right now when you exceed the budget, especially with MemoryStores or GetKeysAsync. Currently it's pretty easy to get silent freezes on requests that take over a minute, and at worst we're looking at errors and data loss.
From a “user code” point of view, we don't EVER want to be dealing with that, so we'd rather throttle ourselves down than learn the hard way, via errors or freezes, that the request was not welcome.
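For Data Stores at least, GetRequestBudgetForRequestType lets us approximate that kind of gating today; the sketch below is roughly what I mean. The threshold is an arbitrary placeholder, because the total budget itself isn't exposed, and MemoryStore has no equivalent call at all:

```lua
-- Leaderboard refresh gated on remaining DataStore request budget.
-- The threshold is a guess; we can't implement "over half the budget"
-- literally because the total budget is not exposed.
local DataStoreService = game:GetService("DataStoreService")

local leaderboardStore = DataStoreService:GetOrderedDataStore("Leaderboard")
local BUDGET_THRESHOLD = 25 -- arbitrary placeholder, not an official number

local function refreshLeaderboard()
	local pages = leaderboardStore:GetSortedAsync(false, 50)
	return pages:GetCurrentPage()
end

task.spawn(function()
	while true do
		local remaining = DataStoreService:GetRequestBudgetForRequestType(
			Enum.DataStoreRequestType.GetSortedAsync
		)
		if remaining > BUDGET_THRESHOLD then
			local ok, result = pcall(refreshLeaderboard)
			if not ok then
				warn("Leaderboard refresh failed:", result)
			end
		end
		task.wait(60) -- trivial refresh cadence; otherwise just wait around
	end
end)
```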
FWIW: this post has the list of the missing budget requests for datastores: