Introducing MemoryStore - High Throughput, Low Latency Data Service!

Hi developers,

We’re thrilled to announce that MemoryStore - a brand new data service that offers access to in-memory data structures shared across all servers in an experience - is now available to everyone!

While DataStore is great for persistently saving data such as player profiles, we understand that many use cases need more frequent, ephemeral data access. For example, you may want to build a global marketplace with shared inventories across all servers, where the data needs to be updated frequently. Or you may want a skill-based matchmaking system so that players can enjoy more engaging competition - you'll need fast data access to manage the pool of players waiting to be matched.

With high throughput, low latency data access across all servers, the MemoryStore service now makes it easier to implement these features. Instead of raw data access, the service offers data structures such as sorted maps and queues so that you can integrate it faster. To learn more about the service, check out our Tutorial and API reference documentation.
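To give a feel for the API, here is a minimal sketch of both data structures. The map and queue names ("Inventory", "MatchmakingQueue") and the stored values are illustrative, not part of the service:

```lua
local MemoryStoreService = game:GetService("MemoryStoreService")

-- Sorted map: key-value pairs you can read back in sorted key order.
local inventory = MemoryStoreService:GetSortedMap("Inventory")
inventory:SetAsync("sword_123", { price = 100 }, 300) -- expires after 300 seconds

-- Queue: first-in first-out, useful for matchmaking pools.
local pool = MemoryStoreService:GetQueue("MatchmakingQueue")
pool:AddAsync("player_456", 300) -- item also expires after 300 seconds
```

Note that every write takes an expiration in seconds - there is no way to store an item without one.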

We cannot wait to see what you will build with MemoryStore! As always, your feedback is invaluable for us to keep improving the service. Feel free to post any questions you may have. Our team will monitor the posts regularly and get back to you.

Happy building!

The Roblox Developer Services Team


Q1. When should I use DataStore vs. MemoryStore?

  • MemoryStore is designed for fast but non-persistent data access. Data is removed when it expires, so use DataStore if you need data to persist. The relationship between DataStore and MemoryStore is similar to that between a hard drive and memory in a computer.

    To improve performance, you can use MemoryStore as a cache for some features, e.g. the global marketplace, but the cache should be kept in sync with DataStore for persistent storage. In such cases, DataStore should act as the source of truth for your data.
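    A hypothetical read-through cache along these lines (the store and map names are illustrative, and error handling is omitted for brevity):

    ```lua
    local MemoryStoreService = game:GetService("MemoryStoreService")
    local DataStoreService = game:GetService("DataStoreService")

    local cache = MemoryStoreService:GetSortedMap("MarketplaceCache")
    local store = DataStoreService:GetDataStore("Marketplace")

    local CACHE_TTL = 60 -- seconds; tune to how stale the data may safely be

    local function getListing(itemId)
        local cached = cache:GetAsync(itemId)
        if cached ~= nil then
            return cached
        end
        -- Cache miss: fall back to the source of truth and repopulate.
        local value = store:GetAsync(itemId)
        if value ~= nil then
            cache:SetAsync(itemId, value, CACHE_TTL)
        end
        return value
    end
    ```

    Writes would go to DataStore first and then update (or invalidate) the cached key, so the cache never diverges from the source of truth for longer than the TTL.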

Q2. How can I flush all my memory during testing or when recovering from failures?

  • There is no flush API in the initial release, as we feel it may cause more issues than benefits (we may consider adding one in a future release). All of your data has a mandatory expiration of no more than 30 days. We recommend setting the shortest expiration time your use case allows, so you can rely on it to recover your storage quota.

Q3. How can I test the service in Studio?

  • The service offers a separate Studio namespace, which is isolated from your production data and automatically used when you are developing in Studio. This means you can safely test in Studio without worrying about impacting your production data.

    Note that you will have to use the Developer Console to debug your production experiences, since the Studio tools will not be available there. Check out our Tutorial to learn more.

Q4. How do I know if I’m over the quota limit?

  • There’s no API to get quota usage info today; we are considering adding one as a future improvement. However, when you exceed either the size or the request quota, your API calls will be throttled and return an error message containing the details of your quota usage. You can use that info to debug your code.
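    Since throttled calls surface as errors, a sketch of how you might catch and log them (the map and key names are illustrative):

    ```lua
    local MemoryStoreService = game:GetService("MemoryStoreService")
    local map = MemoryStoreService:GetSortedMap("Example")

    local ok, err = pcall(function()
        map:SetAsync("someKey", "someValue", 60)
    end)
    if not ok then
        -- When the call was throttled, err includes the details of your
        -- quota usage; log it so you can debug the offending code path.
        warn("MemoryStore call failed:", err)
    end
    ```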

Update 1/14/2022

Good news! We’ve heard your feedback and increased the size limit for queue and sorted map items from 1KB to 32KB so you have more flexibility to design your data structure. Please note the total size quota stays the same.



Will this data persist if all servers are closed? Or is the expiry solely based on time?


I was honoured to participate in the MemoryStoreService beta and I can say without a doubt that it’s one of the best services I’ve used so far. Excited to see the use cases developers come up with! I personally use it for matchmaking* right now to replace using permanent DataStores with private server keys and match data for the value.

It feels nice and intuitive, usable almost like regular DataStores. If anyone’s been waiting for ephemeral DataStores, this is that feature. You can move some of your more temporary data over to MemoryStoreService, read it when you need it, and get rid of it fast. Temporary persistence helps reduce a bunch of the regular DataStore use cases I’d have originally, and I’ve even been thinking of using MemoryStore to coordinate with DataStores.

Not looking forward to all those private server DataStores I’ll have to clear after DataStoreV2 is no longer an experimental setting though… :laughing:

*Just wanted to clarify for matchmaking since I got a few PMs on that: my use case hasn’t gotten to the level of cross-server matchmaking yet, sorry! By “matchmaking” I’m more referring to the definition of data for a private server than the complete matchmaking suite.


It should be clearer within the DevHub that this will not replace the current DataStore system. Any developer could come across the article and immediately see benefits such as “low latency” without realizing it’s unreliable for saving “permanent” data.

Mostly just nitpicking though, this is overall a great update and I’m thrilled to see what comes out of this.


The lifetime of the data is independent from the lifetime of the servers. Even if all servers are closed the data will still be kept until the expiration time you set is reached.


How much can we store using this service, and what is the latency in which all servers will see the data - is it eventually consistent?

This seems like a great addition, great work!


That is amazing. Thank you for the quick answer.

Sky is the limit with this one I think.


This is a wonderful addition - two questions though:

  1. Is there (or will there be) a way for a script to listen for changes to particular keys?
  2. Is there any possibility we can access this from outside of RCC?

I hope that these can eventually be implemented; if and when that happens, we can remove a significant expense in our Amazon SQS workloads that we use for RCC<->API comms.


indeed (and the fun part is that roblox has no sky limits!)


I just finished my faction/clan system using regular data stores and MessagingService :sob:. Awesome update though, will deffo switch!


There are limits, which are detailed here: Memory Store

It appears that they have not opted for an eventually consistent model; instead they have gone for strong consistency (supported by locking values for a duration when they are read).


Does this fix those terrible memory leaks in the macOS version of Roblox? :thinking:


Ah, alright. I’ll have to keep the lock in mind when writing code using this service. I can imagine cases where synchronization between servers could be annoying, but that can be solved as long as requests are queued.


This refers to high-speed temporary data inside Roblox’s backend servers; memory leaks on macOS are a separate issue and should be reported as a bug.


Oh this is actually amazing, thank you so much! This will help a lot!


For the first one, we don’t have an OnUpdate equivalent for MemoryStoreService. I imagine it might suffer from the same technical problems that regular DataStores did, requiring polling. If the technical constraints that made OnUpdate fail don’t exist for MemoryStores, maybe we could even get a proper OnUpdate in the future. For now, though, I assume we’re still supposed to use MessagingService (or another workflow of our choosing) to communicate changes.
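For anyone wondering what that MessagingService workaround looks like, here’s a rough sketch: the writer broadcasts which key changed, and other servers re-fetch only when notified instead of polling. The topic, map, and key names are just examples:

```lua
local MemoryStoreService = game:GetService("MemoryStoreService")
local MessagingService = game:GetService("MessagingService")

local map = MemoryStoreService:GetSortedMap("SharedState")

-- Writer: update the key, then broadcast which key changed.
map:SetAsync("config", { mode = "event" }, 3600)
MessagingService:PublishAsync("MemoryStoreChanged", "config")

-- Readers on other servers: re-fetch only when notified.
MessagingService:SubscribeAsync("MemoryStoreChanged", function(message)
    local key = message.Data
    local value = map:GetAsync(key)
    print("Key updated:", key, value)
end)
```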


Really exciting to see this finally release! Looking at the documentation this thing looks super robust, but I can’t seem to find anything relating to the average ping between servers.

For context I’m currently hoping to create a walkable smooth transition portal / gate between servers - one where you could see the moving avatars of other players from other servers before you make the jump.


How fast actually is the service across servers on average? Is it around 1 second like MessagingService?

  1. Is there a limit on the expiration value? I can imagine people setting it to ridiculous numbers, which would keep items in memory virtually indefinitely if there isn’t one.

  2. Is there a way to clear all memory at once, for example to purge or delete an entire memory map/queue? Or are we required to loop through all values and delete them using RemoveAsync(), as mentioned in the tutorial? It seems highly inefficient to loop over each key just to delete it, especially for larger and more active experiences. Maybe a ClearAsync() function would be helpful, unless the described method has no negative performance implications that I’m not aware of.
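     For reference, the loop the tutorial describes looks roughly like this: page through a sorted map with GetRangeAsync and remove each key (the map name is illustrative, and per-call throttling is ignored for brevity):

     ```lua
     local MemoryStoreService = game:GetService("MemoryStoreService")

     local function clearSortedMap(map)
         while true do
             -- Read up to 100 items per page, in ascending key order.
             local items = map:GetRangeAsync(Enum.SortDirection.Ascending, 100)
             if #items == 0 then
                 break
             end
             for _, item in ipairs(items) do
                 map:RemoveAsync(item.key)
             end
         end
     end

     clearSortedMap(MemoryStoreService:GetSortedMap("Example"))
     ```

     Each page and each removal is a separate request against the quota, which is exactly why this feels expensive for larger experiences.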