MemoryStore HashMap [Beta]

This is actually remarkable, really.

3 Likes

Is it possible for us to get a rough estimate on the per-hashmap rate limits? The docs say they're variable, but even something like a rough range would be useful.

They're higher than the previous ones, but how much higher? 2x, 3x, 10x, 50x? Anything more concrete would be useful information.

4 Likes

That's a reasonable ask :slight_smile:
In the spirit of full transparency: we will be dynamically tuning these limits throughout the Beta to give experiences maximum throughput while ensuring the service itself can robustly handle requests. We may document a ballpark estimate or the actual limit once we are confident we have settled on a value that is sustainable longer term. Until then, you can be assured that these limits will be higher than the documented limits for Sorted Maps and Queues.

Apologies, as I know this isn't a very clear answer, but I hope this provides some clarity on the new rate limits.

4 Likes

Perfect timing for release. I'll give this a try on the cross-server tournament system I'm designing later today.

2 Likes

Thanks! The transparency is much appreciated :slight_smile:

1 Like

Now that request limits have been dramatically increased with the release of HashMaps, are there any plans to increase the per-player quota limit of 1KB in the near future?

My upcoming experience will make heavy use of HashMaps for planned social features, and although the allotted 1KB is enough for basic usage or player metadata, it is not enough to accommodate sequences of player-generated strings (which cannot be reliably compressed).

I'm aware of the base allotment of 64KB on the experience level, but I am hesitant to push past the 1KB per-player quota and potentially run into scalability issues in the future.

Perhaps I could create a feature request outlining the specifics of my use case to assist engineers.

1 Like

We do not currently have plans to increase the per-player memory quota limit of 1KB.

For larger values, we recommend using Data Stores. Happy to hear more details on your feature request and use case. Feel free to outline it in this thread, or send me a direct message. Thank you for your feedback!
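For anyone following the Data Store recommendation above, one pattern is to route values by serialized size: a minimal sketch, assuming hypothetical store names (`PlayerSocial`, `PlayerSocialOverflow`) and a helper `saveSocialData` that I'm inventing for illustration:

```lua
-- Sketch: keep small values in a MemoryStore hash map, and fall back
-- to a Data Store when a payload would blow past the per-player quota.
local MemoryStoreService = game:GetService("MemoryStoreService")
local DataStoreService = game:GetService("DataStoreService")
local HttpService = game:GetService("HttpService")

local hashMap = MemoryStoreService:GetHashMap("PlayerSocial") -- placeholder name
local dataStore = DataStoreService:GetDataStore("PlayerSocialOverflow") -- placeholder name

local PER_PLAYER_BUDGET = 1024 -- the 1KB per-player memory quota discussed above

local function saveSocialData(userId: number, data: any)
	local serialized = HttpService:JSONEncode(data)
	if #serialized <= PER_PLAYER_BUDGET then
		-- Small enough for the quota: keep it in memory (expires in 1 hour here)
		hashMap:SetAsync(tostring(userId), data, 3600)
	else
		-- Too large for the per-player quota: persist it in a Data Store instead
		dataStore:SetAsync(tostring(userId), data)
	end
end
```

The 1KB threshold is only a rough guard; the actual quota accounting may differ, so treat this as a starting point rather than an exact budget check.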

I currently get the following error when using MemoryStoreHashMapPages:AdvanceToNextPageAsync():

MemoryStoreService: InternalError: There was an internal server error. API: HashMap.ListItems, Data Structure: SCHEDULED_EVENTS. - Studio
InternalError: There was an internal server error.
RequestId: 00-142f7a88b15b0b1bf859f53ddf2bd715-6f6d66d0a08177b8-00

It also seems to not be reporting accurately whether the pages object is finished,
as I'm using it on a hash map that has 0 entries.
https://github.com/kalrnlo/luau-libs/blob/92f24c92c0ed264e82afb2e5f9db952c96e89fb2/libs/Pages/pages.luau#L34
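Until the internal error surfaces more detail, a defensive wrapper can at least log the failure instead of crashing the caller. A sketch (the structure name `SCHEDULED_EVENTS` is taken from the error message above; everything else is assumed):

```lua
-- Sketch: guard AdvanceToNextPageAsync with pcall so an InternalError
-- can be logged and the script can continue.
local MemoryStoreService = game:GetService("MemoryStoreService")
local hashMap = MemoryStoreService:GetHashMap("SCHEDULED_EVENTS")

local pages = hashMap:ListItemsAsync(200)
-- On an empty hash map, IsFinished should already be true here,
-- so we never need to advance at all.
if not pages.IsFinished then
	local ok, err = pcall(function()
		pages:AdvanceToNextPageAsync()
	end)
	if not ok then
		warn("AdvanceToNextPageAsync failed:", err)
	end
end
```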

3 Likes

Hey there. Are there any plans to implement an Open Cloud API for this feature? The much more generous limits of hash maps make them perfect for data that requires a lot of keys but doesn't necessarily require sorting (or can be sorted after the data is retrieved, so it doesn't count as much against the rate limit).

Hey, I was reading through the documentation for the MemoryStoreService and I found this bit:


According to this, I should be using hashmaps when dealing with more than 1,000 keys at a time while also needing the ability to scan all items at once, but I believe the ListItemsAsync function has an "amount" limit of 200. I'm not really that experienced with memory stores, so I'm wondering how I could access all the items in the hashmap?

Hey there, you can list up to 200 items on one page, so you would need to paginate through the data returned by ListItemsAsync by advancing to the next page. The guide has some examples of how you can traverse all items in the hashmap.
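A minimal traversal sketch along those lines (the hash map name `MyHashMap` is a placeholder, and I'm assuming each page entry exposes `key` and `value` fields as in the guide):

```lua
-- Sketch: traverse every item in a hash map, 200 items per page.
local MemoryStoreService = game:GetService("MemoryStoreService")
local hashMap = MemoryStoreService:GetHashMap("MyHashMap") -- placeholder name

local pages = hashMap:ListItemsAsync(200)
while true do
	-- Process the current page of up to 200 entries
	for _, entry in pages:GetCurrentPage() do
		print(entry.key, entry.value)
	end
	if pages.IsFinished then
		break
	end
	-- Yields until the next page is available
	pages:AdvanceToNextPageAsync()
end
```

Each AdvanceToNextPageAsync call counts against the rate limits discussed earlier in the thread, so large scans should be spread out accordingly.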

Hello! That's a good question and a feature request that we do have in the pipeline. I can't commit to an ETA, but it is on the roadmap. Happy to update you once it's live.

Thanks, I just looked further into the documentation and I believe I found what you are talking about.

Could we have the internal errors actually exposed? I'd like to be able to make bug reports but can't, since saying an internal error is an internal error isn't useful at all; having the full error would be much better.

Hi, I'm slightly confused by the current documentation about MemoryStore rate limits.


This part states that there is a hard limit of 100,000 requests per minute to a single hash map. Is this still the case, or is the information in the screenshot above now incorrect?


The documentation right now seems conflicting; I just wanted some clarification.

Thanks

2 Likes

Good catch, the latter is accurate. I'll update the documentation to reflect the new limits in place. Thanks for reporting this!

1 Like

I'm confused about this as well. Do the request unit quotas exist on top of some partition quota? The partition limits page has very spotty documentation and doesn't go into specifics about what might hit the limits. Are there any hard numbers to go around? 100,000 per minute for queues? 1,000,000 per minute for hashmaps? If the "automatic partitioning" allows for a dynamic limit, how do we find these numbers? Will there be any API provided to estimate request count per data structure?

I'm assuming that by "the latter is accurate" you mean the limit for any single data structure (100k req/min) has been removed and replaced by a flexible limit depending on the data structure (so hashmaps have a theoretically unbounded req/min). However, did the overarching API call limit that scales with the player count also get removed?

For API request limits, there's a Request Unit quota that applies to all MemoryStoreService API calls, which is 1000 + 100 * [number of concurrent users] request units per minute. Additionally, the rate of requests to any single queue, sorted map, or hash map is limited to 100,000 request units per minute.

The 100k per-structure limit got removed. Did the 1000 + 100 * users experience-wide limit get removed too? My use case is a massive experience-wide hashmap that updates very frequently; this would be viably scalable with the removal of the 100k limit, but I want to know how frequently I can update this hashmap given the overarching request unit quota.
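For a back-of-envelope sense of that experience-wide quota, the quoted formula (1000 + 100 × concurrent users request units per minute) works out like this, assuming the formula still holds as documented:

```lua
-- Sketch: compute the experience-wide request unit budget from the
-- documented formula: 1000 + 100 * [number of concurrent users] per minute.
local function requestUnitQuota(concurrentUsers: number): number
	return 1000 + 100 * concurrentUsers
end

print(requestUnitQuota(50))  -- 50 players:  6000 request units per minute
print(requestUnitQuota(500)) -- 500 players: 51000 request units per minute
```

So even a very frequently updated hashmap scales with player count: at 500 concurrent users, 51,000 request units per minute is roughly 850 per second, shared across all MemoryStoreService calls in the experience.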