Hmmmm just thinking out loud here… the data limit for a single entry in a data store is 4,194,304 characters after JSONEncode.
Could you just store all the levels metadata in a few giant entries to a meta-datastore and, on server load and when request limits allow, just update your in-memory copy of all of those giant entries?
So you just have two datastores, neither of which are ordered:
- Your metadata store, which is really just a holder for huge arrays of level metadata.
- Your level data store, which is where you store anything that you’d need to actually load a single level (like block layout, etc.). This one would be a traditional map from level GUIDs → data.
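On the level data store side, loading a single level would just be a plain keyed read. A minimal sketch (the store name, helper name, and the fields in the example comment are all assumptions, not anything you've settled on):

```lua
local DataStoreService = game:GetService("DataStoreService")
local leveldatastore = DataStoreService:GetDataStore("LevelData") -- hypothetical name

-- Load everything needed to actually build one level, keyed by its GUID
local function LoadLevel(levelGuid)
	local ok, levelData = pcall(function()
		return leveldatastore:GetAsync(levelGuid)
	end)
	if not ok or levelData == nil then
		return nil -- request failed or level doesn't exist
	end
	return levelData -- e.g. { Blocks = {...}, SpawnPoint = {...} }
end
```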
Something like:
Example metadata datastore code
```lua
local DataStoreService = game:GetService("DataStoreService")
local metadatastore = DataStoreService:GetDataStore("LevelMetadata")

local function RefreshMetadata()
	local numPages = metadatastore:GetAsync("NUM_PAGES") or 0
	local metadatas = {}
	-- Collect all pages from the metadata store into the metadatas array
	local totalSize = 0
	for i = 1, numPages do
		-- Each page is { NumLevels = n, Levels = {...} }; a table mixing an array
		-- part with dictionary keys wouldn't survive JSONEncode, so the level list
		-- lives under its own key
		local metadataPage = metadatastore:GetAsync("page" .. tostring(i)) -- dynamic paging
		totalSize += metadataPage.NumLevels -- number of levels in this page
		table.insert(metadatas, metadataPage)
	end
	-- Merge all pages into one massive array (or don't do this and keep them separate?)
	local merged = table.create(totalSize)
	local lookup = table.create(totalSize)
	local mergedIdx = 1
	for _, page in ipairs(metadatas) do
		for _, levelMeta in ipairs(page.Levels) do
			merged[mergedIdx] = levelMeta
			-- For O(1) lookup of metadata from level ID:
			lookup[levelMeta.LevelId] = mergedIdx
			mergedIdx += 1
		end
	end
	return merged, lookup
end
```
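The writer side of the "dynamic paging" would look something like this: append a new level's metadata to the newest page, and roll over to a fresh page when it fills up. This is only a sketch under assumptions (the page-capacity constant and all names are made up, and it ignores the race where two servers roll over simultaneously):

```lua
local MAX_LEVELS_PER_PAGE = 500 -- tune so a page stays well under the per-entry size limit

local function AppendMetadata(levelMeta)
	local numPages = metadatastore:GetAsync("NUM_PAGES") or 0
	if numPages == 0 then
		-- No pages yet: create the first one
		metadatastore:SetAsync("page1", { NumLevels = 1, Levels = { levelMeta } })
		metadatastore:SetAsync("NUM_PAGES", 1)
		return
	end
	local pageKey = "page" .. tostring(numPages)
	local pageFull = false
	-- UpdateAsync so concurrent servers don't clobber each other's appends
	metadatastore:UpdateAsync(pageKey, function(page)
		page = page or { NumLevels = 0, Levels = {} }
		if page.NumLevels >= MAX_LEVELS_PER_PAGE then
			pageFull = true
			return nil -- returning nil cancels the update, leaving the full page untouched
		end
		table.insert(page.Levels, levelMeta)
		page.NumLevels += 1
		return page
	end)
	if pageFull then
		-- Start the next page with this level and bump the page count
		metadatastore:SetAsync("page" .. tostring(numPages + 1),
			{ NumLevels = 1, Levels = { levelMeta } })
		metadatastore:SetAsync("NUM_PAGES", numPages + 1)
	end
end
```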
Pros:
- Can easily process/sort local table for things like rank and timestamp
- Fast lookup of metadata by ID
- Can keep actual level data in another, standard datastore.
- Can keep (UserId → List) info in another standard datastore to find levels by specific person
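The "process/sort local table" part is then just ordinary table work on the merged array, done once per refresh. A sketch (the Timestamp field name is an assumption):

```lua
-- Newest-first view for a "Recent" tab, rebuilt once per metadata refresh
local function SortByTimestamp(merged)
	local sorted = table.clone(merged)
	table.sort(sorted, function(a, b)
		return a.Timestamp > b.Timestamp
	end)
	return sorted
end
```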
Cons:
- Metadata datastore is going to be a complicated pain to maintain
- A whole lot of memory usage (like probably enough that it kills this idea)
- Sorting will probably be pretty slow, but you only have to do it once on refresh
Upon reflection, this is probably not workable. I think dealing with request limits would be easier.