How to use DataStore2 - Data Store caching and data loss prevention


#41

os.time is consistent across all servers, so that’s not it. Other than that, I’m not sure since I don’t have your code.


#42

I’m getting this warning in my output:
Request was throttled. Try sending fewer requests. Key =
And now my main script seems to be running abnormally slowly (although I don’t really use the module in there; that’s handled in another script).

Any idea why this is happening, if it’s even connected?


#43

How many unique keys are you using for DataStore2? That warning is the throttle for ordered data stores.


#44

Quick question: would I store multiple inventories under one variable, or would I make multiple stores?

The logic I have in place:

local Item1Store = DataStore2("Item1Store", Plr)
local Item2Store = DataStore2("Item2Store", Plr)
local Item3Store = DataStore2("Item3Store", Plr)

#45

That seems like it’s up to preference rather than anything related to DataStore2. Neither approach will use more memory.
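
For illustration, here is a rough sketch of the single-store approach. The store name "InventoryStore", the item names, and the module’s location are placeholders rather than anything from this thread:

local DataStore2 = require(game.ServerScriptService.DataStore2) -- adjust to wherever the module lives

local defaultInventory = {
    Item1 = 0,
    Item2 = 0,
    Item3 = 0,
}

game.Players.PlayerAdded:Connect(function(player)
    -- One store holding every item in a single table
    local inventoryStore = DataStore2("InventoryStore", player)
    local inventory = inventoryStore:Get(defaultInventory)

    -- Update one item and save the whole table back
    inventory.Item1 = inventory.Item1 + 1
    inventoryStore:Set(inventory)
end)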


#46

An update has been pushed with a new feature: combined data stores. Documentation has been added. Let me know if you run into any issues, whether or not you use them.
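
As a rough sketch of what combining looks like (the master key "DATA" and the sub-keys here are just placeholders; see the documentation for exact usage):

local DataStore2 = require(game.ServerScriptService.DataStore2)

-- Combine must be called before any of the listed keys are used
DataStore2.Combine("DATA", "coins", "inventory")

game.Players.PlayerAdded:Connect(function(player)
    local coinStore = DataStore2("coins", player)
    local inventoryStore = DataStore2("inventory", player)

    -- Both stores now save under the single "DATA" key, so they count
    -- as one unique key against the throttle
    print(coinStore:Get(0), inventoryStore:Get({}))
end)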


#47

Sorry for taking so long to reply. I had another script somewhere using an OrderedDataStore, but it was updating every 60 seconds. Is that too often when this module is saving data as well?


#48

You should be fine, but I’d recommend using only one unique key in that case (or using the new combined data store feature, for projects that either don’t have existing data or where you’re willing to port the old data).


#49

Did someone say tutoriaaal? Alright, give me a week. Keep an eye on dutchdeveloper on YouTube for the tutorial.


#50

I believe you missed a .OnServerEvent here :yum:


#51

You’re right, fixing.


#53

I just made a tutorial on this module:


#54

You must be a superhero or something…


#55

Does this module set itself up correctly for garbage collection? (i.e. clearing data when a player leaves, removing additional stored data that isn’t used, or providing a garbage collection function to optimize memory)


#56

Yes.


#57

Does it also work correctly with, or have a preference for, storing all the data in a single table instead of spreading it across multiple different saves? Or does it automatically store the “coinstore” (or whatever) as part of the save dictionary? Does it also support data that is not up to date with the current data (replacing it with defaults or with new calculations)?


#58

Saving all the data in one table is the point of combined data stores, which I have a tutorial for in the post. By default, all keys will be split among different data stores; however, it is recommended to use combined data stores so that you aren’t throttled.

I’m not sure how this would be a built-in feature, but the closest thing it has is GetTable, which works the same as Get, except that if the saved table is missing a key that the default you give GetTable has, it’ll add that key.
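
For example (the store name "Stats" and the default values here are made up):

local DataStore2 = require(game.ServerScriptService.DataStore2)

local defaultStats = {
    Coins = 0,
    Gems = 0, -- imagine this key was added in a later update
}

game.Players.PlayerAdded:Connect(function(player)
    local statsStore = DataStore2("Stats", player)

    -- Works like Get, except any key in defaultStats that is missing
    -- from the saved table gets filled in from the default
    local stats = statsStore:GetTable(defaultStats)
    print(stats.Coins, stats.Gems)
end)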


#59

You should add support for default tables containing functions, so that non-existing elements (or saves) are appended on load (or the function is called with the saved data in order to calculate the value).
An example being:

local default = {
    Cash = 1000,
    Exp = 0,
    Level = function(t) return math.floor((t.Exp or 0) / 100) end,
}

-- later
local data = module:LoadWithDefault(name, key, default) -- or whatever the equivalent is

and if the old table had no Level, it would add the key with the computed value.


#60

I’m not sure what you’re suggesting here, can you provide an example?

You added an example, I see. This seems like something you’d either just write your own wrapper for or use BeforeInitialGet, the latter being more extensible.
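
As a rough sketch of the BeforeInitialGet route (the "Stats" store and the Exp/Level keys just mirror the example above; they aren’t built into the module):

local DataStore2 = require(game.ServerScriptService.DataStore2)

game.Players.PlayerAdded:Connect(function(player)
    local statsStore = DataStore2("Stats", player)

    -- Runs once on the first :Get() of the session, so old saves can be
    -- upgraded, e.g. computing a Level that older data never stored
    statsStore:BeforeInitialGet(function(savedData)
        if savedData and savedData.Level == nil then
            savedData.Level = math.floor((savedData.Exp or 0) / 100)
        end
        return savedData
    end)

    local stats = statsStore:Get({ Cash = 1000, Exp = 0, Level = 0 })
    print(stats.Level)
end)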


#61

I just think it should’ve been implemented so that it only uses a different data store name when specified, or when the data size is too large. It could store that data in one dictionary, and it could also store it in separate JSON tables as a stream if the data limit is reached. This helps with managing space and also makes it easier to deal with (i.e. going from dss:GetDataStore(name) when making a new item to a single data store table, so that all new keys go into that table instead of into individual keys).