How to implement the BEREZAA method (DS2 method)

Well, recently I have been learning about how DS2 saves data. I personally never really used DS2 for any of my games, because I just don’t like the way DS2 makes you “think” you’re making a new datastore, with a separate variable for every piece of content you put in.

Anyways, because of that I ended up making DS+, which works similarly, except that it has a MAIN version, while DS2 works purely on versioning. That means DS2 just gets the most recent indicator for a version and uses it to get the data; the “main” version for DS2 is technically just the newest backup. There have been some problems with DS2 in the past, like this, and they’re mostly Roblox’s fault. The good thing about having a main version, for example, is that Roblox can roll it back easily for you, without much thinking. There are problems with dealing with a main version too, but those can usually be overcome with some changes to your system.

So, why does DS2 call it “OrderedBackups”? Well, it keeps indicators, as in “versions”; the most recent might be 100, or 143, and it just finds the newest one. An example of how that would work would be this:

local DSS = game:GetService("DataStoreService")

local DataStore = DSS:GetDataStore("PlayerData/18472515")
local orderedDs = DSS:GetOrderedDataStore("PlayerData/18472515")

These would be the two datastores needed to get data: the regular DataStore holds the actual data, and the OrderedDataStore holds the version indicators.

The way data would be saved is something like this.

Let’s say you wanted to save data. You would save an indicator pointing to a datastore value, so the key for the new data might be os.time(), or in general a number higher than the most recent version; it depends on your system, of course. Berezaa says he uses os.time(), while DS2 uses incrementing version numbers, comparing which value is highest.

So your key might be “100”, or an os.time() value.
And the indicator in the ordered data store would be similar, having that unique, always-higher number as its value.

An example of code that puts this idea into practice, assuming we keep track of the version we got, would be:

local function Save(player, data, currentVersion)
    local orderedDs = DSS:GetOrderedDataStore(player.UserId.."/PlayerData")
    local dataStore = DSS:GetDataStore(player.UserId.."/PlayerData")

    local newVersion = currentVersion + 1

    -- Save the data first, then write the indicator that points at it
    dataStore:SetAsync(tostring(newVersion), data)
    orderedDs:SetAsync(tostring(newVersion), newVersion)
end

I won’t be doing pcall examples, but you should be wrapping these calls in pcall and retrying.
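Just as a rough sketch of what that could look like (the helper name and retry count here are my own, not part of berezaa’s method or DS2):

-- Hypothetical helper: retries a datastore call a few times before giving up
local function retryAsync(callback, maxRetries)
    maxRetries = maxRetries or 3
    local success, result
    for attempt = 1, maxRetries do
        success, result = pcall(callback)
        if success then
            return true, result
        end
        task.wait(attempt) -- back off a little longer each attempt
    end
    warn("DataStore call failed after "..maxRetries.." attempts: "..tostring(result))
    return false, result
end

You would then wrap the calls above, e.g. retryAsync(function() return dataStore:SetAsync(tostring(newVersion), data) end).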

Here’s also an example for getting data:

local function Get(player)
    local orderedDs = DSS:GetOrderedDataStore(player.UserId.."/PlayerData")
    local dataStore = DSS:GetDataStore(player.UserId.."/PlayerData")

    -- Newest versions first (descending), one indicator per page
    local ordered = orderedDs:GetSortedAsync(false, 1)
    local page = ordered:GetCurrentPage()

    local data
    local isBackup = false
    local tries = 0
    local versionGotten

    repeat
        for _, info in ipairs(page) do
            local getData = dataStore:GetAsync(tostring(info.value))
            if getData ~= nil then
                data = getData
                versionGotten = info.value
            else
                -- The newest indicator pointed at missing data, so whatever we find next is a backup
                isBackup = true
            end
            tries += 1
        end

        if data ~= nil or tries >= 4 or ordered.IsFinished then
            break
        end

        ordered:AdvanceToNextPageAsync()
        page = ordered:GetCurrentPage()
    until data or tries >= 4

    return data, versionGotten, isBackup
end

Reminder: I’m not doing pcalls here but you should.

Also, I do recommend having a main version; it helps, like I said.

Anyways, I hope that was helpful to anyone. Just a reminder: Roblox might send you requests to delete a certain player’s data, and if they do, you have to remove every entry from the ordered data store along with its corresponding data.
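A minimal sketch of what wiping everything for one player could look like (the function name is mine, and as always you would want pcalls and retries around every call):

local function WipeAllData(plr)
    local orderedDs = DSS:GetOrderedDataStore(plr.UserId.."/PlayerData")
    local dataStore = DSS:GetDataStore(plr.UserId.."/PlayerData")

    local pages = orderedDs:GetSortedAsync(false, 100)
    repeat
        for _, info in ipairs(pages:GetCurrentPage()) do
            -- Remove the data the indicator points at, then the indicator itself
            dataStore:RemoveAsync(tostring(info.value))
            orderedDs:RemoveAsync(info.key)
        end
        if pages.IsFinished then break end
        pages:AdvanceToNextPageAsync()
    until false
end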

You can also have a clean-up for older versions. Some people suffer with DS2 backups because they stay around… well, forever. So cleaning, let’s say, versions older than the newest 30 can be a good idea.


os.time had some reliability issues at one point. I’m not sure if it still does, but regardless, I’m pretty sure berezaa switched to an incremental system:


How would I go about deleting old backups after a while?


Yep. I found the original post where I believe he talked about this method; he mentioned using os.time() and I was also skeptical. I would go for an incremental system too.

I wonder how ProfileService doesn’t (if it doesn’t) have problems with that… since ProfileService uses the epoch time to lock its data. I guess they would be using some website to get that value.

Here:

local function DeleteBackupsOlderThan(plr, olderThan)
    -- olderThan = how many of the newest versions to keep (used as the page size, so at most 100)
    local orderedDs = DSS:GetOrderedDataStore(plr.UserId.."/PlayerData")
    local backupDs = DSS:GetDataStore(plr.UserId.."/PlayerData")

    -- Newest versions come first; the first page holds the versions we keep
    local ordered = orderedDs:GetSortedAsync(false, olderThan)
    local page = ordered:GetCurrentPage()
    local isFirstPage = true
    repeat
        if not isFirstPage then
            for _, info in ipairs(page) do
                orderedDs:RemoveAsync(info.key)
                backupDs:RemoveAsync(tostring(info.value))
            end
        end

        if ordered.IsFinished then break end

        ordered:AdvanceToNextPageAsync()
        page = ordered:GetCurrentPage()
        isFirstPage = false
    until false
    print("Deleted backups from", plr.UserId, "from Data Store: PlayerData")
end

I would recommend running this in a coroutine, and again, add the necessary pcalls and retries, especially for deleting data.
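For example (just a sketch, assuming plr is the Player in question and the function above is in scope):

-- Run the cleanup in the background so it doesn't block the rest of your save/leave logic
task.spawn(function()
    local ok, err = pcall(function()
        DeleteBackupsOlderThan(plr, 30) -- keep the newest 30 versions
    end)
    if not ok then
        warn("Backup cleanup failed for "..plr.UserId..": "..tostring(err))
    end
end)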


This seems quite expensive. My solution would be to automatically overwrite existing keys after some number of saves. It’s actually pretty straightforward to do this; you just switch the dataStore key from this:

tostring(currentVersion + 1)

to this:

local maxBackups = 10
tostring((currentVersion + 1) % maxBackups)

That way, after 10 saves the first key will be reused and the old data overwritten. You just need to make sure to use this key when fetching and saving information to the DataStore.
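A sketch of what the Save example from earlier might look like with that change (maxBackups is the only new name; everything else is carried over, and this is my reading of the idea rather than tested code):

local maxBackups = 10

local function Save(player, data, currentVersion)
    local orderedDs = DSS:GetOrderedDataStore(player.UserId.."/PlayerData")
    local dataStore = DSS:GetDataStore(player.UserId.."/PlayerData")

    local newVersion = currentVersion + 1

    -- The version number keeps incrementing, but the data key wraps around,
    -- so only the last maxBackups saves ever exist in the DataStore
    dataStore:SetAsync(tostring(newVersion % maxBackups), data)
    orderedDs:SetAsync(tostring(newVersion), newVersion)
end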


Does ds+ already do this or should I implement it alongside ds+?

Well, yes, it does that, and it has some parameters you can give to, you know, be accurate with the data; it’s recommended, for example, that you keep the version from the backup you got, etc.
Right now I’m working on a bit of a rewrite of some functions, so you might wanna wait a bit.
Just a warning: “onUpdate” is going bye-bye.

I think that example wouldn’t work, but yeah.

There’s no reason for it not to. By using the modulus operator you get the remainder when dividing by that number; for example, 12 % 10 is 2. So you just increment the version number, and then save to the key which is that version number mod the number of backups you want.

Here’s an example of what that would look like with maxBackups set to 5:

Version Number | OrderedDataStore Key | DataStore Key | Overwrites
0              | 0                    | 0             | -
1              | 1                    | 1             | -
2              | 2                    | 2             | -
3              | 3                    | 3             | -
4              | 4                    | 4             | -
5              | 5                    | 0             | version 0
6              | 6                    | 1             | version 1
7              | 7                    | 2             | version 2
8              | 8                    | 3             | version 3
9              | 9                    | 4             | version 4

As you can see, from version 5 onwards it starts to reuse existing keys, meaning they’ll naturally get overwritten with each new save.

Here’s one thing though: that also means these backups will be inaccurate, since you’re supposed to get the newest one…? Or am I wrong…? That would be an even bigger problem if you’re using only backups and not having a main version. Anyway, wiping the older data is actually not that expensive; DataStore+ got a function for that yesterday. You can also have a maximum amount of versions that can be wiped before it stops doing so.

Not keeping these older backups around in the first place should be what you do; it’s actually not even that expensive, from my testing.

You’re still saving to the OrderedDataStore with the incremental key, never its modulus. So no matter what, the newest key in the ODS will always refer to the most recent data in the DataStore.
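As a rough sketch, the only change on the fetching side (taking the Get example from earlier, with maxBackups assumed to be in scope) would be the key used for GetAsync:

-- info.value is still the incremental version number from the OrderedDataStore;
-- the actual data lives under that number mod maxBackups
local getData = dataStore:GetAsync(tostring(info.value % maxBackups))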

It may not be that expensive, but when you are already reaching the limits of your request budget then it becomes a problem.

Given the outcome is the same, using fewer requests only serves to be beneficial in my opinion.
