You have two options for saving player purchases:
Return PurchaseGranted immediately and put the product save into a queue
Save the product immediately and wait for the save to complete before returning PurchaseGranted
The former is problematic because, if a server shutdown or a similar situation causes the queued save to never be processed, the player is shorted ROBUX for something that was never saved to their inventory. The latter is problematic because players mass-purchasing products can slow down purchasing (if they use up all requests) and make users wait to use their purchase (a bad user experience), and malicious users can also DoS the server’s DataStore by continually purchasing cheap products, depriving the server of requests. A malicious DoS attack may seem infeasible, but for only 400 ROBUX a minute someone can use up all requests for a full 10-player server that sells a 5-ROBUX developer product.
The first option’s weakness really can’t be worked around, but the second one’s can. We should get an extra DataStore request every time a purchase is made in-game so that these issues never happen. Another, perhaps even more valuable, benefit is that it ensures we always have enough requests to save player purchases, leading to a better buying experience for players across ROBLOX.
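For context, here is a minimal sketch of the second option, where the purchase is saved before PurchaseGranted is returned (the data store name and key layout are only illustrative):

local MarketplaceService = game:GetService("MarketplaceService")
local DataStoreService = game:GetService("DataStoreService")
local purchaseStore = DataStoreService:GetDataStore("PlayerPurchases") -- placeholder name

MarketplaceService.ProcessReceipt = function(receiptInfo)
	-- Record the purchase before telling ROBLOX it was granted, so the
	-- player can never be charged for something that was never saved.
	local key = receiptInfo.PlayerId .. "_" .. receiptInfo.PurchaseId
	local ok = pcall(function()
		purchaseStore:SetAsync(key, receiptInfo.ProductId)
	end)
	if not ok then
		-- The save failed (throttled, out of requests, or a service issue);
		-- ask ROBLOX to retry the receipt later instead of granting it now.
		return Enum.ProductPurchaseDecision.NotProcessedYet
	end
	return Enum.ProductPurchaseDecision.PurchaseGranted
end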
Wouldn’t this extra request be subject to a race condition with other DataStore requests, rendering the solution somewhat useless?
I understand there should be proper queueing of other requests, but can’t you slow that queue rate to ensure you have enough? I think some way of getting the number of remaining requests would be more useful if you’re genuinely getting people spending 400 per minute (doesn’t sound like a bad income to me); that way you can reject some purchases. Either that or put in your own flood control for the people that are spamming the purchase.
How would you tell the difference between the malicious user and someone that genuinely wants to buy that many products?
I’m all for extra DataStore requests, but this solution has some issues in its current state.
That depends on how it’s used. ROBLOX (and most other environments, really) isn’t truly asynchronous; it only processes things “in parallel” when the current thread yields. If I get the extra request as soon as ProcessReceipt fires, and I communicate with my DataStore interface before doing any yielding, I can pause the saving of other requests until the purchase has been processed. With this implementation there’d be no race condition.
No. Player purchases are uncontrolled and unlimited. There’s always the possibility of players making more purchases than a limited number of requests allows.
With the proposed solution you’d have no need to. This only sounds like an issue with your suggestion to put in flood control.
So you’re proposing a priority system for DataStore requests. AFAIK the DataStore requests all go into a queue to be processed, as the DataStore has no idea what you are saving and shouldn’t care.
Example of a race condition, assuming the above:
3 DataStore requests remain
4 requests in queue
ProcessReceipt is fired; the extra request brings the budget to 4, but there are now 5 requests in the queue, the last being the product save
The queue is processed until the budget runs out and the product save fails
No. There’s no need for DS to be aware of where the request came from – this is completely in the hands of the developer.
Purchases should not be queued. They should be processed immediately to avoid issues like the ones mentioned in the OP (the server shutting down while a purchase is still sitting in the queue, etc). Here’s how it’d go down if done correctly (a rough sketch of the developer-side pausing follows the steps):
0 DS requests remain
4 requests in queue
ProcessReceipt is fired; +1 request; the DS queue is paused by the developer
The player purchase is saved; -1 DS request; the DS queue is resumed by the developer
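A minimal sketch of what that developer-side pausing could look like (the SaveQueue module and all of its names are hypothetical, not an engine feature); ProcessReceipt would call SaveQueue:Pause() before its first yield and SaveQueue:Resume() once the purchase is saved:

-- Hypothetical developer-maintained save queue that can be paused
-- while a purchase is being processed.
local SaveQueue = {}
SaveQueue.paused = false
SaveQueue.pending = {} -- list of { store = ..., key = ..., value = ... }

function SaveQueue:Pause()
	self.paused = true
end

function SaveQueue:Resume()
	self.paused = false
end

function SaveQueue:Push(store, key, value)
	table.insert(self.pending, { store = store, key = key, value = value })
end

-- Worker loop: drains the queue whenever it isn't paused.
spawn(function()
	while true do
		while not SaveQueue.paused and #SaveQueue.pending > 0 do
			local item = table.remove(SaveQueue.pending, 1)
			pcall(function()
				item.store:SetAsync(item.key, item.value)
			end)
		end
		wait(1)
	end
end)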
I think there may have been a misunderstanding. DataStore requests can queue up on Roblox’s side, which can lead to the race condition I describe.
To exaggerate, before you say that the servers are super fast: imagine n DataStore requests in flight with only n-1 requests remaining in the budget. Your purchase will still fail if n is large, because of how the throttling works.
Requests aren’t queued unless there is budget available for them – the limits exist to prevent the queue from overflowing in the first place. There aren’t any issues once your request makes it into the queue.
In which case, any in-flight request may beat your purchase request to the queue, or the budget increase may arrive later than your purchase request. Either way, it’s still a race condition.
I’m not saying a priority system would be bad; it would mean we could assign higher priority to things like purchases and lower priority to, say, an avatar’s current costume.
Lua only switches between threads when one of them yields or finishes. ProcessReceipt would not run until all in-flight DS access requests had been dispatched into the queue, and once ProcessReceipt began, no more DS access requests would occur until ProcessReceipt yields. If the developer pauses their DataStore interface before ProcessReceipt yields (which is very easy to do, even unknowingly), there is zero chance of a conflict.
This is quite complicated. The first thing is that there are different request budgets for each request type, so we would probably need to add to all of the budgets; this isn’t too much of an issue. The second thing is that there is a per-key throttle implemented on the game server, which means a 6-second cooldown from when you last set a given key.
Try running this code in the server console of a game you own for an example:
-- Fires ten SetAsync calls at the same key; after the first write goes
-- through, the rest are held back by the 6-second per-key cooldown.
local d = game:GetService("DataStoreService"):GetDataStore("TestDataStore")
for i = 1, 10 do
	spawn(function()
		d:SetAsync("B", 1)
		print(i) -- prints as each write actually completes
	end)
end
print("All save operations spawned")
As a side note, this means you really need to save things in parallel in your OnClose callback; if some of your keys are throttled and you try to save them all in sequence, you could easily run out of time.
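For illustration, a rough sketch of saving keys in parallel from OnClose, assuming a hypothetical pendingSaves table of key/value pairs and a dataStore defined elsewhere:

game.OnClose = function()
	-- pendingSaves and dataStore are assumed to exist elsewhere in the script.
	local remaining = 0
	for key, value in pairs(pendingSaves) do
		remaining = remaining + 1
		spawn(function()
			-- Each key saves on its own thread, so one throttled key
			-- doesn't stall the rest.
			pcall(function()
				dataStore:SetAsync(key, value)
			end)
			remaining = remaining - 1
		end)
	end
	-- Keep the callback alive until every save has finished or failed.
	while remaining > 0 do
		wait()
	end
end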
We can’t remove this throttle for every key when a purchase is made, but there might be other things we can do to help with this problem. The first thing I want to do is make it possible to check the current request budget for a request type. This wouldn’t help with per-key throttling, but it would allow you to always keep a baseline of requests around for important operations. Something like this:
int GlobalDataStore:GetRequestBudgetForRequestType(Enum DataStoreRequestType)
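If something along those lines shipped, a developer could hold back non-essential saves whenever the budget gets low. A hypothetical usage sketch, where the method, the enum item, and the threshold are all assumptions based on the proposal above:

local playerStore = game:GetService("DataStoreService"):GetDataStore("PlayerData")
local RESERVED_FOR_PURCHASES = 5 -- arbitrary example baseline to keep free

local function trySaveNonEssential(key, value)
	-- Proposed API from above, not an existing method.
	local budget = playerStore:GetRequestBudgetForRequestType(Enum.DataStoreRequestType.SetIncrementAsync)
	if budget <= RESERVED_FOR_PURCHASES then
		return false -- defer this save so purchases always have requests left
	end
	local ok = pcall(function()
		playerStore:SetAsync(key, value)
	end)
	return ok
end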
For the per-key throttling problem, it might be a good idea to add an API to check if a key is throttled and when the throttle expires for that key. I’m not sure this method would be easy to use effectively though; it might be better to solve this problem in other ways.
We should also add better reporting about when you are running out of DataStore requests or when a key is being throttled regularly.
I agree – I don’t see that being very effective. If you’re saving to the same key in under 6 seconds, you either need to improve how you use DataStores, or DataStores need to be improved for that specific use case. In the case of the former, automatically reporting keys that are throttled regularly would be all you’d need, and it would be infinitely easier to use than having to manually track throttling for keys. In fact, it might even be neat to see a graph of your keys’ throttling in the developer console or something.
In the case of the latter, where there’s absolutely no way around writing to the same key at intervals of under 6 seconds, no amount of throttle reporting is going to change that, so no kind of API would be useful. Automatic reporting that your requests are being throttled will, however, help developers figure out which features to request for DataStores.
I don’t think any sort of API would help in this regard – it’s really a design issue with DataStores that needs to be reviewed. Users should not have to wait 6 seconds for us to verify their successive purchases. Regardless of what’s going on in my game, I should be able to process, verify, and act on user purchases 100% reliably and as quickly as possible. I’m not sure what a good quick fix would be, but ultimately it’d be really nice if we could revisit why keys are throttled in the first place. I’m no database expert, but I’ve never experienced a similar situation in any other game or online marketplace (at least not as noticeably severe). What’s preventing that from being the case in ROBLOX games? Is there anything we can do to make throttling a non-issue?
When I was designing my last DataStore interface, one of the problems I encountered was not being able to tell whether a request failed because:
I was saving too often (not enough requests, writing to the same key from too many servers at once, etc)
My UpdateAsync callback errored
There was a service issue
If I was saving too often, I wanted to pause the whole saving queue until the next pass. If my callback errored, I wanted to remove that request from the queue and continue saving. If the service was having issues, I wanted to retry, but if it kept being an issue, pause until the next save pass. It was difficult to differentiate between the three because I didn’t want to categorize by error message and either miss one or have it change in the future. I ended up with a pretty good compromise: stop until the next pass if I got 3 errors in a row (which handles the first and last points), and remove a request from the queue if saving that same request errored 10 times (which handles the middle point).
The only thing I’d really love to change is being able to know when my callback errored, so I could remove the request immediately instead of retrying it 10 times in a row, but maybe the other issues I encountered can help improve the design of DataStores as well.
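A minimal sketch of that retry scheme, with the names, thresholds, and queue structure taken loosely from the description above (none of this is an official API, just a developer-side pattern):

local MAX_CONSECUTIVE_ERRORS = 3  -- pause the whole pass after this many failures in a row
local MAX_ERRORS_PER_REQUEST = 10 -- drop a single request after this many failures

-- saveQueue is assumed to be a list of { store = ..., key = ..., transform = ..., errors = 0 }
local function runSavePass(saveQueue)
	local consecutiveErrors = 0
	local i = 1
	while i <= #saveQueue do
		local request = saveQueue[i]
		local ok = pcall(function()
			request.store:UpdateAsync(request.key, request.transform)
		end)
		if ok then
			table.remove(saveQueue, i) -- done; the next request shifts into slot i
			consecutiveErrors = 0
		else
			request.errors = request.errors + 1
			consecutiveErrors = consecutiveErrors + 1
			if request.errors >= MAX_ERRORS_PER_REQUEST then
				-- Probably a bad callback; drop it so it can't block the queue.
				table.remove(saveQueue, i)
			else
				i = i + 1
			end
			if consecutiveErrors >= MAX_CONSECUTIVE_ERRORS then
				-- Probably throttling or a service issue; give up until the next pass.
				return
			end
		end
	end
end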