Need help with understanding inventory system (queue requests?)

I’m pretty early on in making my own inventory system, and I realised I’m still not confident about the behaviour of serial Lua connections. I’m currently handling requests made by clients/other server scripts in the form of RemoteEvents/Functions or BindableEvents/Functions, all handled by the same script. So I would assume the only way to securely execute each request, while guaranteeing other requests are not running at the same time, is to queue them up once they’re sent to the script, so that they execute serially and prevent unwanted behaviour (e.g. 2+ players attempting to pick up the same item, a script requesting to remove an item from a player’s inventory while they interact with it, etc.).

Do all RemoteEvents/Functions (or their Bindable variants) execute serially regardless of when they fire, when all are connected from within the same script?

  1. If the answer is yes, then I would assume I could simply continue handling each request as soon as the connection is fired.

  2. If the answer is yes but only to an extent, please let me know which scenarios I would need to look out for, and how to prevent the behaviour I’m hoping to avoid.

  3. If this is actually an issue I need to consider: I’ve already found a “Scheduler” module made by Crusherfire, but it isn’t very clear how I would implement it for RemoteFunctions/BindableFunctions so that the invocation yields until the Scheduler gets to handling their request, and then also returns any data back. Is there a better alternative out there? Or how would I do it myself? (I don’t have much experience with handling threads, which isn’t helpful :sweat_smile:)

10 Likes

Honestly it might just be me, but I think you are overthinking how this could work.

  1. Click on the object → Client expects the item to get put in their inventory (so you can see it)
    → Client sends message to the server
  2. Server responds with :+1: or :-1:
  3. If the server responds with :-1:, delete it from the inventory visually.
    I also recommend having a red text label just appear on the screen with the error.

Make sure to have a check for other actions which have to do with the player’s inventory, like checking if they even own an item and so on.

5 Likes

Yes, this is fundamentally how the code will end up working, but I’m not entirely sure about step 2 in your example.

My original question stems from a doubt I have about how directly “responding” to a request may cause issues if the server tries to respond to multiple requests at the same time under certain conditions. For example, if multiple players (or the same player) send requests to pick up an item simultaneously, the server may handle both requests in quick succession/at the same time, and therefore assume both should be allowed to pick up the item, when only the first request should succeed.

3 Likes

Alright, so for step 2 I would just delete the part (as it has been picked up). Then just check whether the part you want to pick up is invalid, and if so, respond with an error of some kind.

local function onGrabItem(player, item)
   if not item then
      -- let the player know the item is invalid/has already been picked up
      return
   end

   item:Destroy()
   -- let the player know it's their item now & add it to their inventory table.
end

3 Likes

Also, another thing you can do is check the item id.
First, have an id → player dictionary.
Generate an id for each single item; if the item does not have an owner (i.e. a player), then it is safe to pick up.
Then just put the item id inside the inventory and voilà.
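A minimal sketch of that idea, assuming a Roblox server script (the `ItemId` attribute name and the helper names are made up for illustration):

```lua
local HttpService = game:GetService("HttpService")

local itemOwners = {} -- [itemId]: Player

-- Tag each item with a unique id when it spawns.
-- No entry in itemOwners yet means the item is unowned.
local function registerItem(item: Instance): string
	local itemId = HttpService:GenerateGUID(false)
	item:SetAttribute("ItemId", itemId)
	return itemId
end

-- An item is safe to pick up only if nobody owns its id yet.
local function tryClaim(player: Player, item: Instance): boolean
	local itemId = item:GetAttribute("ItemId")
	if itemOwners[itemId] then
		return false -- someone claimed it first
	end
	itemOwners[itemId] = player
	return true
end
```

Because the claim is written into the dictionary on the same line it is checked, two back-to-back requests can’t both pass `tryClaim` for the same id.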

4 Likes

I mean, I understand how I would debounce certain invalid requests, but my entire question is really more about networking than anything else. If we assume there is a chance for this function to execute while another is still running, there is always a possibility that both functions pass this item check before the other deletes it, and therefore both continue to redeem the item to their corresponding players. (Obviously the window would be very slim for a practically instant computation like in this example, but I would prefer the same guarantee for any task, since this single if statement obviously wouldn’t be the only opportunity for failure in an entire function.)
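To make that window concrete, here is a sketch of the failure mode being described; `someYieldingCall` is a stand-in for any API call that suspends the handler (a DataStore request, a WaitForChild, etc.):

```lua
-- Hypothetical handler: the yield between the check and the destroy
-- is exactly the window in question. If request B arrives while
-- request A is suspended inside someYieldingCall, both requests
-- pass the validity check, and both award the item.
local function onGrabItem(player, item)
	if not item or not item.Parent then
		return -- looks safe at this instant...
	end

	someYieldingCall(player) -- <- request B can start running here

	item:Destroy() -- both A and B reach this line
	-- both players get the item credited
end
```

With no yield between the check and the destroy the window is practically zero, but any yielding call in between reopens it, which is why the check-then-claim has to happen atomically (or the requests have to be queued).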

2 Likes

When I think about it, I would kind of assume that connections shouldn’t run while another is executing. However, I also remember placing debounces to completely cancel a request from executing while a networking event/function executes, so I’m still unsure whether this is actually a vulnerability or if I’m just overthinking it lol.

2 Likes

Well, like I said, that would be quite easy to avoid if you have the id → player dictionary:

local itemOwners = {} -- [itemId] : player

-- Function:
if itemOwners[itemId] then return end
itemOwners[itemId] = player
...

There is no way that will fail, but if you really want to make a thousand percent sure, which is quite redundant in my opinion, you could always do:

-- Function:
if itemOwners[itemId] then return end
itemOwners[itemId] = itemOwners[itemId] or player
if itemOwners[itemId] ~= player then return end
...

Connections and events will continue to run in parallel; that is no issue, and it is why you should use a debounce of some kind. I showed you two different kinds of debounces you can use: the first checks whether an object is even valid, the second checks whether an item has an owner or not.

3 Likes

Like man, you are worrying way too much; unless the check function takes a while, there is no way in hell the examples I showed you will not pass and work properly. Minimising the chance from 0.0000000000001% to 0.000000000000001% is up to you, but it will not make a huge difference. Just make it not easily exploitable and you will be fine. And even so, like I previously said, if you save it per itemId, a player who tries to use an item with that itemId cannot use it unless they are the owner; this also solves duplicating items.

To elaborate further, if it is not already clear: every item would have a unique identifier, not per item type, but per item.

3 Likes

Although event connections are asynchronous, they are fired from latest to oldest (most recently connected first). As long as you have your conditions, you can pretty much guarantee your expected behavior. This ordering also applies to multiple connections on a single event.

local bindableEvent = Instance.new("BindableEvent")
local condition = true

-- Connected first
bindableEvent.Event:Connect(function(): ()
	if condition == true then
		condition = false
		
		print(`First connected event passed condition`)
	else
		print(`First connected event did not pass condition`)
	end
end)

-- Connected second, but will fire first
bindableEvent.Event:Connect(function(): ()
	if condition == true then
		condition = false

		print(`Second connected event passed condition`)
	else
		print(`Second connected event did not pass condition`)
	end
end)

bindableEvent:Fire()

-- Second connected event will always pass the condition,
-- but the first connected event will always fail

Not to mention, in a real game environment it is highly unlikely that 2 players will fire the event in the same exact flush queue. This is especially true with remotes, as the delivery of the signal varies between client connections.

Although, if you are worried about a possible duplication method (one example I can think of is some sort of yielding before flipping a condition), then you can always create a custom flush queue.
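A custom flush queue along those lines could look something like this. This is a sketch, not a drop-in module: `SerialQueue` and its method names are made up, and it leans on `task.defer`/`task.spawn` resumption semantics (the same `task.spawn(thread, pcall(callback))` trick the staff ThreadQueue module uses):

```lua
-- Serial flush queue sketch: each submitted callback runs to completion
-- (including any yields inside it) before the next one starts, and
-- submit() yields the caller until its own callback has finished.
local SerialQueue = {}
SerialQueue.__index = SerialQueue

function SerialQueue.new()
	return setmetatable({ queue = {}, running = false }, SerialQueue)
end

function SerialQueue:submit(callback, ...)
	table.insert(self.queue, {
		thread = coroutine.running(),
		callback = callback,
		args = table.pack(...),
	})
	if not self.running then
		self.running = true
		-- defer so the caller reaches coroutine.yield() before processing starts
		task.defer(function()
			while #self.queue > 0 do
				local entry = table.remove(self.queue, 1)
				-- run the callback to completion, then resume the waiting
				-- caller with (success, ...results) from the pcall
				task.spawn(entry.thread,
					pcall(entry.callback, table.unpack(entry.args, 1, entry.args.n)))
			end
			self.running = false
		end)
	end
	-- yields here until the processor resumes us with (success, ...results)
	return coroutine.yield()
end
```

A RemoteFunction handler could then do something like `return select(2, queue:submit(handleRequest, player, ...))` (with `handleRequest` being your own logic), so the invocation yields until its turn in the queue and still returns data to the client.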

3 Likes

Roblox actually has a staff-built ThreadQueue module listed in their Code Samples page. It should be pretty easy to use (with a demo script in the model itself) and worth checking out.

3 Likes

I already recognise that the chance of it actually occurring will likely be negligible. However, on top of this, my original post also covers Remote/BindableFunctions, which means that if, for example, I were to amass longer computation time while yielding for other APIs, scripts, etc., it would create a reasonable window for another invocation to begin while the existing request is being handled.

(this message relates to @OniiSamaUwU’s first reply too)

2 Likes

Yeah, you took the words out of my mouth, mb for repeating your point :rofl: I’ll definitely look into this code example though, as it sounds like what I was originally looking for.

3 Likes

Thankfully, this module seems to be the solution! I wasn’t sure if returning a value in the submitted function would actually return back to the environment it was submitted from, but it seems to have worked :+1::+1::+1:

(Edited to apologise for my incorrect variable naming; I originally assumed the submitAsync function returned a boolean and a string, but later recognised it was actually the tuple, and just now noticed I forgot to change its name accordingly lol.)

I will mark your reply as the solution, but I still have two questions relating to this module:

  1. While reading through the ModuleScript code, I saw that submitAsync returns the values returned by coroutine.yield, and coroutine.yield returns a boolean along with the tuple of values returned from the submitted function. But what is the first boolean value for? It was seemingly true every time I used it, so my guess is that it indicates whether the thread “failed” somehow (maybe an error), but I’d like to be sure, and the documentation page didn’t really help :sweat_smile:

  2. Do you understand what the third parameter for creating the ThreadQueue object is for? I don’t lmao; the only place I see it implemented is here, and the comments they left confused me.

1 Like

Those are actually the values returned from this line here:

task.spawn(entry.thread, pcall(entry.callback))

task.spawn can also resume a thread yielded by coroutine.yield, passing in whatever values you provide. What _popQueueAsync essentially does is resume the yielded thread, passing along the values returned by pcall(entry.callback) (which executes the given callback in a pcall): the success value and whatever value you returned (or an error string).

It’s kind of hard to visualize, so here is what I mean:

-- gets the running thread
local runningThread = coroutine.running()

-- after 3 seconds, run this function
task.delay(3, function(): ()
    -- resumes the thread, passing in these values
    task.spawn(runningThread, true, 1, "Hello, World!")
end)

-- what this line does is yield the thread, and anything passed to this
-- yielded thread when resuming (from our function above) will get returned
return coroutine.yield()
-- this will return true, 1, "Hello, World!"

-- without returning, it's essentially this:
local boolean, number, string = coroutine.yield()

Concurrency basically runs the whole queue asynchronously instead of waiting for the previous item to finish. It defaults to false.

The timeBetween delay happens after the callback has finished, meaning if you yield one second in each callback with a timeBetween delay of another second, it will take 2 seconds before the next item executes. With concurrency enabled, it will execute the callback in another thread and only delay for the given timeBetween.

In my opinion, you won’t need this most of the time; just use task.spawn. This is only if you want the success/response pattern and values, and want to use the built-in timeBetween delay.

2 Likes

Thanks for the elaboration on my first question. As for the second, I frankly still don’t entirely understand, but from what I gathered, for my use case I should keep it set to false so that the queue runs synchronously?

I added these lines to my existing code to understand further, and now I feel like I need it set to false. When set to false, each function runs in series like I requested in the original post; when set to true, they all seem to submit and execute (nearly) at the exact same time, getting (nearly) the exact same seed for the RNG and causing only ~2 unique “waitTimes” to be printed.

It’s a lot to read so I hope you’re ready haha.


The problem with Roblox’s conventions is that “async” functions have the word “async” because they do asynchronous tasks internally; however, they are not asynchronous on the Lua side. Instead, they actually yield, and the function itself is synchronous with your code. For example, GlobalDataStore:SetAsync() is actually synchronous, and yields until it finishes its execution. Unfortunately, we are too late and too far in to change this naming design.

I mention this fact because it is the same principle as submitAsync. When running the function, it is actually a synchronous operation in its current thread, and is only ever “asynchronous” if the current thread itself is asynchronous and you don’t care about anything but adding to the queue.

Here’s an example of what I mean about its synchronous behavior:

local threadQueue = ThreadQueue.new()

-- this will yield for 1 second before returning true, "Hello, World!"
local success, response = threadQueue:submitAsync(function()
    task.wait(1)
    return "Hello, World!"
end)

-- meaning any code after this has to wait until the submitted callback above is finished
print(success, response)

Here’s an example of asynchronous behavior in the context of another asynchronous thread where you don’t particularly care about anything after:

-- process each item with a 1 second delay after,
-- aka rate limiting
local threadQueue = ThreadQueue.new(1)
local someRemote = game:GetService("ReplicatedStorage").someRemote

someRemote.OnServerEvent:Connect(function(player)
    threadQueue:submitAsync(function()
        print(player.Name)
    end)
end)

Concurrency is a different topic. This is how the queue works with concurrency disabled:

  1. Submit a function through submitAsync, gets added into the back of queue.
  2. If the queue is not running, start the queue for processing.
  3. If there is an item in the queue, it fires that callback function in a pcall and resumes the stored yielded thread with the values returned by the pcall. The queue has to wait for these steps to complete (synchronous).
  4. If specified, waits for timeBetween seconds.
  5. Go back to step 3.

With concurrency enabled, the queue does not wait for the callback function to complete. It executes the callback function and resumes the thread in another thread, meaning it goes straight to step 4.

  1. Submit a function through submitAsync, gets added into the back of queue.
  2. If the queue is not running, start the queue for processing.
  3. If there is an item in the queue, the queue creates a new thread that fires that callback function in a pcall and resumes the stored yielded thread with the values returned by the pcall. Since it is ran on a separate thread, the queue does not need to wait for these steps to complete (asynchronous).
  4. If specified, waits for timeBetween seconds.
  5. Go back to step 3.

-- threadQueue with a second delay and concurrency disabled
local threadQueue = ThreadQueue.new(1)

-- threadQueue with a second delay and concurrency enabled
local threadQueueConcurrency = ThreadQueue.new(1, nil, true)

-- examples
local function executeAndPrint(callbackFunction)
	task.spawn(function()
		print("Non concurrent", threadQueue:submitAsync(callbackFunction))
	end)

	task.spawn(function()
		print("Concurrent", threadQueueConcurrency:submitAsync(callbackFunction))
	end)
end

for i = 1, 10 do
	executeAndPrint(function()
		task.wait(1)
		return i
	end)
end

--[[
	Output:
	
	10:20:16.639  Non concurrent true 1  -  Server - DemoScript:12
	10:20:16.639  Concurrent true 1  -  Server - DemoScript:16
	10:20:17.641  Concurrent true 2  -  Server - DemoScript:16
	10:20:18.645  Non concurrent true 2  -  Server - DemoScript:12
	10:20:18.645  Concurrent true 3  -  Server - DemoScript:16
	10:20:19.649  Concurrent true 4  -  Server - DemoScript:16
	10:20:20.653  Non concurrent true 3  -  Server - DemoScript:12
	10:20:20.653  Concurrent true 5  -  Server - DemoScript:16
	10:20:21.656  Concurrent true 6  -  Server - DemoScript:16
	10:20:22.656  Non concurrent true 4  -  Server - DemoScript:12
	10:20:22.661  Concurrent true 7  -  Server - DemoScript:16
	10:20:23.665  Concurrent true 8  -  Server - DemoScript:16
	...
]]

In both cases, submitAsync itself is still synchronous, and will only resume and return when the callback function has finished processing. The only difference is the concurrent queue itself doesn’t have to wait for the callback to complete since it’s in a separate thread.


In your example, that would be correct. Since your timeBetween is not specified, it defaults to 0. Your concurrent queue executes the function without waiting for your task.wait(waitTime) to complete and moves on to the next item without delay, even if your function isn’t finished.

If you had a timeBetween, each one would print every timeBetween after the first item.

If you had a timeBetween and concurrency disabled, each one would print every waitTime + timeBetween after the first item.

1 Like