Update HttpService wiki page to indicate 3 concurrent connection limit

HttpService has a maximum number of concurrent requests. This caused a bug in one of my plugins: users ended up with a bad experience, waiting a long time for requests to get through. If I had known about this limit in advance, I could have rearchitected my system to minimize concurrent connections.

Basically, my plugin needs to receive asynchronous updates from a server, and since I can’t open my own TCP server in Studio, I have to settle for doing HTTP long-polls against an HTTP server on the local PC. I had planned to have independent projects maintain their own polling loops, but this falls apart as soon as more than two projects are polling at once.
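For context, the PC side of that pattern looks roughly like this – a minimal sketch assuming a single shared queue of pending messages (the `PENDING` name and the 10-second poll window are mine, not from my actual plugin):

```python
import queue
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical: messages waiting for the plugin to pick up on its next poll.
PENDING = queue.Queue()

class LongPollHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        try:
            # Block until a message arrives or the poll window expires.
            body = PENDING.get(timeout=10)
        except queue.Empty:
            body = b""  # empty response: the client should just poll again
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet
```

The plugin loops on PostAsync against this endpoint; each request parks on the server until there is something to deliver, which is exactly why these connections stay open for a long time.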

The limit is currently 3. I don’t need this limit raised; I just want the next person not to hit a bug from undocumented behavior, like I did.

(Note: the page currently covers requests-per-minute limits, but not byte limits or concurrent-request limits.)

How I arrived at my conclusion.

I set up a web server in Python whose role is to accept connections, wait a short while, and then respond.

import http.server
import time

class YieldHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        self.log_message("%s", "received post from {}:{}".format(*self.client_address))
        time.sleep(3)  # hold the connection open so concurrent requests pile up
        self.send_response(200, "OK")
        self.end_headers()
        self.log_message("%s", "responded to {}:{}".format(*self.client_address))

# ThreadingHTTPServer handles each request in its own thread, so the server
# can accept as many concurrent connections as the client is willing to open.
srv = http.server.ThreadingHTTPServer(("", 608), YieldHandler)
srv.daemon_threads = True
srv.serve_forever()

I then wrote a Lua script to execute a bunch of requests against this server in separate threads.

-- Fire 12 POSTs at once; anything over the concurrent limit should queue.
for i = 1, 12 do
	spawn(function()
		print(string.format("Thread %d attempting to connect...", i))
		game:GetService("HttpService"):PostAsync("http://localhost:608/", "hi")
		print(string.format("Thread %d received response", i))
	end)
end

Reviewing the server output shows the requests being admitted exactly three at a time:

C:\test-concurrent-httpservice-limits>python main.py
127.0.0.1 - - [20/Feb/2019 19:44:34] received post from 127.0.0.1:57176
127.0.0.1 - - [20/Feb/2019 19:44:34] received post from 127.0.0.1:57177
127.0.0.1 - - [20/Feb/2019 19:44:34] received post from 127.0.0.1:57178
127.0.0.1 - - [20/Feb/2019 19:44:37] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [20/Feb/2019 19:44:37] responded to 127.0.0.1:57176
127.0.0.1 - - [20/Feb/2019 19:44:37] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [20/Feb/2019 19:44:37] responded to 127.0.0.1:57177
127.0.0.1 - - [20/Feb/2019 19:44:37] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [20/Feb/2019 19:44:37] responded to 127.0.0.1:57178
127.0.0.1 - - [20/Feb/2019 19:44:37] received post from 127.0.0.1:57182
127.0.0.1 - - [20/Feb/2019 19:44:37] received post from 127.0.0.1:57183
127.0.0.1 - - [20/Feb/2019 19:44:37] received post from 127.0.0.1:57184
127.0.0.1 - - [20/Feb/2019 19:44:40] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [20/Feb/2019 19:44:40] responded to 127.0.0.1:57182
127.0.0.1 - - [20/Feb/2019 19:44:40] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [20/Feb/2019 19:44:40] responded to 127.0.0.1:57183
127.0.0.1 - - [20/Feb/2019 19:44:40] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [20/Feb/2019 19:44:40] responded to 127.0.0.1:57184
127.0.0.1 - - [20/Feb/2019 19:44:40] received post from 127.0.0.1:57188
127.0.0.1 - - [20/Feb/2019 19:44:40] received post from 127.0.0.1:57189
127.0.0.1 - - [20/Feb/2019 19:44:41] received post from 127.0.0.1:57190
127.0.0.1 - - [20/Feb/2019 19:44:43] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [20/Feb/2019 19:44:43] responded to 127.0.0.1:57188
127.0.0.1 - - [20/Feb/2019 19:44:43] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [20/Feb/2019 19:44:43] responded to 127.0.0.1:57189
127.0.0.1 - - [20/Feb/2019 19:44:44] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [20/Feb/2019 19:44:44] responded to 127.0.0.1:57190
127.0.0.1 - - [20/Feb/2019 19:44:44] received post from 127.0.0.1:57195
127.0.0.1 - - [20/Feb/2019 19:44:44] received post from 127.0.0.1:57197
127.0.0.1 - - [20/Feb/2019 19:44:44] received post from 127.0.0.1:57198
127.0.0.1 - - [20/Feb/2019 19:44:47] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [20/Feb/2019 19:44:47] responded to 127.0.0.1:57195
127.0.0.1 - - [20/Feb/2019 19:44:47] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [20/Feb/2019 19:44:47] responded to 127.0.0.1:57197
127.0.0.1 - - [20/Feb/2019 19:44:47] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [20/Feb/2019 19:44:47] responded to 127.0.0.1:57198

I also ran Wireshark for extra certainty. The TCP SYNs for the queued requests don’t appear until the number of concurrent connections drops below 3.
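If you’d rather not run Wireshark, the server itself can record a high-water mark of in-flight requests. This is a sketch of that idea, not part of my original setup (the `ConcurrencyMeter` name is mine); when the client really is capped, the recorded peak should stop at that cap:

```python
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class ConcurrencyMeter:
    """Counts requests currently in flight and remembers the peak."""
    def __init__(self):
        self.lock = threading.Lock()
        self.current = 0
        self.peak = 0

    def __enter__(self):
        with self.lock:
            self.current += 1
            self.peak = max(self.peak, self.current)

    def __exit__(self, *exc):
        with self.lock:
            self.current -= 1

METER = ConcurrencyMeter()

class MeteredHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        with METER:
            time.sleep(0.2)  # hold the connection open briefly
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging
```

Driven by plain Python clients (which have no such cap), the peak climbs to however many threads you launch; driven from Studio, I’d expect `METER.peak` to read 3.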


Hey! Would you mind looking at “Roblox fails with closing connections on test end”? I’ve come to a completely different conclusion: I found that the maximum number of concurrent (open) connections is 8, not 3.

PS: Maybe GET and POST requests have separate concurrent-connection limits.

That could be true. I was only using POST requests, and it looks like you were only using GET. There might be separate limits.

It’s uncanny that you posted that on the same day as me. I know this isn’t a new bug on my end – I just only got around to root-causing it yesterday.
I tried searching for a post about a max-concurrent-connections limit and only found posts on the 1 MB limit and the 500 requests/minute limit.

Coincidence, I guess. I’d been messing with that for two days and couldn’t figure out why requests weren’t completing and didn’t even time out. That should really be documented and fixed.

We generally do not document undefined behavior like this. The concurrent request limit is something that might change at any time and your game should generally not rely on this. Using HTTP long polling is a hack to achieve server push, but it’s not something Roblox officially supports.

Are there plans for a more officially supported mechanism, like webhook support on Roblox servers?

This is what you document.


Heck, I would even settle for a sentence that says “there is some limit on concurrent requests; avoid HTTP requests that could take a long time”. Right now there is nothing – not even a sentence telling you not to use long-polling.

If it means we have to come to the forum to get the true number (because some projects may need it), so be it. Of course, it’s silly to have your devs reverse-engineer these numbers when you could easily pull them from the source code…


Does this concurrent connection limit cause HttpService’s request-sending functions to stall for up to 30 seconds too? Should I only allow one request at a time?

As far as I know, they don’t stall for some preset amount of time – they’re added to a queue and only processed once the current requests complete or time out.

If you have lots of normal (not long-poll) requests, you don’t need to do any special queuing. The system will queue them for you.
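If it helps to picture the queuing, the observed behavior matches a plain counting semaphore with three slots – this is a model of my understanding, not Roblox’s actual implementation, and the `post_async` helper is hypothetical:

```python
import threading
import time

# Assumed concurrent-request cap; not an official number.
LIMIT = threading.BoundedSemaphore(3)

def post_async(i, log):
    # A request waits here until one of the 3 slots frees up,
    # mirroring how HttpService appears to queue excess requests.
    with LIMIT:
        log.append(("start", time.monotonic()))
        time.sleep(0.05)  # simulated round trip
        log.append(("end", time.monotonic()))
```

Replaying the event log shows that no matter how many calls you launch, at most three are ever in flight, and the rest simply wait their turn.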

If you’re planning to long-poll, then maybe consider:

  1. Using GETs instead of POSTs.
  2. Trying to get as much utility out of a single long-polling connection instead of splitting it into multiple.
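For point 2, one way to share a single long-poll across projects is to multiplex channels over one connection: the poll blocks for the first event, then sweeps up everything else already waiting. A rough sketch of the server side (the `publish`/`drain_one_poll` names are mine, not an established API):

```python
import json
import queue

# Hypothetical: every project publishes to one shared queue instead of
# holding its own long-poll connection.
EVENTS = queue.Queue()

def publish(channel, payload):
    EVENTS.put({"channel": channel, "payload": payload})

def drain_one_poll(timeout=10):
    """Build the body of one long-poll response: block until the first
    event arrives (or the window expires), then sweep up anything else
    already queued, so a single connection carries every channel."""
    batch = []
    try:
        batch.append(EVENTS.get(timeout=timeout))
        while True:
            batch.append(EVENTS.get_nowait())
    except queue.Empty:
        pass
    return json.dumps(batch)
```

The client then dispatches each event to its project by the `channel` field, which keeps you to one open connection no matter how many projects are listening.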

Hi all! Closing this thread since there have been several changes and updates to the HttpService documentation since this thread was created. I believe there are also other threads that discuss more recent behavior.

Please feel free to create a new documentation issue, or add a contribution on our open source docs if you see any additional opportunities for documentation improvement. Thanks!