GET Requests Lagging the Server

I have a game that counts all your badges so you can flex them to people. It uses an API that fetches pages of your badges until no pages are left. Because the API only returns 100 badges per page, counting is pretty slow if you have, for example, a million badges. If I’m the only one counting badges, the server stays stable and is completely fine, but since the badges are counted on the server, the ping starts to spike when multiple people count their badges at the same time, and the whole thing becomes a mess.
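For reference, the counting loop looks roughly like this. This is a simplified sketch, not my exact code: the endpoint is the proxied one I’m actually hitting, and the function and variable names are made up for illustration.

```lua
-- Simplified sketch of the server-side counting loop.
-- Walks the cursor-based badges endpoint page by page (100 badges max per page).
local HttpService = game:GetService("HttpService")

local function countBadges(userId)
	local count = 0
	local cursor = nil
	repeat
		local url = "https://badges.roproxy.com/v1/users/" .. userId .. "/badges?limit=100&sortOrder=Asc"
		if cursor then
			url = url .. "&cursor=" .. cursor
		end
		local page = HttpService:JSONDecode(HttpService:GetAsync(url))
		count += #page.data
		cursor = page.nextPageCursor -- nil on the last page, which ends the loop
	until cursor == nil
	return count
end
```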

What I’ve done as a temporary fix is add a delay between page requests. The delay increases as more people count their badges at once. This can make counting very slow and irritating for badge collectors with over 10M badges. Does anybody have a better solution?
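The delay logic is roughly this. Again a simplified sketch: BASE_DELAY and activeCounters are hypothetical names, not from my actual code.

```lua
-- Rough sketch of the adaptive delay; BASE_DELAY and activeCounters are
-- hypothetical names for illustration.
local BASE_DELAY = 0.1 -- seconds between page requests
local activeCounters = 0 -- bumped up/down as players start/finish counting

local function pageDelay()
	-- scale the wait with how many players are counting at once
	task.wait(BASE_DELAY * math.max(activeCounters, 1))
end
```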

Note: I understand the rate limit, and I know I could just count the badges in a separate private server, but I feel that isn’t a very exciting or engaging way to show people the game.


So I looked a little bit into this. And of course, the best way to do this is to use a proxy.

You can turn on HttpService in Game Settings, and make a request to roproxy that would look like the following:

https://badges.roproxy.com/v1/users/[ID]/badges?limit=[LIMIT]&sortOrder=Asc

This should return a JSON string. From there you can decode it, and do what you wish with it.
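For example, something like this, where userId and limit stand in for [ID] and [LIMIT] (just a sketch; wrap the call in pcall, since web requests can fail):

```lua
-- Sketch of one request + decode; userId and limit stand in for [ID] and [LIMIT].
local HttpService = game:GetService("HttpService")

local userId = 1 -- placeholder user ID
local limit = 100 -- max badges per page
local url = ("https://badges.roproxy.com/v1/users/%d/badges?limit=%d&sortOrder=Asc"):format(userId, limit)

local ok, result = pcall(function()
	return HttpService:JSONDecode(HttpService:GetAsync(url))
end)
if ok then
	-- result.data is the array of badges; result.nextPageCursor leads to the next page
	print(#result.data, "badges on this page")
else
	warn("Request failed:", result)
end
```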

Although I wouldn’t recommend loading a couple thousand at a time! Loading millions of badges will undoubtedly put a lot of load on your server.

Hope this helped!

Note: got this from here

I do use that proxy. It doesn’t lag the API, it lags the server. The server ping goes up to like 200ms.

Yeah, in that case you can’t really do much else except distribute the load as much as possible. Also, this might sound like a stretch, but have you considered using Parallel Luau? I’m not sure how much it would help, because I don’t know whether the latency is caused by the requests or by the server load (but I think it’s the server load).

Edit: You can probably test this theory by measuring the latency of just requesting the IDs and of just processing them, separately. If it’s the former, you can’t do much except lower the load; if it’s the latter, try multi-processing.
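Something like this would separate the two timings. A quick sketch with os.clock; the URL is the same placeholder endpoint from earlier in the thread:

```lua
-- Time the request and the processing separately to see which one hurts.
local HttpService = game:GetService("HttpService")

local url = "https://badges.roproxy.com/v1/users/1/badges?limit=100&sortOrder=Asc"

local t0 = os.clock()
local raw = HttpService:GetAsync(url) -- just the request
local requestTime = os.clock() - t0

t0 = os.clock()
local page = HttpService:JSONDecode(raw) -- just the processing
-- (put whatever per-badge work you do on page.data here too)
local processTime = os.clock() - t0

print(("request: %.3fs, processing: %.3fs"):format(requestTime, processTime))
```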

