Need help creating an anti-rule-breaker system

First off, the scripting support category is not for us to create entire scripts for you. If you really want to make this script, you’d probably be better off learning how to script.

Also, the majority of what you want to do here is impossible. The only thing you can really do is detect exploiters; you can’t detect scamming or mass raiding.

Also, you can edit a message instead of mass replying to yourself.

What do you need help with then? We told you that everything you want to do is impossible unless you have manual moderation.

“Manual moderation”?
Please explain further. There are many admin systems I can use to moderate my games at times. Doing it myself is manual, right? I’m not using a system that detects anything on its own, just an admin system where you have to type the prefix and command name and execute it properly. That’s still me doing some of the work.

Manual Moderation is where you’d either type !ban/kick USER or do it via a GUI.
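As a very rough sketch of the chat-command version (the moderator UserIds and the prefix are placeholders, and it assumes the legacy Player.Chatted event):

```lua
local Players = game:GetService("Players")

local MODERATORS = { [123456] = true } -- placeholder moderator UserIds
local PREFIX = "!"

Players.PlayerAdded:Connect(function(player)
	player.Chatted:Connect(function(message)
		if not MODERATORS[player.UserId] then
			return
		end
		-- expects messages shaped like "!kick SomeUser"
		local command, targetName = message:match("^" .. PREFIX .. "(%a+)%s+(%S+)")
		if command == "kick" and targetName then
			local target = Players:FindFirstChild(targetName)
			if target and target:IsA("Player") then
				target:Kick("Kicked by a moderator")
			end
		end
	end)
end)
```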

You will need to do some of the work because it is not possible to create a system which detects users violating your rules.

Yes, I would do some manual moderation when I have time, for example:
Troller: Trolls a lot
Me: ;kick <username/user id here>

You’re starting to get off-topic here. You already have an answer: you can’t automate discipline against these behaviours except for exploitation. Everything else must be handled manually.

Hire a moderation team from your community. If you want to learn how to script, check out Recommended YouTube channels for scripting beginners

It is possible, even if it would be out of typical scope. You should focus on other stuff, but if you really must know…

Summary

Sorted via correct permission management

See above

Check if a user “advertises”, and if they are not “authorized”, you can ban.

Scan for common terms and react accordingly (rough keyword sketch after this list). If you want to go further, bring in machine learning and have it learn over time.

Subjective. See above.

See above

Depends what you want: detect exploiting by any appropriate method, and if something is detected, react as appropriate.

Covered in above

See Online dating

Above

If any of the above occur, and they belong to those roles, do nothing.
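For the keyword-based ideas above, a rough sketch might look like this (the flagged phrases are placeholders, and it only logs the hit rather than banning outright):

```lua
local Players = game:GetService("Players")

local FLAGGED_TERMS = { "free robux", "join my group" } -- placeholder phrases

local function findFlaggedTerm(message)
	local lowered = string.lower(message)
	for _, term in ipairs(FLAGGED_TERMS) do
		-- plain-text find, no Lua patterns
		if string.find(lowered, term, 1, true) then
			return term
		end
	end
	return nil
end

Players.PlayerAdded:Connect(function(player)
	player.Chatted:Connect(function(message)
		local term = findFlaggedTerm(message)
		if term then
			warn(player.Name .. " used flagged term: " .. term)
			-- escalate here: issue a warning, notify moderators, etc.
		end
	end)
end)
```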

You should look at #public-collaboration:public-recruitment and get someone to do that for you. This is not easy work.
Manual moderation via a moderation team, as others have suggested, seems best.

@railworks2 I was not aware that troll detection is possible! If it’s not a system of keywords/all caps, then please let me know what the basis of it is!

If most of these are chat-based infractions, some sort of machine learning approach is your only bet for automation that works. This is a (somewhat) active area of research, and there are companies that provide services to do this (Sift being one of them). For general harassment detection, categorized data sets are publicly available on the internet. You could start with regular classifiers like Naive Bayes/SVM/etc. and see how they perform. If their performance isn’t sufficient, you could try neural networks or Markov models.
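As a very rough sketch of the simplest end of that spectrum, here is a minimal bag-of-words Naive Bayes classifier in Luau; the two labels and the training messages are placeholders you would replace with a real labelled data set:

```lua
-- Minimal bag-of-words Naive Bayes; call train() on labelled messages first,
-- then classify() new ones. Purely illustrative.
local wordCounts = { ok = {}, abusive = {} }
local totalWords = { ok = 0, abusive = 0 }
local messageCounts = { ok = 0, abusive = 0 }
local vocabulary = {}

local function tokenize(message)
	local words = {}
	for word in string.lower(message):gmatch("%a+") do
		table.insert(words, word)
	end
	return words
end

local function train(message, label)
	messageCounts[label] = messageCounts[label] + 1
	for _, word in ipairs(tokenize(message)) do
		wordCounts[label][word] = (wordCounts[label][word] or 0) + 1
		totalWords[label] = totalWords[label] + 1
		vocabulary[word] = true
	end
end

local function classify(message)
	local vocabSize = 0
	for _ in pairs(vocabulary) do
		vocabSize = vocabSize + 1
	end
	local totalMessages = messageCounts.ok + messageCounts.abusive
	local bestLabel, bestScore = nil, -math.huge
	for label in pairs(wordCounts) do
		-- log prior plus log likelihoods with Laplace smoothing
		local score = math.log(messageCounts[label] / totalMessages)
		for _, word in ipairs(tokenize(message)) do
			local count = wordCounts[label][word] or 0
			score = score + math.log((count + 1) / (totalWords[label] + vocabSize))
		end
		if score > bestScore then
			bestScore, bestLabel = score, label
		end
	end
	return bestLabel
end

-- placeholder training data; a real data set would have far more examples
train("have a great day", "ok")
train("you are an idiot", "abusive")
print(classify("what an idiot")) --> abusive
```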

I mean, research is still being done! If you develop a good solution, maybe you could get published. This could become your dissertation! Imagine the time saved! You can pitch your break-through technology to a panel of angel-investors and secure millions in funding, rolling out a next-generation of anti-abuse technologies. All the VC’s will be lining up outside your corporate campus, shoving to get inside and talk to you. Amazon or SoftBank will try to acquire you; and in 10 years time you will be the one buying them out!

Chat based infractions? What does that mean?

It’s a very literal phrase. Most of what you listed are things that can only be done through the chat system.

I would not make an automatic system for all exploits. I would make a system for specific exploits and leave the rest of the job to the people you have given moderator permissions.
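For example, one specific server-side check, a movement-speed sanity check, could look roughly like this (the speed limit and the margin are placeholders for whatever your game actually allows):

```lua
local Players = game:GetService("Players")
local RunService = game:GetService("RunService")

local MAX_WALKSPEED = 16 -- placeholder for your game's highest legitimate speed
local lastPositions = {}

RunService.Heartbeat:Connect(function(dt)
	for _, player in ipairs(Players:GetPlayers()) do
		local character = player.Character
		local root = character and character:FindFirstChild("HumanoidRootPart")
		if root then
			local last = lastPositions[player]
			if last then
				-- ignore vertical movement so falling/jumping isn't flagged
				local delta = root.Position - last
				local horizontalSpeed = Vector3.new(delta.X, 0, delta.Z).Magnitude / dt
				if horizontalSpeed > MAX_WALKSPEED * 2 then -- generous margin for lag
					warn(player.Name .. " moved suspiciously fast")
					-- react as appropriate: teleport back, warn, kick, etc.
				end
			end
			lastPositions[player] = root.Position
		end
	end
end)

Players.PlayerRemoving:Connect(function(player)
	lastPositions[player] = nil
end)
```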

This is not a good feature to include because auto-banning is a little harsh. I would add an automatic warning system that counts how many warnings the user has collected. If they reach the maximum number of warnings, temporarily ban them at first; don’t erase them from the game completely.
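A very rough sketch of that kind of warning counter, assuming a DataStore named “Warnings” and a three-strike threshold (both placeholders):

```lua
local Players = game:GetService("Players")
local DataStoreService = game:GetService("DataStoreService")

local warningsStore = DataStoreService:GetDataStore("Warnings")
local MAX_WARNINGS = 3
local BAN_LENGTH = 24 * 60 * 60 -- one day, in seconds

local function addWarning(player)
	local ok, newCount = pcall(function()
		return warningsStore:IncrementAsync(tostring(player.UserId), 1)
	end)
	if ok and newCount >= MAX_WARNINGS then
		-- record when the temporary ban expires, then remove the player
		pcall(function()
			warningsStore:SetAsync("ban_" .. player.UserId, os.time() + BAN_LENGTH)
		end)
		player:Kick("Temporarily banned for repeated rule violations")
	end
end

Players.PlayerAdded:Connect(function(player)
	local ok, banExpires = pcall(function()
		return warningsStore:GetAsync("ban_" .. player.UserId)
	end)
	if ok and banExpires and os.time() < banExpires then
		player:Kick("You are temporarily banned")
	end
end)
```

Whatever detection you settle on would just call addWarning instead of banning immediately.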

Why wouldn’t you? Manual moderation isn’t scalable. Especially when it comes to exploitation prevention, you should be automating that kind of prevention. Your code should be handling more of the work than a moderation team, especially if you don’t intend to hire one.

You do have a point. Though I do suggest giving the player a chance before banning said player.

Warnings are a thing, and the system should be able to handle giving out warnings, as it is an Anti Rule Breaker system…

It would be too long to read it all in one message unless you really wanted to sit there for hours reading.

What are you replying to? I’m confused at which part of my post you’re attempting to reply to. It’s been 20 days. In addition to that: are you still trying to force this system into your game? You’ve been given enough solutions, most of which you seem to have dismissed.

Alright, the topic is dead. I’ll just hope this topic gets flagged so it can be deleted.