First off, the scripting support category is not for us to create entire scripts for you. If you really want to make this script, you'd probably be better off learning how to script.
Also, the majority of what you want to do here is impossible. The only thing you can really do is detect exploiters; you can't detect scamming or mass raiding.
Also, you can edit a message instead of mass replying to yourself.
“Manual moderation”?
Please explain further. There are many admin systems I can use to moderate my games at times. Doing it myself is manual, right? I'm not relying on a system to detect anything except an admin system, which doesn't detect on its own; if I'm using admin, I still have to type the prefix and command name and execute it properly. That's me doing some of the work.
You’re starting to get off-topic here. You already have an answer: you can’t automate discipline against these behaviours except for exploitation. Everything else must be handled manually.
It is possible, even if it would be out of typical scope. You should focus on other stuff, but if you really must know…
Summary
Sorted via correct permission management
See above
Check whether a user “advertises”; if they are not “authorized”, you can ban them.
Detect common terms and react accordingly (a short sketch follows this list). If you want to go further, bring machine learning into it and have it learn.
Subjective. See above.
See above
Depends on what you want: detect exploiting by whatever method is appropriate, and react accordingly when you do.
Covered above
See Online dating
Above
If any of the above occur, and they belong to those roles, do nothing.
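To make the “advertising” and “common terms” points above concrete, here is a minimal Luau sketch of the keyword approach: scan chat messages for flagged terms and react when the sender is not authorized. The term list, the group-rank check, and the kick response are placeholder assumptions you would replace with your own rules.

```lua
local Players = game:GetService("Players")

-- Example terms only; replace with whatever your game considers advertising/disallowed
local FLAGGED_TERMS = { "discord.gg/", "free robux", "join my group" }

local function isAuthorized(player)
    -- Placeholder check: e.g. a minimum rank in an assumed group ID
    return player:GetRankInGroup(0) >= 250
end

local function containsFlaggedTerm(message)
    local lowered = string.lower(message)
    for _, term in ipairs(FLAGGED_TERMS) do
        -- Plain find (4th argument true) so "." in terms isn't treated as a pattern
        if string.find(lowered, term, 1, true) then
            return true
        end
    end
    return false
end

Players.PlayerAdded:Connect(function(player)
    player.Chatted:Connect(function(message)
        if containsFlaggedTerm(message) and not isAuthorized(player) then
            -- React however fits your game: warn, kick, or log for moderators
            player:Kick("Advertising or disallowed terms are not permitted.")
        end
    end)
end)
```

A plain keyword list like this is easy to evade, which is why the later suggestions in this thread lean on a moderation team or machine learning for anything subtler.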
You should look at #public-collaboration:public-recruitment and get someone to do that for you. This is not easy work.
Manual moderation via a moderation team, as others have suggested, seems best.
@railworks2 I was not aware that troll detection is possible! If it's not a system of keywords/all caps, then please let me know what the basis of it is!
If most of these are chat-based infractions, some sort of machine-learning approach is your only bet for automation that works. This is a (somewhat) active area of research, and there are companies that provide services for it (Sift being one of them…). For general harassment detection, categorized datasets are publicly available on the internet. You could start with regular classifiers like Naive Bayes/SVM/etc. and see how they perform. If their performance isn't sufficient, you could try neural networks or Markov models.
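To illustrate the classifier part, here is a toy multinomial Naive Bayes written from scratch in Luau (there's no ML library to lean on inside Roblox). The training messages and class labels below are made up purely for demonstration; in practice you would train on one of the public datasets mentioned above.

```lua
-- Split a message into lowercase word tokens
local function tokenize(message)
    local words = {}
    for word in string.gmatch(string.lower(message), "%a+") do
        table.insert(words, word)
    end
    return words
end

local NaiveBayes = {}
NaiveBayes.__index = NaiveBayes

function NaiveBayes.new()
    return setmetatable({
        classCounts = {},  -- number of training examples per class
        wordCounts = {},   -- per-class word frequency tables
        totalWords = {},   -- total word count per class
        vocabulary = {},   -- set of every word seen in training
        totalExamples = 0,
    }, NaiveBayes)
end

function NaiveBayes:train(message, class)
    self.classCounts[class] = (self.classCounts[class] or 0) + 1
    self.totalExamples += 1
    self.wordCounts[class] = self.wordCounts[class] or {}
    self.totalWords[class] = self.totalWords[class] or 0
    for _, word in ipairs(tokenize(message)) do
        self.wordCounts[class][word] = (self.wordCounts[class][word] or 0) + 1
        self.totalWords[class] += 1
        self.vocabulary[word] = true
    end
end

function NaiveBayes:classify(message)
    local words = tokenize(message)
    local vocabSize = 0
    for _ in pairs(self.vocabulary) do
        vocabSize += 1
    end

    local bestClass, bestScore = nil, -math.huge
    for class, exampleCount in pairs(self.classCounts) do
        -- log prior + sum of log likelihoods, with Laplace smoothing
        local score = math.log(exampleCount / self.totalExamples)
        for _, word in ipairs(words) do
            local wordCount = (self.wordCounts[class][word] or 0) + 1
            score += math.log(wordCount / (self.totalWords[class] + vocabSize))
        end
        if score > bestScore then
            bestClass, bestScore = class, score
        end
    end
    return bestClass
end

-- Illustrative usage with made-up examples:
local classifier = NaiveBayes.new()
classifier:train("you are so dumb and trash", "harassment")
classifier:train("nice build, good game", "ok")
print(classifier:classify("you are trash")) -- likely "harassment"
```

Realistically you would train and evaluate a model offline on a proper dataset and only run lightweight inference (or call an external moderation service) from the game server.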
I mean, research is still being done! If you develop a good solution, maybe you could get published. This could become your dissertation! Imagine the time saved! You could pitch your breakthrough technology to a panel of angel investors and secure millions in funding, rolling out the next generation of anti-abuse technologies. All the VCs will be lining up outside your corporate campus, shoving to get inside and talk to you. Amazon or SoftBank will try to acquire you, and in 10 years' time you will be the one buying them out!
I would not make an automatic system for all exploits. I would make a system for specific exploits and leave the rest of the job to the people you have given moderator permissions.
This is not a good feature to include, because auto-banning is a little harsh. I would use an automatic warning system that counts how many warnings each user has collected. If they reach the maximum number of warnings, temporarily ban them at first; don't erase them from the game completely.
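Here is a rough Luau sketch of that warning-count idea. It assumes warnings persist in a DataStore named "PlayerWarnings" and that three warnings earn a one-day temporary ban; the store name, thresholds, and kick messages are placeholder assumptions, not a finished system.

```lua
local Players = game:GetService("Players")
local DataStoreService = game:GetService("DataStoreService")
local warningStore = DataStoreService:GetDataStore("PlayerWarnings")

local MAX_WARNINGS = 3
local TEMP_BAN_SECONDS = 24 * 60 * 60 -- one day

local function addWarning(player, reason)
    local key = tostring(player.UserId)
    local ok, data = pcall(function()
        return warningStore:GetAsync(key)
    end)
    if not ok then
        warn("Could not read warnings for", player.Name)
        return
    end

    data = data or { count = 0, banUntil = 0 }
    data.count += 1

    if data.count >= MAX_WARNINGS then
        -- Temporary ban instead of a permanent one: reset the count
        -- and store an expiry timestamp, then remove them from the server
        data.count = 0
        data.banUntil = os.time() + TEMP_BAN_SECONDS
        player:Kick("Temporarily banned: " .. reason)
    end

    pcall(function()
        warningStore:SetAsync(key, data)
    end)
end

-- On join, enforce any temporary ban that is still active
Players.PlayerAdded:Connect(function(player)
    local ok, data = pcall(function()
        return warningStore:GetAsync(tostring(player.UserId))
    end)
    if ok and data and data.banUntil and os.time() < data.banUntil then
        player:Kick("You are temporarily banned.")
    end
end)
```

Storing only a count and an expiry timestamp keeps the data small; a real system would also want to record reasons and who issued each warning so moderators can review appeals.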
Why wouldn't you? Manual moderation isn't scalable. Especially when it comes to exploits, that kind of prevention should be automated. Your code should be handling more of the work than a moderation team, especially if you don't intend to hire one.
What are you replying to? I'm confused about which part of my post you're attempting to reply to. It's been 20 days. In addition to that: are you still trying to force this system into your game? You've been given enough solutions, most of which you seem to have dismissed.