If most of these are chat-based infractions, some sort of machine-learning approach is your only real bet for automation that works. This is a (somewhat) active area of research, and there are companies that provide it as a service (Sift being one of them). For general harassment detection, labeled datasets are publicly available on the internet. You could start with standard classifiers like Naive Bayes or SVMs and see how they perform. If their performance isn't sufficient, you could try neural networks or Markov models.
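To make the "start with Naive Bayes" suggestion concrete, here's a minimal sketch of a bag-of-words Naive Bayes classifier for flagging abusive chat. The tiny training set is invented for illustration only; in practice you'd train on one of the public harassment datasets mentioned above (and probably just use a library like scikit-learn instead of rolling your own):

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs. Returns a simple model dict."""
    label_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return {"labels": label_counts, "words": word_counts,
            "vocab": vocab, "total": sum(label_counts.values())}

def classify(model, text):
    """Pick the label with the highest log-probability (Laplace smoothing)."""
    best_label, best_score = None, float("-inf")
    v = len(model["vocab"])
    for label, count in model["labels"].items():
        score = math.log(count / model["total"])
        total_words = sum(model["words"][label].values())
        for w in tokenize(text):
            score += math.log((model["words"][label][w] + 1) / (total_words + v))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy data, purely for demonstration -- far too small to be useful.
model = train([
    ("you are trash get out", "abusive"),
    ("i hate you so much loser", "abusive"),
    ("nice play well done", "ok"),
    ("good game everyone", "ok"),
])
print(classify(model, "get out loser"))  # "abusive" on this toy data
```

The point is that even this naive approach gives you a probability-style score per message, which you can feed into a review queue rather than auto-punishing directly.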
I mean, research is still being done! If you develop a good solution, maybe you could get published. This could become your dissertation! Imagine the time saved! You could pitch your breakthrough technology to a panel of angel investors and secure millions in funding, rolling out the next generation of anti-abuse technology. All the VCs will be lining up outside your corporate campus, shoving to get inside and talk to you. Amazon or SoftBank will try to acquire you, and in 10 years' time you'll be the one buying them out!
I would not make an automatic system for all exploits. I would make a system for specific exploits and leave the rest of the job to the people you've given moderator permissions to.
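As one example of an exploit-specific check, here's a sketch of a server-side speed check: flag players who cover more distance per tick than the game allows. The constants and names (MAX_SPEED, TOLERANCE) are placeholders, and the tolerance exists so latency jitter doesn't flag honest players:

```python
MAX_SPEED = 16.0  # maximum legitimate movement speed (units per second)
TOLERANCE = 1.25  # slack factor so network jitter doesn't cause false flags

def moved_too_fast(old_pos, new_pos, dt):
    """Return True if the player covered more distance in dt seconds
    than the speed cap (plus tolerance) permits."""
    dx = new_pos[0] - old_pos[0]
    dy = new_pos[1] - old_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return dist / dt > MAX_SPEED * TOLERANCE
```

A check like this can feed a log or a moderator alert rather than punishing automatically, which fits the "leave the rest to moderators" approach.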
This is not a good feature to include, because auto-banning is a little harsh. I would use an automatic warning system that counts how many warnings each user has accumulated. Once they hit the maximum number of warnings, temporarily ban them at first; don't erase them from the game completely.
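The escalating-warning idea above could be sketched like this. The threshold and ban length are made-up placeholders, not recommendations:

```python
import time
from collections import defaultdict

MAX_WARNINGS = 3                  # placeholder threshold
TEMP_BAN_SECONDS = 24 * 60 * 60   # placeholder: one-day temporary ban

warnings = defaultdict(int)
banned_until = {}

def warn(user_id, now=None):
    """Record a warning; issue a temporary ban once the limit is reached."""
    now = time.time() if now is None else now
    warnings[user_id] += 1
    if warnings[user_id] >= MAX_WARNINGS:
        banned_until[user_id] = now + TEMP_BAN_SECONDS
        warnings[user_id] = 0  # reset so a later offense escalates fresh
        return "temp_ban"
    return "warning"

def is_banned(user_id, now=None):
    now = time.time() if now is None else now
    return banned_until.get(user_id, 0) > now
```

In a real game you'd persist this in a datastore instead of in-memory dicts, and probably escalate ban length on repeat offenses.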
Why wouldn't you? Manual moderation isn't scalable. Especially when it comes to exploit prevention, you should be automating it. Your code should handle more of the work than a moderation team, especially if you don't intend to hire one.
What are you replying to? I'm confused about which part of my post you're attempting to reply to; it's been 20 days. Beyond that: are you still trying to force this system into your game? You've been given plenty of solutions, most of which you seem to have dismissed.