This update looks cool, but it also makes me uneasy. While it is VERY cool to see automation being used to protect users, I do not want people reading my code unless I’ve open-sourced it.
A few questions:
- Who are these “specially-trained” individuals? Why should I trust them?
- How does this “specially-trained” team handle obfuscated code? What if code was randomly generated and it just happens to contain inappropriate references?
- Is this done on ALL code, or just code I open source (models, or through Team Create)?
- What are the punishments? Are they the same as for normally breaking the rules?
- What about old games and disabled scripts? This policy wasn’t in place when that code was written, and a wave of game deletions isn’t fair if people were unaware at the time of what could happen based on how they wrote their code.
- How do you identify “personal” information and “real-life” threats using automation? This seems pretty broad and sounds almost impossible.
- Would there be a place where we could have our “extra chat filter” reviewed to make sure it properly complies with the rules?
Regardless, keep doing what you’re doing! I assume ALL updates are meant to improve the quality of the platform, and until now that has held true, but this one has me questioning it.