Central Group Hub Moderation Seminars


Transparency In Moderation

Transparency in moderation is the idea that moderator actions and the reasons for those actions should be made publicly visible and clear to the users of the server.
Without transparency, users may have difficulty understanding how server rules are enforced and may also find it difficult to trust that the moderation team has their best interests at heart.
However, too much transparency may lead to users testing the limits of the rules or harassing others, both users and moderators, for their actions.
Therefore, balancing the need for users to understand the thought process and actions of moderators with the need to ensure a reasonable level of privacy for each user and deter harassment is vital to a successful moderation system.

Pros and Cons of Transparency

While there are certain “best practices” when it comes to moderation transparency, there is no single system that is right for everyone.
The amount of transparency you need for your moderation system ultimately depends on your server rules, culture, and vision. This article will explain the pros and cons of transparency and ways that you can apply transparency to your moderation system.

Though the idea of moderation transparency is generally considered to be a good thing, it is important to understand that there are both pros and cons to transparency in moderation. Some of these pros and cons are described below.

To help you understand how these pros and cons apply to transparency, consider an example in which a moderator publicly warns another user not to call someone an offensive name referring to a disability, that person’s race, or the like, because it violates an existing “No Racist or Discriminatory” rule.


Pros

  • Accountability: A transparent moderation system holds moderators accountable to their own rules. For example, if a moderator were to call someone the same word, other users would know that behavior isn’t acceptable and could report it to an admin.
  • Community: Allowing the community to see when someone gets warned and why helps foster dialogue between moderators and regular users regarding server culture and rule enforcement, and encourages cooperation. Users that may not understand why calling someone that word is prohibited can become educated on the moderators’ position. Moderators may also be able to clear up any misunderstandings community members may have about what slurs are included in the rule and can update the rule accordingly if need be.
  • Comprehension: Providing users with practical examples helps them understand the difference between right and wrong. Users that were previously unfamiliar with what an “ableist slur” looks like now have a practical example to reference.
  • Compliance: Users can proactively and correctly encourage good behavior themselves without moderator intervention. Once users know that calling someone that word is unacceptable, they can echo that message throughout the server and let others know that that behavior isn’t tolerated.


Cons

  • Testing the Limits: Malicious users may take advantage of transparency to skirt the rules without punishment, or to manage their infractions to just barely avoid being banned. In the example above, users may try to censor or alter the word to have it slip under the radar of any watching moderators.
  • Rules Lawyers: Transparency may encourage “rules lawyers,” users who appeal their warnings and punishments in bad faith by invoking the letter of the rules without reference to their spirit. Explaining that the slur in question was used to insult those diagnosed with a mental disability and is thus prohibited may prompt a bad-faith counterargument that “fa****t” should be allowed because it once meant “a bundle of sticks.”
  • Harassment: Moderators taking action or users that were punished may be subject to harassment by server members. Users may also feel harassed if they are publicly warned by a moderator. The person who was warned may carry a stigma with them in future interactions or be made fun of. Even if no one treats them differently in the future, they may feel embarrassed at being publicly criticized for their behavior. Conversely, those who sympathize with the warned user may start harassing the mod for being “too sensitive.”
  • Privacy: Transparency may cause moderators or users to feel that their privacy is insufficiently protected in relation to moderation issues. The lack of privacy can result in harassment or embarrassment as mentioned above. Furthermore, if the evidence for the case is preserved in public view, then additional messages and usernames may be visible even if the original messages are later deleted by their authors.

The Moderation System

Now that you are aware of some of the pros and cons of transparency in moderation, you must next understand the components of the moderation system so that you can consider ways in which these components can be made more or less transparent. Broadly speaking, a moderation system can be split into the following components:

  • Server rules and penalties for breaking them
  • Guidelines for the moderation team to ensure consistent enforcement of the rules and penalties
  • Logging and communication of user infractions and applying the appropriate penalty
  • Processing appeals from users related to their logged infractions

Transparency and communication go hand-in-hand. The more you communicate these components to relevant users and the server as a whole, the more transparent your moderation system is.

Implementing Transparency

There are several ways to implement transparency in each of these components, each with their own pros and cons. Each section here will establish ways in which a component can be made more or less transparent and a recommendation of the appropriate level of transparency for each. However, please keep in mind that every server’s needs are different and some of the pros and cons discussed may not apply to your server. It is always important to consider your specific community when it comes to implementing transparency.

Server Rules and Penalties

Your server rules are the backbone of your moderation system. They describe how your members should conduct themselves and what happens if they don’t meet those expectations. In general, your rules should be specific enough to ensure comprehension and compliance without being overly wordy or attempting to provide an exhaustive description of prohibited behaviors.

For example, giving a couple of examples of NSFW content for a “no NSFW content” rule may help people understand what you interpret as NSFW, compared to other servers or Discord itself. However, too many examples may make the list seem fully comprehensive, and people will assume that items not on the list are fair game. Disclaiming that examples of rule-breaking content are non-exhaustive, and that the moderators have the final say in interpreting whether someone is breaking the rules, can help deter users who are interested in testing the limits of the rules or being rules lawyers to escape punishment on a technicality.

Moderation Guidelines

Developing moderator guidelines is another important part of your moderation system. Similar to your rules guiding the conduct of your server members, your moderator guidelines help guide the conduct of your moderators.

Keeping your moderator guidelines visible to the rest of the server will encourage compliance from members and enable them to defuse incidents without moderator intervention. Furthermore, providing basic standards of moderator conduct will help users know when it’s appropriate to report moderators to the server owner for misconduct and hold them accountable. However, you should avoid making too much of your moderator guidelines public, in order to avoid rules lawyers deliberately misinterpreting the spirit of the guidelines to their advantage. After developing your moderator guidelines, balancing these pros and cons will help you determine how much of them you should present to the public.

Infraction Logging

Logging user infractions is key to ensuring that the entire moderation team has the same understanding of how often a user has broken the rules. Transparency between the mod team and the user in question is important for the user to understand when they have received a warning that brings them closer to being banned from the server. Informing the user of which moderator warned them is important for holding moderators accountable for the warnings they issue, but may leave moderators open to harassment by warned users. Having a procedure to deal with harassment that stems from this is one way to achieve accountability while still protecting your moderators from bad actors in your server.

Although the communication of infractions is vital to ensure understanding among your server members, it may be prudent to withhold information about exactly how close a user is to being banned so that they do not attempt to toe the line by staying just under the threshold for being banned. Furthermore, even though a public infraction log may be a good way to promote cohesion and transparency by showing examples of unacceptable behavior to the rest of the server and fostering discussion between the mod team and community, others may think that such a log infringes on user privacy or that these logs may constitute a “witch hunt.” It may also leave mods and users open to harassment over warnings given or received.
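The split described above can be sketched as a small data structure. This is a minimal illustration, not a real bot integration; the `BAN_THRESHOLD` value and field names are hypothetical. The key idea is that the full log stays internal to the mod team, while the warned user sees only the rule they broke and the moderator who warned them:

```python
from dataclasses import dataclass, field

# Hypothetical ban threshold, kept internal to the mod team so that
# users cannot "toe the line" by staying just under it.
BAN_THRESHOLD = 3

@dataclass
class Infraction:
    user_id: int
    moderator: str  # recorded so the team can hold each other accountable
    rule: str
    note: str       # internal context, never shown publicly

@dataclass
class InfractionLog:
    # user_id -> list of Infraction records, visible only to the mod team
    records: dict = field(default_factory=dict)

    def warn(self, infraction: Infraction) -> dict:
        """Log a warning; return only the details shared with the user."""
        self.records.setdefault(infraction.user_id, []).append(infraction)
        # The user learns which rule they broke and which moderator
        # warned them, but not how close they are to the ban threshold.
        return {"rule": infraction.rule, "moderator": infraction.moderator}

    def should_ban(self, user_id: int) -> bool:
        """Internal check, never exposed to the warned user."""
        return len(self.records.get(user_id, [])) >= BAN_THRESHOLD
```

A real implementation would persist these records inside a moderation bot, but the separation between internal data and user-facing data is the part that matters for transparency.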

If you want to encourage a sense of community and understanding without taking away user privacy or inadvertently encouraging harassment, a better option may be to encourage users to bring up criticisms of rules or enforcement in a feedback channel if they wish to. Provided that the mod team ensures these conversations remain constructive and civil, creating a public medium for these conversations will help others understand how the mod team operates and allow them to provide feedback on how the server is run.

Managing Appeals

Everyone makes mistakes, and moderators are no exception. It is important to have a process for users to appeal their warnings and punishments if they feel that they were issued unfairly. If you decide to have a public infractions log, you may receive appeals on behalf of warned users from people who were uninvolved in the situation if they feel the warning was issued unfairly. While this can help with accountability if a user is too nervous to try to appeal their warning, it can also waste the time of your mod team by involving someone that does not have a complete understanding of the situation. In general, it is better to keep the appeal process private between the moderation team and the punished user, primarily via mediums such as direct messages with an administrator or through a mod mail bot. During the appeal process, it is best to ensure that you clearly and calmly walk through the situation with the appealing user to help them better understand the rules while maintaining moderator accountability.


In the end, there is not a single “correct” way to manage transparency in your moderation system. The appropriate level of transparency will vary based on the size of the server and the rules that you implement. However, walking through the steps of your moderation system one by one and considering the various pros and cons of transparency will help you determine for yourself how to incorporate transparency into your moderation system. This will help you build trust between moderators and non-moderators while preventing abuse on both ends of the system.


Sensitive Topics

Permitting the discussion of sensitive topics on your server can allow users to feel more at home and engage with their trusted peers on topics they may not feel comfortable discussing with others.
This can encompass subjects like politics, mental health, or maybe even their own personal struggles. Having dedicated channels can keep these topics as opt-in and in a dedicated space so that people who do not want to see this content can avoid it.
This can also allow you to role-gate the channel, making it opt-in, level-gated, activity-gated, by request only, or gated by some other requirement to keep trolls or irresponsible users out.
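As an illustration, the gating options above boil down to a simple check against whatever requirements you choose. The sketch below assumes a hypothetical member record with roles, a level, and a message count; the role name and thresholds are placeholders, not actual Discord API values:

```python
# A minimal sketch of gating access to a sensitive-topics channel.
# The member record, the "sensitive-topics" role name, and the numeric
# thresholds below are all hypothetical placeholders.

def can_view_sensitive_channel(member: dict) -> bool:
    """Return True only if a member passes every gate for the channel."""
    has_opt_in_role = "sensitive-topics" in member.get("roles", [])
    meets_level = member.get("level", 0) >= 5            # level gate
    is_active = member.get("messages_sent", 0) >= 100    # activity gate
    return has_opt_in_role and meets_level and is_active
```

In practice you would enforce this through channel permission overwrites on the opt-in role, so that members who fail the check simply never see the channel.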

Allowing Sensitive Topics in Your Community

Establishing channels dedicated to sensitive topics can also be an exhausting drain on your moderation team and can invite unwanted content into your server. These channels can quickly get out of hand if they are not set up mindfully and moderated carefully and will often require their own sets of rules and permissions to be run effectively and safely. Whether you want these discussions to occur in your space at all is up to you and your team. Having channels for these topics takes a lot of work and special consideration for you to determine if it’s the right fit for your server.

In short, this document will help you discern whether you want these different channels, whether it be a channel for venting, serious topics, or a real-world event. No matter what topics (if any) you decide to include in your server, remember that all content needs to stay within Discord’s Terms of Service and Community Guidelines, along with the rules within our community.

Determining What is a Sensitive Topic

The first step to determining whether to have sensitive topics channels in your server is to define what is considered a sensitive topic for your community. If you are running a server for people from a specific country, a discussion of that country’s conflicts with other countries may be a sensitive topic. Conversely, if you are running something like a political debate server, that same topic can be relatively non-problematic and not upsetting to the members of the server.

There are two main types of sensitive topics: triggering topics and contentious topics. A triggering topic is a topic or word that can prompt an increase or return of symptoms of a mental illness, or mental distress due to trauma. A contentious topic is a controversial one that has the potential to provoke heated arguments.

While sensitive topics can vary depending on what kind of server you own (e.g. a mental health server vs. a gaming server), keep in mind that there are topics that can be considered triggering and topics that can be considered contentious in most, if not all public spaces.

Triggering Topics

Triggering topics can vary wildly from community to community depending on what the focus of the community is.
For instance, in a community for transgender people, in-depth descriptions of a body or the discomfort some people experience because of their body is likely to be a triggering topic.
There are some triggers that are very common and should be handled with the assumption that they will cause multiple people in your community to feel uncomfortable or even traumatized regardless of what type of community it is.
This includes things like sexual assault, graphic depictions of violence, other traumatic experiences, suicide and self-harm, eating disorders, parental abuse or neglect, etc.
These more sensitive topics should likely be separated from more community-specific topics that have the potential to trigger trauma, such as transitioning or coming out in a server for LGBTQ+ people.

Moderation Concerns

  • Emotional burnout from dealing with users in crisis or users asking for advice about upsetting personal issues can be detrimental to moderators. Whether moderators are actively engaging with users in the chat or just reading the chat to ensure it is not getting out of hand, the emotional toll is high. Moderators who engage with and moderate these spaces should understand their limits and how to manage burnout.
  • Moderating users who are in a crisis or otherwise struggling is unpleasant and can make the staff team look harsh or uncaring to other users, regardless of how egregious the moderated user’s behavior is.
  • The chance for abuse in these channels is higher than in the average channel. Users who overuse the channel and are in constant need of support or advice can quickly become a drain on the emotional well-being of everyone involved with the channel. Know the red flags for emotional abuse and keep an eye out for users who are constantly in crisis and trying to manipulate others into doing things for them.
  • Trolls and malicious attention seekers will target this channel. They will come in with extremely upsetting sob stories and fake mental or physical health crises to make users panic, upset people, or just generally disturb the peace. Allowing them a space to soapbox makes these sorts of users more difficult to pick out and remove before they can start doing damage.
  • Some users will intentionally seek out content that they know will trigger them as a form of emotional self harm. It can be difficult to know whether this is happening in your server unless someone explicitly mentions that they are doing it.
  • If there is a separate team in charge of moderating or overseeing these channels, they will need to closely communicate with the rest of the moderation team about problem users or concerning behavior.


Channels focused on sensitive topics can provide users with a comfortable space to discuss personal issues of varying severity and build closeness and trust between members of your community.
These channels also have very specific risks and required mitigation strategies that will vary depending on the nature of the specific channel.
If you are running a channel on transition advice for transgender users, your main concern will likely be fake advice about foods that change hormone levels or dangerous advice regarding illegally acquiring hormones.
If you run a channel for sexual assault victims, your main concern will likely be victim blaming and ensuring that users reach out to professionals when needed.
You have to consider what the specific risks in your channel are and ensure that you are writing policies that are specific to your needs and finding moderators that are knowledgeable and comfortable with those topics.

Moderation Concerns

  • Moderation actions can look biased if a moderator is engaging in a conversation, disagrees with a user and then needs to moderate them for behavior in the same channel.
  • Moderators can be upset by the content of the channel, or the opinions/conduct of another user even if no rules are being broken and respond inappropriately.
  • Trolls and malicious users will target this channel. People will come in to spout stupid or offensive opinions to start arguments. They will also ping-pong between extremely contentious topics until someone takes the bait. If their comments and their general behavior of trying to start a debate about anything are allowed on the server, it will be more difficult to remove them before they get the chance to be disruptive.
  • Misinformation can be spread in the channel, moderators must have a good understanding of current events in order to prevent dangerous misinformation or conspiracy theories from being proliferated in their communities.
  • If there is a separate team in charge of moderating or overseeing these channels, they will need to closely communicate with the rest of the moderation team about problem users or concerning behavior.


Channels focused around contentious topics can provide users with an engaging space to discuss topics with people from varied backgrounds and explore other perspectives.
These channels also have very specific risks and required mitigation strategies that will vary depending on the nature of the specific channel.
For example, if you are running a channel on COVID-19, your main concern will likely be dangerous misinformation and conspiracy theories.
If you run a channel for the 2020 US Presidential Election, your main concern may be things getting too heated or insult-flinging.
You have to consider what the specific risks in your channel are and ensure that you are writing policies that are specific to your needs and finding moderators that are knowledgeable and comfortable with the topics.

Some example rules for channels like these might include:

  • Do not spread misinformation. A rule like this is incredibly important, especially for topics that relate to public safety (such as COVID-19).
  • Keep conversations on-topic. Making sure that conversations do not go too far off-topic will let others jump in and give their own insight. If a conversation becomes too meme-y or off-topic, it will be harder for others to jump in and it could turn the channel into an off-topic channel.
  • Be respectful and try to keep arguments to a minimum. Arguments in these kinds of channels will flare up, but it is important as a moderator to ensure they do not devolve into name-calling or personal attacks. If an argument does flare up, try to ensure that users tackle the arguments and ideas presented, not the user who presented them.
  • Encourage others to jump in and give their thoughts. There are usually many different viewpoints when it comes to real-world events, especially ones that warrant their own channel, so it is good to encourage those with different viewpoints to chime in with their own points of view.


Channels like these can be difficult to manage. On one hand, you want things to be contained and on-topic. On the other hand, you may want to allow for other kinds of discussion that relate to the topic at-hand.
Ultimately it is up to you to decide how to best implement these channels.
Whether the channel is for a global pandemic, a friend passing away, a game releasing, or anything in-between, these channels will require a special finesse that other channels may not.
It is our hope that these example rules and channel names can help you create a space that adheres to a specific topic and creates an atmosphere that is both respectful and engaging.


Ban Evasion and Advanced Harassment

Occasionally while modding a community, you’ll come across one individual who refuses to abide by the rules in any way.
They just can’t seem to help but repeatedly break the rules, harass other users, or evade bans by creating new accounts to continue their bad behavior.
In this article we’ll review how to deal with users that constantly cause trouble or evade bans, and the active steps you can take to get it to stop.

Understanding High Conflict Persons (HCPs)

The vast majority of community members are interested and willing to participate according to the platform rules, even if they might not agree with every one of them. Sometimes people break rules or disagree, but their behavior can be quickly corrected and they can learn from their mistakes. If users continue to break the rules, they may be given longer-term or even permanent bans from the community or platform. Most users will accept their ban, but a small fraction will not.

A 2018 study by Stanford University estimated that 1% of subreddit communities on Reddit initiate 74% of all conflict on the platform. The users driving this conflict rank extremely low in the agreeableness personality trait and have no interest in getting along with others. Only a trained clinical psychologist can diagnose a patient with a disorder, but a term commonly used short of a diagnosis is HCP (high-conflict person). There are four primary characteristics of a high-conflict personality; this is not a diagnosis but a description of specific conflict behavior:

  • Preoccupation with blaming others (their “targets of blame”)
  • All-or-nothing thinking
  • Intense or unmanaged emotions
  • Extreme behaviors (often what 90% of people would never do)

If you fail to use tact in your moderation technique and communication approaches, you may find that you or your community become the target of a high-conflict person. They may spam your community, and while you can delete their posts and ban their accounts, more accounts can be created. Discord uses IP bans to prevent users from creating new accounts, but VPNs can be used to circumvent these bans. A truly motivated individual can create armies of bot accounts for mass-spamming, dox members of your community, or DDoS ISPs or platforms to create fear in your community. If a high-conflict person gains access to money, they can pay somebody else to do the work for them.

Most moderators choose to simply wait out the harassment. Advanced harassment like this may go on for several days, or even weeks, but then stop abruptly as the individual turns their attention to something new in their life. In some cases the harassment can go on for months, continuing to escalate in new ways that may put the lives of your team or community members in danger.

What can you do to protect your community from High Conflict Persons? What motivates a person to behave like this? This article will help to explain the motivations behind this persistent, destructive behavior, and provide actionable steps to reduce or resolve their harassment.

The Virtue of Battling a Nemesis

A “nemesis” is an enemy or rival that pursues you relentlessly in the search for vengeance. A nemesis typically holds some degree of fascination for a protagonist, and vice versa. They’re an antagonist who’s bent on revenge, who doesn’t go away, and who seems to haunt the mind of the protagonist. They’ve moved past being an enemy to become something much more personal.

You might assume that a high-conflict person harassing your community is your nemesis, but this would be incorrect. You’re not going out of your way to obstruct their behavior; your primary focus is to engage and moderate your community. If the harassment stopped, you would move on and forget about their behavior. You resist their behavior only as long as it falls under your realm of influence.

In their mind, you have become their nemesis, and you must be punished for your insolence.

To them, you are the Architect of an oppressive Matrix, the President Snow of an authoritarian Hunger Games, the tyrannical Norsefire government in V for Vendetta. You or your community represent the opposite of what they believe. In one way or another, either by your direct actions or through your association with your community, you have wronged them and deserve to suffer for your behavior. It’s clear that you will never learn or understand what they see. You not only participate in creating the corrupt and unjust system that they are oppressed by and fight against, but as a moderator, you are the very lynchpin that maintains the corrupt system.

You may believe this sounds outlandish, and you would be correct. Most people don’t believe that the world is out to get them, and that they’ll be hunted down and persecuted for what they believe. These individuals have an overactive threat detection system that makes them believe that you or your community are actively plotting their downfall. They take your opposing stance as a direct challenge to their competence, authority and autonomy. They harass you and your community because they believe that you’re out to get them, or want to replace them and their way of life. The truth is, all you really want them to do is follow the rules and maintain a civil conversation.

Understanding Tactical Empathy

Now that you have a better understanding of how somebody like this thinks, we’ll discuss the strategies that you can employ to solve this problem. The goal is NOT to get them to seek help or change their mind; we aren’t attempting to “solve” people. Instead, our goal is to prevent or stop certain negative behaviors that keep happening so that you can protect your community and focus your energy elsewhere.

The key to getting an individual like this to change their behavior is through utilizing “tactical empathy”. Tactical empathy is the use of emotional intelligence and empathy to influence another’s behavior and establish a deal or relationship. It is not agreeing with them, but just grasping and recognizing their emotions and positions. This recognition allows us to act appropriately in order to respond to our counterpart’s position in a proactive and deliberate manner.

The premise behind tactical empathy is that no meaningful dialogue takes place when we are not trusted or we are perceived as a threat. In order to get someone to stop harassing your community, you need to shift yourself from being the villain of their story to being just another random person in their lives. You must work to shatter the persona that they have projected onto you and show that you are not the enemy working to destroy them. You’re just a mod trying to keep your community safe.

Demonstrating that you understand and respect them as an individual will disarm them and allow them to focus their energy elsewhere. It will not change their opinion, but at least their behavior will change.

A Mod’s Guide to Situation Defusal

When somebody continues to harass or disrupt your community, they’re essentially holding your community hostage. If someone truly is holding your community “hostage,” they’re often doing so because they’re looking to open a dialogue for negotiation. Frequently, people take hostages because they need somebody to listen. They aren’t getting the attention that they believe they deserve, and attempt to cause as much disruption as possible in order to make their case.

You are a community moderator negotiating the peace of your community, not their lives, but these tactics can still apply.

Situation defusal can generally be defined by three primary processes, each designed to collect information and use it to dissuade the high-conflict person from believing that you’re an enemy or threat. These processes are called The Accusations Audit, Mirroring to Understand, and Getting to “That’s Right.”

The Accusations Audit

An accusations audit is where you focus not just on the things that they believe, but on the things that they believe you did wrong. An accusations audit is not based on logic; it’s based on the unfiltered emotions of the other person.

It’s important that you go through their early comments and messages to understand what prompted this behavior in the first place. This might have been banning them for breaking a rule (which is what you’re supposed to do; this isn’t to say that you acted unreasonably) or not properly punishing another community member they got into an argument with. They might believe “I feel like you didn’t give me a chance to explain myself” or “I feel like you’re discriminating against me.”

Your understanding of their beliefs will be flawed and inaccurate, but you must do your best to piece it together into a coherent argument on their behalf. If possible, learn more about the other communities they’re a part of. Identify if they’re harassing any other communities, and the reasons for doing so. Are there any commonalities of note?

Mirroring to Understand

Once you believe you’ve figured out why they’re upset with you or your community, mirror their language to verify it. At this point, opening a dialogue might be incredibly difficult if they’re using throwaway accounts regularly. Chances are they do have a primary account they continue to use in other communities, which can help greatly with starting your dialogue. At this stage, you’re still working to collect information about what they believe, directly from the source. Examples of prompts you can use to verify their position include “It seems like you believe that I’m being unfair because I didn’t give you a chance to explain yourself” or “If I understand correctly, you believe I’ve been discriminating against you instead of taking your opinion seriously, is that right?”

Chances are, the responses you receive will be filled with aggression, profanity, and insults. You must ignore all of this and continue working to understand their position and the events that resulted in them targeting your community. Negotiations like this are difficult in voice-to-voice communication, and nearly impossible via instant or private messaging. They will be incredibly resistant at first, perhaps suspecting that you’re attempting to trick them into admitting their guilt or ignorance.

When you get them talking to you, mirror that language to get them to elaborate further on their beliefs. An example of dialogue might go something like the following:

Spammer: “It’s bullshit that mods ban strawberry jam lovers because the blueberry jam lovers are afraid of being exposed for who they really are.”

Mod: “Afraid of being exposed?”

Spammer: “Yeah, the blueberry jam lovers are secretly running the world and plotting against anyone who doesn’t believe in the same jam flavor preferences as they do.”

Realistically, blueberry jam lovers are not actually running the world or plotting anything nefarious, but in the mind of the spammer this is undeniably true. And while this example was intentionally mild, you can infer more severe types of conversations that would follow a similar format.

Regardless, as you dig further into what they believe, you’ll notice that the rabbit hole will go very deep and be filled with logical fallacies and obviously disprovable biases that make no sense. Remember that the truth or reality behind what they believe is completely irrelevant, and attempts to correct them will undermine your goals. Your job is to help them explain their beliefs to you to the best of their ability, and for you to understand their position to the best of your ability. Once you believe you’ve collected enough information, you can move to the final step, getting to “That’s Right.”

Getting to “That’s Right”

Once you believe you’ve completely understood their position and what they believe, repeat their entire position back to them. Demonstrate your understanding by summarizing it concisely and accurately, regardless of how much you disagree with it. Don’t focus on their behavior or the actions that got them banned. Instead, focus exclusively on the ideology that drove their behavior. Do this until you’re able to get them to say “Yes, that’s right” at least three times, asking each time whether there’s anything you forgot in your summary. If you did miss anything, repeat the entire position again with the extra information included. When reiterating their points, be very careful about restating things that are not true. Do your best to strip personal bias from the statements and focus them back on “absolute truths.”

Their actions are about trying to make a point, but what you’re doing is getting them to make that point without taking action, because you have heard what they are trying to say. If you put enough effort into doing this correctly (and it doesn’t need to be perfect), they will know that you finally understand where they’re coming from, that they’ve been heard, and that their opinion has been validated. By demonstrating you understand their position, you go from being part of the problem to being a real person. They might not like you, but they will at least (if begrudgingly) respect you.

End on a High Note

When you successfully reach this stage of the discussion, it’s essential that you be careful with your choice of words. There’s a good chance the spammer will leave your community alone now that they know their opinion has been recognized. At the very least, you should see an immediate reduction in the number of times they attempt to cause harm.

If they do continue to harass you or your community, it’s possible that you failed to address the primary reason that they’re upset. Open dialogue with them again and follow the steps above from the beginning, or check to see that you haven’t fallen into a common pitfall or mistake.

Common Pitfalls and Mistakes

Below are some common mistakes people make during negotiations:

Getting to “You’re Right” instead of “That’s Right”

When using tactical empathy, remember that the purpose of the exercise is to bring their beliefs to the conscious mind and demonstrate that you understand them. If you instead tell them what they should believe, you may get a “you’re right” and fail to see any change. The difference is subtle but important. Make sure the other side actually feels heard, and that you’ve fully understood their position.

Don’t Try to Correct their Opinion

As a reminder: do not attempt to correct or modify their opinion. Remember the purpose of this process. It is not to change their position or opinion; it’s only to mirror their opinion so that they stop identifying you and your community as a threat.

Be Careful with Tone and Wording

The methodology outlined in this article is designed for real-life conversations, especially over the phone. It’s unlikely that you’ll be able to get the spammer on an audio call, so it’s essential to be patient with the process and careful with your wording. Formal grammar, like full punctuation, can make a sentence feel more serious or threatening. Use casual phrasing and make an occasional spelling mistake to show you’re human. If you’re uncertain about tone, read the sentence out loud while sounding as angry as you can, and adjust accordingly.

Remember to Ask for Help

The process outlined here can be easily undermined by others who aren’t involved in the process. If you’re working to negotiate with a spammer but another moderator is threatening them in a different conversation, you won’t see any changes in their behavior. Communicate with your team on the strategy you plan to use, and remember to ask for emotional support or step away if it becomes too taxing.

Can High-Conflict People be Helped?

Some of you may believe that, after getting this far, you’re on the path to rehabilitating a person like this. The mistake is believing that you are further along than you really are, or that you’re qualified to help someone struggling to control their emotions. The truth is, getting to “that’s right” is only 1% of the process.

Even if you’re a clinical psychologist, you wouldn’t be getting paid for this work. Attempting to provide support via text chat will have diminishing returns, and attempting to show somebody like this the “error of their ways” may undo all of the progress you have made.

Instead, you must focus on the people who want and need your help: the people in your community. Empower the people who are truly deserving of your time and energy. At the end of the day, you’re a human and a moderator. Your primary focus is to make sure your community is safe and stays safe, and if you’ve managed to get a persistent spammer to stop, then you’ve accomplished what you set out to do.