As a community grows, depending on its topic, your platform can attract bots or spam users whose goal is to wreak havoc through dummy content, promotion of unrelated products, or collection of your users' personal information for nefarious purposes. You may also find users who intentionally or unintentionally contribute to a negative atmosphere with inappropriate content, bullying, or trolling. To maintain a healthy and thriving community, you should implement tools from the beginning to moderate content and prevent toxic content from spreading.
Mitigating Spam
There are many different types of software that can help you monitor and prevent spam in your community. These programs flag spam or blacklisted users so that moderators can block them from entering the community. But how can you identify spam users or content?
Identifying Spam
- The user’s email looks fake. Typically a user’s email address contains their name or a few words relating to them. If the email you see is a random string of letters and numbers, it may belong to a spam user.
- The commenter’s username looks fake. A spam commenter may not use a traditional name like “Joe” or “Samantha” when creating an account; instead, they might register with keywords related to the product they’re selling, such as “black boots for sale” or “cheap iPhones.” This tactic targets search rankings: SEO spammers inject keywords and links into healthy communities to manipulate search results and lure the site’s traffic to their spam content.
- The comment contains a link that doesn’t relate to the topic of conversation. Much spam exists to sell products. If a comment drops a link to a website unrelated to the conversation, it is likely spam, and the link could lead to malware.
- The comment itself is generic. If a comment seems unrelated to the content it is replying to, it is very likely fake, posted by a spammer trying to create as many links as possible back to their own spam content. The comment may also be full of grammatical and spelling errors. A rough way to combine these signals in code is sketched below.
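These signals can be combined programmatically so that suspicious posts are held for review before they ever reach the community. The following is a minimal, hypothetical sketch in Python; the keyword list, weights, and threshold are illustrative assumptions, not a production-ready filter.

```python
import re

# Illustrative keyword list -- tune for your own community.
SPAM_KEYWORDS = {"for sale", "cheap", "free shipping", "click here"}

def spam_score(username: str, email: str, comment: str, topic_words: set[str]) -> int:
    """Return a rough spam score; higher means more suspicious."""
    score = 0

    # 1. Email local part looks like a random string of letters/digits.
    local_part = email.split("@")[0]
    if re.fullmatch(r"[a-z0-9]{12,}", local_part.lower()):
        score += 2

    # 2. Username contains sales-style keywords rather than a name.
    if any(kw in username.lower() for kw in SPAM_KEYWORDS):
        score += 2

    # 3. Comment contains a link but shares no words with the topic.
    has_link = "http://" in comment or "https://" in comment
    comment_words = set(re.findall(r"[a-z]+", comment.lower()))
    if has_link and not (comment_words & topic_words):
        score += 3

    # 4. Comment is short and generic ("Great post!", "Nice info").
    if len(comment_words) < 5:
        score += 1

    return score

# Example: anything scoring 3 or more goes to the moderation queue.
if spam_score("cheap iphones 4u", "xk93jqpd8srt2@example.com",
              "Buy cheap iPhones here https://spam.example", {"gardening"}) >= 3:
    print("hold for moderator review")
```

A scorer like this shouldn’t delete content on its own; it simply prioritizes what your moderators look at first.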
Blocking Spam
There are many ways to block spam content that require little to no configuration. Services like StopForumSpam or Project Honey Pot track users blocked by you and by other communities running the same software, so known spammers can be blocked universally. These services surface the quality of a user’s account in the moderation queue: if a new registrant is linked to previously blocked content, they appear with a blacklisted or flagged IP, along with any banned username or email they match from earlier attempts to register with your community.
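As an illustration, a registration handler can query such a shared blocklist before letting an account through. The sketch below assumes StopForumSpam’s public query endpoint and JSON response fields; verify the exact parameters and field names against the current API documentation before relying on them.

```python
import requests

def check_stopforumspam(ip: str, email: str, username: str) -> bool:
    """Return True if any identifier appears in the StopForumSpam database.

    Assumes the public query endpoint and JSON response format documented
    at stopforumspam.com; confirm field names against the current API docs.
    """
    resp = requests.get(
        "https://api.stopforumspam.org/api",
        params={"ip": ip, "email": email, "username": username, "json": ""},
        timeout=5,
    )
    resp.raise_for_status()
    data = resp.json()

    # Each checked identifier is assumed to report an "appears" flag.
    return any(
        data.get(field, {}).get("appears") == 1
        for field in ("ip", "email", "username")
    )

# Example: flag the registration for review rather than rejecting it outright.
if check_stopforumspam("203.0.113.7", "new.user@example.com", "newuser"):
    print("registration flagged for manual review")
```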
Besides using these software options, you can also add steps to the registration process that block spam content before it enters the site. Requiring a CAPTCHA or email confirmation blocks spam bots specifically, because those steps can (for now) only be completed by human users.
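Email confirmation, for example, only requires issuing a single-use token and activating the account when the link is visited. The sketch below is a minimal, framework-agnostic illustration; the confirmation URL, in-memory storage, and 24-hour expiry are assumptions to adapt to your own stack.

```python
import secrets
import time

# In-memory store for illustration; a real site would persist this in its database.
pending_confirmations: dict[str, tuple[str, float]] = {}

TOKEN_TTL_SECONDS = 60 * 60 * 24  # assumed: confirmation links expire after 24 hours

def issue_confirmation_token(email: str) -> str:
    """Generate a single-use token to embed in the confirmation email link."""
    token = secrets.token_urlsafe(32)
    pending_confirmations[token] = (email, time.time() + TOKEN_TTL_SECONDS)
    return token

def confirm_email(token: str) -> str | None:
    """Return the confirmed email address, or None if the token is invalid or expired."""
    entry = pending_confirmations.pop(token, None)
    if entry is None:
        return None
    email, expires_at = entry
    if time.time() > expires_at:
        return None
    return email

# Example flow: email the link, then activate the account when it is clicked.
token = issue_confirmation_token("new.user@example.com")
confirmation_link = f"https://forum.example.com/confirm?token={token}"  # hypothetical URL
print("email this link:", confirmation_link)
print("confirmed:", confirm_email(token))
```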
Monitoring Toxicity
Monitoring and preventing toxicity is one of the most important parts of running a forum. If your community houses bullies or lets rude comments slide, users won’t want to use your platform. To monitor toxicity, you need a dedicated moderation team along with strong spam-catching systems, so that as much toxic content as possible is blocked before it ever enters the community. Not all toxic content comes from spam users, however. Some users like to troll, and sometimes users simply don’t get along with one another, and toxicity ensues. This is where reporting comes in. Your moderation team should monitor community content and flag anything that doesn’t meet the community guidelines. This, combined with the community rule that all members share responsibility for mitigating toxicity, should keep negative content at bay.
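One common way to operationalize member reporting is to hide content automatically once it collects enough independent reports, then route it to the moderation queue for a final decision. The threshold and data model below are hypothetical; they only show the shape of such a workflow.

```python
from dataclasses import dataclass, field

REPORT_THRESHOLD = 3  # hypothetical: hide after three independent reports

@dataclass
class Post:
    post_id: int
    author: str
    body: str
    reported_by: set[str] = field(default_factory=set)
    hidden: bool = False

def report_post(post: Post, reporter: str, moderation_queue: list[Post]) -> None:
    """Record a report; auto-hide and queue the post once the threshold is hit."""
    if reporter == post.author:
        return  # ignore self-reports
    post.reported_by.add(reporter)
    if not post.hidden and len(post.reported_by) >= REPORT_THRESHOLD:
        post.hidden = True             # removed from public view, not deleted
        moderation_queue.append(post)  # a moderator makes the final call

# Example usage
queue: list[Post] = []
post = Post(1, "troll42", "an abusive comment")
for user in ("alice", "bob", "carol"):
    report_post(post, user, queue)
print(post.hidden, len(queue))  # True 1
```

Auto-hiding rather than auto-deleting keeps the final judgment with your moderators while still getting negative content out of sight quickly.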
Moderation Team
Creating a Moderation Team
When creating a moderation team, make sure you have enough moderators to support the amount of content your users generate and the number of daily active users. If your community is smaller, you may be able to rely on crowdsourced moderation to control toxicity and spam, but as your community grows, you’ll need a dedicated team to monitor users. This is not a hard and fast rule; you may also need more or fewer moderators based on how much toxicity your community experiences.
You also need to decide how to organize your moderators. You could give every moderator the ability to moderate all content on the forum, or you could assign moderators to specific forums if your community hosts several different types of content or some forums see more traffic than others. Either approach is valid; you just need to decide whether you want moderators looking broadly at the community, or dedicated moderators who focus solely on their own sections.
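Whichever structure you choose, it helps to represent moderator scope explicitly so permission checks stay simple. The mapping below is a hypothetical sketch: a site-wide marker grants global rights, while everyone else is limited to named forums.

```python
# Hypothetical scope table: "*" means site-wide moderation rights;
# otherwise a moderator may only act within the listed forums.
MODERATOR_SCOPES: dict[str, set[str]] = {
    "dana": {"*"},
    "evan": {"gardening", "cooking"},
    "faye": {"gaming"},
}

def can_moderate(username: str, forum: str) -> bool:
    """Check whether a user may moderate content in the given forum."""
    scopes = MODERATOR_SCOPES.get(username, set())
    return "*" in scopes or forum in scopes

print(can_moderate("dana", "gaming"))  # True  (site-wide moderator)
print(can_moderate("evan", "gaming"))  # False (scoped to other forums)
```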
When creating a moderation team, it is important to keep your moderators’ safety in mind: enforcing the rules rarely makes the enforcer the most popular user, and that can lead to abuse. Give your moderators the option to remain anonymous, letting them use a pseudonym or an avatar that doesn’t reflect their real-life appearance.
Forming Community Guidelines
As your community grows, a strong set of guidelines becomes imperative. These guidelines shouldn’t be overbearing or tyrannical; they should simply protect users from negativity and guide the discussion and trajectory of the community. Your guidelines should make clear what content is acceptable, what content isn’t, and the consequences of violating the rules. The document should be short and simply worded so that users can read it quickly and understand the rules they need to follow.
We’ve included an example of community guidelines that should help you get a start on creating rules for your own community!
Conclusion
There are many different ways to block toxicity or moderate content, and the strategy that works best for your community depends largely on its size and the content your users produce. As a general rule, a clear set of community guidelines and a dedicated moderation team will set you on the path to success: toxicity will be mitigated, and your users will feel cared for and listened to.