Chat allows users to form a sense of community, but that trust can quickly be destroyed by bad actors and spammy content. Chat moderation mitigates their negative impact on users and on the chat platform itself.
What is Chat Moderation?
Chat moderation is the act of reviewing and regulating user-generated messages and content posted on a platform to ensure it’s not inappropriate or otherwise harmful to the brand's reputation or its users.
Why is Chat Moderation Important?
Chat is paramount to building a strong social community within your app. Users love the ability to connect with like-minded people worldwide, and companies love how chat improves engagement, retention, and lifetime value metrics while lowering customer acquisition costs. However, the destructive potential of an unmoderated chat platform significantly outweighs any of these benefits.
Moderating your app’s chat functionality is the key to ensuring that users can enjoy themselves and feel comfortable while engaging with your app and making connections with others.
Here are the top three reasons why your chat solution must be well moderated:
1. Community Safety: Trolls, bad actors, and bots can disrupt a healthy group dynamic by sending toxic messages, posting inappropriate images, and spamming other users. Dangerous dynamics between users can also emerge if left unchecked, such as cyberbullying and grooming. Your app’s sense of community and key metrics will suffer if users don’t feel safe and welcome to share their thoughts.
2. Reputational Risk: If word gets out that your chat solution is an unsafe environment, repairing your app’s reputation and its relationships with investors, users, and advertisers can prove quite difficult. Once-loyal users might churn, and word-of-mouth advertising becomes a liability. Investors and other financial backers might be wary of continuing their support, given the optics of being associated with a platform that hosts harmful content without consequence.
3. Legal Requirements: Moderation is important for any chat service and, in some cases, is required by law on platforms that host user-generated content (UGC). Both the Apple App Store and the Google Play Store require documented moderation workflows for apps that give users access to content generated by other users.
Take a closer look: Toxicity & the Hidden Dangers of Shoddy Moderation
What Content Requires Moderation?
Whether your chat interface exclusively supports text-based messages or allows users to share audio and visual content, moderation is still an essential part of maintaining a healthy communication experience for your users.
The ability to send photos, record audio messages, and chat live enhances UX, but it also complicates moderation efforts.
Each type of content has a unique set of advantages and challenges when it comes to protecting users from discriminatory, explicit, hostile, and off-topic content.
1. Text: The lion’s share of moderation effort goes toward reviewing text-based content. Moderators should keep a close eye on the subject, language, and sentiment of messages between users to prevent bad actors from disrupting the harmony of the social community. However, because text messages are so easy to send, moderators can face an unmanageable volume of content to review and should enlist the assistance of AI.
2. Images: Images can add an extra element of personality to a chat platform and allow users to build deeper connections by sharing snapshots of their offline lives. However, if left unregulated, images have the power to destroy the safety of an online chat community. Image moderation allows apps to control the types of images shared via chat and helps protect users from exposure to prohibited content, such as explicit or suggestive nudity, violence, or visually disturbing imagery.
3. Audio: Some apps let users bring their messages to life by sharing audio recordings instead of text or by talking over a mic in real time. Due to the high cost, frequent inaccuracy, volume, and privacy concerns surrounding voice moderation, some apps opt to exclude voice messaging entirely and rely on text-based messages to create a sense of community.
Regardless of the medium, the goal of content moderation should always be to protect users and issue fair judgments. For most apps, basic moderation tools are enough, but developers can also introduce artificial intelligence (AI) into moderation efforts to lighten the load of moderators who review content manually. Keeping a human moderator on your team also gives users recourse when they believe a moderation action was taken against them by mistake.
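To illustrate how automated screening and human review can work together, here is a minimal sketch in TypeScript. The `scoreToxicity` function, the thresholds, and the review queue are assumptions for illustration only; in practice the score would come from whichever AI model or moderation service you use.

```typescript
// A minimal sketch of a hybrid moderation pipeline: an automated score decides
// whether a message is published, blocked, or queued for a human moderator.
// scoreToxicity is a placeholder for a real AI model or moderation service.

type Decision = "publish" | "block" | "review";

interface Message {
  id: string;
  authorId: string;
  text: string;
}

const reviewQueue: Message[] = []; // messages awaiting a human decision

// Placeholder scoring function: a real system would call an ML model or API.
function scoreToxicity(text: string): number {
  const riskyTerms = ["idiot", "hate you"]; // illustrative terms only
  return riskyTerms.some((t) => text.toLowerCase().includes(t)) ? 0.9 : 0.1;
}

function moderate(message: Message): Decision {
  const score = scoreToxicity(message.text);
  if (score > 0.95) return "block"; // clearly harmful: block outright
  if (score > 0.6) {
    reviewQueue.push(message); // uncertain: let a human moderator decide
    return "review";
  }
  return "publish"; // looks clean: deliver immediately
}

console.log(moderate({ id: "1", authorId: "u1", text: "I hate you" })); // "review"
console.log(moderate({ id: "2", authorId: "u2", text: "good game!" })); // "publish"
```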
Take a closer look: The Essential Guide to Content Moderation
Top Two Moderation Methods
While there are a variety of chat moderation tactics at your disposal, they all fall into two groups: manual and automated. Both methods help filter harmful content effectively, but one is often better suited to a given situation than the other.
Manual Moderation
Manual moderation requires an employee or a longstanding, trusted member of the user base to act as a moderator, regularly reviewing message content and intervening when necessary to protect the safety and integrity of your community.
Advantages of manual moderation:
- Special Permissions: Moderators and users with special permissions can ban certain users from an app entirely or from a channel for posting spammy or offensive content.
- Self Moderation: Users can self-moderate by flagging content for administrative review or employing their ability to mute offensive users.
- Proactive Moderation Defenses: Establish community guidelines and require users to read and consent to them before they can use the chat functionality. Developers can also build a user verification system that requires a valid phone number and email address from registrants, dissuading toxic users who wish to remain anonymous from entering the chat (see the sketch after this list).
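As a rough illustration of the verification idea above, the sketch below checks that a registrant supplied a plausibly formatted email address and phone number before granting chat access. The field names and patterns are assumptions; a real system would also verify ownership with a confirmation code.

```typescript
// A rough sketch of registration-time verification: require a plausibly
// formatted email address and phone number before granting chat access.
// The regexes and field names are illustrative, not production-grade checks.

interface Registration {
  email: string;
  phone: string;
}

function canAccessChat(reg: Registration): boolean {
  const emailOk = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(reg.email);
  const digitsOnly = reg.phone.replace(/[\s()-]/g, "");
  const phoneOk = /^\+?[0-9]{7,15}$/.test(digitsOnly);
  return emailOk && phoneOk; // a real system would also send a confirmation code
}

console.log(canAccessChat({ email: "user@example.com", phone: "+1 (555) 010-1234" })); // true
console.log(canAccessChat({ email: "anonymous", phone: "" })); // false
```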
Disadvantages of manual moderation:
- Resource Heavy: The financial cost of manual moderation can quickly add up alongside an overwhelming workload.
- Disproportionate Moderator-to-User Ratio: If there are too few moderators relative to the volume of harmful messages on a platform, users will still be exposed to offensive content, and their experience will suffer.
- Time to Resolution: Due to the case-by-case basis on which manual moderators review issues, the time to resolution can be long and result in a poor user experience.
Take a closer look: Community Moderator Tips
Automated Moderation
Automated chat moderation uses deterministic rulesets, such as blocklists and regex filters, that are defined before users ever enter the chat and prevent harmful content from reaching or impacting them. Sentiment analysis, stance and intent detection, Natural Language Processing (NLP), and Machine Learning (ML) techniques are used to understand the meaning and intent of a message and decide whether it is harmful.
Here are the top tactics automated moderation employs:
- Advanced Moderation: Advanced moderation examines messages as they’re sent, automatically detecting illicit content and taking action to either flag messages for human review or block them entirely before they are ever displayed to other users in the chat channel.
- Blocklist: A blocklist is a list of words chat moderators can define to moderate message content. A unique blocklist can be assigned to each channel type to either block or flag messages that contain certain words.
- IP Banning: IP banning is a variation of a standard ban that automatically blocks access for any user connecting from the last known IP address of a banned user.
- Shadowban: Instead of a default ban, you can shadowban users from a channel, a set of channels, or an entire app. A shadowbanned user can still post messages, but any message sent during the shadowban is visible only to its author and invisible to other users of the app.
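The shadowban behavior described above boils down to a visibility filter. Here is a minimal sketch of that idea, assuming an in-memory set of shadowbanned user IDs; the data structures are illustrative, not any particular SDK’s API.

```typescript
// A sketch of shadowban visibility: a shadowbanned user's messages are shown
// only to that user, so the ban stays invisible to them.
// The data structures are illustrative assumptions, not a specific SDK's API.

interface ChatMessage {
  authorId: string;
  text: string;
}

const shadowbannedUsers = new Set<string>(["spammer-42"]);

function visibleMessages(messages: ChatMessage[], viewerId: string): ChatMessage[] {
  return messages.filter(
    (m) => !shadowbannedUsers.has(m.authorId) || m.authorId === viewerId
  );
}

const history: ChatMessage[] = [
  { authorId: "alice", text: "hi all" },
  { authorId: "spammer-42", text: "buy followers now!!!" },
];

console.log(visibleMessages(history, "alice")); // only alice's message
console.log(visibleMessages(history, "spammer-42")); // both messages
```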
Here is an example of automated moderation:
A user drafts a message but is unable to send it; a notification appears, letting them know the message contains language that violates community guidelines and must be changed before it can be sent. This type of moderation prevents unsavory interactions, but it can disrupt the user experience if implemented poorly.
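To make that flow concrete, here is a minimal sketch of a pre-send check built from a per-channel blocklist and a simple spam rule. The word lists, channel names, and rejection messages are assumptions for illustration; a production filter would be far more nuanced.

```typescript
// A minimal pre-send check: a per-channel blocklist plus a simple spam rule
// decide whether a draft message may be sent and, if not, tell the author why.
// The word lists, channel names, and messages are illustrative assumptions.

interface CheckResult {
  allowed: boolean;
  reason?: string; // shown to the author when the message is rejected
}

const channelBlocklists: Record<string, string[]> = {
  general: ["badword1", "badword2"], // placeholder terms
  support: ["badword1"],
};

function checkBeforeSend(channel: string, text: string): CheckResult {
  const lower = text.toLowerCase();
  const blocked = (channelBlocklists[channel] ?? []).some((w) => lower.includes(w));
  if (blocked) {
    return {
      allowed: false,
      reason: "Your message contains language that violates our community guidelines.",
    };
  }
  const linkCount = (text.match(/https?:\/\/\S+/gi) ?? []).length;
  if (linkCount >= 3) {
    return { allowed: false, reason: "Your message looks like spam. Please remove some links." };
  }
  return { allowed: true };
}

console.log(checkBeforeSend("general", "hello everyone")); // { allowed: true }
console.log(checkBeforeSend("general", "badword1!!!")); // rejected with a reason
```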
Take a closer look: Can You Rely on AI Automation?
Chat Moderation Best Practices
Whichever method of moderation you use, there are best practices for both that help keep your community safe and make quick work of chat moderation.
Manual Moderation Best Practices
For apps that employ a human-assisted moderation method, or that designate trustworthy users as moderators, there are a few ways to streamline the review process and support the people doing it:
- Get to know your community: Be a part of it. Get to know the people chatting on your platform. Help build moments and memories that can last a lifetime and have a major positive impact on your app.
- Moderating is never a one-person show: Like any other team activity, the better you coordinate and communicate with your fellow moderators, the easier everyone’s work will be. Whether you’re a team of two or have 30 fellow moderators to lean on, the closer you are as a team, the easier it will be to resolve any incidents that occur.
- Know the moderator tools available to you: There are ways to decrease the volume and frequency of messages you’ll need to moderate, from limiting the number of messages a user can send within a certain time frame to restricting sending abilities to VIP users or those with special permissions. The better you know your arsenal of tools, the better you can use them.
Take a closer look: 7 Best Practices of Content Moderation
Automated Moderation Best Practices
Automated moderation should always be used in concert with a manual approach; it exists to support high-volume chat platforms that a single human or a small team could not manage alone. There are a few modes of control moderators can activate to review content automatically:
- Follower Only Mode: Restricts message sending to followers who have been following a chat channel for a timeframe you designate, from minutes to months.
- Unique Chat Mode: This mode disallows repeated messages.
- Slow Mode: Prevents users from sending multiple messages within a given time frame. By default, users can send one message every X seconds; you can customize the interval to your preference (see the sketch after this list).
- Sub-Only Mode: Allows only subscribers, VIPs, and moderators to chat, and prevents all other messages.
- Emote Only Mode: Restricts non-moderator users from chatting using anything other than emotes.
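As a rough sketch of how slow mode can be enforced, the snippet below tracks each user’s last message timestamp and rejects messages sent before the configured interval has elapsed. The interval and the in-memory map are illustrative assumptions.

```typescript
// A rough sketch of slow mode: each user may send at most one message per
// configured interval. Timestamps are kept in an in-memory map for illustration.

const SLOW_MODE_INTERVAL_MS = 10_000; // e.g. one message every 10 seconds
const lastMessageAt = new Map<string, number>();

function canSendNow(userId: string, now: number = Date.now()): boolean {
  const last = lastMessageAt.get(userId);
  if (last !== undefined && now - last < SLOW_MODE_INTERVAL_MS) {
    return false; // still inside the cooldown window
  }
  lastMessageAt.set(userId, now); // record the send and allow it
  return true;
}

console.log(canSendNow("user-1")); // true: first message
console.log(canSendNow("user-1")); // false: sent again immediately
```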
Chat Moderation Use Cases
Live Chat
Many companies feature a chatbot on the homepage of their website or offer visitors a chance to chat live with a sales representative or customer support agent via text-based message.
- Immediate Connection: The instantaneous nature of live chat makes it an especially vulnerable channel. Anyone who visits a site with a live chat window can be instantly connected to a real representative, whose time is valuable and should not be spent dealing with abuse or spam.
- Unmoderated Live Chat: If left unregulated, legitimate prospects and customers can get lost in the shuffle, making live chat a less viable business channel.
To keep live chat a safe and productive place for both prospects and employees, automated moderation tactics can be implemented to prevent harmful content from ever being sent.
Multi-player Video Games
In-game chat is a breeding ground for toxic content in need of moderation. Spammy and abusive players can ruin the trust within gaming communities and in turn, result in high player churn rates, low ROI, and dismal lifetime value metrics.
- Common Offenses: Most offenses in in-game chat involve mature content, abusive language, spam, bullying, and grooming. Regardless of their age, gamers should not have to endure abuse from other players via chat.
- Risk of Unsatisfied Players: Without proper moderation, gamers will feel unsafe and unwelcome in the chat. Their dissatisfaction can lead them to put the game down for good and share their bad experiences on social media, a major risk to a game’s reputation and its relationships with sponsors.
Livestreams
In today’s world, many events that used to be held in person have transitioned to an online or hybrid setup. From webinars on business development to streaming gamers on Twitch, events that capture live content and allow attendees to engage with each other via chat require moderation.
- Absence of Moderation: Livestream chat without moderation risks attendees feeling uncomfortable and leaving the event, low attendee return rates, and even the loss of sponsorships.
- High Message Volume: Depending on the size of the event, livestream chat can connect hundreds of thousands of users, and the sheer volume of messages can pull focus from the speakers and content.
It is in the best interest of event coordinators to ensure that proper moderation is in place on the day of the virtual event, or else they risk letting months of planning go to waste.
Chat Moderation Example Guides
If you need inspiration for the tools, tactics, and methods of moderation your business should employ, take a look at the five examples below.
1. Discord: Discord is a popular chat platform for gaming communities, but it is not immune to toxic users and content. To combat this, Discord has established a Moderation Academy that covers everything from the basics to advanced moderation, so that anyone, from first-time moderators to experienced veterans of massive online communities, can learn how to effectively monitor chat conversations and keep members safe.
2. Miappi: Miappi helps brands strengthen awareness and build loyalty among their user bases by creating a sense of community through UGC. Given the variety of content mediums available to users, its moderation toolkit is an essential component of its community safety strategy.
3. YouTube: YouTube is a goliath in the live streaming space, which means monitoring what goes on in its live chat feature is especially important. To help live chat moderators keep conversations clean, YouTube has released a guide that outlines all of the available live chat moderation tools, along with suggestions on how and when to use them.
4. Twitch: Twitch is a popular live streaming platform used primarily by gamers. Twitch has made its moderation and safety guidelines public, outlining everything from how to choose a moderator to the tools you can use to keep your stream clean.
5. Facebook: Facebook is another major player that supports live events, streams, chat, and comments. Once a live broadcast begins, comments come streaming in, and the chat can quickly turn off-topic or offensive. To help hosts and moderators handle these live comments effectively, Facebook has published a guide of best practices.
Build vs. Buy: Chat Moderation
The “build vs. buy” decision is paramount when discussing a company’s software needs. Building a custom moderation solution can provide a host of benefits, but it comes at a cost: an intelligent chat moderation platform is a significant investment, and building a comprehensive solution can take years of development time with a hefty price tag. Consider the following when deciding whether to purchase existing chat moderation technology or build it internally.
Building Moderation From Scratch
When building proprietary chat moderation software, you retain control of all aspects of product design, allowing you to create a customized solution to best fit your company's needs.
Advantages of building and keeping your moderation technology in-house include:
- Control enhancements and development schedule
- Avoid software license fees and, in some cases, maintenance and support fees
- Fully customize to fit your project scope and needs
It is important to remember that by building your own moderation solution, you assume the risk if it fails. Remember the Scunthorpe problem, first introduced by AOL’s filters and repeated years later by Google’s and Facebook’s: overly aggressive filters produced an enormous number of false positives that required significant manual moderation to review and correct.
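The classic failure mode here is naive substring matching: a filter that blocks a term anywhere inside a word will also block innocent words and place names. Below is a small illustration contrasting a substring check with a word-boundary check, using a mild placeholder term rather than a real slur.

```typescript
// The Scunthorpe problem in miniature: naive substring matching flags innocent
// words, while word-boundary matching does not. "ass" stands in for a blocked term.

const blockedTerm = "ass";

function naiveFilter(text: string): boolean {
  return text.toLowerCase().includes(blockedTerm); // matches inside other words
}

function boundaryFilter(text: string): boolean {
  return new RegExp(`\\b${blockedTerm}\\b`, "i").test(text); // whole words only
}

console.log(naiveFilter("The grass needs cutting")); // true: a false positive
console.log(boundaryFilter("The grass needs cutting")); // false: correctly allowed
```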
Buying Moderation
When you purchase a ready-made chat moderation solution, you benefit from professionally developed and vetted technology with years of market use, iteration, documentation, customer support, and added intelligence behind it.
Additional advantages of buying moderation software include:
- Offloading moderation allows you to focus resources on other core features of your app
- Consistent product upgrades and new features that improve UX
- When bugs or errors are discovered, you can rely on the vendor to troubleshoot and fix them rather than exhaust internal resources
- Quick deployment time
Building a moderation solution requires extensive knowledge of natural language processing and language rules. If the filter you’ve built from scratch fails to understand complex language and produces false positives, it can harm your app’s reputation. It is easy to underestimate the cost and time involved in developing technology as complex as chat moderation: a homegrown solution requires costly development time plus ongoing maintenance, moderation, and support.
Your App Users Deserve a Safe Community
Whether you’re looking to bolster existing moderation efforts or simply seeking information on how to create an effective solution from scratch, you now know the two types of moderation, best practices for each, the content mediums that require review, and a few concrete use cases.
Regardless of whether you decide to build or buy a chat moderation solution, protect the hard work you put into developing a chat feature and give your users the fun, safe social experience they deserve.