Auto-Moderation Guide
Introduction to Auto-Moderation
Stream Chat AI’s Auto-Moderation feature provides sophisticated protection for your streaming environment, ensuring your chat remains a welcoming and appropriate space for all viewers. This powerful tool serves multiple purposes depending on your specific moderation needs:
Whether you’re new to streaming without established moderators, prefer an unbiased technological solution alongside human judgment, or need supplementary support for your existing moderation team during busy periods, our Auto-Moderation system offers comprehensive protection tailored to your requirements.
Understanding the Technology Behind Chat Moderation
Our Auto-Moderation system utilises advanced artificial intelligence to analyse every message in real time against your selected moderation parameters. This process works through sophisticated content analysis:
The system examines incoming messages and assigns confidence scores regarding potential violations across various categories of inappropriate content. These scores are then compared against your configured threshold settings to determine which actions should be taken.
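To make the comparison step concrete, here is a minimal sketch in Python. It is purely illustrative: the function name, category labels, and scores are assumptions for this guide, not Stream Chat AI’s internal implementation or API.

```python
# Illustrative sketch only: per-category confidence scores compared to a threshold.

def moderate(message_scores: dict[str, float], threshold: float) -> list[str]:
    """Return the categories whose confidence score meets or exceeds the threshold."""
    return [category for category, score in message_scores.items() if score >= threshold]

# Hypothetical scores the AI model might assign to one chat message
# (0.0 = almost certainly fine, 1.0 = almost certainly a violation).
scores = {"harassment": 0.92, "hate_speech": 0.04, "violence": 0.15}

flagged = moderate(scores, threshold=0.80)
if flagged:
    print(f"Remove message; flagged for: {', '.join(flagged)}")  # harassment
else:
    print("Allow message")
```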
The confidence threshold setting allows you to fine-tune your moderation approach:
A higher confidence threshold requires the system to be more certain before acting, resulting in fewer false positives but potentially missing some borderline violations. This setting is ideal for streams where maintaining chat flow is prioritised and only clearly inappropriate content should be removed.
Conversely, a lower confidence threshold lets the system act on less certain detections, catching more potentially inappropriate messages but occasionally flagging innocent content. This setting suits streams where protecting viewers from any potentially harmful content takes precedence.
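As a small illustration of this trade-off (using assumed numbers, not actual system output), the same borderline message can be kept or removed purely depending on where the threshold sits:

```python
# One borderline message, scored at 0.72 confidence for a violation.
score = 0.72

for threshold in (0.85, 0.60):
    action = "remove message" if score >= threshold else "leave in chat"
    print(f"threshold {threshold:.2f}: {action}")

# threshold 0.85: leave in chat   -- higher bar, fewer false positives
# threshold 0.60: remove message  -- lower bar, more protective, may over-flag
```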
Importantly, your human moderators always retain override capabilities. They can reverse any automated actions taken by the system, ensuring that the final moderation decisions align with your community standards.
Setting Up Auto-Moderation
To configure this feature for your stream:
- Navigate to your Stream Chat AI Dashboard
- Locate and select the Auto-Moderation tab in the navigation menu; this will direct you to the Bot Moderation configuration page
- Toggle the “Enable Chat Moderation” switch to activate the feature
- Upon activation, a comprehensive list of moderation options will appear
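Conceptually, these steps amount to one master switch plus a set of per-category options. The sketch below shows what the saved state represents; the field names are illustrative assumptions for this guide, not Stream Chat AI’s actual configuration schema.

```python
# Hypothetical representation of the Bot Moderation settings saved from the
# dashboard. Field names are illustrative, not the product's real schema.
auto_moderation_settings = {
    "chat_moderation_enabled": True,   # the "Enable Chat Moderation" toggle
    "categories": {                    # per-category toggles (see the list below)
        "hate_speech": True,
        "harassment": True,
        "sexual_content": False,
    },
    "confidence_threshold": 0.80,      # adjustable on supported subscription levels
}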
Available Moderation Categories
Stream Chat AI offers protection against a wide range of inappropriate content categories. Each category addresses specific types of harmful content that may appear in your chat:
Hate Speech
Content that expresses, incites, or promotes hatred based on protected characteristics including race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. Note that hateful content targeting non-protected groups (such as hobbyists or enthusiasts of particular activities) falls under the harassment category instead.
Harassment
Content that expresses, incites, or promotes harassing language directed towards any individual or group, creating an unwelcoming or hostile environment.
Self-Harm Content
Content that promotes, encourages, or depicts acts of self-harm, including but not limited to suicide, self-injury, and eating disorders.
Self-Harm Instructions
More severe than general self-harm content, this category specifically identifies content that provides instructions or advice on how to perform acts of self-harm, posing a more immediate risk to vulnerable viewers.
Sexual Content Involving Minors
Sexual content that references or includes individuals under 18 years of age. This category receives zero tolerance in all streaming environments.
Graphic Violence
Content that depicts death, violence, or physical injury with explicit or graphic detail that may be disturbing to viewers.
Threatening Hate Speech
A more severe category that identifies hate speech which also includes threats of violence or serious harm towards groups based on protected characteristics.
Threatening Harassment
Harassment content that escalates to include threats of violence or serious harm directed at any target.
Self-Harm Intent
Content where the speaker expresses current engagement in or intention to engage in acts of self-harm, representing an immediate concern.
Sexual Content
Content designed to arouse sexual excitement, including descriptions of sexual activity or promotion of sexual services. This category excludes educational content about sexuality or sexual wellness.
Violence
Content that depicts death, violence, or physical injury, even without the graphic detail that would place it in the “Graphic Violence” category.
Implementing Your Moderation Strategy
After reviewing the available categories, determine which types of content are inappropriate for your particular streaming community. Consider your audience demographics, content theme, and community values when making these decisions.
Once you have made your selections:
- Toggle the switches corresponding to each category you wish to moderate
- Adjust confidence thresholds if available for your subscription level
- Click “Save Settings” to implement your moderation strategy
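Once saved, an incoming message triggers action only if it scores above your threshold in a category you have actually enabled. The following minimal sketch of that filtering uses assumed names and scores rather than real system output:

```python
# Illustrative only: saved settings gate which categories are enforced.
enabled_categories = {"hate_speech", "harassment"}
threshold = 0.80

# Hypothetical per-category scores for one message.
scores = {"hate_speech": 0.10, "harassment": 0.91, "sexual_content": 0.95}

violations = [
    category
    for category, score in scores.items()
    if category in enabled_categories and score >= threshold
]
# Only "harassment" triggers action: "sexual_content" scores highly but is not
# enabled, and "hate_speech" is enabled but falls below the threshold.
print(violations)  # ['harassment']
```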
The system will immediately begin monitoring your chat according to your specified parameters. You may revisit and adjust these settings at any time as your community evolves or your moderation needs change.
Best Practices for Auto-Moderation
To achieve optimal results with Auto-Moderation:
- Consider starting with more lenient settings and gradually increasing strictness as you observe system performance.
- Regularly review moderation actions to ensure they align with your community standards.
- Communicate your chat rules clearly to your viewers so they understand what content is unacceptable.
- Combine Auto-Moderation with human moderators for the most effective coverage, particularly during high-traffic streaming periods.
- Use the insights gained from moderation patterns to better understand your community and refine your approach over time.
With Stream Chat AI’s Auto-Moderation system properly configured, you can focus more on creating engaging content while maintaining a safe, welcoming environment for your entire community.