
RATINGS & TOOLKIT
On this page, you’ll find clear, detailed information about the S.O.S. ratings and icons, along with an at-a-glance view of how social media companies are performing against these standards. The goal is to help parents, youth, educators, and other stakeholders quickly understand what the ratings mean, how they’re applied, and how different platforms compare.
The S.O.S. Toolkit was developed with direct input from users to ensure it is easy to understand and practical to use. It includes tailored resources for parents, youth, educators, and healthcare providers, recognizing that each group engages with the S.O.S. program in different ways and has different needs.

HOW S.O.S. RATINGS WORK

Use Carefully
- Platform tools and filters help reduce exposure to harmful or inappropriate content
- Reporting tools are accessible and easy to use
- Privacy, default, and safety functions are clear and easy for parents to set

Partial Protection
- Some safety tools exist on the platform, but they can be hard to find or use
- Harmful content can appear on the platform and be missed by content moderation
- Some features, such as endless scrolling, can create stress for youth

Does Not Meet The Standards
- Filters and content moderation do not reliably block harmful or unsafe content
- The platform lacks policies, functions, or transparency related to unsafe content
- Privacy and safety protections are weak or missing

Under Review
This icon indicates that a company has completed a self-assessment and provided supporting documentation and evidence that experts are reviewing for scoring.

Application Pending
This icon indicates that a company has started but not yet completed the application process.

Important Note: All social media platforms carry some level of risk. S.O.S. ratings reflect current evidence and practices and will evolve as research, expert insight, and user feedback grow. Even highly rated platforms benefit from parental involvement, open communication, and ongoing awareness.
The S.O.S. ratings are intended to guide families toward safer choices, initiate important conversations, and support youth mental health and suicide prevention. The ratings are not an endorsement of any company, product, or platform, and they do not address every issue young people might face. This is version 1.0 of the S.O.S. ratings; it will evolve with new insights and user feedback.
Category Ratings: Color-Coded System
The category rating shows how a company performed in each of the five S.O.S. categories.
- Meets Threshold
- Partially Meets Threshold
- Below Threshold
"What safety rules exist and how clearly they are defined"
Risk Categories: Policy category has three standards that assess the company's policies on user safety, adveritising and research on teens (13-19)
Risk Details: Generally, this category informs users on the accessibility, development (user input), evidence-based, enforceable actions, company response, of policies around mental health, suicide and self-harm issues, advertising and research on teens (13-19).
"How platform features, algorithms, and design choices affect teen safety"
Risk Categories: This category has four standards that are intended to assess various features and design choices that effect teen safety on the platform regarding harmful content and how to report it, amount of user time on the platform, and parental tools and controls provided by the platform to protect teens (13-19).
Risk Details: Generally this detail informs users on the specific functions of the platform (for example, ease and access to user reporting, responses from a company to a report, technology and user features that protect users from exposure to harmful content such as privacy settings and filters), time and break functions, as well as parental controls and settings to protect teens (13-19).
"How safety decisions are made, enforced, and shared with the public"
Risk Categories: This category has seven standards that are intended to assess the company’s Compliance Teams, how transparent they are about harmful content and user reports on their platform, what data is collected and/or publicly shared around research, and exposure to violence and advertising to teens (13-19).
Risk Details: Generally this detail informs users on the roles and qualifications of personnel at the company, external experts/advisors used by the company, and compliance/regulatory departments within the company, how transparent the company is regarding harmful content and exposure to violence users are, the type and frequency of research and advertising that is conducted and/or participated in by the company both internally and external to the company.
"How users are supported with education, mental health resources, and tools for healthy use"
Risk Categories: This category has five standards that are intended to assess how a company supports users with educational content, mental health resources and tools for healthy use.
Risk Details: Generally this detail helps users understand the educational content that the platform provides to users on crisis support and service, ease of access to educational content and how it was developed, how to delete personal data from the platform (the right to be forgotten), as well as what specific developmentally informed features exist to determine ability to use the platform.
"How harmful content is identified, moderated, and addressed"
Risk Categories: This category has five standards that are intended to assess how harmful content is identified and addressed, if credible sources of reliable information are provided to users, and how the company addresses content moderation.
Risk Details: Generally this detail informs users on what types of harmful content are prohibited by the company, how the company addresses credible vs. not credible sources of health-related information, specific guidelines developed for allowable content, how the company categorizes and ensures content is developmentally, age and culturally informed, respectful and inclusive content that does not discriminate against users, how the content might impact a user’s mental health and details regarding how content is moderated by the company.
RATINGS BY COMPANY
[Ratings chart: Company A, Company B, and Company C, each rated in the five standard categories: Policy, Functionality, Governance & Transparency, Digital Literacy & Well-Being, and Content.]

THE S.O.S. TOOLKIT
This toolkit brings together guidance, resources, and practical tools to support informed decisions in real situations. It organizes essential information in one place so users can spend less time searching and more time using it.
The toolkit is designed to be flexible: it does not need to be read from start to finish, and different sections can be used independently depending on your needs and goals.
It is not meant to be read once and set aside. It is designed to be revisited when questions arise, adapted to your needs, and shared as your work evolves, supporting communication, decision-making, and implementation over time and making challenging situations easier to navigate.