
STANDARDS

What Are Standards?

Standards are documents that establish an agreed-upon way of doing something.

Standards help ensure a common approach and repeatable outcome in the design and development of products, services and technologies.

A standard can be thought of as an agreed-upon norm used by people, industry, and government that outlines the best way to complete a task – whether it’s about developing a product, providing a service, controlling a process, or interacting with the world.

A standard is stated as a principle; it is a claim or assertion to be defended.

A standard describes something that exists in a well-running organization/business — it is a characteristic or essential feature.

STANDARDS DEVELOPMENT PROCESS

  • Standards are established documents that define an agreed-upon method for accomplishing a task. They ensure a consistent approach and predictable outcomes in the design and development of products, services, and technologies. Essentially, a standard is a consensus-driven norm used by people, industries, and governments to outline the most effective way to complete a specific task, whether that involves product development, service delivery, process control, or interaction with the world. Standards are principles that assert how things should be done and describe the essential features of well-functioning organizations and businesses. They play a critical role in ensuring the safety, quality, and reliability of products and services, protecting public health, and improving systems and processes by reducing waste, cutting costs, and ensuring consistency. Moreover, standards provide a shared framework for expectations, allowing industries, especially in the technology sector, to align their product designs with these agreed-upon benchmarks.

     

    The rationale behind having standards varies depending on the context. In some instances, the rationale serves as the justification for the standard itself, often articulated as an argument where the conclusion is the statement of the standard. This rationale may include supporting evidence, explanations, or examples that bolster the need for the standard.

  • Standards ensure the safety, quality and reliability of products and services.

    Standards protect our health.

    Standards improve systems and processes; they reduce waste, cut costs, and ensure consistency.

    Standards provide a basis to share the same expectations about a product or service (in this case, helping tech companies understand the target their product designs should meet).

    In some cases, the rationale is the argument for the standard. In other cases, it provides information on the reasons for a standard.

    When the rationale is treated as an argument, it is written so that the conclusion of the argument is the statement of the standard.

    The rationale may include additional evidence (explanation, examples) supporting the standard. 

    Rationales are relatively short, e.g., 100-250 words. Not all standards have a rationale.

  • Standards development is a voluntary and collaborative effort.

    Standards development includes a diverse set of inputs and stakeholders.

    Standards are informed by scientific literature and expert input. 

    Standards development follows a consensus-based approach, not unanimous agreement.

    Sub-Standard: how the principle in the standard may be achieved. Sub-standards:

    • typically reflect some kind of “action”.

    • contain only one concept, unless the ideas belong together or are inseparable.

SAFE ONLINE STANDARDS

Policy

Standard 1: The company has a policy on user safety that:

  • is readily accessible.

  • is easily understood (i.e., uses plain language).

  • is developmentally informed and age appropriate.

  • is aligned with evidence-based, best practices for mental health and suicide prevention.

  • includes and incorporates user input.

  • supports and promotes mental health and well-being.

  • describes enforceable actions by the company.


Addresses:

  • user flagging and reporting functions.

  • parental controls.

  • rule and/or policy violation or infringement.

  • a safe amount of daily time on platform for users.

  • mental health user engagement (the policy includes language specific to mental health, suicide and self-harm, and mental wellness).

Content (harmful or dangerous, including online challenges), and how the company:

  1. intervenes when harmful or dangerous SSI content is reported (e.g. content moderation, referral to law enforcement, sending resources, etc.).

  2. removes harmful SSI content.

  3. provides timely feedback to users on content they have reported (including resources).

Standard 2: The company has a policy on advertising to users aged 13-19 that:

  • is readily accessible.

  • is easily understood (i.e., uses plain language).

  • is developmentally informed and age appropriate.

  • is aligned with evidence-based, best practices for mental health and suicide prevention (e.g., prohibits promotion of lethal means such as firearms, substances, etc.).

  • includes and incorporates user input.

  • supports and promotes mental health and well-being.


Addresses:

  • enforceable actions by the company.

  • dissemination of the policy.

  • performance standards (how the ads are performing, click-throughs).

  • reporting standard at point of access.

  • prohibition of the sale or commercial use of data obtained from users under age 18.

  • frequency of advertising to children under 18.

Standard 3: The company has a policy on research on suicide, self-harm, mental health and wellness that:

  • is readily accessible.

  • is easily understood (i.e., uses plain language).

  • is developmentally informed and age appropriate.

  • is aligned with evidence-based, best practices for mental health and suicide prevention.

  • includes and incorporates user input.

  • supports and promotes mental health and well-being.

  • describes enforceable actions by the company.


Addresses:

  • the value and support of research on mental health, suicide, and well-being.

  • what data, content and information will be made available for independent, external research.

  • a “no-profit” model for research on user data of youth under age 18.

Functionality

Standard 1: The platform has a reporting feature that:

  • is user friendly (easy to use and understand).

  • is easily accessible on the platform and readily identifiable.

  • includes and incorporates user input.

Provides:

  • a follow-up, action-feedback function for users or content reported to the company (a minimal sketch follows this list):

    1. reporting feedback within 24 hours.

  • pop-up/sensitive content warning features for reportable content:

    1. that users can initiate.

  • AI/ML to learn from and improve reporting functions.
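As a rough illustration of the 24-hour feedback window above, the sketch below shows one way a platform might represent a user report and check whether feedback is overdue. It is a hypothetical example only; the names (UserReport, FEEDBACK_WINDOW, is_overdue) and the structure are assumptions, not any company's actual reporting system.

    # Minimal sketch of a report record with a 24-hour feedback deadline.
    # All names are hypothetical; a real pipeline would also cover moderation
    # queues, escalation to law enforcement, and resource referrals.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta, timezone

    FEEDBACK_WINDOW = timedelta(hours=24)  # feedback target named in the standard

    @dataclass
    class UserReport:
        report_id: str
        reporter_id: str
        content_id: str
        reason: str  # e.g. "self-harm", "violence"
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        feedback_sent_at: datetime | None = None

        @property
        def feedback_due(self) -> datetime:
            # Deadline for acknowledging the report back to the reporter.
            return self.created_at + FEEDBACK_WINDOW

        def is_overdue(self, now: datetime | None = None) -> bool:
            now = now or datetime.now(timezone.utc)
            return self.feedback_sent_at is None and now > self.feedback_due

A scheduled job could periodically call is_overdue() on open reports to surface any that have not yet received feedback within the window.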

Standard 2: The platform protects users who are exposed to harmful SSI content and:

  • includes technology to reduce exposure to harmful content.

  • includes functions for users to report harmful content on live streams.

  • includes functions to help viewers of harmful live streams with resources and support.

  • includes functions that prevent sharing of harmful live streams.

  • includes controls on privacy settings, filters, and pushed content that allow users to determine pushed/recommended content.

Standard 3: The platform includes features regarding user time that:

  • are easily understood (i.e., use plain language).

  • incorporate age-relevant user input in the design.

  • incorporate evidence-informed, developmentally and age-appropriate screen-time usage.

  • are readily available and accessible to users (not hidden).


Includes:

  • a voluntary “pause” feature (e.g., “Take a 30-minute break”):

    1. that is readily identifiable.

    2. that is easy to set.

  • a mandatory “take-a-break” feature (see the sketch after this list):

    1. automatically activated by the platform.

    2. that, upon activation, prohibits use for a pre-determined, set amount of time.

    3. with the break time disclosed to users at the time of activation.

  • functions that allow users to deactivate “endless scroll”.

  • functions that show users their history.

  • functions that screen users for externally expert-defined problematic social media use (e.g. lethal means instruction such as how to use a firearm to hurt self or others, cutting, strangulation or eating disorder behaviors).
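To make the mandatory “take-a-break” mechanics above concrete, here is a minimal, hypothetical session-timer sketch: once continuous use passes a threshold, further use is blocked for a disclosed, pre-set period. The threshold, break length, and names (SessionTimer, MAX_CONTINUOUS_USE, MANDATORY_BREAK) are placeholders, not values taken from the standard.

    # Hypothetical "take-a-break" timer; thresholds and break lengths are placeholders.
    from datetime import datetime, timedelta, timezone

    MAX_CONTINUOUS_USE = timedelta(minutes=60)  # placeholder usage threshold
    MANDATORY_BREAK = timedelta(minutes=15)     # pre-determined, disclosed break length

    class SessionTimer:
        def __init__(self) -> None:
            self.session_start = datetime.now(timezone.utc)
            self.break_until: datetime | None = None

        def check(self) -> str | None:
            """Return a user-facing notice if a break is active or required, else None."""
            now = datetime.now(timezone.utc)
            if self.break_until is not None:
                if now < self.break_until:
                    # Use is blocked until the disclosed end of the break.
                    return f"Break in progress until {self.break_until:%H:%M} UTC."
                # Break finished; start a fresh session window.
                self.session_start = now
                self.break_until = None
            if now - self.session_start >= MAX_CONTINUOUS_USE:
                # Activate the break and disclose its length to the user.
                self.break_until = now + MANDATORY_BREAK
                minutes = int(MANDATORY_BREAK.total_seconds() // 60)
                return f"Time for a {minutes}-minute break."
            return None

The same structure could back the voluntary “pause” feature by letting the user, rather than the platform, set break_until.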

Standard 4: The platform provides parental tools and controls that:

  • are based on age.

  • are developmentally informed.

  • include and incorporate user input.

  • are readily available and accessible (not hidden).


Provides:

  • competency-based user assessments for users under 18 (e.g. must answer 2 questions to continue).

  • opt-in features that are easy to use.

  • parental controls (see the sketch after this list):

  1. to set allowable time on platform for users aged 13-1

  2. to set contact parameters with other users (including unknown users)

  3. to set search functions and terms on the platform

  4. with guidance on actionable steps to adjust parental controls

  • privacy filters that allow users to set privacy settings.
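One way to picture the parental tools above is as a per-child settings record that the platform enforces. The sketch below is a hypothetical illustration only; the field names and defaults (daily_time_limit_minutes, blocked_search_terms, etc.) are assumptions, not requirements of the standard.

    # Hypothetical parental-controls record; fields and defaults are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class ParentalControls:
        child_user_id: str
        daily_time_limit_minutes: int = 60               # allowable time on platform
        allow_contact_from_unknown_users: bool = False   # contact parameters
        blocked_search_terms: list[str] = field(default_factory=list)  # search limits
        private_account: bool = True                     # privacy setting/filter

        def is_search_allowed(self, query: str) -> bool:
            # Block any query containing a term the parent has restricted.
            q = query.lower()
            return not any(term.lower() in q for term in self.blocked_search_terms)

Guidance on actionable steps (item 4 above) would then amount to plain-language help for adjusting these settings.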

Governance & Transparency

Standard 1: The company employs a Trust and Safety Team that:

  • employs a full-time Chief or Global Head of Health, Safety, and Security.

  • employs a full-time Assistant Chief or Global Head of Health, Safety, and Security with a doctorate in psychology.

  • employs or contracts with a full-time Chief Child Protection Officer or equivalent expert.

  • employs or contracts with a team of diverse global mental health, child and adolescent development experts and subject matter experts (such as in means restriction).

  • involves a diverse (racial, gender, sexual orientation, geography, lived experience, etc.) panel of community members including adults, parents and youth in product development, monitoring and evaluation.

Standard 2: The company is transparent:

  • in reporting of user reports on harmful content.

  • in its use of AI/ML to identify, flag, report and remove harmful content.

  • in the monitoring, identification, and removal of child predators and of suicide/self-harm/harm to other users on the platform, and in related compliance.

  • in annual reporting of platform feature changes resulting from user- or AI-generated reports on self-harm or dangerous content.

With users, in age-appropriate language, about:

  1. what the company does with the data collected.

  2. what data collection users can block or opt out of.

Standard 3: The company voluntarily participates in and/or supports:

  • providing anonymized data for independent, externally conducted research.

  • providing research data for internally conducted research.

  • transparency of research conducted regarding use of the platform by youth under 18 and their mental health.

  • health and well-being, by including these terms in its mission statement.

Standard 4: The company voluntarily provides an annual:

  • audit report on user safety to users:

    1. that is readily accessible.

    2. that is easily understood.

  • audit report on user safety to an external, independent regulatory body.

The company also participates in an independent review of new technology around mental health safety.

Standard 5: The company conducts an annual exposure-to-violence evaluation that:

  • makes the results public and transparent.

  • includes results related to risk of exposure to content on the platform (e.g. platform has a 6% likelihood of exposure to suicide, self-harm, violence, etc.).

Standard 6: The company is transparent regarding advertising:

  • policies on advertising to minors (under 18).

  • on the use of a Verified Advertiser model (verification of who the advertiser is).

  • on how it prohibits personalized or customized advertising to youth under 18.

Standard 7: The company has a compliance department that:

  • oversees internal audits.

  • coordinates with external auditors.

  • has regular and direct contact with senior management.

  • ensures compliance with regulatory bodies.


Provides:

  • audits, at least quarterly, regarding suicide, self-harm and violence on the platform.

  • reports, at least quarterly, on actions taken by the company regarding suicide, self-harm and violence on the platform.

Digital Literacy & Well-Being

Standard 1: The company educates users:

  • on crisis support, response and intervention services.

  • on how to find credible crisis support services in their area.

  • on how to talk with others about crisis situations (for suicide, self-harm, violence).

  • on how to disable or turn off reactions on the platform.

  • on mental health and wellness.

Standard 2: Recognizing a user’s right to be forgotten (i.e. the right to erasure), the company:

  • educates users on the right to be forgotten.

  • deletes and removes data from platforms for users who ask to be forgotten.

  • informs users when their request to be forgotten has been completed.
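A right-to-erasure request can be pictured as a small three-step workflow: record the request, delete the user's data, and notify the user once deletion is complete. The sketch below assumes hypothetical delete_user_data and notify_user helpers; it is an illustration of the sequence, not a description of any company's process.

    # Hypothetical right-to-erasure workflow: delete the data, then confirm to the user.
    from datetime import datetime, timezone

    def delete_user_data(user_id: str) -> None:
        """Placeholder for removing the user's stored data from platform systems."""
        print(f"[stub] deleting stored data for {user_id}")

    def notify_user(user_id: str, message: str) -> None:
        """Placeholder for messaging the user (e.g., email or in-app notice)."""
        print(f"[stub] to {user_id}: {message}")

    def handle_erasure_request(user_id: str) -> None:
        requested_on = datetime.now(timezone.utc).date()
        delete_user_data(user_id)
        # Per the standard, inform the user when the request has been completed.
        notify_user(user_id, f"Your erasure request from {requested_on} is complete.")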

Standard 3: The company proactively provides digital literacy and wellness education that:

  • is readily accessible.

  • is easily understood (i.e., uses plain language).

  • is developmentally informed and age appropriate.

  • is aligned with evidence-based, best practices for mental health and suicide prevention.

  • includes and incorporates user input.

  • provides an introduction to the platform and terms of use.


Addresses:

  • suicide and self-harm.

  • mental health and well-being.

Standard 4: The company responsively provides digital literacy and wellness content that:

  • is readily accessible.

  • is easily understood (i.e., uses plain, readable language).

  • includes developmentally informed and age-appropriate information and resources.

  • provides users with educational information on parental monitoring functions at initial sign-up of account.

  • is aligned with evidence-based, best practices for mental health and suicide prevention.

  • includes and incorporates user input.

  • includes and references certified and credible sources of health-related information (e.g. Mayo Clinic, Cleveland Clinic, etc.).

Addresses:

  • parental monitoring functions

  • annually updated educational information on parental monitoring functions related to the age of the child user

  • the promotion of mental health and well-being

  • suicide, self-harm and eating-related disorders


Standard 5: The company educates users on developmentally informed, age-appropriate use that:

  • includes recommended “on-ramps” for developmentally informed, age-appropriate use (e.g., 13-15, 16-19, 19+, or age gating).

  • includes developmentally informed features (e.g. quizzes, tasks) to determine access to content that changes over time.

  • educates users on digital rights (ownership of data, content created, personal information).


OTHER STANDARDS

The United States has various standards for different types of media and products to ensure safety, appropriateness, and quality. Some notable examples:

Automobiles

National Highway Traffic Safety Administration (NHTSA): Provides safety ratings for vehicles based on crash tests and other safety evaluations.

 

Environmental Protection Agency (EPA): Provides fuel economy ratings and emissions standards for vehicles.

Books

Common Sense Media: Provides age-appropriate ratings and reviews for books, movies, TV shows, and games based on content.

Buildings & Construction

Building Codes: Various local, state, and national codes, such as the International Building Code (IBC) and National Electrical Code (NEC), ensure safety and quality in construction.

Energy Star: A program by the EPA and the Department of Energy that certifies energy-efficient products and buildings.

Food & Beverages


Nutrition Facts Label: Required by the Food and Drug Administration (FDA) to provide information on the nutritional content of food and beverages.

USDA Organic Certification: Ensures products labeled as organic meet the standards set by the United States Department of Agriculture (USDA).

Health & Pharmaceuticals

FDA Approval: Ensures that drugs, medical devices, and vaccines meet safety and efficacy standards before they can be marketed.

Music

Parental Advisory Label (PAL): Indicates explicit content in music recordings, managed by the Recording Industry Association of America (RIAA).

Online Content

COPPA (Children’s Online Privacy Protection Act): Regulates the collection of personal information from children under 13 by websites and online services.

 

CIPA (Children’s Internet Protection Act): Requires schools and libraries to implement internet safety policies and filter inappropriate content to receive federal funding.

OSHA

OSHA develops and enforces regulations known as standards to ensure workplace safety and health. These standards are found in Title 29 of the Code of Federal Regulations (CFR).

Television

TV Parental Guidelines: Provides ratings like TV-Y (All Children), TV-Y7 (Directed to Older Children), TV-G (General Audience), TV-PG (Parental Guidance Suggested), TV-14 (Parents Strongly Cautioned), and TV-MA (Mature Audience Only).

Toys & Products for Children

Children’s Product Safety: Regulated by the Consumer Product Safety Commission (CPSC), which ensures toys and products for children meet safety standards.

Video Games

Entertainment Software Rating Board (ESRB): Ratings include EC (Early Childhood), E (Everyone), E10+ (Everyone 10 and Older), T (Teen), M (Mature 17+), and AO (Adults Only 18+).

Verified Carbon Standard

The Verified Carbon Standard (VCS) Program is the world’s most widely used greenhouse gas (GHG) crediting program.


IMPACT AND EVALUATION PLAN

SHORT TERM

Adoption by tech companies.
Endorsement by organizations.

MID TERM

Reported increase in face-to-face time with peers, family.
Reduction in consumption.
Reduction in number of safety incidents/reports.

LONG TERM

Changes in platforms, games, engagement.
Reported decrease in distress, depression, anxiety, eating disorders.
Reduction in suicide rates among target population.