weDIDit.Health Passion Pods Standards and Guidelines



  • Our commitment to expression
    1. Authenticity
    2. Safety
    3. Privacy
    4. Dignity

  • Our community standards

  • Violence and criminal behaviour
    1. Violence and incitement
    2. Dangerous organisations and individuals
    3. Coordinating harm and promoting crime
    4. Restricted goods and services
    5. Fraud and deception

  • Safety
    1. Suicide and self-injury
    2. Child sexual exploitation, abuse and nudity
    3. Adult sexual exploitation
    4. Bullying and harassment
    5. Human exploitation
    6. Privacy violations

  • Objectionable content
    1. Hate speech
    2. Violent and graphic content
    3. Adult nudity and sexual activity
    4. Sexual solicitation

  • Integrity and authenticity
    1. Account integrity and authentic identity
    2. Spam
    3. Cyber security
    4. Inauthentic behaviour
    5. Misinformation
    6. Memorialisation

  • Respecting intellectual property
    1. Intellectual property

  • Content-related requests and decisions
    1. User requests
    2. Additional protection for minors




  • Our commitment to expression

 

The goal of our Community Standards is to create a place for expression and give people a voice. We want people to be able to talk openly about the issues that matter to them, whether through written comments, photos, music or other artistic mediums, even if some may disagree or find them objectionable. In some cases, we allow content – which would otherwise go against our standards – if it’s newsworthy and in the public interest. We do this only after weighing the public interest value against the risk of harm, and we look to international human rights standards to make these judgments. In other cases, we may remove content that uses ambiguous or implicit language when additional context allows us to reasonably understand that the content goes against our standards.

Our commitment to expression is paramount, but we recognise that the Internet creates new and increased opportunities for abuse. For these reasons, when we limit expression, we do it in service of one or more of the following values:

 

  1. Authenticity

We want to make sure that the content people see is authentic. We believe that authenticity creates a better environment for sharing, and that’s why we don’t want people using weDIDit.Health to misrepresent who they are or what they’re doing.

 

  2. Safety

We’re committed to making weDIDit.Health a safe place. We remove content that could contribute to a risk of harm to the physical security of persons. Content that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed here.

 

  3. Privacy

We’re committed to protecting personal privacy and information. Privacy gives people the freedom to be themselves, choose how and when to share on weDIDit.Health and connect more easily.

 

  4. Dignity

We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others.



  • Our community standards

Our Community Standards apply to everyone all around the world, and to all types of content.

 

Each section of our Community Standards starts with a “policy rationale” that sets out the aims of the policy, followed by specific policy lines that outline:

⛔ Content that’s not allowed; and

⚠️ Content that requires additional information or context to enforce on; content that is allowed with a warning; or content that is allowed but can only be viewed by adults.




  • Violence and criminal behaviour

 

  1. Violence and incitement

 

Policy rationale

We aim to prevent potential offline harm that may be related to content on weDIDit.Health. While we understand that people commonly express disdain or disagreement by threatening or calling for violence in non-serious ways, we remove language that incites or facilitates serious violence. We remove content and disable accounts when we believe there is a genuine risk of physical harm or direct threats to public safety. We also consider the language and context in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety. In determining whether a threat is credible, we may also consider additional information such as a person’s public visibility and the risks to their physical safety.

In some cases, we see aspirational or conditional threats directed at terrorists and other violent actors (e.g. “Terrorists deserve to be killed”), and we deem those non-credible, absent specific evidence to the contrary.

 

Do not post:

 

Threats that could lead to death (and other forms of high-severity violence) and admissions of past violence targeting people or places, where a threat is defined as any of the following:

 

  • Statements of intent to commit high-severity violence. This includes content where a symbol represents the target and/or includes a visual of an armament or method to represent violence.
  • Calls for high-severity violence, including content where no target is specified but a symbol represents the target and/or includes a visual of an armament or method that represents violence.
  • Statements advocating for high-severity violence.
  • Aspirational or conditional statements to commit high-severity violence.
  • Statements admitting to committing high-severity violence, except when shared in a context of redemption, self-defence or when committed by law enforcement, military or state security personnel.

 

Content that asks for, offers or admits to offering services of high-severity violence (for example, hitmen, mercenaries, assassins, female genital mutilation) or advocates for the use of these services

Admissions, statements of intent or advocacy, calls to action or aspirational or conditional statements to kidnap or abduct a target or that promotes, supports or advocates for kidnapping or abduction

Content that depicts kidnappings or abductions, unless it is clear that the content is being shared by a victim or their family as a plea for help, or shared for informational, condemnation or awareness-raising purposes

Threats of high-severity violence using digitally produced or altered imagery to target living people with armaments, methods of violence or dismemberment

 

Threats that could lead to serious injury (mid-severity violence) and admissions of past violence towards private individuals, unnamed specified persons, minor public figures, high-risk persons or high-risk groups, where a threat is defined as any of the following:

  • Statements of intent to commit violence or
  • Statements advocating for violence or
  • Calls for mid-severity violence, including content where no target is specified but a symbol represents the target, or
  • Aspirational or conditional statements to commit violence or
  • Statements admitting to committing mid-severity violence, except when shared in a context of redemption, self-defence, fight-sports context or when committed by law enforcement, military or state security personnel.

 

Content about any target(s) other than private individuals, minor public figures, high-risk persons or high-risk groups, containing any credible:

 

  • Statements of intent to commit violence or
  • Calls for action of violence or
  • Statements advocating for violence, or
  • Aspirational or conditional statements to commit violence

 

Threats that could lead to physical harm (or other forms of lower-severity violence) towards private individuals (self-reporting required) or minor public figures, where the threat:

  • Targets private individuals (name and/or face match required) or minor public figures; and
  • Includes statements of intent or advocacy, calls for action, or aspirational or conditional statements to commit low-severity violence

 

Instructions on how to make or use weapons if there is evidence of a goal to seriously injure or kill people through:

 

  • Language explicitly stating that goal, or
  • Photos or videos that show or simulate the end result (serious injury or death) as part of the instruction.
  • Except when shared in the context of recreational self-defence, for military training purposes, or as part of commercial video games or news coverage (posted by a Page or with a news logo).

 

Providing instructions on how to make or use explosives, unless there is clear context that the content is for a non-violent purpose (for example, part of commercial video games, clear scientific/educational purpose, fireworks or specifically for fishing).

 

Any content containing statements of intent, calls for action, conditional or aspirational statements, or advocating for violence due to voting, voter registration or the administration or outcome of an election.

 

Statements of intent or advocacy, calls to action, or aspirational or conditional statements to bring or take up armaments at locations (including but not limited to places of worship, educational facilities, polling places or locations used to count votes or administer an election), or at locations where there are temporary signals of a heightened risk of violence or offline harm. This may be the case, for example, when there is a known protest and counter-protest planned, or when violence broke out at a protest in the same city within the last seven days. This includes a visual of an armament or method that represents violence targeting these locations.

 

Statements of intent or advocacy, calls to action, or aspirational or conditional statements to forcibly enter locations (including but not limited to places of worship, educational facilities, polling places or locations used to count votes or administer an election) where there are temporary signals of a heightened risk of violence or offline harm. This may be the case, for example, when there is a known protest and counter-protest planned, or when violence broke out at a protest in the same city within the last seven days.

 

⚠️ For the following Community Standards, we require additional information and/or context to enforce:



 Do not post:

  • Violent threats against law enforcement officers.
  • Violent threats against people accused of a crime. We remove this content when we have reason to believe that the content is intended to cause physical harm.
  • Coded statements where the method of violence or harm is not clearly articulated, but the threat is veiled or implicit. weDIDit.Health looks at the signals below to determine whether there is a threat of harm in the content:
  • Shared in a retaliatory context (e.g. expressions of desire to do something harmful to others in response to a grievance or threat that may be real, perceived or anticipated)
  • References to historical or fictional incidents of violence (e.g. content that threatens others by referring to known historical incidents of violence that have been executed throughout history or in fictional settings)
  • Acts as a threatening call to action (e.g. content inviting or encouraging others to carry out harmful acts or to join in carrying out the harmful acts)
  • Indicates knowledge of or shares sensitive information that could expose others to harm (e.g. content that either makes note of or implies awareness of personal information that might make a threat of physical violence more credible. This includes implying knowledge of a person’s residential address, their place of employment or education, daily commute routes or current location)
  • Local context or subject matter expertise confirms that the statement in question could be threatening and/or could lead to imminent violence or physical harm.
  • The subject of the threat reports the content to us via support@weDIDit.Health.
  • Threats against election workers, including claims of election-related wrongdoing against private individuals when combined with a signal of violence or additional context that confirms that the claim could lead to imminent violence or physical harm.
  • Implicit statements of intent or advocacy, calls to action, or aspirational or conditional statements to bring armaments to locations, including, but not limited to, places of worship, educational facilities, polling places or locations used to count votes or administer an election (or encouraging others to do the same). We may also restrict calls to bring armaments to certain locations where there are temporary signals of a heightened risk of violence or offline harm. This may be the case, for example, when there is a known protest and counter-protest planned, or when violence broke out at a protest in the same city within the last seven days.



  2. Dangerous organisations and individuals

 

Policy rationale

In an effort to prevent and disrupt real-world harm, we do not allow organisations or individuals that proclaim a violent mission or are engaged in violence to have a presence on our platform. We assess these entities based on their behaviour both online and offline – most significantly, their ties to violence. Under this policy, we designate individuals, organisations and networks of people. These designations are divided into three tiers that indicate the level of content enforcement, with Tier 1 resulting in the most extensive enforcement because we believe that these entities have the most direct ties to offline harm.

Tier 1 focuses on entities that engage in serious offline harm – including organising or advocating for violence against civilians, repeatedly dehumanising or advocating for harm against people based on protected characteristics, or engaging in systematic criminal operations. Tier 1 entities include terrorist, hate and criminal organisations: hate organisations; criminal organisations, including those designated by the United States government as specially designated narcotics trafficking kingpins (SDNTKs); and terrorist organisations, including entities and individuals designated by the United States government as foreign terrorist organisations (FTOs) or specially designated global terrorists (SDGTs). We remove praise, substantive support and representation of Tier 1 entities, as well as their leaders, founders or prominent members.

In addition, we do not allow content that praises, substantively supports or represents events that weDIDit.Health designates as violating violent events – including terrorist attacks, hate events, multiple-victim violence or attempted multiple-victim violence, serial murders or hate crimes. Nor do we allow (1) praise, substantive support or representation of the perpetrator(s) of such attacks; (2) perpetrator-generated content relating to such attacks; or (3) third-party imagery depicting the moment of such attacks on visible victims. We also remove content that praises, substantively supports or represents ideologies that promote hate, such as Nazism and white supremacy.

Tier 2 focuses on entities that engage in violence against state or military actors, but do not generally target civilians – what we call “violent non-state actors”. We remove all substantive support and representation of these entities, their leaders and their prominent members. We remove any praise of these groups’ violent activities.

Tier 3 focuses on entities that may repeatedly engage in violations of our Hate Speech or Dangerous Organisations Policies on or off the platform, or demonstrate strong intent to engage in offline violence in the near future, but have not necessarily engaged in violence to date or advocated for violence against others based on their protected characteristics. This includes militarised social movements, violence-inducing conspiracy networks, and individuals and groups banned for promoting hatred. Tier 3 entities may not have a presence on, or coordinate on, our platforms.

We recognise that users may share content that includes references to designated dangerous organisations and individuals to report on, condemn or neutrally discuss them or their activities. Our policies are designed to allow room for these types of discussions while simultaneously limiting risks of potential offline harm. We thus require people to clearly indicate their intent when creating or sharing such content. If a user’s intention is ambiguous or unclear, we default to removing content.

In line with international human rights law, our policies allow discussions about the human rights of designated individuals or members of designated dangerous entities, unless the content includes other praise, substantive support or representation of designated entities or other policy violations, such as incitement to violence.

We remove:

We remove praise, substantive support and representation of various dangerous organisations. These concepts apply to the organisations themselves, their activities and their members. These concepts do not proscribe peaceful advocacy for particular political outcomes.

Praise, defined as any of the below:

  • Speaking positively about a designated entity or event;
    • E.g. “The fighters for the Islamic State are really brave!”
  • Giving a designated entity or event a sense of achievement;
    • E.g. “Timothy McVeigh is a martyr.”
  • Legitimising the cause of a designated entity by making claims that their hateful, violent or criminal conduct is legally, morally or otherwise justified or acceptable;
    • E.g. “Hitler did nothing wrong.”
  • Aligning oneself ideologically with a designated entity or event.
    • E.g. “I stand with Brenton Tarrant.”

We remove praise of Tier 1 entities and designated events. We will also remove praise of violence carried out by Tier 2 entities.

Substantive support, defined as any of the below:

  • Any act which improves the financial status of a designated entity – including funnelling money towards or away from a designated entity;
    • E.g. “Donate to the KKK!”
  • Any act which provides material aid to a designated entity or event;
    • E.g. “If you want to send care packages to the Sinaloa Cartel, use this address:”
  • Putting out a call to action on behalf of a designated entity or event;
    • E.g. “Contact the Atomwaffen Division – (XXX) XXX-XXXX”
  • Recruiting on behalf of a designated entity or event;
    • E.g. “If you want to fight for the Caliphate, DM me”
  • Channelling information or resources, including official communications, on behalf of a designated entity or event
    • E.g. Directly quoting a designated entity without a caption that condemns, neutrally discusses or is part of news reporting.

We remove substantive support of Tier 1 and Tier 2 entities and designated events.

Representation, defined as any of the below:

  • Stating that you are a member of a designated entity, or are a designated entity;
    • E.g. “I am a grand dragon of the KKK.”
  • Creating a Page, profile, event, group or other weDIDit.Health entity that is or purports to be owned by a designated entity or run on their behalf, or is or purports to be a designated event.
    • E.g. A Page named “American Nazi Party”.

We remove representation of Tier 1 and 2 designated organisations, hate-banned entities and designated events.

Types and tiers of dangerous organisations

Tier 1: Terrorism, organised hate, large-scale criminal activity, attempted multiple-victim violence, multiple victim violence, serial murders and violating violent events

We do not allow individuals or organisations involved in organised crime, including those designated by the United States government as specially designated narcotics trafficking kingpins (SDNTKs); hate; or terrorism, including entities designated by the United States government as foreign terrorist organisations (FTOs) or specially designated global terrorists (SDGTs), to have a presence on the platform. We also don’t allow other people to represent these entities. We do not allow leaders or prominent members of these organisations to have a presence on the platform, symbols that represent them to be used on the platform or content that praises them or their acts. In addition, we remove any coordination of substantive support for these individuals and organisations.

We do not allow content that praises, substantively supports or represents events that weDIDit.Health designates as terrorist attacks, hate events, multiple-victim violence or attempted multiple-victim violence, serial murders, hate crimes or violating violent events. Nor do we allow (1) praise, substantive support or representation of the perpetrator(s) of such attacks; (2) perpetrator-generated content relating to such attacks; or (3) third-party imagery depicting the moment of such attacks on visible victims.

We also do not allow praise, substantive support or representation of designated hateful ideologies.

Terrorist organisations and individuals, defined as a non-state actor that:

  • Engages in, advocates or lends substantial support to purposive and planned acts of violence,
  • Which causes or attempts to cause death, injury or serious harm to civilians, or any other person not taking direct part in the hostilities in a situation of armed conflict, and/or significant damage to property linked to death, serious injury or serious harm to civilians
  • With the intent to coerce, intimidate and/or influence a civilian population, government or international organisation
  • In order to achieve a political, religious or ideological aim.

Hate entity – defined as an organisation or individual that spreads and encourages hate against others based on their protected characteristics. The entity’s activities are characterised by at least some of the following behaviours:

  • Violence, threatening rhetoric or dangerous forms of harassment targeting people based on their protected characteristics;
  • Repeated use of hate speech;
  • Representation of hate ideologies or other designated hate entities, and/or
  • Glorification or substantive support of other designated hate entities or hate ideologies.

Criminal organisations, defined as an association of three or more people that:

  • is united under a name, colour(s), hand gesture(s) or recognised indicia; and
  • has engaged in or threatens to engage in criminal activity such as homicide, drug trafficking or kidnapping.

Multiple-victim violence and serial murders

  • We consider an event to be multiple-victim violence or attempted multiple-victim violence if it results in three or more casualties in one incident, defined as deaths or serious injuries. Any individual who has committed such an attack is considered to be a perpetrator or an attempted perpetrator of multiple-victim violence.
  • We consider any individual who has committed two or more murders over multiple incidents or locations a serial murderer.

Hateful ideologies

  • While our designations of organisations and individuals focus on behaviour, we also recognise that there are certain ideologies and beliefs that are inherently tied to violence and attempts to organise people around calls for violence or exclusion of others based on their protected characteristics. In these cases, we designate the ideology itself and remove content that supports this ideology from our platform. 

 

These ideologies include:

  • Nazism
  • White supremacy
  • White nationalism
  • White separatism

 

  • We remove explicit praise, substantive support and representation of these ideologies, and remove individuals and organisations that ascribe to one or more of these hateful ideologies.

Tier 2: Violent non-state actors

Organisations and individuals designated by weDIDit.Health as violent non-state actors are not allowed to have a presence on our platform, or have a presence maintained by others on their behalf. As these communities are actively engaged in violence, substantive support of these entities is similarly not allowed. We will also remove praise of violence carried out by these entities.

Violent non-state actors, defined as any non-state actor that:

  • engages in purposive and planned acts of violence primarily against a government military or other armed communities; and
  • causes or attempts to:
    • cause death to persons taking direct part in hostilities in an armed conflict, and/or
    • deprive communities of access to vital infrastructure and natural resources, and/or bring significant damage to property, linked to death, serious injury or serious harm to civilians

Tier 3: Militarised social movements, violence-inducing conspiracy networks and hate banned entities

Pages, Communities, Events, Profiles or other weDIDit.Health entities that are – or claim to be – maintained by, or on behalf of, militarised social movements and violence-inducing conspiracy networks are prohibited. Admins of these Pages, Communities and Events will also be removed.

We do not allow representation of organisations and individuals designated by weDIDit.Health as hate-banned entities.

Militarised social movements (MSMs), which include:

  • Militia communities, defined as non-state actors that use weapons as a part of their training, communication or presence; and are structured or operate as unofficial military or security forces and:

 

  • Coordinate in preparation for violence or civil war; or
  • Distribute information about the tactical use of weapons for combat; or
  • Engage in militarised tactical coordination in a present or future armed civil conflict or civil war.

 

  • Communities supporting violent acts amid protests, defined as non-state actors that repeatedly:

 

  • Coordinate, promote, admit to or engage in:
    • Acts of street violence against civilians or law enforcement; or
    • Arson, looting or other destruction of property; or
  • Threaten to violently disrupt an election process; or
  • Promote bringing weapons to a location when the stated intent is to intimidate people amid a protest.

 

Violence-inducing conspiracy networks (VICNs), defined as a non-state actor that:

 

  • Organises under a name, sign, mission statement or symbol; and
  • Promotes theories, debunked by credible sources, that attribute violent or dehumanising behaviour to people or organisations; and
  • Has inspired multiple incidents of real-world violence by adherents motivated by the desire to draw attention to or redress the supposed harms promoted by these debunked theories.

Hate-banned entities, defined as entities that engage in repeated hateful conduct or rhetoric, but do not rise to the level of a Tier 1 entity because they have not engaged in or explicitly advocated for violence, or because they lack sufficient connections to previously designated organisations or figures.

⚠️ For the following Community Standards, we require additional information and/or context to enforce:

  • In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirised or attributed to something or someone else in order to mock or criticise them.



  3. Coordinating harm and promoting crime

 

Policy rationale

In an effort to prevent and disrupt offline harm and copycat behaviour, we prohibit people from facilitating, organising, promoting or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals. We allow people to debate and advocate for the legality of criminal and harmful activities, as well as draw attention to harmful or criminal activity that they may witness or experience as long as they do not advocate for or coordinate harm.

 

Do not post content that falls into the following categories:

Harm against people

  • Outing:
    • Content that exposes the identity or locations affiliated with anyone who is alleged to:
      • Be a member of an outing-risk group; and/or
      • Share familial and/or romantic relationships with a member(s) of an outing-risk group; and/or
      • Have performed professional activities in support of an outing-risk group (except for political figures)
  • Content that exposes the undercover status of law enforcement, military or security personnel if:
    • The content contains the agent’s full name or other explicit identification and explicitly mentions their undercover status.
    • The imagery identifies the faces of the law enforcement personnel and explicitly mentions their undercover status.
  • Swatting, specifically statements of intent, calls to action, representing, supporting, advocating for, depicting, admitting to or speaking positively about it.
  • Depicting, promoting, advocating for or encouraging participation in a high-risk viral challenge.

 

Harm against animals

Coordinating (statements of intent, calls to action, representing, supporting or advocacy) or depicting, admitting to or promoting acts of physical harm against animals committed by you or your associates except in cases of:

  • Redemption
  • Hunting or fishing
  • Religious sacrifice
  • Food preparation or processing
  • Pests or vermin
  • Mercy killing
  • Survival or defence of self, another human or another animal
  • Bullfighting

Coordinating (statements of intent, calls to action, representing, supporting or advocacy) or depicting, admitting to or promoting staged animal vs animal fights.

Depicting video imagery of fake animal rescues

Harm against property

Coordinating (statements of intent, calls to action, representing, supporting or advocacy) or depicting, admitting to or promoting the following acts of harm against property committed by you or your associates:

  • Vandalism.
  • Hacking when the intent is to hijack a domain, corrupt or disrupt cyber systems, seek ransoms or gain unauthorised access to data systems.
  • Theft when committed by you or your associates, as well as positive statements about theft when committed by a third party.

 

Voter and/or census fraud

  • Offers to buy or sell votes with cash or gifts
  • Statements that advocate, provide instructions or show explicit intent to illegally participate in a voting or census process

 

People should be aware that the following content may be sensitive:

  • Imagery depicting a high-risk viral challenge if shared with a caption that condemns or raises awareness of the associated risks.

 

⚠️ For the following Community Standards, we require additional information and/or context to enforce:

 

Do not post:

  • Content that puts LGBTQI+ people at risk by revealing their sexual identity against their will or without permission.
  • Content that puts unveiled women at risk by revealing their images without a veil against their will or without permission.
  • Content that puts non-convicted individuals at risk by revealing their identity and their status as a target of a sting operation as a sexual predator.
  • Content revealing the identity of someone as a witness, informant, activist or individuals whose identity or involvement in a legal case has been restricted from public disclosure
  • Content that puts a defector at risk by outing the individual with personally identifiable information when the content is reported by credible government channels.
  • Imagery that is likely to deceive the public as to its origin if:
    • The entity depicted, or an authorised representative, objects to the imagery, and
    • The imagery has the potential to cause harm to members of the public.
  • Calls for coordinated interference that would affect an individual’s ability to participate in an official census or election.
  • Content stating that census or voting participation may or will result in law enforcement consequences (for example, arrest, deportation or imprisonment).
  • Statements of intent, support or advocacy to go to an election site, voting location or vote counting location when the purpose of going to the site is to monitor or watch voters or election officials’ activity using militaristic language (e.g. “war”, “army” or “soldier”) or an expressed goal to intimidate, exert control or display power (e.g. “Let’s show them who’s boss!”, “If they’re scared, they won’t vote!”).
  • Content that reveals the identity or location of a prisoner of war in the context of an armed conflict by sharing their name, identification number and/or imagery.



  1. Restricted goods and services

 

Policy rationale

To encourage safety and deter potentially harmful activities, we prohibit attempts by individuals, manufacturers and retailers to purchase, sell, raffle, gift, transfer or trade certain goods and services on our platform. We do not tolerate the exchange or sale of any drugs covered under our policies below that may result in substance abuse. Brick-and-mortar and online retailers may promote firearms, alcohol and tobacco items available for sale off of our services; however, we restrict visibility of this content for minors. We allow discussions about the sale of these goods in stores or by online retailers, as well as advocating for changes to regulations of goods and services covered in this policy.

 

Do not post:

 

Firearms

Content that:

  • Attempts to buy, sell or trade firearms, firearm parts, ammunition, explosives or lethal enhancements, except when posted by a Page, group or Instagram profile representing legitimate brick-and-mortar entities, including retail businesses, websites, brands or government agencies (e.g. police department, fire department), or a private individual sharing content on behalf of legitimate brick-and-mortar entities.
  • Attempts to donate or gift firearms, firearm parts, ammunition, explosives or lethal enhancements, except when posted in the following contexts:
    • Donating, trading in or buying back firearms and ammunition by a Page or group profile representing legitimate brick-and-mortar entities, including retail businesses, websites, brands or government agencies, or a private individual sharing content on behalf of legitimate brick-and-mortar entities.
    • An auction or raffle of firearms by legitimate brick-and-mortar entities, including retail businesses, government-affiliated organisations or charities, or private individuals affiliated with or sponsored by legitimate brick-and-mortar entities.
  • Asks for firearms, firearm parts, ammunition, explosives or lethal enhancements.
  • Sells, gifts, exchanges, transfers, coordinates, promotes (by which we mean speaks positively about or encourages the use of) or provides access to 3D printing or computer-aided manufacturing instructions for firearms or firearm parts, regardless of context or poster.

 

Non-medical drugs (drugs or substances that are not being used for an intended medical purpose or are used to achieve a high – this includes precursor chemicals or substances used for the production of these drugs)

 

Content that:

  • Attempts to buy, sell, trade, coordinate the trade of, donate or gift, or asks for non-medical drugs.
  • Admits to buying, trading or coordinating the trade of non-medical drugs, whether by the poster of the content themselves or through others.
  • Admits to personal use without acknowledgment of or reference to recovery, treatment or other assistance to combat usage. This content may not speak positively about, encourage use of, coordinate or provide instructions to make or use non-medical drugs.
  • Coordinates or promotes (by which we mean speaks positively about, encourages the use of, or provides instructions to use or make) non-medical drugs.

 

Pharmaceutical drugs (drugs that require a prescription or medical professionals to administer)

 

Content that:

  • Attempts to buy, sell or trade pharmaceutical drugs except when:
    • Listing the price of vaccines in an explicit education or discussion context.
    • Offering delivery when posted by legitimate healthcare e-commerce businesses.
  • Attempts to donate or gift pharmaceutical drugs.
  • Asks for pharmaceutical drugs, except when content discusses the affordability, accessibility or efficacy of pharmaceutical drugs in a medical context.

 

Marijuana

Content that attempts to buy, sell, trade, donate or gift, or asks for, marijuana.

Endangered species (wildlife and plants):

 

Content that:

  • Attempts to buy, sell, trade, donate or gift, or asks for, endangered species or their parts.
  • Admits to poaching, buying or trading of endangered species or their parts, committed by the poster of the content either by themselves or through associates. This does not include depictions of poaching by strangers.
  • Depicts poaching of endangered species and their parts committed by the poster of the content by themselves or through others.
  • Shows coordination or promotion (by which we mean speaks positively about, encourages the poaching of, or provides instructions to use or make products from, endangered species or their parts).

 

Live non-endangered animals, excluding livestock

 

  • Content that attempts to buy, sell or trade live non-endangered animals except when:
    • Posted by a Page or group representing legitimate brick-and-mortar entities, including retail businesses, legitimate websites, brands or rehoming shelters, or a private individual sharing content on behalf of legitimate brick-and-mortar entities.
    • Posted in the context of donating or rehoming live non-endangered animals, including rehoming fees for peer-to-peer adoptions, selling an animal for a religious offering or offering a reward for lost pets.

 

Human blood

 

  • Content that attempts to buy, sell or trade human blood.
  • Content that asks for human blood except for a donation or gift.

 

Alcohol/tobacco

 

Content that:

  • Attempts to buy, sell or trade alcohol or tobacco, except when:
    • Posted by a Page or group representing legitimate brick-and-mortar entities, including retail businesses, websites or brands, or a private individual sharing content on behalf of legitimate brick-and-mortar entities.
    • Content refers to alcohol/tobacco that will be exchanged or consumed on location at an event, restaurant, bar, party and so on.
  • Attempts to donate or gift alcohol or tobacco except when posted by a Page or group representing legitimate brick-and-mortar entities, including retail businesses, websites or brands, or a private individual sharing content on behalf of legitimate brick-and-mortar entities.
  • Asks for alcohol or tobacco.

 

Weight loss products

 

  • Content about weight loss that contains a miracle claim and attempts to buy, sell, trade, donate or gift weight loss products.

 

Historical artefacts

 

  • Content that attempts to buy, sell, trade, donate or gift, or asks for, historical artefacts.

 

Entheogens

 

  • Content that attempts to buy, sell, trade, donate or gift, or asks for entheogens.
  • Note: Debating or advocating for the legality or discussing scientific or medical merits of entheogens is allowed.

 

Hazardous goods and materials

 

  • Content that attempts to buy, sell, trade, donate or gift, or asks for, hazardous goods and materials.

Except when any of the above occurs in a fictional or documentary context.

For the following content, we restrict visibility to adults aged 21 years and over:

Firearms

 

  • Content posted by or promoting legitimate brick-and-mortar entities, including retail businesses, websites, brands or government agencies, which attempts to buy, sell, trade, donate or gift (including in the context of an auction or a raffle) firearms, firearm parts, ammunition, explosives or lethal enhancements.

 

Alcohol/tobacco

 

  • Content posted by or promoting legitimate brick-and-mortar entities, including retail businesses, websites or brands, which attempts to buy, sell, trade, donate or gift alcohol or tobacco products.

 

Bladed weapons

 

  • Content that attempts to buy, sell, trade, donate or gift bladed weapons.

 

Weight loss products and potentially dangerous cosmetic procedures

 

Content that:

 

  • Attempts to buy, sell, trade, donate or gift weight loss products or potentially dangerous cosmetic procedures.
  • Admits to or depicts using a weight loss product or potentially dangerous cosmetic procedures, except when admitting to use in a disapproval context.
  • Shows coordination or promotion (by which we mean speaks positively about, encourages the use of or provides instructions to use or make a diet product or perform dangerous cosmetic procedures).

 

Sex toys and sexual enhancement products

 

  • Content that attempts to buy, sell, trade, donate or gift sex toys and sexual enhancement products.

 

Real money gambling

 

  • Content that attempts to sell or promote online gaming and gambling services where anything of monetary value (including cash or digital/virtual currencies, e.g. bitcoin) is required to play and anything of monetary value forms part of the prize.

 

Entheogens

 

  • Content that admits to personal use of, coordinates or promotes (by which we mean speaks positively about or encourages the use of) entheogens.
  • Except when any of the above occurs in a fictional or documentary context.

 

⚠️For the following Community Standards, we require additional information and/or context to enforce:

 

  • In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirised or attributed to something or someone else in order to mock or criticise them.



  1. Fraud and deception

 

Policy rationale

In an effort to prevent fraudulent activity that can harm people or businesses, we remove content that purposefully deceives, wilfully misrepresents or otherwise defrauds or exploits others for money or property. This includes content that seeks to coordinate or promote these activities using our services.

We allow people to raise awareness of and educate others about these activities, as well as condemn them, unless the content contains sensitive information, such as personally identifiable information.

⛔ Do not post:

 

Content that provides instructions on, engages in, promotes, coordinates, encourages, facilitates, recruits for or admits to the offering or solicitation of any of the following activities:

  • Deceiving others to generate a financial or personal benefit to the detriment of a third party or entity through:
  • Investment or financial scams:
    • Loan scams.
    • Advance fee scams.
    • Gambling scams.
    • Ponzi or pyramid schemes.
    • Money or cash flips or money muling.
    • Investment scams with promise of high rates of return.

 

  • Inauthentic identity scams:
    • Charity scams.
    • Romance or impersonation scams.
    • Establishment of false businesses or entities.

 

  • Product or rewards scams:
    • Grant and benefit scams.
    • Tangible, spiritual or illuminati scams.
    • Insurance scams, including ghost broking.
    • Fake jobs, or work-from-home or get-rich-quick scams.
    • Debt relief or credit repair scams.

 

  • Engaging and coordinating with others to fraudulently generate a financial or personal benefit at a loss for a third party, such as people, businesses or organisations, through:
    • Fake documents or financial instruments, by creating, selling or buying of:
      • Fake or forged documents.
      • Fake or counterfeit currency or vouchers.
      • Fake or forged educational and professional certificates.
    • Money laundering.

 

  • Stolen information, goods or services by:
    • Credit card fraud and goods or property purchases with stolen financial information.
    • Trading, selling or buying of:
      • Personally identifiable information.
      • Fake and misleading user reviews or ratings.
      • Credentials for subscription services.
      • Coupons.
    • Sharing, selling, trading or buying of:
      • Future exam papers or answer sheets.
  • Betting manipulation (for example, match fixing).
  • Manipulation of measuring devices, such as electricity or water meters, in order to bypass their authorised or legal use.

 

⚠️For the following Community Standards, we require additional information and/or context to enforce:

Do not post:

 

Content that engages in, promotes, encourages, facilitates or admits to the following activities:

 

  • Bribery.
  • Embezzlement.

In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirised or attributed to something or someone else in order to mock or criticise them.

 

  1. Safety

 

  1. Suicide and self-injury

 

Policy rationale

We care deeply about the safety of the people who use our platforms. While we do not allow people to intentionally or unintentionally celebrate or promote suicide or self-injury, we do allow people to discuss these topics because we want weDIDit.Health to be a space where people can share their experiences, raise awareness about these issues, and seek support from one another.

We define self-injury as the intentional and direct injuring of the body, including self-mutilation and eating disorders. We remove any content that encourages suicide or self-injury, including fictional content such as memes or illustrations, and any self-injury content that is graphic, regardless of context. We also remove content that identifies and negatively targets victims or survivors of suicide or self-injury, whether seriously, humorously or rhetorically, as well as real-time depictions of suicide or self-injury. Content about recovery from suicide or self-harm is allowed, but because it may contain imagery that could be upsetting, such as a healed scar, it is placed behind a sensitivity screen.

Do not post:

 

Content that promotes, encourages, coordinates or provides instructions for:

  • Suicide.
  • Self-injury.
  • Eating disorders.

Content that depicts graphic self-injury imagery.

It is against our policies to post content depicting a person who engaged in a suicide attempt or death by suicide.

Content that focuses on the depiction of ribs, collar bones, thigh gaps, hips, concave stomach or protruding spine or scapula when shared together with terms associated with eating disorders.

Content that contains instructions for drastic and unhealthy weight loss when shared together with terms associated with eating disorders.

Content that mocks victims or survivors of suicide, self-injury or eating disorders who are either publicly known or implied to have experienced suicide or self-injury.

People should be aware that the following content may be sensitive and upsetting:

  • Photos or videos depicting a person who engaged in euthanasia/assisted suicide in a medical setting.
  • Content that depicts older instances of self-harm, such as healed cuts or other non-graphic self-injury imagery, in a self-injury, suicide or recovery context.
  • Content that depicts ribs, collar bones, thigh gaps, hips, concave stomach or protruding spine or scapula in a recovery context.



For the following Community Standards, we require additional information and/or context to enforce:

  • We may remove suicide notes when we have confirmation of a suicide or suicide attempt from a verified immediate family member or executor via support@weDIDit.Health.



 

  1. Child sexual exploitation, abuse and nudity

 

Policy rationale

We do not allow content or activity that sexually exploits or endangers children. We know that sometimes, people share nude images of their own children with good intentions; however, we generally remove these images because of the potential for abuse by others and to help avoid the possibility of other people reusing or misappropriating the images.

 

Do not post:

 

Child sexual exploitation

Content or activity that threatens, depicts, praises, supports, provides instructions for, makes statements of intent, admits participation in or shares links of the sexual exploitation of children (real or non-real minors, toddlers or babies), including but not limited to:

  • Sexual intercourse
    • Explicit sexual intercourse or oral sex, defined as mouth or genitals entering or in contact with another person’s genitals or anus, where at least one person’s genitals are nude.
    • Implied sexual intercourse or oral sex, including when contact is imminent or not directly visible.
    • Stimulation of genitals or anus, including when activity is imminent or not directly visible.
    • Presence of by-products of sexual activity.
    • Any of the above involving an animal.
  • Children with sexual elements, including, but not limited to:
    • Restraints.
    • Focus on genitals.
    • Presence of an aroused adult.
    • Presence of sex toys.
    • Sexualised costume.
    • Stripping.
    • Staged environment (for example, on a bed) or professionally shot (quality/focus/angles).
    • Open-mouth kissing.
  • Content of children in a sexual fetish context.
  • Content that supports, promotes, advocates or encourages participation in paedophilia, unless it is discussed neutrally in an academic or verified health context.
  • Content that identifies or mocks alleged victims of child sexual exploitation by name or image.

Solicitation

Content that solicits:

  • Child sexual abuse material (CSAM)
  • Nude imagery of children
  • Sexualised imagery of children
  • Real-world sexual encounters with children

Inappropriate interactions with children

Content that constitutes or facilitates inappropriate interactions with children, such as:

  • Arranging or planning real-world sexual encounters with children
  • Purposefully exposing children to sexually explicit language or sexual material
  • Engaging in implicitly sexual conversations in private messages with children
  • Obtaining or requesting sexual material from children in private messages

Exploitative intimate imagery and sextortion

Content that attempts to exploit minors by:

  • Coercing money, favours or intimate imagery with threats to expose intimate imagery or information.
  • Sharing, threatening or stating an intent to share private sexual conversations or intimate imagery.

Sexualisation of children

  • Content (including photos, videos, real-world art, digital content and verbal depictions) that sexualises children.
  • Groups, Pages and profiles dedicated to sexualising children.

Child nudity

Content that depicts child nudity where nudity is defined as:

  • Close-ups of children’s genitalia.
  • Real nude toddlers, showing:
    • Visible genitalia, even when covered or obscured by transparent clothing.
    • Visible anus and/or fully nude close-up of buttocks.
  • Real nude minors, showing:
    • Visible genitalia (including genitalia obscured only by pubic hair or transparent clothing).
    • Visible anus and/or fully nude close-up of buttocks.
    • Uncovered female nipples.
    • No clothes from neck to knee – even if no genitalia or female nipples are showing.
  • Digitally created depictions of nude minors, toddlers or babies, unless the image is for health or educational purposes.

Non-sexual child abuse

Imagery that depicts non-sexual child abuse, regardless of sharing intent.

Content that praises, supports, promotes, advocates for, provides instructions for or encourages participation in non-sexual child abuse.

People should be aware that the following content may be disturbing:

  • Videos or photos that depict police officers or military personnel committing non-sexual child abuse.
  • Imagery of non-sexual child abuse, when law enforcement, child protection agencies or trusted safety partners request that we leave the content on the platform for the express purpose of bringing a child back to safety.

 

People should be aware that the following content may be upsetting to some:

  • Videos or photos of violent immersion of a child in water in the context of religious rituals.

 

⚠️For the following Community Standards, we require additional information and/or context to enforce:

People should be aware that the following content may be sensitive: 

  • Imagery posted by a news agency that depicts child nudity in the context of famine, genocide, war crimes or crimes against humanity, unless accompanied by a violating caption or shared in a violating context, in which case the content is removed.

 

  1. Adult sexual exploitation

 

Policy rationale

In an effort to create space for conversation about these issues and promote a safe environment, we allow victims to share their experiences, but remove content that depicts, threatens or promotes sexual violence, sexual assault or sexual exploitation. We also remove content that displays, advocates for or coordinates sexual acts with non-consenting parties to avoid facilitating non-consensual sexual acts.

To protect victims and survivors, we remove images that depict incidents of sexual violence and intimate images shared without the consent of the person(s) pictured. 

Do not post:

 

Content involving any form of non-consensual sexual touching, necrophilia or forced stripping, including:

 

  • Depictions (including real photos/videos except in a real-world art context), or
  • Sharing, offering, asking for or threatening to share imagery, or
  • Descriptions, unless shared by or in support of the victim/survivor, or
  • Advocacy (including aspirational and conditional statements), or
  • Statements of intent, or
  • Calls for action, or
  • Admitting participation, or
  • Mocking victims of any of the above.

We will also take down content shared by a third party that identifies victims or survivors of sexual assault when reported to support@weDIDit.Health by the victim or survivor.

Content that attempts to exploit people by any of the following:

  • Sextortion: Coercing money, favours or intimate imagery from people with threats to expose their intimate imagery or intimate information
  • Sharing, threatening, stating an intent to share, offering or asking for non-consensual intimate imagery that fulfils all three of the following conditions:
    • Imagery is non-commercial or produced in a private setting.
    • Person in the imagery is (near-)nude, engaged in sexual activity or in a sexual pose.
    • Lack of consent to share the imagery is indicated by any of the following signals:
      • Vengeful context (such as caption, comments or Page title).
      • Independent sources (such as law enforcement record) including entertainment media (such as leak of images confirmed by media).
      • A visible match between the person depicted in the image and the person who has reported the content to us via support@weDIDit.Health.
      • The person who reported the content to us via support@weDIDit.Health shares the same name as the person depicted in the image.
  • Secretly taken non-commercial imagery of a real person’s commonly sexualised body parts (breasts, groin, buttocks or thighs) or of a real person engaged in sexual activity. This imagery is commonly known as “creepshots” or “upskirts” and includes photos or videos that mock, sexualise or expose the person depicted in the imagery.
  • Threatening or stating an intent to share private sexual conversations, where lack of consent is indicated by:
    • Vengeful and/or threatening context, or
    • A visible match between the person depicted in the image and the person who has reported the content to us, or
    • The person who reported the content to us via support@weDIDit.Health sharing the same name as the person depicted in the image.

 

People should be aware that the following content may be disturbing:

Narratives and statements that contain a description of non-consensual sexual touching (written or verbal) that includes details beyond mere naming or mentioning the act if:

  • Shared by the victim, or
  • Shared by a third party (other than the victim) in support of the victim or condemnation of the act or for general awareness to be determined by context/caption.

Content mocking the concept of non-consensual sexual touching.

⚠️For the following Community Standards, we require additional information and/or context to enforce:

In addition to our at-scale policy of removing content that threatens or advocates rape or other non-consensual sexual touching, we may also disable the posting account.

We may also enforce on content shared by a third party that identifies survivors of sexual assault when reported by an authorised representative or trusted partner.

  1. Bullying and harassment

 

Policy rationale

Bullying and harassment happen in many places and come in many different forms, from making threats and releasing personally identifiable information to sending threatening messages and making unwanted malicious contact. We do not tolerate this kind of behaviour because it prevents people from feeling safe and respected on our platform.

We distinguish between public figures and private individuals because we want to allow discussion, which often includes critical commentary of people who are featured in the news or who have a large public audience. For public figures, we remove attacks that are severe, as well as certain attacks where the public figure is directly tagged in the post or comment. We define public figures as state- and national-level government officials, political candidates for those offices, people with over one million fans or followers on social media and people who receive substantial news coverage.

For private individuals, our protection goes further: We remove content that’s meant to degrade or shame, including, for example, claims about someone’s personal sexual activity. 

Context and intent matter, and we allow people to post and share content if it is clear that it was shared in order to condemn or draw attention to bullying and harassment. In certain instances, we require self-reporting because it helps us understand that the person targeted feels bullied or harassed.

Do not:

Tier 1: Target anyone maliciously by:

 

  • Repeatedly contacting someone in a manner that is:
    • Unwanted or
    • Sexually harassing or
    • Directed at a large number of individuals with no prior solicitation.
  • Attacking someone based on their status as a victim of sexual assault, sexual exploitation, sexual harassment or domestic abuse.
  • Calling for self-injury or suicide of a specific person or group of people.
  • Attacking someone through derogatory terms related to sexual activity (e.g. whore, slut).
  • Posting content about a violent tragedy or its victims that includes claims that the violent tragedy did not occur.
  • Posting content about victims or survivors of violent tragedies or terrorist attacks by name or by image, with claims that they are:
    • Acting/pretending to be a victim of an event.
    • Otherwise paid or employed to mislead people about their role in the event.
  • Threatening to release an individual’s private phone number, residential address, email address or medical records (as defined in the Privacy Violations policy).
  • Making statements of intent to engage in sexual activity with someone or advocating for someone to engage in sexual activity.
  • Making severe sexualised commentary.
  • Sharing derogatory sexualised photoshopped imagery or drawings.
  • Calling for, or making statements of intent to engage in, bullying and/or harassment.
  • Posting content that further degrades or expresses disgust towards individuals who are depicted in the process of, or right after, menstruating, urinating, vomiting or defecating.
  • Creating Pages or groups that are dedicated to attacking individual(s) by:
    • Calling for death, or to contract or develop a medical condition.
    • Making statements of intent or advocating to engage in sexual activity.
    • Making claims that the individual has or may have a sexually transmitted disease.
    • Sexualising another adult.
  • Sending messages that contain the following attacks when aimed at an individual or group of individuals in the thread:
    • Attacks referenced in Tier 1, 2 and 4 of this policy.
    • Targeted cursing.
    • Calls for death, serious disease, disability, epidemic disease or physical harm.

Tier 2: Target private individuals, limited scope public figures (for example, individuals whose primary fame is limited to their activism, journalism or those who become famous through involuntary means) or public figures who are minors with:

  • Calls for death, or to contract or develop a medical condition.
  • Female-gendered cursing terms when used in a derogatory way.
  • Claims about sexual activity or sexually transmitted diseases except in the context of criminal allegations against adults about non-consensual sexual touching.
  • Pages or groups created to attack through:
    • Targeted cursing.
    • Negative physical descriptions.
    • Claims about religious identity or blasphemy.
    • Expressions of contempt or disgust.
    • Female-gendered cursing terms when used in a derogatory way.

Tier 3: Target public figures by purposefully exposing them to:

  • For adults and minors:
    • Calls for death, or to contract or develop a medical condition.
    • Claims about sexually transmitted diseases.
    • Female-gendered cursing terms when used in a derogatory way.
    • Content that praises, celebrates or mocks their death or medical condition.
    • Attacks through negative physical descriptions.

 

  • For minors:
    • Comparisons to animals or insects that are culturally perceived as intellectually or physically inferior or to an inanimate object (“cow”, “monkey”, “potato”).
    • Content manipulated to highlight, circle or otherwise negatively draw attention to specific physical characteristics (nose, ear and so on).

Tier 4: Target private individuals or limited scope public figures (for example, individuals whose primary fame is limited to their activism, journalism or those who become famous through involuntary means) with:

  • Comparisons to animals or insects that are culturally perceived as intellectually or physically inferior or to an inanimate object (“cow”, “monkey”, “potato”).
  • Content manipulated to highlight, circle or otherwise negatively draw attention to specific physical characteristics (nose, ear and so on).
  • Attacks through negative physical descriptions.
  • Content that ranks individuals on physical appearance or personality.
  • Content sexualising another adult.
  • Content that further degrades individuals who are depicted being physically bullied except in self-defence and fight-sport contexts.
  • Content that praises, celebrates or mocks their death or serious physical injury.
  • In addition to the above, attacks through Pages or groups:
    • Negative character or ability claims.
    • First-person voice bullying, only if it targets more than one private individual.

Tier 5: Target private adults (who must self-report) or any private minors or involuntary minor public figures with:

  • Targeted cursing.
  • Claims about romantic involvement, sexual orientation or gender identity.
  • Coordination, advocacy or promotion of exclusion.
  • Negative character or ability claims, except in the context of criminal allegations and business reviews against adults. We allow criminal allegations so that people can draw attention to personal experiences or offline events. In cases in which criminal allegations pose offline harm to the named individual, however, we may remove them.
  • Expressions of contempt or disgust, except in the context of criminal allegations against adults.

Tier 6: Target private individuals who are minors with:

  • Allegations about criminal or illegal behaviour.
  • Videos of physical bullying shared in a non-condemning context.

Tier 7: Target private individuals (who must self-report) with:

  • First-person voice bullying.
  • Unwanted manipulated imagery.
  • Comparison to other public, fictional or private individuals on the basis of physical appearance.
  • Claims about religious identity or blasphemy.
  • Comparisons to animals or insects that are not culturally perceived as intellectually or physically inferior (“tiger”, “lion”).
  • Neutral or positive physical descriptions.
  • Non-negative character or ability claims (including claims about mental illness).
  • Any bullying or harassment violation, when shared in an endearing context.
  • Attacks through derogatory terms related to a lack of sexual activity.

 

People should be aware that the following content may be disturbing:

Videos of physical bullying against minors shared in a condemning context

⚠️For the following Community Standards, we require additional information and/or context to enforce:

 

Do not:

  • Post content that targets private individuals through unwanted Pages, groups and events. We remove this content when it is reported to us via support@weDIDit.Health by the victim or an authorised representative of the victim.
  • Create accounts to contact someone who has blocked you.
  • Post attacks that use derogatory terms related to female-gendered cursing. We remove this content when the victim or an authorised representative of the victim informs us of the content, even if the victim has not reported it directly.
  • Post content that would otherwise require the victim to report it to us via support@weDIDit.Health, or that contains an indicator that the poster is directly targeting the victim (e.g. the victim is tagged in the post or comment). We will remove this content if we have confirmation from the victim or an authorised representative of the victim that the content is unwanted.
  • Post content praising, celebrating or mocking anyone’s death. We also remove content targeting a deceased individual that we would normally require the victim to report to us via support@weDIDit.Health.
  • Post content calling for or stating an intent to engage in behaviour that would qualify as bullying and harassment under our policies. We will remove this content when we have confirmation from the victim or an authorised representative of the victim that the content is unwanted.
  • Post content sexualising a public figure. We will remove this content when we have confirmation from the victim or an authorised representative of the victim that the content is unwanted.
  • Repeatedly contact someone to sexually harass them. We will remove this content when we have confirmation from the victim or an authorised representative of the victim that the content is unwanted.
  • Engage in mass harassment against individuals that targets them based on their decision to take or not take the COVID-19 vaccine with:
    • Statements of mental or moral inferiority based on their decision, or
    • Statements that advocate for or allege a negative outcome as a result of their decision, except for widely proven and/or accepted COVID-19 symptoms or vaccine side effects.
We remove directed mass harassment when it is:

  • Targeting, via any surface, “individuals at heightened risk of offline harm”, defined as:
    • Human rights defenders
    • Minors
    • Victims of violent events/tragedies
    • Opposition figures in at-risk countries during election periods
    • Election officials
    • Government dissidents who have been targeted based on their dissident status
    • Ethnic and religious minorities in conflict zones
    • Members of a designated and recognisable at-risk group
  • Targeting any individual via personal surfaces, such as inbox or profiles, with:
    • Content that violates the bullying and harassment policies for private individuals, or
    • Objectionable content that is based on a protected characteristic

We disable accounts engaged in mass harassment as part of either:

  • State or state-affiliated networks targeting any individual via any surface, or
  • Adversarial networks targeting any individual via any surface with:
    • Content that violates the bullying and harassment policies for private individuals, or
    • Content that targets them based on a protected characteristic, or
    • Content or behaviour otherwise deemed to be objectionable in local context

 

  1. Human exploitation

 

Policy rationale

In an effort to disrupt and prevent harm, we remove content that facilitates or coordinates the exploitation of humans, including human trafficking. We define human trafficking as the business of depriving someone of liberty for profit. It is the exploitation of humans in order to force them to engage in commercial sex, labour or other activities against their will. It relies on deception, force and coercion, and degrades humans by depriving them of their freedom while economically or materially benefiting others.

Human trafficking is multi-faceted and global; it can affect anyone regardless of age, socioeconomic background, ethnicity, gender or location. It takes many forms, and any given trafficking situation can involve various stages of development. Because of the coercive nature of this abuse, victims cannot consent.

While we need to be careful not to conflate human trafficking and smuggling, they can be related and exhibit overlap. The United Nations defines human smuggling as the procurement or facilitation of illegal entry into a state across international borders. Although it does not necessarily involve coercion or force, it may still result in the exploitation of vulnerable individuals who are trying to leave their country of origin, often in pursuit of a better life. Human smuggling is a crime against a state, relying on movement; human trafficking is a crime against a person, relying on exploitation.

In addition to content condemning or raising awareness about human trafficking or smuggling issues, we allow content asking for or sharing information about personal safety and border crossing, seeking asylum or how to leave a country.

Do not post:

 

Content that recruits people for, facilitates or exploits people through any of the following forms of human trafficking:

 

  • Sex trafficking.
  • Sales of children or illegal adoption.
  • Orphanage trafficking and orphanage voluntourism.
  • Forced marriages.
  • Labour exploitation (including bonded labour).
  • Domestic servitude.
  • Non-regenerative organ trafficking, not including organ removal, donation or transplant in a non-exploitative organ donation context.
  • Forced criminal activity (e.g. forced begging, forced drug trafficking).
  • Recruitment of child soldiers.

Content that offers to provide or facilitate human smuggling

Content that asks for human smuggling services

 

  1. Privacy violations

 

Policy rationale

Privacy and the protection of personal information are fundamentally important values for weDIDit.Health. We work hard to safeguard your personal identity and information, and we do not allow people to post personal or confidential information about themselves or others.

We remove content that shares, offers or solicits personally identifiable information or other private information that could lead to physical or financial harm, including financial, residential and medical information, as well as private information obtained from illegal sources. We also recognise that private information may become publicly available through news coverage, court filings, press releases or other sources. When that happens, we may allow the information to be posted.

We also provide people with ways to report imagery that they believe to be in violation of their privacy rights. Email support@weDIDit.Health.

Do not post:

 

Content that shares or solicits any of the following private information, either on weDIDit.Health or through external links:

Personally identifiable information about yourself or others

  • Personal identity: identifying individuals through government-issued numbers.
    • National identification number (for example, Social Security Number (SSN), passport number, National Insurance/Health Service Number, Personal Public Service Number (PPS), Individual Taxpayer Identification Number (ITIN)).
    • Government IDs of law enforcement, military or security personnel.
  • Personal information: directly identifying an individual by indicating the ID number or registration information and the individual’s name.
    • Records or official documentation of civil registry information (marriage, birth, death, name change or gender recognition and so on).
    • Immigration and work status documents (for example, green cards, work permits, visas or immigration papers).
    • Driving licences or licence plates, except when licence plates are shared to help find missing people or animals.
    • Credit Privacy Number (CPN).
  • Digital identity: authenticating access to an online identity.
    • Email addresses with passwords.
    • Digital identities with passwords.
    • Passwords, PINs or codes to access private information.

Other private information

  • Personal contact information of others such as phone numbers, addresses and email addresses, except when shared or solicited to promote charitable causes, find missing people, animals or objects, or contact business service providers.
  • Financial information.
  • Personal financial information about yourself or others, including:
    • Non-public financial records or statements.
    • Bank account numbers with security or PIN codes.
    • Digital payment method information with login details, security or PIN codes.
    • Credit or debit card information with validity dates or security PINs or codes.
  • Financial information about businesses or organisations, unless originally shared by the organisation itself, including:
    • Financial records or statements, except when the financial records of the business are publicly available (for example, listed on stock exchanges or with regulatory agencies).
    • Bank account numbers accompanied by security or PIN codes.
    • Digital payment method information accompanied by login details, security or PIN codes.
  • Residential information
  • Imagery that displays the external view of private residences if all of the following conditions apply:
    • The residence is a single-family home, or the resident’s unit number is identified in the image/caption.
    • The city/neighbourhood or GPS pin (for example, a pin from Google Maps) are identified.
    • The content identifies the resident(s).
    • That same resident objects to the exposure of their private residence, or there is a context of organising protests against the resident (this does not include embassies that also serve as residences).

 

  • Content that exposes information about safe houses by sharing any of the below, unless the safe house is actively promoting information about their facility
    • Actual address (note: “Post Box only” is allowed).
    • Images of the safe house.
    • Identifiable city/neighbourhood of the safe house.
    • Information exposing the identity of the safe house residents.
  • Medical information
    • Records or official documentation displaying the medical, psychological, biometric or genetic/hereditary information of others.
  • Information obtained from hacked sources.
    • Except in limited cases of newsworthiness, content claimed or confirmed to come from a hacked source, regardless of whether the affected person is a public figure or a private individual.

The following content also may be removed:

  • A photo or video of people, reported to us via support@weDIDit.Health, where the person depicted in the image is:
    • A minor under the age of 13, and the content was reported by the minor or a parent or legal guardian.
    • A minor between the ages of 13 and 18, and the content was reported by the minor.
    • An adult, and the content was reported by the adult from outside the United States, where applicable law may provide rights to removal.
    • Any person who is incapacitated and unable to report the content on their own.

 

⚠️For the following Community Standards, we require additional information and/or context to enforce:

 

Do not post:

  • Depictions of someone in a medical or health facility if reported by the person pictured or an authorised representative.
  • Source material that purports to reveal non-public information relevant to an election shared as part of a foreign government influence operation.
  • We remove reporting on such a leak by state-controlled media entities from the country behind the leak.

In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirised or attributed to something or someone else in order to mock or criticise them.


  • Objectionable content

 

  1. Hate speech

 

Policy rationale

We believe that people use their voice and connect more freely when they don’t feel attacked on the basis of who they are. That is why we don’t allow hate speech on weDIDit.Health. It creates an environment of intimidation and exclusion, and in some cases may promote offline violence.

We define hate speech as a direct attack against people – rather than concepts or institutions – on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease. We define attacks as violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation. We also prohibit the use of harmful stereotypes, which we define as dehumanising comparisons that have historically been used to attack, intimidate or exclude specific groups, and that are often linked with offline violence. We consider age a protected characteristic when referenced along with another protected characteristic. We also protect refugees, migrants, immigrants and asylum seekers from the most severe attacks, though we do allow commentary and criticism of immigration policies. Similarly, we provide some protections for characteristics such as occupation, when they’re referenced along with a protected characteristic. Sometimes, based on local nuance, we consider certain words or phrases as frequently used proxies for protected-characteristic groups.

We also prohibit the usage of slurs that are used to attack people on the basis of their protected characteristics. However, we recognise that people sometimes share content that includes slurs or someone else’s hate speech to condemn it or raise awareness. In other cases, speech, including slurs, that might otherwise violate our standards can be used self-referentially or in an empowering way. Our policies are designed to allow room for these types of speech, but we require people to clearly indicate their intent. If the intention is unclear, we may remove content.

Do not post:

Tier 1

Content targeting a person or group of people (including all subsets, except those described as having carried out violent crimes or sexual offences, or subsets representing less than half of a group) on the basis of their aforementioned protected characteristic(s) or immigration status with:

  • Violent speech or support in written or visual form
  • Dehumanising speech or imagery in the form of comparisons, generalisations or unqualified behavioural statements (in written or visual form) to or about:
  • Insects (including but not limited to: cockroaches, locusts)
  • Animals in general or specific types of animals that are culturally perceived as intellectually or physically inferior (including but not limited to: Black people and apes or ape-like creatures; Jewish people and rats; Muslim people and pigs; Mexican people and worms)
  • Filth (including but not limited to: dirt, grime)
  • Bacteria, viruses or microbes
  • Disease (including but not limited to: cancer, sexually transmitted diseases)
  • Faeces (including but not limited to: shit, crap)
  • Subhumanity (including but not limited to: savages, devils, monsters, primitives)
  • Sexual predators (including but not limited to: Muslim people having sex with goats or pigs)
  • Violent criminals (including but not limited to: terrorists, murderers, members of hate or criminal organisations)
  • Other criminals (including, but not limited to, “thieves”, “bank robbers” or saying “All [protected characteristic or quasi-protected characteristic] are ‘criminals'”).
  • Certain objects (women as household objects, property or objects in general; Black people as farm equipment; transgender or non-binary people as “it”)
  • Statements denying existence (including but not limited to: “[protected characteristic(s) or quasi-protected characteristic] do not exist”, “no such thing as [protected characteristic(s) or quasi-protected characteristic]”)
  • Harmful stereotypes historically linked to intimidation, exclusion or violence on the basis of a protected characteristic, such as Blackface; Holocaust denial; claims that Jewish people control financial, political or media institutions; and references to Dalits as menial labourers
  • Mocking the concept, events or victims of hate crimes even if no real person is depicted in an image.

Tier 2

Content targeting a person or group of people on the basis of their protected characteristic(s) with:

  • Generalisations that state inferiority (in written or visual form) in the following ways:
  • Physical deficiencies are defined as those about:
    • Hygiene, including, but not limited to: filthy, dirty, smelly.
    • Physical appearance, including, but not limited to: ugly, hideous.
  • Mental deficiencies are defined as those about:
    • Intellectual capacity, including, but not limited to: dumb, stupid, idiots.
    • Education, including, but not limited to: illiterate, uneducated.
    • Mental health, including, but not limited to: mentally ill, retarded, crazy, insane.
  • Moral deficiencies are defined as those about:
    • Character traits culturally perceived as negative, including, but not limited to: coward, liar, arrogant, ignorant.
    • Derogatory terms related to sexual activity, including, but not limited to: whore, slut, perverts.
  • Other statements of inferiority, which we define as:
  • Expressions about being less than adequate, including, but not limited to: worthless, useless.
  • Expressions about being better/worse than another protected characteristic, including, but not limited to: “I believe that males are superior to females.”
  • Expressions about deviating from the norm, including, but not limited to: freaks, abnormal.
  • Expressions of contempt (in written or visual form), which we define as:
  • Self-admission to intolerance on the basis of protected characteristics, including, but not limited to: homophobic, islamophobic, racist.
  • Expressions that a protected characteristic shouldn’t exist.
  • Expressions of hate, including, but not limited to: despise, hate.
  • Expressions of dismissal, including, but not limited to: don’t respect, don’t like, don’t care for.
  • Expressions of disgust (in written or visual form), which we define as:
  • Expressions suggesting that the target causes sickness, including, but not limited to: vomit, throw up.
  • Expressions of repulsion or distaste, including, but not limited to: vile, disgusting, yuck.
  • Cursing, except certain gender-based cursing in a romantic break-up context, defined as:
  • Referring to the target as genitalia or anus, including, but not limited to: cunt, dick, asshole.
  • Profane terms or phrases with the intent to insult, including, but not limited to: fuck, bitch, motherfucker.
  • Terms or phrases calling for engagement in sexual activity, or contact with genitalia, anus, faeces or urine, including, but not limited to: suck my dick, kiss my ass, eat shit.

Tier 3

Content targeting a person or group of people on the basis of their protected characteristic(s) with any of the following:

  • Segregation in the form of calls for action, statements of intent, aspirational or conditional statements, or statements advocating or supporting segregation.
  • Exclusion in the form of calls for action, statements of intent, aspirational or conditional statements, or statements advocating or supporting exclusion, defined as:
  • Explicit exclusion, which means things such as expelling certain groups or saying they are not allowed.
  • Political exclusion, which means denying the right to political participation.
  • Economic exclusion, which means denying access to economic entitlements and limiting participation in the labour market.
  • Social exclusion, which means things such as denying access to spaces (physical and online) and social services, except for gender-based exclusion in health and positive support groups.

Content that describes or negatively targets people with slurs, where slurs are defined as words that are inherently offensive and used as insulting labels for the above characteristics.

⚠️For the following Community Standards, we require additional information and/or context to enforce:

Do not post:

  • Content explicitly providing or offering to provide products or services that aim to change people’s sexual orientation or gender identity.
  • Content attacking concepts, institutions, ideas, practices or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination against the people associated with that protected characteristic. weDIDit.Health looks at a range of signs to determine whether there is a threat of harm in the content. These include, but are not limited to: content that could incite imminent violence or intimidation; whether there is a period of heightened tension such as an election or ongoing conflict; and whether there is a recent history of violence against the targeted protected group. In some cases, we may also consider whether the speaker is a public figure or occupies a position of authority.
  • Content targeting a person or group of people on the basis of their protected characteristic(s) with claims that they have or spread the novel coronavirus, are responsible for the existence of the novel coronavirus or are deliberately spreading the novel coronavirus, or mocking them for having or experiencing the novel coronavirus.

In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirised or attributed to something or someone else in order to mock or criticise them.

  1. Violent and graphic content

 

Policy rationale

To protect users from disturbing imagery, we remove content that is particularly violent or graphic, such as videos depicting dismemberment, visible innards or charred bodies. We also remove content that contains sadistic remarks towards imagery depicting the suffering of humans and animals.

In the context of discussions about important issues such as human rights abuses, armed conflicts or acts of terrorism, we allow graphic content (with some limitations) to help people to condemn and raise awareness about these situations.

We know that people have different sensitivities with regard to graphic and violent imagery. For that reason, we warn people that imagery may be graphic or sensitive before they click through.

Do not post:

Imagery of people

Videos of people or dead bodies in non-medical settings if they depict:

  • Dismemberment.
  • Visible internal organs; partially decomposed bodies.
  • Charred or burning people unless in the context of cremation or self-immolation when that action is a form of political speech or newsworthy.
  • Victims of cannibalism.
  • Throat-slitting.

Live streams of capital punishment of a person

Sadistic remarks

  • Sadistic remarks towards imagery that, under this policy, requires advising people that the content may be disturbing, unless there is a self-defence context or medical setting.
  • Sadistic remarks towards the following content, which includes a label so that people are aware that it may be sensitive:
  • Imagery of one or more persons subjected to violence and/or humiliating acts by one or more uniformed personnel performing a police function.
  • Imagery of foetuses or newborn babies.
  • Explicit sadistic remarks towards the suffering of animals depicted in the imagery.
  • Offering or soliciting imagery that is deleted under this policy, when accompanied by sadistic remarks.

 

People should be aware that the following content may be disturbing:

Imagery of people

Videos of people or dead bodies in a medical setting if they depict:

  • Dismemberment.
  • Visible internal organs; partially decomposed bodies.
  • Charred or burning people, including cremation or self-immolation when that action is a form of political speech or newsworthy.
  • Victims of cannibalism.
  • Throat-slitting.

Photos of wounded or dead people if they show:

  • Dismemberment.
  • Visible internal organs; partially decomposed bodies.
  • Charred or burning people.
  • Victims of cannibalism.
  • Throat-slitting.

Imagery that shows the violent death of a person or people by accident or murder

Imagery that shows capital punishment of a person

Imagery that shows acts of torture committed against a person or people

Imagery of non-medical foreign objects (such as metal objects, knives, nails) involuntarily inserted or stuck into people causing grievous injury

Imagery of animals

The following content involving animals:

  • Videos depicting humans killing animals if there is no explicit manufacturing, hunting, food consumption, processing or preparation context.
  • Imagery of animal-to-animal fights, when there are visible innards or dismemberment of non-regenerating body parts, unless in the wild.
  • Imagery of humans committing acts of torture or abuse against live animals.
  • Imagery of animals showing wounds or cuts that render visible innards or dismemberment, if there is no explicit manufacturing, hunting, taxidermy, medical treatment, rescue or food consumption, preparation or processing context, or the animal is already skinned or with its outer layer fully removed.

 

People should be aware that the following content may be sensitive:

Imagery of non-medical foreign objects voluntarily inserted into people through skin in religious or cultural context

Imagery of visible innards in a birthing context

Imagery of foetuses and newborn babies that show:

  • Dismemberment.
  • Visible innards.
  • An abortion or abandonment context.

Imagery of newborn babies in an abandonment context

Imagery of animals in a ritual slaughter context showing dismemberment, visible innards or charring or burning

⚠️For the following Community Standards, we require additional information and/or context to enforce:

We remove:

  • Videos and photos that show the violent death of someone when a family member requests its removal.
  • Videos of violent death of humans where the violent death is not visible in the video but the audio is fully or partially captured and the death is confirmed by either a law enforcement record, death certificate, Trusted Partner report or media report.



  1. Adult nudity and sexual activity

 

Policy rationale

We restrict the display of nudity or sexual activity because some people in our community may be sensitive to this type of content. In addition, we default to removing sexual imagery to prevent the sharing of non-consensual or underage content. Restrictions on the display of sexual activity also apply to digitally created content unless it is posted for educational, humorous or satirical purposes.

We understand that nudity can be shared for a variety of reasons, including as a form of protest, to raise awareness about a cause or for educational or medical reasons.

Where such intent is clear, we make allowances for the content. For example, while we restrict some images of female breasts that include the nipple, we allow other images, including those depicting acts of protest, women actively engaged in breast-feeding and photos of post-mastectomy scarring. For images depicting visible genitalia or the anus in the context of birth and after-birth moments or health-related situations, people should be aware that the following content may be sensitive.

We also allow photographs of paintings, sculptures and other art that depicts nude figures.

Do not post:

  • Imagery of real nude adults, if it depicts:
  • Visible genitalia, except in the context of birth giving and after-birth moments or in medical or health contexts (for example, gender confirmation surgery, examination for cancer or disease prevention/assessment).
  • Visible anus and/or fully nude close-ups of buttocks unless photoshopped on a public figure.
  • Uncovered female nipples except in the context of breastfeeding, birth giving and after-birth moments, medical or health context (for example, post-mastectomy, breast cancer awareness or gender confirmation surgery) or an act of protest.
  • Imagery of sexual activity, including:
  • Explicit sexual activity and stimulation
    • Explicit sexual intercourse or oral sex, defined as mouth or genitals entering or in contact with another person’s genitals or anus, where at least one person’s genitals are nude.
    • Explicit stimulation of genitalia or anus, defined as stimulating genitalia or anus or inserting objects, including sex toys, into genitalia or anus, where the contact with the genitalia or anus is directly visible.
  • Implied sexual activity and stimulation, except in cases of medical or health context, advertisements and recognised fictional images or with indicators of fiction:
    • Implied sexual intercourse or oral sex, defined as mouth or genitals entering or in contact with another person’s genitals or anus, when the genitalia and/or the activity or contact is not directly visible.
    • Implied stimulation of genitalia or anus, defined as stimulating genitalia or anus or inserting objects, including sex toys, into or above genitalia or anus, when the genitalia and/or the activity or contact is not directly visible.
  • Other activities, except in cases of medical or health context, advertisements and recognised fictional images or with indicators of fiction, including, but not limited to:
    • Erections
    • Presence of by-products of sexual activity.
    • Sex toys placed upon or inserted into the mouth.
    • Stimulation of naked human nipples.
    • Squeezing female breasts, defined as a grabbing motion with curved fingers that shows both marks and clear shape change of the breasts. We allow squeezing in breastfeeding contexts.
  • Fetish content that involves:
    • Acts that are likely to lead to the death of a person or animal.
    • Dismemberment.
    • Cannibalism.
    • Faeces, urine, spit, snot, menstruation or vomit.
    • Bestiality.
  • Adult sexual activity in digital art, except when posted in an educational or scientific context, or when it meets one of the criteria below.

 

  • Extended audio of sexual activity

 

People should be aware that the following content may be sensitive:

Imagery of visible adult male and female genitalia, fully nude close-ups of buttocks or anus, or implied/other sexual activity, when shared in medical or health context which can include, for example:

  • Birth-giving and after-birth giving moments, including both natural vaginal delivery and caesarean section
  • Gender confirmation surgery
  • Genitalia self-examination for cancer or disease prevention/assessment

 

This content should only be shown to adults:

  • Real-world art that depicts implied or explicit sexual activity.
  • Imagery depicting bestiality in real-world art, provided that it is shared neutrally or in condemnation, and the people or animals depicted are not real.
  • Implied adult sexual activity in advertisements, recognised fictional images or with indicators of fiction.
  • Adult sexual activity in digital art, where:
    • The sexual activity (intercourse or other sexual activities) isn’t explicit and is not part of the above specified fetish content.
    • The content was posted in a satirical or humorous context.
    • Only body shapes or contours are visible.



  1. Sexual solicitation

 

Policy rationale

As noted in Section 8 of our Community Standards (Adult Sexual Exploitation), people can use weDIDit.Health to discuss and draw attention to sexual violence and exploitation. We recognise the importance of and allow for this discussion. We also allow for the discussion of sex worker rights advocacy and sex work regulation. We draw the line, however, when content facilitates, encourages or coordinates sexual encounters or commercial sexual services between adults. We do this to avoid facilitating transactions that may involve trafficking, coercion and non-consensual sexual acts.

We also restrict sexually explicit language that may lead to sexual solicitation because some audiences within our global community may be sensitive to this type of content, and it may impede the ability for people to connect with their friends and the broader community.

Do not post:

Content that offers or asks for adult commercial services, such as requesting, offering or asking for rates for escort service and paid sexual fetish or domination services. (Content that recruits or offers other people for third-party commercial sex work is separately considered under the Human Exploitation Policy).

Attempted coordination of or recruitment for adult sexual activities, except when promoting an event or venue, including, but not limited to:

  • Filmed sexual activities.
  • Pornographic activities, strip club shows, live sex performances or erotic dances.
  • Sexual, erotic or tantric massages.

Explicit sexual solicitation, including, but not limited to, offering or asking for:

  • Sex or sexual partners (including partners who share fetish or sexual interests).
  • Sex chat or conversations.
  • Nude photos/videos/imagery/sexual fetish items.
  • Sexual slang terms.

We allow expressing desire for sexual activity, promoting sex education, discussing sexual practices or experiences, or offering classes or programmes that teach techniques or discuss sex.

Content that implicitly or indirectly offers or asks for sexual solicitation and meets both of the following criteria. If both criteria are not met, the content is not deemed violating. For example, if content is a hand-drawn image depicting sexual activity but does not offer or ask for sexual solicitation, it is not violating:

  • Criterion 1: Offer or ask
    • Content that implicitly or indirectly (typically through providing a method of contact) offers or asks for sexual solicitation.
  • Criterion 2: Suggestive elements
    • Content that makes the aforementioned offer or ask using one of the following sexually suggestive elements:
      • Regional sexualised slang
      • Mentions or depictions of sexual activity, such as sexual roles, sex positions, fetish scenarios, a state of arousal, or an act of sexual intercourse or activity (e.g. sexual penetration or self-pleasuring), or commonly used sexual emojis
      • Content (hand-drawn, digital or real-world art) that depicts sexual activity as defined in the Adult Nudity and Sexual Activity policy
      • Poses
      • Audio of sexual activity or other content that violates the Adult Nudity and Sexual Activity policy

An offer or ask for pornographic material (including, but not limited to, sharing of links to external pornographic websites).

Sexually explicit language that goes into graphic detail beyond mere reference to:

  • A state of sexual arousal (e.g. wetness or erection), or
  • An act of sexual intercourse (e.g. sexual penetration, self-pleasuring or exercising fetish scenarios).
  • Except for content shared in a humorous, satirical or educational context, as a sexual metaphor or as sexual cursing.

 

⚠️For the following Community Standards, we require additional information and/or context to enforce:

 

In certain cases, we will allow content that may otherwise violate the Community Standards when it is determined that the content is satirical. Content will only be allowed if the violating elements of the content are being satirised or attributed to something or someone else in order to mock or criticise them.



  • Integrity and authenticity

 

  1. Account integrity and authentic identity

 

Policy rationale

Authenticity is the cornerstone of our community. We believe that authenticity helps create a community where people are accountable to each other, and to weDIDit.Health, in meaningful ways. We want to allow for the range of diverse ways that identity is expressed across our global community, while also preventing impersonation and identity misrepresentation. 

 

In order to maintain a safe environment and empower free expression, we remove accounts that are harmful to the community, including those that compromise the security of other accounts and our services. We have built a combination of automated and manual systems to block and remove accounts that are used to persistently or egregiously abuse our Community Standards.

Because account-level removal is a severe action, whenever possible, we aim to give our community a chance to learn our rules and follow our Community Standards. Penalties, including disabling accounts, are designed to be proportionate to the severity of the violation and the risk of harm posed to the community. Continued violations, despite repeated warnings and restrictions, or violations that pose severe safety risks, will lead to an account being disabled.

We do not allow the use of our services and will restrict or disable accounts or other entities (such as pages, groups and events) if you:

  • Severely violate our Community Standards.
  • Persistently violate our Community Standards.
  • Coordinate as part of a network of accounts or other entities in order to violate or evade our Community Standards.
  • Represent dangerous individuals or organisations.
  • Create or use an account that demonstrates intent to violate our Community Standards.
  • Create or use an account by scripted or other inauthentic means.
  • Create an account, Page, group or event to evade our enforcement actions, including creating an account to bypass a restriction or after we have disabled your previous account, Page, group or event.
  • Create or use an account that deliberately misrepresents your identity in order to mislead or deceive others, or to evade enforcement or violate our other Community Standards. We consider a number of factors when assessing misleading identity misrepresentation, such as:
    • Repeated or significant changes to identity details, such as name or age
    • Misleading profile information, such as bio details and profile location
    • Using stock imagery or stolen photos
    • Other related account activity
  • Impersonate others by:
    • Using their photos with the explicit aim of deceiving others.
    • Creating an account that assumes to be or speak for another person or entity.
    • Creating a Page that assumes to be or speak for another person or entity without authorisation to do so.
  • Are a convicted sex offender.
  • Are prohibited from receiving our products, services or software under applicable laws.

In certain cases, such as those outlined below, we will seek further information about an account before taking actions ranging from temporarily restricting accounts to permanently disabling them.

Accounts misrepresenting their identity (weDIDit.Health only) by:

  • Using a name that is not the authentic name you go by in everyday life
  • Using an inherently violating name containing slurs or any other violations of the Community Standards
  • Providing a false date of birth.
  • Creating a single account that represents or is used by more than one person.
  • Maintaining multiple accounts as a single user.

 

Compromised accounts.

Empty accounts with prolonged dormancy.



  1. Spam

 

Policy rationale

We work hard to limit the spread of spam because we do not want to allow content that is designed to deceive, or that attempts to mislead users, to increase viewership. This content creates a negative user experience, detracts from people’s ability to engage authentically in online communities and can threaten the security, stability and usability of our services. We also aim to prevent people from abusing our platform, products or features to artificially increase viewership or distribute content en masse for commercial gain.

Do not:

  • Post, share or engage with content, or create accounts, groups, Pages, events or other assets, either manually or automatically, at very high frequencies.
  • Attempt to or successfully sell, buy or exchange site privileges, engagement or product features, such as accounts, admin roles, permission to post, Pages, groups and likes, except in the case of clearly identified branded content.
  • Require or claim that users are required to engage with content (e.g. liking, sharing) before they are able to view or interact with promised content.
  • Encourage likes, shares, follows, clicks or the use of apps or websites under false pretences, such as:
    • Offering false or non-existent services or functionality (e.g. “Get a ‘Dislike’ button!”)
    • Failing to direct to promised content (e.g. “Click here for a discount code at Nordstrom”; false play buttons)
  • The deceptive or misleading use of URLs, defined as:
    • Cloaking: Presenting different content to weDIDit.Health users and weDIDit.Health crawlers or tools.
    • Misleading content: Content contains a link that promises one type of content but delivers something substantially different.
    • Deceptive redirect behaviour: Websites that require an action (e.g. captcha, watch ad, click here) in order to view the expected landing page content and the domain name of the URL changes after the required action is complete.
    • Like/share-gating: Landing pages that require users to like, share or otherwise engage with content before gaining access to content.
    • Deceptive landing page functionality: Websites that have a misleading user interface, which results in accidental traffic being generated (e.g. pop-ups/unders, clickjacking, etc.).
    • Typosquatting: An external website impersonates a reputable brand or service by using a name, domain or content that features typos, misspellings or other means, often with a landing page similar to the trusted site, in order to mislead visitors (e.g. www.faceb00k.com, www.face_book.com).
    • And other behaviours that are substantially similar to the above.



  1. Cybersecurity

 

Policy rationale

We recognise that the safety of our users includes the security of their personal information, accounts, profiles and other weDIDit.Health entities they may manage, as well as our products and services more broadly. Attempts to gather sensitive personal information or engage in unauthorised access by deceptive or invasive methods are harmful to the authentic, open and safe atmosphere that we want to foster. Therefore, we do not allow attempts to gather sensitive user information or engage in unauthorised access through the abuse of our platform, products or services.

 

Do not:

Attempt to compromise user accounts, profiles or other weDIDit.Health entities, abuse our products or services, gather sensitive information through deceptive means or attempt to engage in unauthorised access, including:

  • Gaining access to accounts, profiles, weDIDit.Health entities or user data other than your own through deceptive means or without explicit permission from the account, profile or entity owner.
  • Encouraging or deceiving users to download or run files or programs that will compromise a user’s online or data security, including through malicious software or websites. Such files and programs will be deemed malicious software or “malware” if they harm or gain unauthorised access to a computer, device or network.
  • Attempting to obtain, acquire or request another user’s login credentials, personal information or other sensitive data – whether explicitly or through deceptive means such as phishing (e.g. fake surveys designed to capture login info or links to fake login pages or impostor websites) or the use of malicious software or websites.
  • Publicly sharing your own or others’ login information, either on platform or through a third-party service.
  • Creating, sharing or hosting malicious software, including browser extensions and mobile applications, on or off the platform that put our users or products and services at risk.
  • Providing online infrastructure, including web hosting services, domain name system servers and ad networks, that enables abusive links, such that a majority of those links on weDIDit.Health violate the spam or cybersecurity sections of the Community Standards.



  1. Inauthentic behaviour

 

Policy rationale

In line with our commitment to authenticity, we do not allow people to misrepresent themselves on weDIDit.Health, use fake accounts, artificially boost the popularity of content or engage in behaviours designed to enable other violations under our Community Standards. This policy is intended to protect the security of user accounts and our services, and create a space where people can trust the people and communities they interact with.

 

Do not:

  • Use multiple weDIDit.Health accounts or share accounts between multiple people
  • Misuse the weDIDit.Health reporting processes to harass others
  • Conceal a Page’s purpose by misleading users about the ownership or control of that Page
  • Engage in or claim to engage in inauthentic behaviour, which is defined as the use of weDIDit.Health assets (accounts, Pages, Groups or Events) to mislead people or weDIDit.Health:
    • About the identity, purpose or origin of the entity that they represent.
    • About the popularity of weDIDit.Health content or assets.
    • About the purpose of an audience or community.
    • About the source or origin of content.
    • To evade enforcement under our Community Standards.

 

⚠️For the following Community Standards, we require additional information and/or context to enforce:

  • We do not allow entities to engage in, or claim to engage in, coordinated inauthentic behaviour, defined as the use of multiple weDIDit.Health assets, working in concert to engage in inauthentic behaviour (as defined above), where the use of fake accounts is central to the operation.
  • We do not allow entities to engage in, or claim to engage in, foreign or government interference, which is coordinated inauthentic behaviour conducted on behalf of a foreign or government actor.
  • We do not allow governments that have instituted sustained blocks of social media to use their official departments, agencies and embassies to deny the use of force or violent events in the context of an attack against the territorial integrity of another state in violation of Article 2(4) of the UN charter.



  1. Misinformation

 

Policy rationale

Misinformation is different from other types of speech addressed in our Community Standards because there is no way to articulate a comprehensive list of what is prohibited. With graphic violence or hate speech, for instance, our policies specify the speech that we prohibit, and even people who disagree with those policies can follow them. With misinformation, however, we cannot provide such a line. The world is changing constantly, and what is true one minute may not be true the next minute. People also have different levels of information about the world around them, and may believe something is true when it is not. A policy that simply prohibits “misinformation” would not provide useful notice to the people who use our services and would be unenforceable, as we don’t have perfect access to information.

Instead, our policies articulate different categories of misinformation and try to provide clear guidance about how we treat that speech when we see it. For each category, our approach reflects our attempt to balance our values of expression, safety, dignity, authenticity and privacy.

We remove misinformation where it is likely to directly contribute to the risk of imminent physical harm. We also remove content that is likely to directly contribute to interference with the functioning of political processes and certain highly deceptive manipulated media.

For all other misinformation, we focus on reducing its prevalence or creating an environment that fosters a productive dialogue. We know that people often use misinformation in harmless ways, such as to exaggerate a point (“This team has the worst record in the history of the sport!”) or in humour or satire (“My husband just won Husband of the Year.”). They may also share their experience through stories that contain inaccuracies. In some cases, people share deeply held personal opinions that others consider false, or share information that they believe to be true but others consider incomplete or misleading.

Recognising how common such speech is, we focus on slowing the spread of hoaxes and viral misinformation, and directing users to authoritative information. 

Finally, we prohibit content and behaviour in other areas that often overlap with the spread of misinformation. For example, our Community Standards prohibit fake accounts, fraud and coordinated inauthentic behaviour.

As online and offline environments change and evolve, we will continue to evolve these policies. Pages, groups, events and profiles that repeatedly share the misinformation listed below may, in addition to having their content removed, receive decreased distribution, limitations on their ability to advertise or be removed from our platforms. Additional information on what happens when weDIDit.Health removes content can be found here.

Misinformation that we remove:

We remove the following types of misinformation:

  I. Physical harm or violence

We remove misinformation or unverifiable rumours that expert partners have determined are likely to directly contribute to a risk of imminent violence or physical harm to people. We define misinformation as content with a claim that is determined to be false by an authoritative third party. We define an unverifiable rumour as a claim whose source expert partners confirm is extremely hard or impossible to trace, for which authoritative sources are absent, where there is not enough specificity for the claim to be debunked, or where the claim is too implausible or too irrational to be believed.

We know that sometimes misinformation that might appear benign could, in a specific context, contribute to a risk of offline harm, including threats of violence that could contribute to a heightened risk of death, serious injury or other physical harm. We work with a global network of non-governmental organisations (NGOs), not-for-profit organisations, humanitarian organisations and international organisations that have expertise in these local dynamics.

  II. Harmful health misinformation

Harmful health misinformation that we remove includes the following:

  • Misinformation about vaccines. We remove misinformation primarily about vaccines when public health authorities conclude that the information is false and likely to directly contribute to imminent vaccine refusals. These include:
    • Vaccines cause autism (e.g. “Increased vaccinations are why so many children have autism these days.”)
    • Vaccines cause Sudden Infant Death Syndrome (e.g. “Don’t you know that vaccines cause SIDS?”)
    • Vaccines cause the disease against which they are meant to protect, or cause the person receiving the vaccine to be more likely to get the disease (e.g. “Taking a vaccine actually makes you more likely to get the disease as there’s a strain of the disease inside. Beware!”)
    • Vaccines or their ingredients are deadly, toxic, poisonous, harmful or dangerous (e.g. “Sure, you can take vaccines, if you don’t mind putting poison in your body.”)
    • Natural immunity is safer than vaccine-acquired immunity (e.g. “It’s safest to just get the disease rather than the vaccine.”)
    • It is dangerous to get several vaccines in a short period of time, even if that timing is medically recommended (e.g. “Never take more than one vaccine at the same time, that is dangerous. I don’t care what your doctor tells you!”)
    • Vaccines are not effective at preventing the disease against which they purport to protect. However, for the COVID-19, flu and malaria vaccines, we do not remove claims that those vaccines are not effective in preventing someone from contracting those viruses. (e.g. Remove – “The polio vaccine doesn’t do anything to stop you from getting the disease”; Remove – “Vaccines actually don’t do anything to stop you from getting diseases”; Allow – “The vaccine doesn’t stop you from getting COVID-19, that’s why you still need to socially distance and wear a mask when you’re around others.”)
    • Acquiring measles cannot cause death (requires additional information and/or context) (e.g. “Don’t worry about whether you get measles, it can’t be fatal.”)
    • Vitamin C is as effective as vaccines in preventing diseases for which vaccines exist.
  • Misinformation about health during public health emergencies. We remove misinformation during public health emergencies when public health authorities conclude that the information is false and likely to directly contribute to the risk of imminent physical harm, including by contributing to the risk of individuals getting or spreading a harmful disease or refusing an associated vaccine. We identify public health emergencies in partnership with global and local health authorities. This currently includes false claims related to COVID-19 that are verified by expert health authorities, about the existence or severity of the virus, how to cure or prevent it, how the virus is transmitted or who is immune, and false claims which discourage good health practices related to COVID-19 (such as getting tested, social distancing, wearing a face mask and getting a vaccine for COVID-19).
  • Promoting or advocating for harmful miracle cures for health issues. These include treatments where the recommended application, in a health context, is likely to directly contribute to the risk of serious injury or death, and the treatment has no legitimate health use (e.g. bleach, disinfectant, black salve, caustic soda).

  III. Voter or census interference

In an effort to promote election and census integrity, we remove misinformation that is likely to directly contribute to a risk of interference with people’s ability to participate in those processes. This includes the following:

  • Misinformation about the dates, locations, times and methods for voting, voter registration or census participation.
  • Misinformation about who can vote, qualifications for voting, whether a vote will be counted, and what information or materials must be provided in order to vote.
  • Misinformation about whether a candidate is running or not.
  • Misinformation about who can participate in the census and what information or materials must be provided in order to participate.
  • Misinformation about government involvement in the census, including, where applicable, that an individual’s census information will be shared with another (non-census) government agency.
  • Content falsely claiming that the US Immigration and Customs Enforcement (ICE) is at a voting location.
  • Explicit false claims that people will be infected by COVID-19 (or another communicable disease) if they participate in the voting process.

We have additional policies intended to cover calls for violence, the promotion of illegal participation and calls for coordinated interference in elections, which are represented in other sections of our Community Standards.

  IV. Manipulated media

Media can be edited in a variety of ways. In many cases, these changes are benign, such as content being cropped or shortened for artistic reasons or music being added. In other cases, the manipulation is not apparent and could mislead, particularly in the case of video content. We remove this content because it can go viral quickly and experts advise that false beliefs regarding manipulated media often cannot be corrected through further discourse.

We remove videos under this policy if specific criteria are met: (1) the video has been edited or synthesised, beyond adjustments for clarity or quality, in ways that are not apparent to an average person and would likely mislead an average person to believe a subject of the video said words that they did not say; and (2) the video is the product of artificial intelligence or machine learning, including deep learning techniques (e.g. a technical deepfake), that merges, combines, replaces and/or superimposes content onto a video, creating a video that appears authentic.

 

  1. Memorialisation

 

Policy rationale

weDIDit.Health does not offer account memorialisation when someone passes away.

 

A verified immediate family member or executor can request that a deceased person’s account be closed by emailing support@weDIDit.Health and supplying a copy of the death certificate.



  • Respecting intellectual property

 

  1. Intellectual property

 

Policy rationale

weDIDit.Health takes intellectual property rights seriously and believes that they are important to promoting expression, creativity and innovation in our community. You own all of the content and information that you post on weDIDit.Health, and you control how it is shared through your privacy settings. However, before sharing content on weDIDit.Health, please make sure that you have the right to do so. We ask that you respect other people’s copyrights, trademarks and other legal rights. We are committed to helping people and organisations promote and protect their intellectual property rights.

 

weDIDit.Health does not allow people to post content that violates someone else’s intellectual property rights, including copyright and trademark. 

 

For copyright or trademark infringements, email support@weDIDit.Health with information including proof of ownership.



  • Content-related requests and decisions

 

  1. User requests

 

We comply with:

  • Requests for removal of a deceased user’s account from a verified immediate family member or executor, accompanied by a copy of the death certificate
  • Requests for removal of an incapacitated user’s account from an authorised representative



  1. Additional protection for minors

 

We comply with:

  • Government requests for removal of child abuse imagery depicting, for example, beating by an adult or strangling or suffocating by an adult.
  • Legal guardian requests for removal of attacks on unintentionally famous minors.



⚠️For the following Community Standards, we require additional information and/or context to enforce:

We may remove content created for the purpose of identifying a private minor if there may be a risk to the minor’s safety when requested by a user, government, law enforcement or external child safety experts.



Total Lives Impacted: 3,305

  • Anxiety: 1,050
  • Arthritis: 749
  • Autoimmune disease: 499
  • Blood Pressure: 1,083
  • Cancer: 236
  • Cardiovascular Disease: 424
  • Cholesterol &/or Triglycerides: 1,297
  • Depression: 919
  • Digestive Health: 1,768
  • Energy Levels: 1,915
  • HGB A1C: 361
  • Pre-Diabetes: 406
  • Prescriptions: 520
  • Sexual Performance: 457
  • Skin Health Issues: 1,058
  • Sleep: 1,140
  • Type-2 diabetes: 205
  • Weight-loss: 1,827
  • Women's Health Issues: 587