Content Moderation Policy

Last Updated: April 22, 2026


  1. Purpose

    We moderate Content and conduct to help keep Flirt1to1 lawful, safe, and authentic, and to support compliance with legal obligations in the United States, the European Union, and the United Kingdom.

    This policy explains:

    • what we moderate and why;
    • how we detect potential violations;
    • what actions we may take;
    • how notices, complaints, and appeals work (by reference to your existing policies); and
    • how we provide required explanations for decisions (including under the EU Digital Services Act (DSA)).

  2. Scope

    This policy applies to all activity on Flirt1to1, including profiles, posts (if any), messages, call audio/video (whether live or recorded), and any images, videos, or other media shared during an Entertainer Interaction (“Content”).

    We moderate for compliance with our policies (including the Acceptable Use Policy) and with applicable law, as described in Section 3 below.

  3. What We Moderate For

    Flirt1to1 permits lawful adult sexual content and adult conversations between consenting adults; however, we prohibit illegal content and the categories of content and conduct listed in the Acceptable Use Policy (including CSAM, non-consensual intimate imagery, trafficking/exploitation, commercial sexual services or arranging in-person sexual activity for compensation, and any extreme pornography prohibited by law).

    We may remove, restrict, or otherwise take action on Content or accounts that we reasonably believe violate our policies or applicable law, including:

    1. Illegal Content and High-Risk Harm

      Including content or conduct involving:

      • child sexual exploitation or abuse, including child sexual abuse material (CSAM);
      • trafficking/exploitation;
      • terrorism;
      • credible threats of violence;
      • intimate image abuse and unlawful sexual content;
      • fraud, payment evasion, or other financial crimes; and
      • other illegal content identified in the Acceptable Use Policy.
    2. Platform Integrity and Authenticity

      We enforce rules against:

      • AI personas, AI chat/voice simulation, or automated messaging used to simulate an Entertainer Interaction;
      • third-party chat operators or account operation by anyone other than the verified Entertainer; and
      • impersonation, account sharing, and misrepresentation of identity or affiliation.
    3. Off-Platform Solicitation and Prohibited Transactions

      We enforce rules prohibiting:

      • attempts to move Users off-platform for continued interaction;
      • requests for off-platform payments or alternative payment methods;
      • arranging in-person meetings or commercial sexual services; and
      • coded language or evasion designed to bypass these restrictions.

    We treat attempts to arrange or facilitate commercial sexual services (including in-person meetings for sexual activity in exchange for money or anything of value) as a priority enforcement category.

    We may act on coded language or indirect arrangements that reasonably appear intended to facilitate off-platform meetings or commercial sexual services.

  4. How Content Is Flagged for Review

    We use a combination of:

    (a) User Reports

    Users can report Content or conduct via the mechanisms described in our Complaints Policy and related reporting tools.

    (b) Proactive Detection

    We may use proactive methods to detect violations and safety risks, including patterns associated with:

    • illegal content;
    • off-platform solicitation and payment evasion;
    • impersonation, account sharing, or third-party operation;
    • automated/AI-driven communications; and
    • coercion, exploitation, or trafficking indicators.

    (c) Notices from Authorities and Other Submissions

    We may act on notices from competent authorities and other third parties. In the EU, where applicable, we process notices consistent with the DSA notice-and-action mechanisms and may prioritize certain qualified submissions, including those from trusted flaggers.

  5. Tools Used in Moderation

    Moderation may include human review, automated tools, or both.

    Automated tools may be used to help detect and triage potential violations (for example, identifying suspected illegal content, spam, scam patterns, or off-platform solicitation). Automated tools may also help identify authenticity risks (such as unusual account-operation patterns).

    Automated tools assist review but do not guarantee accuracy. Where required by law, we will disclose when automated tools played a material role in a restriction decision and provide a way to request human review through the Appeals Policy.

  6. Decision-Making Principles

    When evaluating Content or conduct, we may consider:

    • the type and severity of the suspected violation;
    • context (including whether it appears coercive, exploitative, or fraudulent);
    • the likelihood of harm;
    • whether the behavior appears intentional or part of an evasion pattern; and
    • whether the issue can be addressed through a less restrictive measure.

    We may act without prior notice where necessary to address illegal content, prevent harm, preserve evidence, or comply with legal obligations.

    We may also preserve records and cooperate with law enforcement, regulators, and, where relevant, our payment processors and banking partners to investigate suspected fraud, illegal content, or policy violations, consistent with applicable law.

  7. Actions We May Take

    Depending on the circumstances, we may:

    • remove or disable access to Content;
    • restrict visibility or sharing of Content;
    • restrict product features (e.g., messaging, calls, uploads);
    • interrupt, suspend, or terminate any live Entertainer Interaction (including a phone call or 1-on-1 video chat), and restrict access to live features, where we reasonably believe the interaction violates our policies or applicable law;
    • issue warnings or require acknowledgments;
    • require additional information or re-verification (including identity or age verification) as a condition of continued access or use;
    • temporarily suspend accounts;
    • permanently terminate accounts;
    • take steps to prevent ban evasion or coordinated abuse;
    • where permitted by law and our agreements, hold, delay, suspend, withhold, reverse, or adjust payouts or other funds where we reasonably suspect fraud, payment evasion, coercion/exploitation, policy evasion, unauthorized use, or chargeback/dispute abuse; and
    • restrict or terminate accounts associated with excessive disputes/chargebacks or suspected unauthorized use.

    For serious violations (including suspected CSAM, trafficking, credible threats, or other imminent harm), we may take immediate action and may make reports or referrals where required or appropriate under applicable law.

  8. Notice, Complaints, and Appeals

    1. Complaints and User Reports

      How to submit a complaint or report Content is described in our Complaints Policy.

    2. Appeals

      How to appeal Content removals or account restrictions is described in our Appeals Policy.

    3. DMCA Notices

      Copyright takedown notices and counter-notices are handled under our DMCA Policy.

    4. TAKE IT DOWN Act Compliance

      Reports and requests covered by our TAKE IT DOWN Act Compliance Policy are handled under that policy.

    If multiple policies could apply to the same report, we may route and process the report under the policy that best fits the request and legal requirements.

  9. Statements of Reasons and User Notification

    Where required (including under the EU DSA for in-scope decisions), we will provide a statement of reasons for certain Content moderation or account restriction decisions. This may include:

    • the policy or legal basis for the action;
    • the type of restriction applied and its scope;
    • whether automated tools were used in a material way; and
    • available complaint and appeal options.

  10. Contact

    Questions about this Content Moderation Policy can be sent to support@flirt1to1.com.