1. Age Restrictions and Compliance
- Lifebonder is not intended for children under 18 years old. Users must verify their age before creating an account.
- If we discover that a child under the minimum age has created an account, we will immediately delete the account and associated data.
- Parents or guardians can request the removal of a minor's account by contacting our support team at child-safety@lifebonder.com.
2. Protection Against Child Sexual Abuse Material (CSAM) and Exploitation
Lifebonder maintains a zero-tolerance policy for Child Sexual Abuse Material (CSAM), child exploitation, and any form of child endangerment. The following measures are in place:
A. Prohibited Content and Behavior
- Sharing, uploading, or distributing CSAM or any form of child exploitation content will result in immediate account termination and reporting to law enforcement.
- Any attempt to groom, solicit, or exploit minors on the platform will result in permanent bans and legal action.
The following activities are strictly prohibited on Lifebonder and will result in immediate removal of content, permanent bans, and legal reporting where applicable:
- Inappropriate interaction with a child: Any form of physical or sexual suggestiveness, including inappropriate messages, groping, or caressing, even in digital content.
- Child grooming: Befriending or communicating with a child with the intent to manipulate, exploit, or engage in sexual contact (online or offline).
- Sexualization of minors: Any images, discussions, or depictions that portray children in a sexualized manner, encourage sexual abuse, or promote child exploitation.
- Sextortion: Threats or blackmail involving a child's intimate images, whether real or fabricated.
- Child trafficking: Any form of solicitation, advertising, or participation in the exploitation or trafficking of children for commercial or sexual purposes.
B. Detection, Reporting, and Enforcement
- Lifebonder uses AI-based content moderation and human review to detect and prevent the spread of illegal or harmful content.
- All CSAM-related reports are immediately escalated to moderators and relevant law enforcement agencies.
- Users can report suspected child endangerment directly within the app or via https://lifebonder.com/contactus.html. Reports are reviewed within 24 hours, and appropriate action is taken immediately.
3. Child-Appropriate User Interactions
Lifebonder enforces strict community guidelines to prevent online harms against minors:
- No Direct Messaging by Strangers: To protect minors, direct messages are only allowed between users who have mutually accepted connection requests.
- No Location Sharing: Users cannot share their real-time location publicly or privately with other users.
- Content Filtering: Sensitive and inappropriate content is not allowed on Lifebonder.
- AI and Human Moderation: Our moderation system proactively removes inappropriate interactions and deletes any account that is managed by a minor.
4. Reporting and User Safety Tools
To empower users and ensure a safe environment, Lifebonder provides:
- In-app Reporting: Users can report inappropriate content, harassment, or child safety violations directly within the app.
- Fast Response Times: Reports related to child safety are prioritized and reviewed within 24 hours.
- Dedicated Safety Team: A specialized team monitors and investigates child safety concerns and coordinates with relevant authorities.
- Parental & Guardian Support: Parents or guardians concerned about a child's activity on Lifebonder can contact us at https://lifebonder.com/contactus.html.
5. Consequences for Violations
- Any account involved in child exploitation, CSAM, or child endangerment will be immediately terminated and reported to law enforcement.
- Users engaging in harmful behavior toward minors will receive permanent bans and may face legal action.
6. Contact and Reporting Channels
If you come across content or behavior that threatens child safety, report it through:
- The in-app reporting tool (available in all user profiles and posts)
- Email: child-safety@lifebonder.com
We take every report seriously and act within 24 hours to remove harmful content and escalate cases when necessary.
We continuously update our policies and safety measures to protect minors and comply with local and international law. We work closely with law enforcement, child protection agencies, and industry partners to create a safer digital space for everyone.
Content Moderation Policy
Lifebonder is committed to providing a safe, inclusive, and user-friendly platform. Our content moderation system ensures compliance with the EU Digital Services Act (DSA), the GDPR, and Google Play's Developer Policies, particularly concerning child safety, misinformation, harassment, and illegal content.
1. Moderation Process and Enforcement
We use a combination of automated detection, human moderation, and user reports to enforce community guidelines. Our system operates in four stages:
A. Automated Content Detection
- AI-based filters scan for illegal, harmful, or inappropriate content (e.g., CSAM, hate speech, threats, scams).
- Flagged content is reviewed within 24 hours by a human moderator.
- High-risk categories (child exploitation, terrorist content) are immediately escalated for removal and reporting to relevant authorities.
B. User Reports & Community Moderation
- Users can report content via the in-app reporting tool or [support email].
- Report categories include:
  - Child safety violations (CSAM, grooming, solicitation)
  - Hate speech, terrorism, or violence
  - Misinformation and disinformation
  - Harassment and bullying
  - Privacy violations (unauthorized sharing of personal data)
  - Spam, scams, or phishing attempts
- Urgent cases (child exploitation, terrorist threats) are reviewed within 24 hours.
- Standard reports are reviewed within 48 hours.
C. Manual Review & Decision
- Content flagged by AI or user reports undergoes human moderation.
- Moderators assess context, intent, and severity before making a decision.
- Users receive a detailed notification of the action taken.
D. Appeals & Correction Mechanism
- Users can appeal moderation decisions within 7 days.
- Appeals are reviewed within 14 days by a different moderator than the one who made the initial decision.
- If an appeal is successful, content is reinstated immediately.
2. Notification Process
Whenever content is removed, restricted, or otherwise actioned, users receive a detailed notification explaining the decision:
Notification Example:
Subject: Content Removal Notification - Lifebonder Moderation Team
Dear [User],
Your recent post/comment on Lifebonder has been removed due to a violation of our Community Guidelines.
Violation Category: [e.g., Child Safety Violation, Hate Speech]
Content Removed: "[Exact text or media preview]"
Reason for Removal: [Explain the rule violated]
You have the right to appeal this decision within 7 days. If you believe this action was incorrect, you can submit an appeal through [Appeal Link].
Appeal Review Timeline:
- Appeal submission: Within 7 days of this notification
- Moderation review: Within 14 days of appeal submission
- Final decision: You will be notified via email/app
Severe Violations (e.g., CSAM, Terrorist Content, Child Exploitation):
- Your account has been permanently banned and reported to authorities as required by law.
- If you believe this is a mistake, contact [appeal email].
Thank you for keeping Lifebonder a safe space.
- Lifebonder Moderation Team
3. Content Removal & Temporary Restrictions
- Severe Violations: Immediate removal, permanent ban, and legal reporting (CSAM, terrorism, threats).
- Moderate Violations: Content removal, temporary bans (24 hours to 7 days).
- Minor Violations: Warning, content restriction, or shadowban.
For all child safety-related violations, accounts are immediately suspended, and reports are sent to law enforcement (INHOPE, Europol, or NCMEC).
4. Transparency & Compliance with the DSA
- All moderation actions are logged and reviewed quarterly.
- Users can access moderation reports and appeals history in their account settings.
- Lifebonder will publish annual transparency reports detailing content moderation statistics, as required under Article 42 of the DSA.