Safety

Moderation System

Updated 1/19/2026

Postcard uses a multi-layered approach to ensure platform safety.

1. User Reporting

Any user can report a post. This creates an entry in the post_reports table and immediately adds the item to the Admin Moderation Queue.
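The reporting flow above can be sketched as follows. This is a minimal in-memory illustration, not the production implementation; the `post_reports` table and Admin Moderation Queue names come from this doc, while the `report_post` helper and its fields are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PostReport:
    post_id: str
    reporter_id: str
    reason: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# In-memory stand-ins for the post_reports table and the Admin Moderation Queue.
post_reports: list[PostReport] = []
moderation_queue: list[str] = []

def report_post(post_id: str, reporter_id: str, reason: str) -> PostReport:
    """Record a user report and immediately enqueue the post for admin review."""
    report = PostReport(post_id, reporter_id, reason)
    post_reports.append(report)
    if post_id not in moderation_queue:  # a post is queued once, however many reports it gets
        moderation_queue.append(post_id)
    return report
```

Note that repeated reports on the same post each create a `post_reports` row but do not duplicate the queue entry.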

2. Gemini AI Analysis

We use Google's Gemini Pro Vision model to scan content automatically.

  • Trigger: New posts are queued and scanned in a daily batch job.
  • Criteria: Scanned for nudity, violence, hate speech, and harassment.
  • Outcome: Content is assigned a safety score. High-risk items are auto-flagged for human review.
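The outcome step can be sketched as a simple triage function. The per-category score shape, the 0–1 scale, and the auto-flag threshold below are illustrative assumptions, not the service's actual values:

```python
AUTO_FLAG_THRESHOLD = 0.8  # illustrative cutoff, not the real configured value

def triage(scores: dict[str, float]) -> tuple[float, bool]:
    """Reduce per-category risk scores (nudity, violence, hate speech,
    harassment) to a single safety score and an auto-flag decision."""
    risk = max(scores.values())  # the worst category dominates the score
    return risk, risk >= AUTO_FLAG_THRESHOLD

risk, flagged = triage({"nudity": 0.1, "violence": 0.92,
                        "hate_speech": 0.0, "harassment": 0.2})
# flagged is True here, so the post would go to human review
```

Taking the maximum (rather than, say, an average) reflects the policy stated above: one high-risk category is enough to warrant human review.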

3. Admin Actions

Administrators can take two primary actions:

  • Dismiss: The report is invalid; keep the content.
  • Remove: The content is deleted, and the user may be issued a warning or ban.
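A sketch of how these two actions might resolve a report. The strike-based escalation from warning to ban is a hypothetical policy added for illustration; the doc only says a warning or ban "may" follow removal:

```python
from enum import Enum

class Action(Enum):
    DISMISS = "dismiss"
    REMOVE = "remove"

def resolve_report(action: Action, prior_strikes: int) -> dict:
    """Apply an admin decision to a reported post."""
    if action is Action.DISMISS:
        # Report was invalid: content stays up, no penalty.
        return {"content_deleted": False, "penalty": None}
    # Content is removed; escalate repeat offenders (hypothetical threshold).
    penalty = "ban" if prior_strikes >= 2 else "warning"
    return {"content_deleted": True, "penalty": penalty}
```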

4. User Bans

Banning a user immediately revokes their session and prevents them from logging in or viewing any content.
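The ban semantics above can be sketched in a few lines. The data structures and helper names are hypothetical stand-ins for the real session store:

```python
banned_users: set[str] = set()
active_sessions: dict[str, set[str]] = {}  # user_id -> active session tokens

def ban_user(user_id: str) -> None:
    """Ban a user: revoke every active session and block future logins."""
    banned_users.add(user_id)
    active_sessions.pop(user_id, None)  # immediate session revocation

def can_log_in(user_id: str) -> bool:
    """Login (and content access) checks happen against the ban list."""
    return user_id not in banned_users
```

The key property is that revocation is immediate: the user's existing sessions are invalidated in the same step as the ban, rather than expiring naturally.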