Issue created Oct 15, 2018 by Agate (@agate), Maintainer

[Meta] Designing the moderation and anti-harassment tools

I feel like we need input and feedback before working on this. Even though we already have #320 (closed) open, I suggest we use this issue as a discussion starter around Funkwhale moderation and anti-harassment (AH) tools.

This issue is a draft of proposed tools and workflows, based on my reflections on the subject and on what is currently present and/or missing (at least IMHO) in the ecosystem. As such, it is probably biased, incomplete, or worse, and your input would be incredibly valuable to help with that.

Everything in this document is open to discussion and will be updated to reflect received feedback. What is not open to discussion, though, is the fact that we need moderation tools. If you don't need moderation tools on a personal level, please don't make the people who actually need them uncomfortable.

Why do we need moderation/AH tools?

Moderation may be helpful for:

  1. Instance owners:
  • a) to ensure they are not legally liable (e.g. if they end up broadcasting copyrighted or illegal content); this is linked to #308 (closed), since storing the licences would make things easier on this side
  • b) to avoid federating with bad or malicious actors (who broadcast low-quality or harmful content, or who impersonate creators, etc.)
  • c) to fight against spam or unsolicited activity or content
  2. End users:
  • a) to control who can access their content and/or activity (listening history, comments, etc.)
  • b) to avoid being exposed to harassment, or to unsolicited or unwanted content

Both personas may have overlapping but distinct concerns, and will sometimes operate at different levels; the tools we design must take that into consideration. A concrete example: instance owners may want to moderate based on low-level information (such as IP addresses), while this usually makes less sense for end users.

A quick note about moderation/AH tools

Note: all of this is based on my personal analysis. If you have documentation and references about this, drop them in the comments!

Based on what exists in various networks and projects, we can see two kinds of tools:

  1. Proactive tools: these tools help you prevent problems before they occur. Typical examples include: locking your account on Mastodon so only your followers can access your content, restricting the audience of your toots on a per-post basis, blocking a list of actors or IPs known to be malicious, etc.
  2. Reactive tools: these tools help you deal with problems when they occur. Typical examples include: reporting problematic content or actors, and blocking or muting problematic actors and/or content.

Ideally, we should have both kinds implemented. Proactive tools are a first line of defence and will reduce the scale of problems when they occur, but reactive tools will always be needed to deal with the problems our proactive tools do not stop.

Apart from that, I'd also like our moderation/AH tools to meet the following requirements:

  • Granularity: affect only the problematic actor and/or content. We do not want to block a 1,000-user node because a single user is harassing someone (unless, of course, the node's moderation team is unresponsive), or block a 10,000-track library because a single track is problematic.
  • Temporality: in the same vein, being able to apply moderation for a limited amount of time is also valuable: if a node is hit by a wave of spam and its moderation team is overwhelmed, it makes sense to mute it for, say, a week and have the mute lifted automatically.
  • Security: this may be obvious, but those tools should improve the life of the people using them, not make it worse. Typically, we should ensure that the act of moderation (such as a block) is not directly visible to the actor being blocked.
  • Reusability: as moderation requires vast amounts of effort, I'd like to offer ways to share this load across different actors / nodes. A common way to do that is block lists, which could be federated: a user could share their blocklist with trusted contacts, or reuse external blocklists (a rough sketch of how these requirements fit together follows this list).
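
To make these requirements a bit more concrete, here is a minimal sketch, in plain Python with illustrative names (none of this exists in the codebase), of how a single moderation rule record could capture granularity, temporality and reusability at once:

```python
# Hypothetical sketch: a moderation rule that targets a narrow scope
# (granularity), can expire automatically (temporality), and can belong to a
# shareable blocklist (reusability). Names are assumptions, not Funkwhale code.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum
from typing import Optional


class TargetType(Enum):
    ACTOR = "actor"      # a single account
    NODE = "node"        # a whole instance / domain
    LIBRARY = "library"  # a shared library
    TRACK = "track"      # a single piece of content


class Action(Enum):
    MUTE = "mute"
    BLOCK = "block"


@dataclass
class ModerationRule:
    target_type: TargetType
    target_id: str
    action: Action
    expires_at: Optional[datetime] = None  # None means the rule is permanent
    blocklist: Optional[str] = None        # name of a shareable blocklist, if any
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_active(self, now: Optional[datetime] = None) -> bool:
        """Expired rules stop applying without any manual action."""
        now = now or datetime.now(timezone.utc)
        return self.expires_at is None or now < self.expires_at


# Example: mute a spam-flooded node for one week, then let the mute lapse on its own.
rule = ModerationRule(
    target_type=TargetType.NODE,
    target_id="spammy.example",
    action=Action.MUTE,
    expires_at=datetime.now(timezone.utc) + timedelta(days=7),
)
assert rule.is_active()
```

The main point is that granularity and temporality are attributes of the same rule, so a shared blocklist is simply a named collection of such rules.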

Proposed tools

Below, you'll find the list of threats described earlier and, for each one, the tools I suggest we implement to deal with them.

For instance owners

Threat: Being legally liable because of hosting / broadcasting content
  • Proactive tools:
    • Do not serve libraries publicly unless they are marked as trusted
    • Shared blocklists of actors / nodes known to share illegal or copyrighted content
    • Content fingerprinting to detect copyrighted content upfront and switch the containing library to private
  • Reactive tools:
    • DMCA / takedown / report form
    • Blocking actors, nodes or specific content (done via !521 (merged))

Threat: Spam / unsolicited content
  • Proactive tools:
    • Closed / invitation-only instance (done)
    • Shared blocklists of IPs / email domains known for bad behaviour
    • Email verification required before posting content
  • Reactive tools:
    • Report form to flag content, actors or nodes
    • Blocking / muting of nodes with spam activity (temporary or permanent)
    • Mass deletion tools to purge spamming actors
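
As a rough illustration of where these instance-level policies would apply (hypothetical function and rule shape, not Funkwhale's actual federation code), the decision point boils down to looking up the owner's rules for the sending domain when an activity arrives:

```python
# Hedged sketch of an enforcement point for incoming federation activity.
# Rule shape and return values are assumptions made for this example only.
from typing import Iterable


def check_incoming_activity(sender_domain: str, rules: Iterable[dict]) -> str:
    """Return "reject", "quarantine" or "accept" for an activity from sender_domain."""
    for rule in rules:
        if rule["domain"] != sender_domain:
            continue
        if rule["action"] == "block":
            return "reject"       # drop the activity entirely
        if rule["action"] == "mute":
            return "quarantine"   # keep it, but hide it from local users
    return "accept"


# Example: a blocked domain is rejected, an unknown one is accepted.
instance_rules = [{"domain": "spammy.example", "action": "block"}]
assert check_incoming_activity("spammy.example", instance_rules) == "reject"
assert check_incoming_activity("friendly.example", instance_rules) == "accept"
```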

For end users

Threat: Unauthorized access to content
  • Proactive tools:
    • Configurable visibility for content and activity occurring on the platform (private, followers, unlisted, etc.)
    • Locked accounts with manual approval of followers to limit how activity is broadcast
    • Shared blocklists of follow bots or nodes known for their bad behaviour
  • Reactive tools:
    • Content deletion
    • Manually revoking follows from specific actors or nodes
    • Blocking of actors or nodes

Threat: Harassment, abuse, aggression
  • Proactive tools:
    • Shared blocklists of actors or nodes known for bad behaviour
    • Non-indexed content to limit the discovery of targets by attackers using search
  • Reactive tools:
    • Report form to flag content, actors or nodes
    • Blocking / muting of nodes with bad behaviour (temporary or permanent)

Threat: Unsolicited/unwanted content
  • Proactive tools:
    • Shared mute / filter lists of actors or nodes
    • Content filters to hide content matching given criteria (partially done via !618 (merged))
  • Reactive tools:
    • Muting of nodes / actors (temporary or permanent)
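
For the content filters mentioned just above, here is a minimal sketch of what per-user filtering could look like (illustrative names only; this is not the implementation from !618):

```python
# Hypothetical per-user content filter: hide any track whose chosen field
# matches one of the user's filters. Field names are assumptions.
from dataclasses import dataclass


@dataclass
class ContentFilter:
    field: str  # e.g. "artist"
    value: str  # e.g. "Unwanted Artist"


def is_hidden(track: dict, filters: list) -> bool:
    """Return True if the track matches any of the user's content filters."""
    return any(
        track.get(f.field, "").casefold() == f.value.casefold() for f in filters
    )


# Example: a user who filtered out one artist no longer sees that artist's tracks.
user_filters = [ContentFilter(field="artist", value="Unwanted Artist")]
assert is_hidden({"artist": "Unwanted Artist", "title": "Song"}, user_filters)
assert not is_hidden({"artist": "Someone Else", "title": "Song"}, user_filters)
```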

As you can see, there is plenty of work to do: figuring out whether those problems are real and well defined, whether the proposed solutions are actually viable, and, finally, doing the implementation work!

Because of the insane amount of work this represents, we'll likely have to focus on a few areas and release new features incrementally. As explained earlier, proactive tools are not enough by themselves, so focusing on reactive tools first may be the better option.

In my opinion, having block lists and a report/flagging system would be a good start, but I don't want to commit to anything before I hear what you have to say :)
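
To give an idea of the scope of such a starting point, a minimal report record could look something like the sketch below (field names are assumptions, not an actual schema proposal):

```python
# Hypothetical minimal report/flag record for a reactive-tools-first approach.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Report:
    reporter: str                # actor who files the report
    target_type: str             # "actor", "node", "track", ...
    target_id: str
    reason: str                  # free-form summary from the reporter
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    resolved_at: Optional[datetime] = None  # set by a moderator once handled

    def resolve(self) -> None:
        """Mark the report as handled by a moderator."""
        self.resolved_at = datetime.now(timezone.utc)


# Example: a user flags a spamming actor; a moderator later marks it handled.
report = Report(
    reporter="alice@demo.example",
    target_type="actor",
    target_id="spammer@spammy.example",
    reason="Unsolicited promotional uploads",
)
report.resolve()
assert report.resolved_at is not None
```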

Links

  • Fediverse thread: https://mastodon.eliotberriot.com/@funkwhale/100901668895629932
Edited Feb 20, 2019 by Agate