Research: Discourse.org

Meeting with New Public

I reached out to New Public for assistance. They have studied the moderation and related behavioural design of many social platforms. I began with the following Problem Statement:

The code I am currently porting from Matrix's "Mjölnir/Draupnir" moderation tools is simplistic (little more than managing bans); we need to look beyond that.

In my thinking so far, it is basically a question of how to design a moderation system that is simple to start with but grounded in real best practices, given that we are starting from little and have limited resources. The system will have both technical controls and documented "policy" controls. What are the building blocks we need, and how can we arrange them into a plan, with stages from simple to more sophisticated?

I can think of many building blocks, big and small, as individual software features: Discourse's "trust levels", progressive "lock-down levels" to handle a spam attack, temporary or permanent bans, the ability to require a user to agree to or disclose something before they may continue, the ability to trace where and what a user said, redaction of unacceptable messages, and more. But all these are just scattered blocks. We need a way to arrange them into a cohesive framework.

One or two of the building blocks for us are going to relate to PubHubs-specific design features (pseudonyms, selective disclosure of attributes). However I strongly suspect the overall shape of it should not be special.

Can we share a scheme that is also used in other projects, rather than inventing our own unique one? That would be more efficient to design and plan, and moderators would then see the same familiar patterns and options across different social platforms.

We met to discuss this with New Public in December 2023, following which they shared some of their recent research. The most relevant of this was their research into how different social platforms structure their moderation and related facilities to achieve the behaviours they want to see, such as civilised discourse.

Platform Facilities to Encourage Civilised Discourse

Sources include:

Learning from Discourse in particular, the following facilities could translate usefully to the context of PubHubs.

  1. Progressive responsibility and privilege:

    • trust levels
  2. Visibility, education:

    • user tips (pop up in the UI to guide new users)
    • prominent info about rules/norms like this FAQ section
    • admin team (and their roles) publicly visible
    • “Just in time messaging” to remind people of community rules/norms the first few times they post
    • Callouts indicating a new user / first post (to help people know to welcome newbies)
  3. Moderation (damage control) (more in: Norms: Features & Patterns Audit):

    • Users can flag posts/users (subject to rate limits), and posts flagged multiple times are hidden immediately – useful for users to mark spam etc. quickly, with moderator intervention possibly coming later (see the sketch after this list)
    • User restrictions: Slow mode (1 post every N minutes), Silence mode, Suspended
    • Chat/topic restrictions (temporary/permanent): change to private, unlisted
    • Whisper (discuss privately between mods and invited others) within a public discussion
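
As a rough illustration of the flagging behaviour mentioned in item 3, here is a minimal sketch in TypeScript. All names and thresholds are hypothetical assumptions for illustration, not Discourse's actual implementation: a post is hidden automatically once enough distinct users flag it, with a simple per-user rate limit, and a moderator can review later.

```typescript
// Hypothetical sketch of flag-based auto-hiding; names and thresholds are
// illustrative, not taken from Discourse.

interface Post {
  id: string;
  hidden: boolean;
  flaggedBy: Set<string>; // user ids who have flagged this post
}

const AUTO_HIDE_THRESHOLD = 3;         // assumed: flags needed before hiding
const MAX_FLAGS_PER_USER_PER_DAY = 10; // assumed: simple per-user rate limit

function flagPost(post: Post, flaggerId: string, flagsUsedToday: number): void {
  if (flagsUsedToday >= MAX_FLAGS_PER_USER_PER_DAY) {
    throw new Error("flag rate limit exceeded");
  }
  post.flaggedBy.add(flaggerId); // a Set ignores duplicate flags from one user
  if (post.flaggedBy.size >= AUTO_HIDE_THRESHOLD && !post.hidden) {
    post.hidden = true;          // hide immediately; a moderator reviews later
  }
}
```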

Discourse Trust Levels (TL)

Perhaps the most respected and far-reaching facility is "trust levels", a system by which a user progressively gains more responsibility and privilege in a community. The progression can be partly automated and partly manual, and can be reversed if needed.

Discourse blog post: Understanding Discourse Trust Levels

In Discourse, a user can progress through five Trust Levels:

  • New (TL0), Basic (TL1), Member (TL2), Regular (TL3), Leader (TL4)
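
For later reference, the five levels can be pictured as a simple ordered enumeration. This is an illustrative TypeScript sketch only, not code from Discourse or PubHubs; the later sketches in this page re-use it.

```typescript
// Illustrative only: the five Discourse trust levels as an ordered enum.
enum TrustLevel {
  New = 0,     // TL0
  Basic = 1,   // TL1
  Member = 2,  // TL2
  Regular = 3, // TL3
  Leader = 4,  // TL4
}
```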

There is also a setting for restrictions/simplifications on a user's "first day".

Also a "bootstrap mode" to promote growth by having less restrictions for new members while a group is small, and tighter restrictions after it reaches 50 members.

Trust levels are visible – displayed on a user's account profile, etc.

Transitions and the associated rules/norms/expectations are communicated to the user by automated private messages.

There is a section in the blog post, "How do users learn about the trust system?" – mainly through automated private messages explaining the system, plus some other tools.

TL Restrictions

Example for Trust Level 0 (New – a visitor who has just created an account). Such a user cannot:

  • Send personal messages to other users
  • Flag posts
  • Post more than 1 image, or any attachments
  • Post more than 2 hyperlinks or mention more than 2 users in a post
  • Post more than 3 topics or 10 replies
  • Edit their own posts after more than 24 hours

The UI is also simplified; at least, the forbidden actions are hidden.
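
These per-level limits could be captured as a small table of restrictions that the software consults before allowing an action. The sketch below is one possible shape, re-using the TrustLevel enum above; only the TL0 numbers are taken from the list here, and the field names are assumptions.

```typescript
// Illustrative sketch of per-trust-level restrictions; only the TL0 values
// come from the Discourse documentation summarised above.
interface Restrictions {
  canSendPersonalMessages: boolean;
  canFlag: boolean;
  maxImagesPerPost: number;
  maxAttachmentsPerPost: number;
  maxLinksPerPost: number;
  maxMentionsPerPost: number;
  maxTopics: number;
  maxReplies: number;
  editWindowHours: number;
}

const RESTRICTIONS: Partial<Record<TrustLevel, Restrictions>> = {
  [TrustLevel.New]: {
    canSendPersonalMessages: false,
    canFlag: false,
    maxImagesPerPost: 1,
    maxAttachmentsPerPost: 0,
    maxLinksPerPost: 2,
    maxMentionsPerPost: 2,
    maxTopics: 3,
    maxReplies: 10,
    editWindowHours: 24,
  },
  // Higher levels would relax or remove these limits.
};
```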

TL Transitions

Transitions from one trust level to the next are mostly upward, and mostly automated.

Get to trust level 1 by…

  • automated: on entering 5 topics, reading 30 posts, and reading for 10 minutes
  • or manual promotion (by a Leader/mod/admin)
  • or by default in the "bootstrap mode"

Metrics for automatic transition to trust level 2...

  • visit on 15 days, give and receive at least 1 "like", reply to 3 different topics, and more.
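
One way to picture this automation: each level up to TL2 has a set of activity thresholds, and a periodic check promotes any user who meets all of them. The following is a hedged sketch using only the numbers listed above; the structure and names are assumptions, not Discourse's code, and Discourse checks more metrics than shown here.

```typescript
// Illustrative sketch of threshold-based automatic promotion (TL0→TL1, TL1→TL2).
interface ActivityStats {
  topicsEntered: number;
  postsRead: number;
  minutesReading: number;
  daysVisited: number;
  likesGiven: number;
  likesReceived: number;
  topicsRepliedTo: number;
}

function meetsTL1(s: ActivityStats): boolean {
  return s.topicsEntered >= 5 && s.postsRead >= 30 && s.minutesReading >= 10;
}

function meetsTL2(s: ActivityStats): boolean {
  return (
    s.daysVisited >= 15 &&
    s.likesGiven >= 1 &&
    s.likesReceived >= 1 &&
    s.topicsRepliedTo >= 3
  );
}

function autoPromote(current: TrustLevel, s: ActivityStats): TrustLevel {
  if (current === TrustLevel.New && meetsTL1(s)) return TrustLevel.Basic;
  if (current === TrustLevel.Basic && meetsTL2(s)) return TrustLevel.Member;
  return current; // TL3 uses a rolling window; TL4 is manual only (see below)
}
```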

Note how the Discourse metrics so far focus on positive interactions. They do not use "has not been reported" or "has not had spam detected". This may help avoid giving bad actors an incentive to hurt someone by reporting them or "baiting" them into behaving badly.

Trust level 3 metrics do include limits on reported/confirmed bad behaviour. Trust level 3 has a slightly different automation, with metrics continuously evaluated over the last 100 days, and "unlike other trust levels, you can lose trust level 3 status if you dip below these requirements".
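
The TL3 behaviour could be sketched as a check re-evaluated over a rolling window, able both to promote and to demote. Only the 100-day window, the inclusion of flag-related limits, and the possibility of losing TL3 come from the text above; the metric names and thresholds below are assumptions for illustration.

```typescript
// Illustrative sketch: TL3 is re-evaluated continuously, so it can be lost again.
interface RollingStats {
  windowDays: number;       // e.g. the last 100 days
  daysVisited: number;
  flagsAgainstUser: number; // confirmed bad behaviour counts against TL3
}

function qualifiesForTL3(s: RollingStats, maxFlags: number, minVisits: number): boolean {
  return s.flagsAgainstUser <= maxFlags && s.daysVisited >= minVisits;
}

function reevaluateTL3(current: TrustLevel, s: RollingStats): TrustLevel {
  const ok = qualifiesForTL3(s, /* maxFlags */ 5, /* minVisits */ 50); // assumed thresholds
  if (current === TrustLevel.Member && ok) return TrustLevel.Regular;
  if (current === TrustLevel.Regular && !ok) return TrustLevel.Member; // can lose TL3
  return current;
}
```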

Trust level 4 is reached only by manual promotion.

In this way the Discourse Trust Levels automation shepherds people up towards TL2 or TL3 depending on each person's level of commitment, keeping TL4 as a distinct level with human gatekeepers.

Automation is key to being able to support large communities with minimal effort. However, there is human oversight and the ability to override the system.

Relevance to PubHubs

The system of capabilities and guidance based around Trust Levels seems valuable and relevant to PubHubs. We take the idea and translate it to PubHubs in the next section: PubHubs Trust Levels Design.