Civilised Discourse

We seek a high-level design that helps hub leaders encourage appropriate discussion and discourage unwanted behaviours.

In other words, successful moderation is not mainly about banning user accounts and blocking spam; it is about building the desired behaviour on a foundation of gentle and positive guidance. It includes giving community leaders the tools to act in ways that match how they would manage in-person discussions: telling people what is expected on their way in, putting up signs that everyone can see, allowing people to take a break, and taking someone aside for a quiet word.

This design will include a set of features, some of which may be classified as:

  • positive: encouraging desired behaviours
  • negative: moderation, limits, damage control

Summary

This work provides a high-level sketch of a design framework on which to build a range of features that support community leaders in their tasks of moderation and community management.

This plan includes a list of suggested features that may be attached to this framework, without going into the details of each one. The idea here is to provide a starting point, a way of thinking about how to promote the desired behaviours, and a way of organising how individual features can be added to the system one at a time.

Research

We look for role models among platforms which have similar needs to PubHubs. A distinguishing trait of PubHubs is that it is designed for independent communities, each with its own leaders, rules, and norms, where each conversation is subject to a particular hub's authority. Among such platforms, we take primary inspiration from the open-source forum software "Discourse", because it is respected for its successful approach, is well documented, and has already been studied.

We also draw insights from other platforms, each with its own balance of needs. However, many well-known platforms focus on public interactions, and their most pressing moderation needs differ substantially from those of PubHubs.

We spoke with New Public, who have studied the behavioural and moderation design of many social platforms and assisted by sharing their relevant research.

See Research: Discourse.org

Role Model: Discourse.org

Research suggested that Discourse.org provides our best role model.

The idea of "Trust Levels" is the foundation of the approach used in Discourse. It creates a path of gradually increasing trust, power, and responsibility for each person. Many platforms are built with a hard division between the class of "users" and the class of "moderators" and/or "admins": everyone in the class of ordinary users is treated the same, however new or experienced they are, and promotion to "moderator" or "admin" is rare and difficult. In Discourse, the more gradual approach has been found successful.
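
As a minimal illustration of such a ladder (the level names follow Discourse's five levels; the Rust type and its method are our own sketch, not PubHubs code), the levels can be modelled as an ordered type so that "at least level X" checks become simple comparisons:

    // Illustrative sketch only: level names follow Discourse's five
    // levels, but this type is hypothetical, not part of PubHubs.
    #[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
    pub enum TrustLevel {
        New = 0,     // just signed up; sandboxed
        Basic = 1,   // has learned the basics
        Member = 2,  // participates regularly
        Regular = 3, // trusted; may help moderate
        Leader = 4,  // manages the community
    }

    impl TrustLevel {
        /// "At least level X" checks become one ordered comparison.
        pub fn at_least(self, required: TrustLevel) -> bool {
            self >= required
        }
    }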

Jeff Atwood writes in Understanding Discourse Trust Levels:

The user trust system is a fundamental cornerstone of Discourse. Trust levels are a way of…

  • Sandboxing new users in your community so that they cannot accidentally hurt themselves, or other users while they are learning what to do.
  • Granting experienced users more rights over time, so that they can help everyone maintain and moderate the community they generously contribute so much of their time to.

Features Attached to Trust Levels

Here we list a suggested set of features relevant to the Trust Levels framework. Most of these features grant a power or impose a restriction on a participant depending on their trust level. They are drawn mainly from the features of Discourse.org that are also useful in the context of PubHubs.

In PubHubs, these features may be developed incrementally over time, starting with just a small number of them. The idea is that a community leader relies primarily on social means (talking) to achieve their goals. Each new feature should make a community leader's life a little easier by automating a task they previously handled through more awkward technical means or through social means. A successful system can be built with a few features initially and expanded as new needs arise or as usage increases. Each new technical feature can then be hooked into the Trust Levels framework, as sketched after the list below.

  • system sends welcome/transition messages (sign up, join a hub, join a room, change of TL)
  • can edit welcome/transition messages
  • can post "official" messages
  • automatically join some rooms (list curated by Leaders)
  • restricted list of rooms to join, to simplify the choice (list curated by Leaders)
  • can see a full list of users (vs. only those who already spoke)
  • can start a private conversation
  • change of user's avatar and nickname must be approved (by a Leader?)
  • rate limits for joining rooms and posting messages (and links, images, redacting, etc.)
  • can create rooms (and update and delete)
  • can use moderator tools
  • can redact messages sent by others
  • can curate lists of auto-join and may-join rooms
  • can approve change of user's avatar and nickname
  • can flag/report messages or users
  • flagged/reported messages or users are automatically hidden or marked as suspect
  • can view and respond to flags/reports
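
To make the hook-in concrete, here is a hedged sketch, building on the TrustLevel type above, of how a feature could be gated on a minimum trust level. The feature names and the particular level assignments are illustrative only; the real mapping would be decided per hub:

    // Hypothetical sketch: each technical feature declares the minimum
    // trust level it requires, and the check is a single comparison.
    pub enum Feature {
        StartPrivateConversation,
        CreateRoom,
        RedactOthersMessages,
        ReviewFlags,
    }

    pub fn required_level(feature: &Feature) -> TrustLevel {
        // An example assignment; a hub would choose its own mapping.
        match feature {
            Feature::StartPrivateConversation => TrustLevel::Basic,
            Feature::CreateRoom => TrustLevel::Member,
            Feature::RedactOthersMessages => TrustLevel::Regular,
            Feature::ReviewFlags => TrustLevel::Leader,
        }
    }

    pub fn is_allowed(user_level: TrustLevel, feature: &Feature) -> bool {
        user_level.at_least(required_level(feature))
    }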

The next section gives an example of how these powers and restrictions could be assigned based on trust levels.

Design: PubHubs Trust Levels

Let us map the Discourse Trust Levels system to PubHubs.

The design for Trust Levels describes how levels are assigned, changed, and communicated, and lists the responsibilities, powers, and restrictions of a person at each level. Some of these are enacted socially, through documentation and through guidance from leaders, moderators, and community norms. Others are managed technically by the platform.
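
As one hedged illustration of the technically managed part, assignment of the lowest levels could be driven by simple activity counters, much as Discourse does. The record shape and thresholds below are invented placeholders, not values from the PubHubs design:

    // Hypothetical activity record; field names and thresholds are
    // placeholders, not values from the PubHubs design.
    pub struct Activity {
        pub days_visited: u32,
        pub messages_read: u32,
        pub messages_posted: u32,
        pub flags_against: u32,
    }

    /// Suggest a promotion from New to Basic once minimal participation
    /// has been shown; higher levels would be granted by Leaders rather
    /// than automatically.
    pub fn suggested_level(current: TrustLevel, a: &Activity) -> TrustLevel {
        if current == TrustLevel::New
            && a.days_visited >= 2
            && a.messages_read >= 30
            && a.messages_posted >= 1
            && a.flags_against == 0
        {
            TrustLevel::Basic
        } else {
            current
        }
    }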

See PubHubs Trust Levels Design

Next Steps

Next steps towards implementing this plan:

  1. Define and implement a storage representation for the trust level of each user.
    • Decide how it fits with the existing 'administrator' privilege and with Matrix protocol 'power levels' (a storage sketch follows this list).
  2. Choose a (very) short list of the first few features to implement. Concentrate on:
    • ones that guide new users towards good behaviour;
    • ones that a human facilitator cannot easily perform without automation;
    • ones that already exist and can simply be "hooked in" to the Trust Levels;
    • ones that make a good impression on people evaluating the system (as users and as administrators).
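
As a starting point for step 1, here is a sketch of one possible storage shape: a custom Matrix state event per user, keyed by the user's Matrix ID. The event type name "org.pubhubs.trust_level" and the content fields are invented for this sketch; any real choice must be reconciled with the existing 'administrator' privilege and with Matrix 'power levels'.

    use serde::{Deserialize, Serialize};

    // Hypothetical storage shape: a custom Matrix state event per user,
    // with the user's Matrix ID as the state_key. The event type name
    // "org.pubhubs.trust_level" is invented for this sketch.
    #[derive(Serialize, Deserialize)]
    pub struct TrustLevelContent {
        pub level: u8,           // 0..=4, mirroring the TrustLevel ladder
        pub assigned_by: String, // Matrix ID of the Leader, or "system"
    }

Storing the trust level separately from m.room.power_levels would keep the finer-grained trust ladder independent of the coarse admin/moderator power levels, while still allowing a hub to derive power levels from trust levels where that is useful.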