Someone elsenet gave a very cogent explanation for why younger people want to police others, and I suspect it is a major contributing factor: that is their only option on the types of platforms they choose to inhabit. When a platform has little or nothing in the way of privacy and moderation tools, then users cannot control the content they are served. The only way to stop horrifying stuff from splattering their screen -- whatever it is they don't want to see -- is to prevent other people from putting it on the site. So they try to convince the platform owners to ban whole categories of content, and they attack people for sharing content they despise. Because no better option is available to them there.
"Choose another platform with better tools" is only helpful if there is a close equivalent with higher standards. When LiveJournal misbehaved, there were several alternatives including Dreamwidth where folks could do pretty much the same things. But with Twitter, there is no close analog.
Then when people who have learned these habits on boundary-hostile platforms come into other platforms, they bring those bad habits with them. They may not understand that the new platform HAS other options for them to use, let alone how to use those tools to manage their content stream and avoid things they dislike.
These are real problems: attack culture not only upsets people, it also undermines the safety we came to fandom for, and it discourages people from sharing content. So we need to work on them.
Actionable points from this observation:
1) Provide information about privacy and moderation tools on robust platforms like Dreamwidth. Point out that these tools are more effective than trying to convince the whole of online humanity to quit doing certain things. When an individual's bothersome behavior stems from ignorance, information can often solve the problem.
2) When we see people behaving in ways that are problematic, discourage it and recommend alternatives. Humans tend to be contextual creatures and can often, though not always, be convinced to adapt to a local group's customs. A helpful approach is, "Here we don't Y, we X." Like, "On Dreamwidth we don't tell other people what to post, we use the moderation tools (link to instructions) to block out content we don't want to see."
3) When we build new platforms, make sure to include robust privacy and moderation tools. Say we're making a meta warehouse. At minimum it needs a "Safe Search" toggle like search engines often have. Preferably it needs something like AO3's filter tools to block out unwanted content based on tags/warnings (see the first sketch after this list). This would be easy to implement if we use AO3-style database code; I am uncertain whether wiki-style code can do similar stunts.
4) Once we have built such a platform, its user introduction, tool tutorial pages, FAQ list, etc. need to explain how to use the site responsibly and respect other users whose tastes differ. This way people will know how to manage their own use without bothering others -- and if they persist in harassment, they can be suspended or banned.
5) A platform also needs protection from malicious editing; some wikis have a huge problem with this. The AO3 format is highly resistant to bottom-up tampering (see the second sketch after this list) but more vulnerable to top-down tampering, e.g. banning people for posting meta under ambiguously defined rules. We need to make sure that what we create is difficult to destroy, because meta inevitably includes things that upset somebody: it digs into all the different interpretations, headcanons, tropes, etc. that people often disagree about.
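To make point 3 concrete, here is a minimal sketch of tag-based exclusion filtering in Python. Everything in it is hypothetical -- the MetaWork class, the sample tags, and the exclude_tags parameter are illustrations, not AO3's actual code (AO3 itself runs on Ruby on Rails with a far more elaborate tag system). The point is just that once every piece of content carries tags, letting readers screen out what they don't want to see is a simple set operation.

```python
from dataclasses import dataclass, field

@dataclass
class MetaWork:
    """A hypothetical archived meta essay with creator-supplied tags."""
    title: str
    tags: set[str] = field(default_factory=set)

def filter_works(works: list[MetaWork], exclude_tags: set[str]) -> list[MetaWork]:
    """Drop every work that carries any of the reader's excluded tags.

    This mimics AO3-style exclusion filtering: the reader names the
    tags/warnings they never want served, and anything bearing those
    tags simply disappears from their results.
    """
    return [w for w in works if not (w.tags & exclude_tags)]

# A reader who has excluded "character death" meta:
library = [
    MetaWork("Why the finale works", {"spoilers", "character death"}),
    MetaWork("Worldbuilding in season two", {"canon analysis"}),
]
print([w.title for w in filter_works(library, {"character death"})])
# -> ['Worldbuilding in season two']
```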
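And a second sketch for point 5, showing why the AO3 model resists bottom-up tampering: only a work's creator may edit it, so there is nothing for a drive-by vandal to rewrite. The names here are again hypothetical, not any real platform's API, and code like this does nothing about top-down abuse (an administrator deleting works wholesale), which has to be handled by policy and off-site backups instead.

```python
from dataclasses import dataclass

@dataclass
class Essay:
    """A hypothetical archived essay owned by the account that posted it."""
    author: str
    body: str

def edit_essay(essay: Essay, editor: str, new_body: str) -> None:
    """Apply an edit only if it comes from the original author.

    An open wiki lets any logged-in user rewrite a page, which is what
    makes bottom-up vandalism so easy there. Tying edit rights to the
    creator's account closes that door entirely.
    """
    if editor != essay.author:
        raise PermissionError(f"{editor} may not edit {essay.author}'s work")
    essay.body = new_body

essay = Essay(author="meta_writer", body="My reading of the finale...")
edit_essay(essay, "meta_writer", "My revised reading of the finale...")  # allowed
# edit_essay(essay, "drive_by_vandal", "lol no")  # raises PermissionError
```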
TL;DR
The March Meta Matters Challenge is focused on not just new meta, but making sure older meta gets a chance to be read and remain a part of fandom history. Join us in March to start archiving your work!