Talk:Community health initiative/Archives/2018

Anti-Harassment Tools team goals for January-March 2018

Hello all! Now that the Interaction Timeline beta is out and we're working on the features to get it to a stable first version (see phab:T179607), our team has begun drafting our goals for the next three months, through the end of March 2018. Here's what we have so far:

  • Objective 1: Increase the confidence of our admins for resolving disputes
    • Key Result 1.1: Allow wiki administrators to understand the sequence of interactions between two users so they can make an informed decision by adding top-requested features to the Interaction Timeline.
    • Key Result 1.2: Allow admins to apply appropriate remedies in cases of harassment by implementing more granular types of blocking.
  • Objective 2: Keep known bad actors off our wikis
    • Key Result 2.1: Consult with Wikimedians about shortcomings in MediaWiki’s current blocking functionality.
    • Key Result 2.2: Keep known bad actors off our wikis by eliminating workarounds for blocks.
  • Objective 3: Reports of harassment are higher quality while less burdensome on the reporter
    • Key Result 3.1: Begin research and community consultation on English Wikipedia for requirements and direction of the reporting system, for prototyping in Q4 and development in Q1 FY18-19.

Any thoughts or feedback, either about the contents or the wording I've used? I feel pretty good about these (they're aggressive enough for our team of 2 developers) and feel like they reflect the right priorities to work on.

Thank you! — Trevor Bolliger, WMF Product Manager 🗨 22:41, 7 December 2017 (UTC)

Update: We've decided to punt one of the goals to Q4. Here are the updated goals:
  • Objective 1: Increase the confidence of our admins for resolving disputes
    • Key Result 1.1: Allow wiki administrators to understand the sequence of interactions between two users so they can make an informed decision by adding top-requested features to the Interaction Timeline.
    • Key Result 1.2: Allow admins to apply appropriate remedies in cases of harassment by beginning development on more granular types of blocking.
    • Key Result 1.3: Consult with Wikimedians about shortcomings in MediaWiki’s current blocking functionality in order to determine which improvements to existing blocks and new types of blocking our team should implement in the first half of 2018.
  • Objective 2: Reports of harassment are higher quality while less burdensome on the reporter
    • Key Result 2.1: Begin research and community consultation on English Wikipedia for requirements and direction of the reporting system, for prototyping in Q4 and development in Q1 FY18-19.

Trevor Bolliger, WMF Product Manager 🗨 01:36, 20 December 2017 (UTC)

I suggest following the advice at writing clearly. Does point 1.1 actually mean "Add some highly requested features to the Interaction Timeline tool so that wiki administrators can make an informed decision with an understanding of the sequence of interactions between two users"? Or will administrators add something to the timeline tool content? --Nemo 13:53, 26 December 2017 (UTC)
The Anti-Harassment Tools software development team at the WMF will add the new features. I format these team goals with the desired outcome first, to help us keep in mind that our software should serve users, and that we're not just building software for the sake of writing code. — Trevor Bolliger, WMF Product Manager 🗨 18:42, 2 January 2018 (UTC)

Anti-Harassment Tools status updates (Q2 recap, Q3 preview, and annual plan tracking)

Now that the Anti-Harassment Tools team is 6 months into this fiscal year (July 2017 - June 2018), I wanted to share an update on where we stand with both our 2nd Quarter goals and our Annual Plan objectives, as well as provide a preview of our 3rd Quarter goals. There's a lot of information, so you can read the in-depth version at Community health initiative/Quarterly updates or just these summaries:

Annual plan summary

The annual plan was decided before the full team was even hired and is very ambitious and optimistic. Many of the objectives will not be achieved due to team velocity and newer prioritization, but we have still delivered some value and anticipate continued success over the next six months. 🎉

Over the past six months we've made some small improvements to AbuseFilter and AntiSpoof and are currently in development on the Interaction Timeline. We've also made progress on work not included in these objectives: some Mute features, as well as allowing users to restrict which user groups can send them direct emails.

Over the next six months we'll conduct a cross-wiki consultation about (and ultimately build) Blocking tools and improvements and will research, prototype, and prepare for development on a new Reporting system.

Q2 summary

We were a bit ambitious, but we're mostly on track for all our objectives. The Interaction Timeline is on track for a beta launch in January, the worldwide Blocking consultation has begun, and we've just wrapped up work on stronger email preferences. 💌

We decided to stop development on the AbuseFilter, but are ready to enable ProcseeBot on Meta wiki if desired by the global community. We've also made strides in how we communicate on-wiki, which is vital to all our successes.

Q3 preview

From January through March our team will work on getting the Interaction Timeline into a releasable shape, continue the blocking consultation, begin development on at least one new blocking feature, and begin research into an improved harassment reporting system. 🤖

Thanks for reading! — Trevor Bolliger, WMF Product Manager 🗨 01:29, 20 December 2017 (UTC)

Do I understand correctly that an "interaction timeline" tool has become the main focus of the project for an extended number of months? It's a bit weird: the idea that interaction history between two users has such a prime importance makes it look like we're encouraging users to get personal or that conflict resolution is actually a divorce tribunal. --Nemo 13:58, 26 December 2017 (UTC)
'Evaluation' is one of the four focus areas for our team's work, in addition to Detection, Reporting, and Blocking. We have found that many reported cases of harassment are so complex that administrators or other users will not investigate or get involved because it is too much of an (often thankless) time commitment. We believe the Interaction Timeline will decrease the effort required to make an accurate assessment, so more cases will be properly handled. More information on what led us to prioritize this project can be found at Community_health_initiative/Interaction_Timeline#Project_information. — Trevor Bolliger, WMF Product Manager 🗨 18:42, 2 January 2018 (UTC)

Pet (stalking) projects

One problem that seems to pop up quite often is that some otherwise good user has a pet project, which sometimes is about stalking some person off-wiki. Often that person has done something semi-bad or fully stupid. It is very tempting to give examples, but I don't think that is wise. Those contributors seem to focus more on collecting bad stuff about the persons in their biographies than on writing real biographies. Asking the users to stop that behaviour usually does not work at all. Giving the person a topic ban could work, but initiating a process like that would create a lot of anger and fighting.

So how can such a situation be solved? You want the user to keep contributing, but to stop stalking the off-wiki person, and in such a way that you don't ignite further tension. This kind of situation can be solved by blocking the user, but I don't believe that is what we really want to do.

I've been wondering if the situation could be detected by inspecting the sentiment of the page itself, as it seems like those users use harsh language. If the language gets too harsh, then the page can be flagged as such, or even better, the contributions in the history can be flagged. Lately it seems like some of them have moderated their language, but shifted to cherry-picking their references instead. That makes it harder to identify what they are doing, as it is the external page that must be the target for a sentiment analysis. In this case it is the external page that should somehow be flagged, but it is still the user who adds the questionable reference.
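
To illustrate what such a check could look like, here is a minimal sketch using an off-the-shelf sentiment scorer (NLTK's VADER). The threshold and the flagging step are purely hypothetical and not an existing MediaWiki feature.

```python
# Minimal sketch: flag contributions whose added text reads as very negative.
# Uses NLTK's VADER scorer; the -0.6 threshold and the flagging behaviour are
# hypothetical illustrations, not an existing MediaWiki feature.
# Setup: pip install nltk; then nltk.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
HARSH_THRESHOLD = -0.6  # compound score ranges from -1.0 (hostile) to +1.0 (friendly)

def flag_harsh_revisions(revisions):
    """Yield (rev_id, score) for revisions whose added text scores as strongly negative."""
    for rev_id, added_text in revisions:
        score = analyzer.polarity_scores(added_text)["compound"]
        if score <= HARSH_THRESHOLD:
            yield rev_id, score

# Example with made-up revision data:
sample = [
    (101, "Thanks for the careful sourcing here."),
    (102, "This person is a pathetic liar and everyone knows it."),
]
for rev_id, score in flag_harsh_revisions(sample):
    print(f"revision {rev_id} flagged (compound sentiment {score:.2f})")
```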

Another idea that could work is to mark a page so it starts to use some kind of rating system, make it possible for any user to activate the system, and then make it impossible for the involved stalking user to remove it. Imagine someone turns the system on; then it can only be turned off by an admin when the rating is good enough. There would be no user to blame; someone has simply requested ratings on the page. It would be necessary to have some mechanism to stop the stalking user (or friends) from gaming the system. A simple mechanism could be to block contributing users from giving ratings simply by inspecting the IP address. The weight of a given rating should depend on some overall credibility, so a newcomer would be weighted rather little while an old-timer would be weighted more.

Both could be merged by using the sentiment rating as a prior rating for the article. Other means could also be used to set a prior rating. — Jeblad 01:52, 27 December 2017 (UTC)
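
For illustration, here is a rough sketch of how the two ideas could combine: a sentiment-derived score seeds the prior rating for the page, and user ratings are then folded in with weights based on each rater's standing. The weights, thresholds, and the reputation formula below are all hypothetical.

```python
# Sketch of a reputation-weighted page rating seeded by a sentiment-derived prior.
# The weights, thresholds, and the reputation() formula are all hypothetical.

def sentiment_prior(compound_score: float) -> float:
    """Map a sentiment score in [-1, 1] to a prior page rating in [0, 1]."""
    return (compound_score + 1.0) / 2.0

def reputation(edit_count: int, account_age_days: int) -> float:
    """Toy credibility weight: old-timers count for more than newcomers."""
    return min(1.0, edit_count / 500.0) * min(1.0, account_age_days / 365.0)

def page_rating(prior: float, prior_weight: float, ratings) -> float:
    """Weighted average of user ratings (each in [0, 1]) around the prior.

    `ratings` is an iterable of (value, rater_weight) pairs; raters excluded
    for gaming (e.g. by IP inspection) would be filtered out before this point.
    """
    total, weight = prior * prior_weight, prior_weight
    for value, rater_weight in ratings:
        total += value * rater_weight
        weight += rater_weight
    return total / weight

# Example: a harshly worded page (prior 0.2) rated by a newcomer and an old-timer.
prior = sentiment_prior(-0.6)                 # -> 0.2
raters = [
    (0.9, reputation(10, 30)),                # newcomer: tiny weight
    (0.3, reputation(2000, 3000)),            # old-timer: full weight
]
print(round(page_rating(prior, prior_weight=1.0, ratings=raters), 2))  # ~0.25
```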

@Jeblad: I moved this section here from Talk:Community health initiative/Blocking tools and improvements because it was not on-topic about blocking. It has more to do with other areas of our work, such as Detection or Reporting.
Aside from harassment, I agree that we could use deeper automated content analysis and understanding across all our wiki pages. Which pages are too complex for a standard reading level? Which pages seem promotional (and not encyclopedic)? Which pages are attack pages (like you suggested)? This type of system is outside of our scope, as our team has no Natural Language Processing software engineers.
The AbuseFilter feature is often used to identify unreliable references and/or flag blatant harassing language, but we found that blatant harassment is far less common than people using tone or dog-whistle words to harass or antagonize another user. — Trevor Bolliger, WMF Product Manager 🗨 01:52, 3 January 2018 (UTC)
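
To illustrate that limitation, here is a simplified stand-in for a word-list filter (plain Python, not actual AbuseFilter syntax): it catches the first, blatant example below but not the second, which relies on tone rather than slurs.

```python
# Simplified stand-in for a blatant-word filter (plain Python, not AbuseFilter syntax).
# Shows why matching obvious terms misses tone-based or dog-whistle harassment.
import re

BLATANT_TERMS = re.compile(r"\b(idiot|moron|scum)\b", re.IGNORECASE)  # toy word list

def naive_filter_hits(added_text: str) -> bool:
    """Return True only if the added text contains a listed blatant term."""
    return bool(BLATANT_TERMS.search(added_text))

print(naive_filter_hits("You absolute idiot, stay off this article."))    # True: caught
print(naive_filter_hits("People like you never last long around here."))  # False: menacing, but not caught
```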

Reporting System User Interviews

The Wikimedia Foundation's Anti-Harassment Tools team is in the early research stages of building an improved harassment reporting system for Wikimedia communities, with the goals of making reports higher quality while lessening the burden on the reporter. Interest in building a reporting tool has been expressed in surveys, IdeaLab submissions, and on-wiki discussions, from movement people requesting it to us as a team seeing a potential need for it. Because of that, Sydney Poore and I have started reaching out to users who have, over the years, expressed interest in talking about the harassment they’ve experienced and faced on Wikimedia projects. Our plan is to conduct 15-30 minute interviews with around 40 individuals. We will be conducting these interviews until the middle of February, and we will then write up a summary of what we’ve learned.

Here are the questions we plan to ask participants. We are posting these for transparency; if there are any major concerns we are not highlighting, let us know.

  1. How long have you been editing? Which wiki do you edit?
  2. Have you witnessed harassment, and where? How many times a month do you encounter harassment on wiki that needs action from an administrator (blocking an account, revdel of an edit, suppression of an edit, …)?
  3. Name the places where you receive reports of harassment or related issues (e.g. arbcom-l, checkuser-l, functionaries mailing list, OTRS, private email, IRC, AN/I, …).
    • Volume per month
  4. Name the places where you report harassment or related issues (e.g. emergency@, susa@, AN/I, …).
    • Volume per month
  5. Has your work as an admin handling a reported case of harassment resulted in you getting harassed?
    • Follow-up question about how often and for how long
  6. Have you been involved in different kinds of conflict and/or content disputes? Were you involved in the resolution process?
  7. What do you think worked?
  8. What do you think are the current spaces that exist on WP:EN to resolve conflict? What do you like/dislike? Do you think those spaces work well?
  9. What do you think of a reporting system for harassment inside of WP:EN? Should it exist? What do you think it should include? Where do you think it should be placed/exist? Who should be in charge of it?
  10. What kinds of actions or behaviors should be covered in this reporting system?
    • examples could be doxxing, COI, vandalism, etc.

--CSinders (WMF) (talk) 19:16, 11 January 2018 (UTC)

Translation: <tvar|audiences> is broken

My translation shows <tvar|audiences> under the Quarterly goals section as a broken template. Is any fix available? --Omotecho (talk) 15:57, 31 January 2018 (UTC)

I think it is fixed now. :) Joe Sutherland (Wikimedia Foundation) (talk) 19:00, 31 January 2018 (UTC)

New user preference to let users restrict emails from brand new accounts

Hello,

[Screenshot: Wikimedia user account preference set to not allow emails from brand-new users]
Tracked in Phabricator: Task T138165

The WMF's Anti-Harassment Tools team introduced a user preference which allows users to restrict which user groups can send them emails. This feature aims to equip individual users with a tool to curb harassment they may be experiencing.

  • In the 'Email options' of the 'User profile' tab of Special:Preferences, there is a new tickbox preference with the option to turn off receiving emails from brand-new accounts.
  • For the initial release, the default for new accounts (once their email address is confirmed) is ticked (on), meaning they will receive emails from brand-new users.
    • Use case: A malicious user is repeatedly creating new socks to send User:Apples harassing emails. Instead of disabling all emails (which would block Apples from receiving potentially useful emails), Apples can restrict brand-new accounts from contacting them, as sketched below.
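
For anyone curious about the behaviour, here is a rough sketch of the kind of check such a preference implies. This is illustrative Python, not the actual MediaWiki implementation; the 7-day threshold and the Sender/RecipientPrefs types are hypothetical.

```python
# Illustrative sketch only; not the actual MediaWiki implementation.
# "Brand new" is approximated by a hypothetical 7-day account-age threshold.
from dataclasses import dataclass
from datetime import datetime, timedelta

NEW_ACCOUNT_THRESHOLD = timedelta(days=7)  # hypothetical cut-off

@dataclass
class Sender:
    registered: datetime

@dataclass
class RecipientPrefs:
    allow_email: bool = True
    block_new_senders: bool = False  # the new tickbox described above

def can_email(sender: Sender, prefs: RecipientPrefs, now: datetime) -> bool:
    """Decide whether a sender may email this recipient under these preferences."""
    if not prefs.allow_email:
        return False
    if prefs.block_new_senders and (now - sender.registered) < NEW_ACCOUNT_THRESHOLD:
        return False
    return True

# Example: User:Apples has ticked the new preference, so a day-old sock account cannot email them.
now = datetime(2018, 2, 1)
sock = Sender(registered=now - timedelta(days=1))
apples = RecipientPrefs(block_new_senders=True)
print(can_email(sock, apples, now))  # False
```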

The feature to restrict emails on wikis where a user had never edited (phab:T178842) was also released in the first week of 2018 but was reverted in the third week of 2018 after some corner-case uses were discovered. There are no plans to bring it back at any time in the future.

We invite you to discuss the feature, report any bugs, and propose any functionality changes on the talk page.

For the Anti-Harassment Tools Team SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 00:43, 9 February 2018 (UTC)

English Wikipedia Administrators' Noticeboard/Incident Survey Update

During the month of December, the WMF's Support and Safety team and the Anti-Harassment Tools team ran a survey targeted at admins about the English Wikipedia Administrators' Noticeboard/Incidents and how reporting harassment and conflict is handled there. Over the past month of January, we have been analyzing the quantitative and qualitative data from this survey. Our timeline towards publishing a write-up of the survey is:

  • February 16th – rough draft with feedback from SuSa and Anti-Harassment team members
  • February 21st – final draft with edits
  • March 1st – release the report and publish data from the survey on wiki

We are super keen to share our findings with the community and wanted to provide an update on where we are with this survey analysis and report.

--CSinders (WMF) (talk) 01:15, 9 February 2018 (UTC)