Wikipedia:Village pump (proposals)

New ideas and proposals are discussed here.

Note: inactive discussions, closed or not, should be archived.


Proposed: Tag / edit filter for talk page abuse

Proposal

Create a special tag / edit filter designed to catch talk page abuse. (Example: [1])

Envisaged benefits
  1. An edit filter could warn users before posting that their comment may need to be refactored to be considered appropriate.
  2. Editors could check recent changes for tagged edits, bringing much-needed third eyes to talk pages where an editor may be facing sexual harassment and other types of abuse.
  3. Prevention of talk page escalation.
  4. Improvement of talk page culture.
  5. Enhanced editor retention. Andreas JN466 15:51, 29 October 2015 (UTC)
  • Support. Gamaliel (talk) 17:01, 29 October 2015 (UTC)
  • Support without reservations. John Carter (talk) 18:39, 29 October 2015 (UTC)
  • Support. But I think I'm missing the background/context to that example. To me it looked like it could easily be interpreted as funny/ironic. Or is this intended mainly as a sexist abuse filter? Martinevans123 (talk) 18:53, 29 October 2015 (UTC)
  • Support. Excellent idea. Sarah (talk) 19:04, 29 October 2015 (UTC)
  • Nice sentiment. Of course we want to prevent abuse, but the edit filter is a relatively blunt tool, with a high chance of catching false positives on talk pages. What exactly is the test being proposed? -- zzuuzz (talk) 19:07, 29 October 2015 (UTC)
  • Not possible. If regular expressions could identify abuse, what a world that would be. Rhoark (talk) 19:40, 29 October 2015 (UTC)
Following discussion below, I would support a system that would ask the user for confirmation if an edit contains likely personal attacks. Due to possible moral hazards, any stronger action, including tagging, should only be enabled in a target-specific way as a discretionary sanction at WP:AE or following existing practice from WP:LONG. Rhoark (talk) 15:37, 30 October 2015 (UTC)
  • Partial support I have to agree on Rhoark's point, particularly that, as an uncensored work, productive discussion of certain topics may require language that would easily trigger such a filter. One also has to consider that what might be taken as harassing language by one editor will not be the same as what others would consider harassing, and context can be everything. To make everyone happy, we'd have to include a lot of possible hits, and the more we include the more false positives we'd get. So full site-wide use of a filter would not really work. That said, I would support a filter that could be applied to editors who have been established through AN/AE as using language that borders on the uncomfortable, so that they are warned if they veer into that again. Or it could be used on certain talk pages that have already been identified as a hotbed for near-personal attacks, to warn users before posting in anger/haste. We'll still get false positives, but they will be much more limited and much more manageable for admins. --MASEM (t) 20:09, 29 October 2015 (UTC)
This tool is apparently used to deal with cases of long-term abuse, and it might not be a bad idea to expand its use as a targeted discretionary sanction. Rhoark (talk) 20:16, 29 October 2015 (UTC)
I would say there is no shortage of admins and other editors who are willing to pick up on sanction violations without the need for filters or tags. -- zzuuzz (talk) 20:26, 29 October 2015 (UTC)
Many cases seem to call for a response stronger than a warning but less extreme than a topic ban. Nuanced sanctions however have proven to be an invitation for baiting the sanctioned editor and playing gotcha. An automatic referee could help in this. Even if not automatically enforced, a code-level specification of the restriction could lower the volume. Rhoark (talk) 20:55, 29 October 2015 (UTC)
Not knowing the technical way this can be done, an ideal use would be: if an editor makes an edit that triggers this filter, then instead of immediately applying the new edit, provide a warning screen: "Hey, this page is prone to issues with such language, please consider rewording your edit, or proceed if you take full responsibility for the language and may be subject to administrative actions if deemed inappropriate." If the editor proceeds, the edit would then be tagged for admin attention, if needed. As long as that is used on sets of talk pages aggressively prone to personal attacks and harassing-type language, it should hopefully make editors think twice before posting, cutting down the amount of work admins have to do. But I am not sure if an edit filter can trigger a secondary edit submission check. --MASEM (t) 21:05, 29 October 2015 (UTC)
I'm fairly certain it can. For example, if you put a sanctions notification on someone's talkpage, you'll be prompted to make sure it's not a duplicate. I think that's done with this system. Rhoark (talk) 21:52, 29 October 2015 (UTC)
It can indeed; see 2. below (which is copied from Wikipedia:Edit_filter). And the first step might well be to pilot such a system on known problem pages. Andreas JN466 05:37, 30 October 2015 (UTC)
Even with a limited-scope pilot I would still only recommend this (using #2 below) on pages known to be hotbeds, as approved by community consensus, which is probably less than 1% of WP's total page count. The only exception: user talk pages should universally have this, as I cannot imagine a need where such language needs to be used on a user page, ever. It's still just an extra edit step/tag, so it doesn't prevent posting and allows legit cases, but it hopefully makes editors think twice before sending off a nasty message. --MASEM (t) 23:07, 30 October 2015 (UTC)
  • Support, though it should not be preventing any edits, just tagging. The potential for false positives is high. GorillaWarfare (talk) 21:18, 29 October 2015 (UTC)
    • Available edit filter options are:
      • When an edit being saved "triggers" an active filter, the effect depends on a setting associated with that particular filter:
        1. Under the strongest setting, the edit is rejected, and the user sees a message to that effect. (A link is provided for reporting false positives.) It is also possible to have a user's autoconfirmed status revoked if a user trips the filter.
        2. Under a less severe setting, the user is warned via a customisable message that the edit may be problematic. The user then has the option to either proceed with the save or abandon the edit.
        3. Under an even lower setting, the edit is tagged for review by patrollers.
        4. Under the lowest setting the edit is merely added to a log. (This setting is also used in tests of new filters.)
    • I think we would be shooting for 2, i.e. merely a reminder to think about the edit before clicking Save. ClueBot is very smart these days; over time, this filter could become similarly smart, recognise verbal abuse, sexual harassment (there should rarely be a need for someone to say something like "you turn me on", per the example linked above), etc. The tag would simply help get more eyes on talk page conversations that may have taken a problematic turn. Andreas JN466 22:47, 29 October 2015 (UTC)
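To make the difference between the four settings listed above concrete, here is a minimal sketch of the decision flow in Python. The pattern list, messages, and function are hypothetical stand-ins for illustration only, not the actual MediaWiki AbuseFilter code.

    import re

    # Hypothetical severity levels mirroring the four settings listed above.
    DISALLOW, WARN, TAG, LOG = "disallow", "warn", "tag", "log"

    # Toy pattern list; a real filter's patterns would be maintained by the
    # community, with a channel for reporting false positives.
    PATTERNS = [re.compile(p, re.IGNORECASE)
                for p in (r"\bstupid whore\b", r"\byou fucking \w+\b")]

    def process_edit(text, severity=WARN, user_confirmed=False):
        """Return (saved, tags, message) for one proposed talk page edit."""
        hit = any(p.search(text) for p in PATTERNS)
        if not hit:
            return True, [], None                 # nothing triggered
        if severity == DISALLOW:
            return False, [], "Edit rejected; report false positives here."
        if severity == WARN and not user_confirmed:
            # Setting 2: the edit is held back once; the user may reword it
            # or resubmit unchanged, taking responsibility for the wording.
            return False, [], "This comment may contain a personal attack. Save anyway?"
        if severity in (WARN, TAG):
            return True, ["possible talk page abuse"], None   # shows up in Recent changes
        return True, [], None                     # LOG: saved silently, hit only logged

Under this sketch a warned edit that the poster rewords is simply checked again as a fresh submission, so a cleaned-up version saves with no tag.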
After an extensive period of training and refining a filter, it might not be harmful to enable it site-wide at an advisory level only. I'm pretty sure it wouldn't have made a difference to Qworty though. Rhoark (talk) 00:56, 30 October 2015 (UTC)
The difference is that edits like that, happening in some backwater of Wikipedia, would have been flagged, and attracted other editors' attention much sooner. Andreas JN466 01:18, 30 October 2015 (UTC)
  • Comment What exactly is 'talk page abuse'? The edit filter is very literal and can't be told to just look for 'abuse'; if you can provide some examples of text strings which in the vast majority of cases imply abusive behaviour, which aren't already caught by an existing filter, then that would be more useful. Sam Walton (talk) 22:42, 29 October 2015 (UTC)
    • It would require an effort similar to the one that went into ClueBot and its descendants like ClueBot NG, and a lot of brainstorming. Women editors I am sure could provide examples of sexual harassment, belittlement or other gender-tinged exchanges they found offensive and wouldn't want to encounter again. Off the top of my head, words/strings like "rape you" or "stupid whore" are unlikely to have many bona fide uses. Similarly, "fuckwit", "fuckwad", "you fucking cunt", "you asshole", "fuck you", "you dumbass" etc. are most likely to be used as terms of abuse indicating (or indeed initiating) a breakdown in communication that would benefit from an uninvolved editor having a look at what's going on. Generally speaking, an edit filter warning asking people to think twice about posting these strings would in many cases prevent escalation and reduce admins' and oversighters' workload. At any rate, the strings to be caught should be based on actual user experience of the kinds of exchanges that tend to make talk page discussions go south and contribute to an off-putting atmosphere. I am under no illusion as to the amount of work this would require. But similar work has been done to cut down on article vandalism, with outstanding results, and improving the working climate would to my mind justify another such effort. Andreas JN466 05:24, 30 October 2015 (UTC)
    • Note: The above post of mine was not tagged in Recent changes, despite its abundance of strings that would be clearly problematic if used in an actual talk page discussion. Andreas JN466 05:26, 30 October 2015 (UTC)
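To give a rough sense of what a string-based filter can and cannot do, here is a small Python sketch built from a handful of the phrases mentioned above. The phrase list is purely illustrative, and, as the untagged post above demonstrates, a phrase match cannot see context: a quotation, a discussion of the filter itself, or an article-content debate trips it just as a real attack would.

    import re

    # Illustrative phrase list only; a production list would be assembled from
    # editors' reported experiences and tuned against false positives.
    ABUSE_PHRASES = (r"\brape you\b", r"\bstupid whore\b",
                     r"\bfuck you\b", r"\byou dumbass\b")
    ABUSE_RE = re.compile("|".join(ABUSE_PHRASES), re.IGNORECASE)

    def trips_filter(comment: str) -> bool:
        """Crude check: does the comment contain any listed phrase?"""
        return bool(ABUSE_RE.search(comment))

    print(trips_filter("fuck you and your sources"))             # True: likely abuse
    print(trips_filter("the article should cover rape law"))     # False: no phrase hit
    print(trips_filter('she wrote "fuck you" and was blocked'))  # True: false positive on a quotation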
  • strongly oppose until and unless someone can define "talk page abuse" in a fairly specific way, and can at least outline an algorithm that could identify such abuse with pretty near zero false positives. I don't believe that it can be done at the current state of the art, even by a major AI effort, much less with the resources available to a Wikipedia edit filter. Remember that it must permit detailed discussion of the content of sexually explicit articles, and so cannot simply block a list of words that are "profane" or "offensive". Any such filter must be able to correctly detect the context that makes a remark "abusive". I don't think this is possible, and anything less is IMO unacceptable. DES (talk) 01:27, 30 October 2015 (UTC)
    • Exceptions could be defined (talk pages of specific articles, etc.). I suspect ClueBot has similar capabilities today, and similar exceptions have been defined for pictures of gore etc. that have in the past been used for vandalism. Also bear in mind that no posts would be blocked; you'd still be able to post "You're a fucking fuckwit". This would merely remind people to consider potentially abusive posts carefully before clicking Save a second time, and present a tag in Recent changes enabling patrollers to have a look at what is going on. Or you could have the Recent changes tag only. Does this still sound unacceptable to you? Andreas JN466 01:59, 30 October 2015 (UTC)
      • Not quite so much, but I would still want to see a clear and specific definition of what is to be considered "abuse", and a detailed technical plan for the filter before I would support this. DES (talk) 03:12, 30 October 2015 (UTC)
  • Oppose. In principle, this isn't a bad idea. In practice, I don't trust the subset of the community that will appoint themselves civility-tag first responders and go digging through potentially uncivil edits looking for people to wag their fingers at. I think you're creating a technical mechanism to reward superficially civil goading and baiting and punish the frustrated recipient of it. If you want to design an opt-in tool that will flag potentially problematic edits on your own talk page, great. Opabinia regalis (talk) 06:11, 30 October 2015 (UTC)
    • This does not apply to the edit filter idea, which would merely challenge contributors to express their disagreement more skilfully (or perhaps invoke a process like WP:3O in preference over getting into a slanging match – an approach the edit filter message could suggest), but it's a potentially valid concern with the tag idea. One would have to do a pilot to see whether the upsides outweigh the downsides. Andreas JN466 13:55, 30 October 2015 (UTC)
      • This seems to be a little confused about the technical side (or else I'm the one who's confused). You've mentioned ClueBot a few times in this thread, but an edit filter is just static regex; it doesn't get any smarter unless a human makes it smarter. Both of the ideas in the paragraph above, if I understand you correctly, are actually "edit filter ideas", implemented using edit filters; it's just that one warns and the other tags. But if the filter is public, then even without tags there will still be a log of the warnings for self-appointed wikicops to inspect.
        If you wanted to run a pilot, how would you analyze the data to determine whether it's a net benefit? Opabinia regalis (talk) 00:17, 31 October 2015 (UTC)
  • Comment. Firstly thanks to Andreas for the proposal. I believe that it is important that we think outside the box for these things. At this stage, I do not have an outright support or oppose, as there are a few questions which I feel might bear thought. 1) What would be the proposed process or procedure for changes to the filter - if regexp based, how would patterns be added & removed; what level of support for addition/removal would be required? (Note: with the spamfilter, addition seems significantly easier than removal.) 2) Given that (as I understand) we already have a filter for obscene terms, what would be the difference, both in patterns matched & in proposed effect? (Does the obscenity filter currently block edits?) 3) We have seen a tendency for other filtering tools, like the spamfilter, to be used, for reasons of operational expediency, outside their originally intended use - e.g. to implement the failed WP:BADSITES proposal and to control/limit sourcing options. What controls would be in place to ensure that the filter is used only for the intended purpose? 4) Given a community problem with WP:TAGTEAM & offsite WP:CANVASSing, what controls would be in place to prevent the edit tagging being used to gang up on editors whose edits have been tagged, but who may not be genuinely abusive? Many thanks for your thoughts in reply. - Ryk72 'c.s.n.s.' 06:42, 30 October 2015 (UTC)
    • 1. ClueBot has a mechanism for reporting false positives. The same approach could be used here. 2. I posted a number of sample obscenities above. The edit was not tagged. I invite you to post a similar message yourself, as though in response to me. Then check whether your edit is flagged in Recent changes. 3. Ongoing community scrutiny, with changes as dictated by community consensus. 4. (a) Iterative optimisation of the edit tag settings to weed out false positives. (b) Self-control on the part of those posting; the edit filter message would give a warning that an edit will be tagged, so it's possible to avoid being tagged. (c) If problems like ganging up occur in spite of this, contributors would have the same sorts of debates about what is incivility that they have now, but perhaps less frequently. It might establish a better baseline for communication among regulars, in addition to giving newbies some protection from the worst kinds of assaults. Andreas JN466 14:14, 30 October 2015 (UTC)
  • Support as I always support rainbow farting unicorn creations. Even if only tagging, how does an editor go about removing the "false positive" tag that would undoubtedly be a built-in machine driven personal attack? What would the tag say? "possible misogynist?" "transphobe" "rape culture support" "safe-space violation?" --DHeyward (talk) 22:18, 30 October 2015 (UTC)
  • Support in a limited way; should tag only, because the actual judgment about what might need to be removed, or about what other action should be taken, does need to be made in context by people. And also limit to the most obvious ones that would be so considered by almost everyone, in order to decrease the number that need to be examined. If the usefulness is proven, and the number remains manageable, the list could be expanded. I think the various objections above could be dealt with by using restraint. DGG ( talk ) 22:54, 30 October 2015 (UTC)
  • Question on technical aspect - Assuming we go with #2, I am assuming that if the user does go on to post, the edit will still be tagged. Is there a process/script that tracks how many such edits a single editor has that get tagged that way? (Here I'm thinking that if a single editor racks up, say, 10 such tagged messages in about an hour, something's probably going on that warrants evaluating that user's behavior, and that should go to some admin-reviewed log to determine if a temporary block is necessary or if there's legit use there.) I am only speaking of the case where the editor does go through with posting: an editor that gets the warning and decides not to post is not tracked (e.g. immediate forgiveness). Which leads to another question which I'm pretty sure I know the answer to but want to check: if I edit and get this warning and then alter my edit from that resulting page to avoid the language, I assume that the new edit is rechecked fresh and the resulting edit is not tagged? --MASEM (t) 23:17, 30 October 2015 (UTC)
    • Indeed, that is how it should work. The proof of the pudding would be to draft an edit designed to trigger an edit filter of type 2 (is there a list somewhere?), modify it so the new version would not fall foul of the filter, save it, and then check Recent changes for presence or absence of a tag. At any rate, I would be very surprised if edits that have been fixed before saving still trigger the tag under the current edit filter settings. Andreas JN466 06:27, 1 November 2015 (UTC)
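On the tracking question, a minimal sketch of the kind of script that could watch the tag log follows; the one-hour window and the threshold of ten are the hypothetical numbers from the comment above. Each submission would be scored fresh against its current text, so a reworded edit that no longer trips the filter is simply never recorded.

    from collections import defaultdict, deque
    from datetime import datetime, timedelta

    WINDOW = timedelta(hours=1)   # look-back period suggested above
    THRESHOLD = 10                # tagged edits within the window before admin review

    _tag_times = defaultdict(deque)   # username -> timestamps of tagged edits

    def record_tagged_edit(user: str, when: datetime) -> bool:
        """Record one tagged (not merely warned) edit; return True if the user
        should be queued for human admin review. Warned-but-abandoned edits are
        never passed in, so they are forgiven automatically."""
        times = _tag_times[user]
        times.append(when)
        while times and when - times[0] > WINDOW:
            times.popleft()        # drop edits older than the window
        return len(times) >= THRESHOLD

    # Ten tagged edits inside an hour from the same account trip the review flag.
    start = datetime(2015, 11, 1, 12, 0)
    flagged = [record_tagged_edit("ExampleUser", start + timedelta(minutes=i)) for i in range(10)]
    print(flagged[-1])   # True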
  • Stupid idea This will only serve as an annoyance akin to getting an edit conflict, yet with no net benefit. And who gets to control the filter? Just another power grab. Color me shocked that this literally retarded proposal has received support from those who have screwed the pooch with the blacklist filter. User:100.8.121.118 Talk signature added
  • Oppose. This isn't a big problem here worth the time and effort. If a problem arises, it's taken care of on an individual basis. GenQuest "Talk to Me" 07:04, 31 October 2015 (UTC)
  • Oppose. This is a good idea with too many problems. If a user wants to harass someone with talk page abuse, they will always find a characteristic of the user to attack. If someone has black skin, they may be called a n_gger; if someone has red hair, they may be called a g_nger; if someone is female, they may be called a c_nt. If a user is associated with a minority or otherwise generally disliked group, they would be harassed for that. Any targeted harassment is unacceptable, not only "sexual harassment". The real problem lies in the impracticality of implementing such a filter. Whether something can be taken as harassment depends on the harasser, the harassed, and words used for harassment. This would trigger excessive false-positives, possibly even creating so much inconvenience that editors feel annoyed (so much for editor retention). This edit filter may also unintentionally prevent normal discussion on talk pages of WP:NOTCENSORED articles (for example, articles in the scope of WP:FOS, WP:SEX, WP:NETPOP, etc.), causing further annoyance. In short, the use of an edit filter will result in too many inconveniences to be worthwhile. sst✈discuss 15:44, 31 October 2015 (UTC)
  • Although ClueBot is sophisticated in combating vandalism, to succeed in automatically combating talk page abuse (while preventing false positives and other issues pointed out by opposers) would require something far more than a tag/edit filter, which basically only searches for words. While some words may be deemed to be unacceptable in some discussions, they may be totally normal in other discussions, and tags or edit filters are unable to distinguish whether these words are appropriately used. We must remember that the English Wikipedia is not a melting pot; we have people with vastly different cultures, and we even use different English dialects on different pages. This is an excellent idea, but impractical with tags and edit filters. sst✈discuss 16:05, 31 October 2015 (UTC)
  • Oppose This NagBot™ unless and until programming becomes much more sophisticated. This looks like it would have a huge potential to produce false positives and just annoy people. Simply searching for words is a far too simplistic approach, and if someone is acting up on a talk page a human admin should be the one to intervene. I can't imagine a scenario where one user would call another a stupid whore or threaten to rape them that would not result in an immediate block. Better to let them make such an edit and show their true colors so they can be blocked for it than to suppress it ahead of time. Beeblebrox (talk) 20:11, 31 October 2015 (UTC)
    • Many such edits are not noticed by any third party at the time they occur. If an admin blocks two days later, or another editor removes the offensive message from the talk page some weeks down the line, the damage is already done. Andreas JN466 06:13, 1 November 2015 (UTC)
If you have any evidence that there is a recurrent problem with rape threats and users being called whores where there is no rapid administrative response, I suggest you show said evidence to the rest of us, because that would be very compelling for your case. I am sure any responsible admin would be horrified if you were actually able to back up such a statement. You just saying it happens, not so much. Beeblebrox (talk) 22:42, 1 November 2015 (UTC)
What is "rapid"? You're probably right that the vast majority of messages of this type are deleted, revdelled or oversighted within a day or two, but a tag would help admins arrive on the scene quicker, and without the victim first having to summon administrative assistance. That would send a better message to the editor being abused. Similarly, an edit filter might help reduce frequency of occurrence.
I gave an example of a problematic talk page interaction above, never spotted and acted on at the time. Recalling the Wifione arbitration case, one of the violent threats Makrandjoshi received is still on his talk page today: [3] There was no message of support or concern on Makrandjoshi's talk page until a couple of days later. Makrandjoshi raised the matter at ANI; a newbie would not have known how to do that. As for outright rape threats in the data base, a quick search finds: [4] from 2009 (editor blocked two days later), [5] from 2012 (perhaps just a juvenile test edit) and [6] from 2013 (in response to a warning left by a sysop). [7] may have been a test edit; if so, the test proved that it is possible to call another editor a nigger without anyone coming to intervene. [8] from 2010. "Boo you, whore", on the talk page of an IP that has made both content and vandalism edits, present since 2012: [9]. Editor asking for help after being called a whore: [10]. Janejellyroll was repeatedly abused, called a "stupid whore", "big bitch" etc. on her talk page; she stopped editing a few months later, after 8607 edits. [11] When school kids post similar vandalism in mainspace, it is quickly dealt with. All I'm saying is – given that the hostile editing climate is often cited as a reason for both the gender gap and poor editor retention, why not explore whether Wikipedia could be more proactive about talk page interactions, instead of putting the onus on victims to come forward and point this stuff out to an admin? Andreas JN466 06:19, 2 November 2015 (UTC)
User:Beeblebrox, compared to, say, adding unsourced text, the problem is less common. That doesn't mean that it's unimportant. Also, I'm surprised to hear you argue in favor of people getting hurt, just so it's easier to justify blocking problematic users. "Give them enough rope" is only appropriate if that rope isn't being used to harm other people. "Give them enough rope" is a good model for people with different ideas. It's not a good model for people who are verbally abusing others.
And just in case it's not obvious, merely reading the message is harmful to its target. Transforming an editor from someone who is happy to edit Wikipedia into someone who knows that another person wants him or her to be raped, maimed, or murdered is actually harmful to that editor. Blocking the person who wrote the message doesn't erase your memory of that event or change the fact that the message existed. As Jayen says, by the time that happens, the damage to the targeted victim is already done. WhatamIdoing (talk) 19:52, 2 November 2015 (UTC)

Replying first to Andreas' examples: The example in the proposal itself is weak. Like, really weak. What, exactly, in that specific remark would a bot be clued into? And I think you can see for yourself that the rest of your examples are few and far between, some going back eight or nine years. This to me is not indicative of a widespread problem that something as sweeping as a filter on every single talk page edit would solve.

To Whatamidoing's comments: Let's look at the potential ways this could play out:

  • With the edit filter warning them not to do it: User writes up extremely nasty edit, hits "save". Edit filter stops the edit and warns the user that they were about to do something nasty. Horrible creep about to make a rape threat gets a chance to reconsider the consequences, continues to be a more low-level creep on WP, now knowing that if he goes over the line a bot will help him keep it in check. But nobody had to suffer through reading through the awful things he fully intended to say.
  • Without the edit filter: horrible creep makes horrible threat. Threat is removed, user is blocked indefinitely with talk page revoked, user who was subject of attack sees that while there will always be horrible people who say horrible things, these things are not tolerated here and persons who do things that are that level of awful don't get to continue being here.
  • Regarding the "harm" of even reading such things: You are only harmed if you let that person harm you. These are just words, not actions. Horrible words, and i wish there was some way we could just keep the sort of person who would make a rape threat or call another user racist or sexist things from even editing here, but we can't. Better to catch them and make an example, and at the same time show support to the target of the attack.
I actually have my own personal troll who comes to my talk page periodically and calls me a prostitute, suggests that I have sex with animals, and that my life is an endless pit of suffering and crying. Since I know none of this is true, and it has almost always been removed by someone else and the troll blocked again before I even see it, it doesn't harm me at all. In fact it makes me laugh because this pathetic loser actually thinks their nonsense could harm me. Perhaps others feel more harm from mere words with no basis in reality than I do, but really, these are just words. Letting them go ahead and be written, so we know what kind of horrible person we are dealing with and can remove them immediately, seems the best approach.
That being said, if you could write a bot intelligent enough that it was not just keying on words (because I don't see any words in the example given in the proposal that in and of themselves are a problem at all, it's the way they are put together and the tone that is the apparent problem) but could actually determine, with a high degree of accuracy, the exact meaning and intent of a particular post, and that could therefore stop only the worst of the worst type of talk page edits from being made, and alert admins to the situation in some urgent way (not just an edit filter log), I would of course be in favor of that. As I've said, I do not believe we are at the point where a bot that sophisticated exists. Let us know when it does. Beeblebrox (talk) 20:19, 2 November 2015 (UTC)
Yelling "Fire!" in a crowded place is "just words", and you can go to jail for it. Saying "I plan to bomb _____" is "just words", and you can go to jail for it. Actually, calling someone on the telephone and saying "I'm going to come to your house and rape you" is "just words", and you can go to jail for it. Are we putting these people into jail for no reason? Or is it possible that some kinds of words are not "just" words? WhatamIdoing (talk) 23:03, 2 November 2015 (UTC)
Right, so in those examples there is human being making that evaluation, assessing the context, etc, not just a robot that says "you are about to say the forbidden words". The inability of a bot to make such sophisticated distinctions is the basis of my opposition. Beeblebrox (talk) 20:24, 3 November 2015 (UTC)
This is why above in my partial support, I suggest that this should be limited to pages that the community has decided have enough talk page problems to merit a "cool down" warning page. That choice to use the filter on those pages is the addition of a human element to decide when a page really needs it. And as long as we are using the #2 level (a warning but not preventing posting), and determining whether excessive hits on that filter are cause for other admin action is based on human evaluation, it's a reasonable step; it will never be perfect but using it in exceptionally limited situations may help to defuse some talk page problems. --MASEM (t) 21:38, 3 November 2015 (UTC)
Beeblebrox, I think the boiling frog parable applies here. If you had been asked ten or twenty years ago whether you would want to work, voluntarily and without pay, in an environment where people regularly call you all the things you mention above, I think you would have said "no way". But you get used to it, accept it as part and parcel of doing this kind of work on the internet, and eventually learn to shrug it off. If you are male, then perhaps you even enjoy the challenge a bit, along with the sense that you have actually won the exchange, and the troll's posts are nothing but a sign of impotence. This is not something women contributors are likely to enjoy in the same manner; the male half of humanity thrives on conflict and competition in a way women, as a whole, do not. At any rate, by that time you have trained your responses to be quite different from those of a newbie. I recall one woman contributor telling me that the sort of abuse she has encountered in Wikipedia goes beyond anything she has ever encountered anywhere else, online or offline (including working environments that were even more male-dominated than Wikipedia). It is not an inviting environment for serious contributors interested in the project's vision.
As for examples being few and far between, you have to remember that most problematic posts are revision-deleted or oversighted. As an ordinary editor, I can only point you to examples that are still extant and visible by anyone. It is a very small tip of the iceberg.
As for the ability to program something sophisticated enough to tell real abuse from quotations etc., I am regularly amazed by how clever ClueBot has become at telling genuine edits and vandalism apart. The difficulty of making such determinations applies in mainspace just as much as it does on talk pages, because offensive words have many legitimate uses in mainspace, too.
I'd be interested in feedback from people like User:Cobi or User:Rich Smith as to feasibility. If it is feasible, then this is something I would like to see the Foundation investing in; in time, an open-source solution could even benefit other sites struggling with similar problems, allowing Wikimedia to take a leadership position on the web. It would be no mean achievement. Andreas JN466 20:44, 8 November 2015 (UTC)
I am aware that I have a somewhat thicker skin, not just because of my Wikipedia experience but also because of working in the service industry for 25 years. You get used to bitter, unreasonable, possibly drunk persons making unwarranted attacks on you that say more about them than they do about you. So I could support this if I thought it was actually good enough to detect the context of a remark and effectively get admin attention. I have also marveled at ClueBot's abilities, but I have to disagree that even it is advanced enough to accurately parse talk page comments and detect only those that absolutely should not be posted. If we are going to talk about "prior restraint" of people's speech, I believe we must be very, very sure that it would produce almost no false positives. I am just not convinced this is currently possible. I would suggest that extensive testing would be a must before loosing such a bot on millions and millions of talk pages. Beeblebrox (talk) 21:02, 8 November 2015 (UTC)
  • Support. If we can take steps to protect our articles, we can take steps to protect our volunteers. Humans need to be involved in reviewing the initial results. It will take some experimentation to see what works. Most likely, ongoing adjustments will be needed as vandals attempt to foil the system. Bringing in some expertise on A-B testing could be useful. --Djembayz (talk) 21:59, 1 November 2015 (UTC)
  • Oppose While I fully support the thought behind it, getting a “you might be about to act like a jerk” warning will empower the jerks to be more sneaky. The jerks could just edit their word usage to make the abuse harder to find. The abuse would still be just as hurtful to the people targeted by their aggression. What if a bot looked for instances of abuse and compiled a list somewhere? People could then delete false positives from the list before attempting to use it for some purpose. Abel (talk) 00:37, 2 November 2015 (UTC)
    • A snide remark may be very annoying, but I suspect most people would prefer not to receive violent threats, or outright abuse. Andreas JN466 06:19, 2 November 2015 (UTC)
      • Which is my point. This would not stop violent threats, or outright abuse. It would empower the people using violent threats and outright abuse to craft their violent threats and outright abuse in ways that would be harder to find, yet would be no less effective at harming people. The idea behind it is very worthwhile, I am suggesting a different tactic for the same strategy. Abel (talk) 14:29, 2 November 2015 (UTC)
        • I think that an automated warning would stop some violent threats and abuse. Not all, but some. And I think that stopping (or softening) even a small fraction of those comments would be a valuable step that we should take. I'm also open to other tactics, and would be happy to hear more about your ideas. WhatamIdoing (talk) 19:52, 2 November 2015 (UTC)
          • I agree it would stop some and slow a few. I am concerned about the unintended consequence of how it would also help train the worst jerks in ways to be even more sneaky. If a bot looked for instances of abuse and compiled a list, then people could delete false positives from the list before attempting to use it for some disciplinary purpose. Think about how someone who wants to be a jerk could use this. They type something and get a warning. They edit until they no longer get a warning. Now they know how to abuse people in ways that will likely result in zero consequences. Not at all the goal we want. Abel (talk) 23:31, 2 November 2015 (UTC)
    • Point well taken, that warning won't stop people determined on being nasty, or eliminate arguments and bad feeling. It might help some people keep their tempers, though, and avoid saying something they regret. Getting a workable balance between tagging and warning, and coming up with appropriate messages, will probably take experimentation. Humor could either lighten things up, or make it worse, till you figure out how to apply it. Maybe stern and serious works better for some things. One objective here is that by keeping unwanted sexualized or violent language out of the mix, or at least reducing it, you prevent arguments from turning into something creepy or scary. Being upset over an argument can be expected sometimes. Being intimidated or frightened because comments start taking a sexual or violent tone isn't necessary. Even heated arguments and discussions don't have to be creepy or threatening. And of course, high levels of verbal abuse don't help with editor retention. --Djembayz (talk) 09:15, 2 November 2015 (UTC)
  • Also-- As the number of pages keeps increasing, there's a need to either automate patrolling or put more people on the task. Although some text strings will show up in automated results that are clearly abusive and unreasonable, there is no "automatic block" that can solve for all abuse, and many things will still need manual review. --Djembayz (talk) 09:51, 2 November 2015 (UTC)
  • Support tagging and warning, but not using an edit filter to prevent the edits. I like the deterrent advantage of warnings ("Am I too drunk to be posting?") and the tag: ("Hmm, this thing's going to get tagged. Some admin or oversighter might be checking the tags right now. I don't know if he's even going to read this before it gets rev-del'd and I get blocked.") An ideal system might have two tags: edits that tripped the filter and were cleaned up (high potential for finding frustrated but well-meaning editors), and edits that tripped the filter and weren't (higher potential for finding false positives). WhatamIdoing (talk) 20:07, 2 November 2015 (UTC)
  • Oppose. Apparently nobody studies history anymore. The creation of this filter is just the first step. First they come for the potty mouthers, the cussers and the swearers, the rude and uncouth. Then they come for the critics and the nonconformists, the politically incorrect and the religiously obtuse. Then they come for the humorists, the comedians and the satirists, the poets and the prosodists. Then they come for the sad ones and the depressed ones, the angry and disenchanted, the silly and the sublime. Finally they come for the writers, and resolve that all discussion is now prohibited under arbcom decree seven slash five dot two, by order of Her Right Honorable Adminbot Eliza B. Viriditas (talk) 21:07, 2 November 2015 (UTC)
    • I believe that history actually has moved in the opposite direction. Public use of profanity, especially profanity directed at a woman or a child, was illegal for centuries in most of the Western world (USA examples: [12][13]). It is now generally legal (but still not, I believe, while driving a car in Germany). Penalizing "potty mouth, cussing, and swearing" has not yet seemed to degenerate into making criticism, non-conformism, rudeness, or obtuseness illegal. Whether legally accepting drunk men screaming obscenities at strangers on public sidewalks constitutes an improvement is something you will have to decide for yourself, but I'm not buying the slippery slope argument. WhatamIdoing (talk) 23:17, 2 November 2015 (UTC)
  • Question this has been described again elsewhere as "ClueBot for talk pages", but there is still no substantive information in this thread about how this proposal is intended to be implemented. Most of the discussion is around the use of edit filters. Edit filters are much simpler and dumber than ClueBot. What is it, technically, that you're actually suggesting, and how do you plan to evaluate whether it works? Opabinia regalis (talk) 00:56, 3 November 2015 (UTC)
    • Opabinia regalis, at the moment, I am envisaging a two-pronged approach: 1. a tag that shows up in recent changes, flagging "potentially problematic talk page edits", 2. a type-2 edit filter that, if triggered, brings up a reminder to look over an edit again before saving. In other words, no edits would be automatically reverted (unlike ClueBot), and no one would be prevented from posting anything they desire. However, the kind of artificial intelligence effort and refinement that would go into this to catch truly problematic edits while avoiding false positives would be quite similar to ClueBot (incl. user reporting of false positives). As for evaluating whether it works, it could be applied to pages that are known to be problematic, much as Masem has suggested in this discussion. The decision whether or not to implement the feature more widely would be made on the basis of performance in the pilot (feedback from participants and metrics like number of admin actions, including revdels and blocks). I am happy for people to modify and refine the idea; however, at present I think it really needs a two-pronged approach, i.e. first getting people to think about whether what they're about to post will really help move discussion into the desired direction, and secondly getting outside eyes on problematic discussions more quickly and without someone having to complain to an admin first.
    • Social dynamics have tipping points; sometimes shifting cultural standards just a little bit is enough to make the overall culture change a lot, as the kinds of constructive people who presently leave may stay on, and abusers may engage in more self-examination and/or find themselves subject to more scrutiny. A feature like this has the potential to achieve such a shift, reinforcing desired and discouraging unwanted behaviours. Andreas JN466 20:25, 8 November 2015 (UTC)
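One way to picture the two-pronged flow is the sketch below. The scorer is a trivial keyword stand-in for whatever a ClueBot-style trained model would eventually provide, and the two thresholds are invented for illustration; the point is only that the warning fires before anything is saved, while the tag attaches to an edit that was saved anyway.

    def abuse_score(comment: str) -> float:
        """Stand-in scorer. A real system would use a trained model;
        this placeholder just counts a couple of phrases."""
        phrases = ("stupid whore", "rape you", "fuck you")
        hits = sum(p in comment.lower() for p in phrases)
        return min(1.0, hits / 2)

    WARN_AT = 0.5   # prong 1: ask the poster to look the edit over before saving
    TAG_AT = 0.9    # prong 2: flag the saved edit in Recent changes for outside eyes

    def handle_submission(comment: str, already_confirmed: bool = False) -> list:
        """Return the actions taken for one submission."""
        score = abuse_score(comment)
        if score >= WARN_AT and not already_confirmed:
            return ["warn"]            # nothing saved yet; the user may revise or confirm
        actions = ["save"]
        if score >= TAG_AT:
            actions.append("tag")      # patrollers see it without anyone having to complain first
        return actions

    print(handle_submission("fuck you, you stupid whore"))                          # ['warn']
    print(handle_submission("fuck you, you stupid whore", already_confirmed=True))  # ['save', 'tag']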
      • Thanks for the response, but I am still missing something. The tags that show up in recent changes are applied by edit filters. Your "type 2 edit filter" just sounds like an edit filter set to "warn". Edit filters are static regex with tight performance requirements. You could certainly design some sort of machine learning approach to identify potentially abusive edits, but it could not use tags or warn editors at the time of submission, because edit filters don't work that way.
        Furthermore, you're talking about "culture" in a way that seems to conflate multiple forms of problematic talk-page behavior. The kind of thing an edit filter could catch is very simple, unambiguous abuse. We do have problems with that, but it's usually coming from trolls or one-off accounts, and it's almost always removed and its source blocked as soon as an admin notices. Whatever problems there may be with Wikipedia's general culture, this kind of unambiguous trolling and abuse is not widely accepted or defended. The problem you really seem to want to solve has to do with aggressive, unnecessarily confrontational, personally insulting comments from actual editors, not plain old trolls. That's the kind of problem that is not usefully addressed by searching for strings of text in an edit. I'd guess that most people here who are complete assholes manage to do it with a minimum of curse words. Opabinia regalis (talk) 22:38, 8 November 2015 (UTC)
        • To address the latter point first, trolls and regular editors form a continuum – there have always been editors who fall between the two extremes. The more troll characteristics they have, the shorter their tenure; the more bona fide edits they make, the longer they last. Similarly, counterproductive talk page conduct forms a continuum, ranging from outright abuse to civil vexatiousness. Cutting down on the outright abuse end seems worthwhile, given how much it can inflame discussions or turn people off. As far as I understand from Wikipedia:Edit_filter#Basics_of_usage, edit filters can warn or tag. The idea is to use both: give users an opportunity to edit before submission, and tag if an apparently problematic edit is made regardless. Whether or not that can be achieved with the current edit filter/tagging functionality I don't know. Andreas JN466 13:53, 9 November 2015 (UTC)
          • This proposal has scope creep written all over it, and I think a lot of that is coming from the lack of technical detail letting people's imaginations run away with them. Yes, edit filters can warn or tag. They can't do so on the basis of machine learning applications like ClueBot. I think you are more likely to see success for this proposal with someone on board to develop a prototype, because right now the supporters are a mix of people endorsing edit filters, people endorsing a hypothetical sophisticated ClueBot equivalent, and people who don't really know the difference. The proposed implementation is too vague and unclear to evaluate. Opabinia regalis (talk) 06:38, 10 November 2015 (UTC)
  • Tentative support - there's a limitation to what you can realistically do with edit filters as they stand. I have been thinking for some time there may be value in having one (or two) for talk pages that gives "bad word" warnings - enabled for all users.
    • We have bad-word lists already.
    • A suitably worded warning will help people think twice, perhaps even encourage them to find a co-operative action instead of a combative one.
    • We might indeed retain both more of the short-tempered editors, and more of those they are short-tempered with.
On the "con" side folks might take "no warning" as an OK. That fallacy has to be made clear from day one.
It might also make bad words that are used legitimately subject to false deprecation. Again, bad words are not a subset of incivility - context is key, which is why it's a warning only.
All the best: Rich Farmbrough, 01:12, 3 November 2015 (UTC).
  • Oppose. To be fair, our community isn't dumb. We are responsible and smart. We don't need nannies who want to censor our speech. As South Park (crudely) summed it up, "the world isn't one big liberal arts campus". --Kiyoshiendo (talk) 02:22, 3 November 2015 (UTC)
  • oppose I don’t see how anything can be robust enough to catch most abuse without catching far more false positives, and being easily worked around by anyone with any ability in English. If it issues warnings it is even less useful: many editors already happily skip past the 'no edit summary' warning and will do the same with this. A minority might be prompted to change their post, but only a small minority.--JohnBlackburnewordsdeeds 02:41, 3 November 2015 (UTC)
    Is it much of a hardship to have an extra click, say, every 1,000 talk page edits? While only a proportion of offensive edits would be stopped that might save a substantial amount of drama and aid editor retention. The log/tagging would also help human agents nip unpleasantness in the bud. All the best: Rich Farmbrough, 16:15, 3 November 2015 (UTC).
  • Support This is a way to face up to a real set of problems. If Wikipedia had real, broad-based organizational strength, then technical solutions would not be necessary. But it does not. At a minimum, this proposal is an attempt to take the problems seriously—and the effort is minor considering the magnitude of the problems. Complete automation should not be allowed; tagging only, followed by community review and evaluation, is reasonable. — Neonorange (talk) 22:42, 3 November 2015 (UTC)
    • So, we say something that might not be considered "clean", and we have an audience attracted to us for whatever reason. Sounds like an unnecessary microscope for a discussion relevant only to two people. --Kiyoshiendo (talk) 00:20, 4 November 2015 (UTC)
      • This is actually a fair point. I can see vindictive editors examining an editor's history and counting every one of these tagged edits and using that to try to oust that editor. I can also see it as being a means for people with the drive and effort to figure out what exactly the edit filter is filtering on so they know how to purposely avoid it. Is it possible to limit knowing when an edit has been tagged by this to admins only, who should be the only people judging if a pattern of these flagged edits is something actionable? --MASEM (t) 00:40, 4 November 2015 (UTC)
        • This is the first spin on this proposal that makes sense, but I fear the false positive rate would still be an overwhelming factor. Meaning and subtext structures need to be considered here, and I'm afraid a bot just can't make meaningful decisions on just what language is indicative of, or constitutes, "harassment," or "bullying," or "abuse," or what have you. Still oppose it for those reasons, but the idea of enforcing such a policy (if EVER possible) is moving on the right track with User:Masem's idea above. GenQuest "Talk to Me" 01:00, 4 November 2015 (UTC)
          • If the false positive rate is too high, then we will either change it to limit it further, or we'll just turn the thing off. It's not like we're swearing to keep this system in place until the WP:DEADLINE passes. We can try it out, and if we hate it, then we can turn the thing off again. I believe that disabling an edit filter is a two-click operation for admins. WhatamIdoing (talk) 19:38, 4 November 2015 (UTC)
            • Update. We have just had a pretty major lapse in our patrolling for bad edits. If our patrollers are spread so thin that a situation like that one slips through the cracks, we need to address this problem! With 5 million pages, we may be reaching the point where patrolling is breaking down. Even if it turns out that the talk page "filter" gets turned off for too many false positives, and the tool is only useful for tagging edits for human attention, it's still a way to save the time of patrollers. We don't want patrollers to be so over-stretched that they miss more serious situations like the one above. --Djembayz (talk) 16:29, 7 November 2015 (UTC)
  • Oppose Not even sure I agree in principle, but this implementation is a bad idea, as no matter how good the software used is, the number of false positives is likely to be too high to offset any potential benefit. Personal attacks can, and should, be handled as they always have. By telling users who commit them to knock it off, and then by blocking them when they persist. --Jayron32 17:21, 6 November 2015 (UTC)
  • Support a trial. There is much discussion here about needing effort put into learning, and about false positives. However, I see no downside to trialling a filter in log-only mode to see how many false positives there are and see how much effort is required to train it. If after a while of training the effort required is still too great we can end the trial as unsuccessful without significant loss. If however the effort involved is seen as worthwhile by those expending it and we end up with something useful then we have made a significant gain. With careful wording and an explanatory link, a tag need not be anything bad - e.g. my edit creating Mets (disambiguation) was tagged with "possible Michael Jackson vandalism" (although the tag seems no longer visible?) but a simple examination shows it to be a false positive, and it has never been brought up by anyone until now. Thryduulf (talk) 01:08, 10 November 2015 (UTC)
  • Comment: Editors may be interested to learn of a very similar machine-learning project carried out in an online gaming community, as described in the following article: "Doing Something About the ‘Impossible Problem’ of Abuse in Online Games". It's reported to have had a major positive impact on cultural norms in that community. One very interesting finding is that the vast majority of negative behavior (which ranges from trash talk to non-extreme but still generally offensive language) did not originate from the persistently negative online citizens; in fact, 87 percent of online toxicity came from the neutral and positive citizens just having a bad day here or there. (Hat tip to Denny and Jorm, who brought this study to the attention of the Wikimedia-l mailing list. [14]) Andreas JN466 16:18, 14 November 2015 (UTC)
  • Oppose. Nanny filters have been tried. Many, many times. They lead to clbuttic and the Scunthorpe problem. (Did I just get a tag if this filter were in place?) Natural-language parsing is one of the hardest problems there is. And someone who's in the habit of swearing for emphasis, but not at people, will still get nagged and tagged. This is well-intentioned, I'm sure, but there are far too many ways it can go wrong and it's an idea known not to work. At this point, human cognition is required to interpret human speech. Seraphimblade Talk to me 16:22, 14 November 2015 (UTC)
    • What is proposed here is considerably more sophisticated than a bot that replaces occurrences of the string "ass" with the string "butt", or the string "tit" with the character string "breast". It's a lot of work, but the study mentioned above appears to validate the idea that if you start with a large enough body of interactions identified as genuinely problematic by human cognition, machine-learning can be brought to bear on the dataset in a way that is meaningful. It's been done. Andreas JN466 16:32, 14 November 2015 (UTC)
      • The abuse filter has nowhere near that type of machine learning capacity. (To my knowledge, it has none at all.) And if you look at the linked article, it was still heavily dependent on human interpretation. You just can't do what you are proposing with a regex. If you want to propose a different solution, code it up, let the code be reviewed, do a test run, and let's see how well it works before deciding whether we want to implement it. It is one thing to conceptualize a system that does something, it is a very different question to properly implement it. Let's see code, not concepts. Seraphimblade Talk to me 17:42, 14 November 2015 (UTC)
        • It's something I would like to see a Foundation-led team working on, possibly with outside input, as in the gaming example. (I've added it to the wishlist survey.) The team could start by building a database of oversighted and revision-deleted talk page posts. There is a wealth of material there that admins deemed unacceptable. Andreas JN466 12:45, 15 November 2015 (UTC)
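If such a database existed, a first pass at a classifier could look something like the sketch below, which assumes the scikit-learn library and uses a six-comment toy corpus as a stand-in for the real labelled data (which does not yet exist). Character n-grams rather than whole words are used because they cope somewhat better with deliberate misspellings.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Stand-in corpus: the positive examples would really come from the proposed
    # database of revision-deleted / oversighted talk page posts, the negatives
    # from a sample of ordinary talk page comments.
    comments = [
        "you stupid whore, stay off my page",
        "fuck you, I will find out where you live",
        "go die, nobody wants you here",
        "I disagree; the source does not support that figure",
        "could you add a citation for the population numbers?",
        "thanks for the copyedit, the lead reads better now",
    ]
    labels = [1, 1, 1, 0, 0, 0]   # 1 = judged abusive by a human reviewer

    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(comments, labels)

    # The model returns scores rather than hard matches, so a pilot could warn
    # above one threshold and tag above a higher one; the false-positive rate
    # on held-out data is the figure that decides whether that is tolerable.
    for text in ("you are a stupid wh0re", "the Whore of Babylon article needs sources"):
        print(text, round(model.predict_proba([text])[0][1], 2))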
  • Support, worth a try. • • • Peter (Southwood) (talk): 18:53, 14 November 2015 (UTC)
  • Oppose, nanny filters are silly and have never proven effective. Even if such a thing were possible, it wouldn't be effective for very long; the abuse would simply change/evolve to bypass it, and then the filter would be a pain to every editor. Ciridae (talk) 17:10, 18 November 2015 (UTC)
  • Support – Talkpages shouldn't be littered with profanity. There's just no reason to countenance trash talk or obscene scribbling on Wikipedia: this is an academic site, and anyone who feels it's necessary to advance his or her arguments with abusive language is probably not here to build an encyclopedia. I've scrubbed away a lot of unhelpful profanity on talkpages because, unlike article vandalism, it all seems to just sit there, often for years: it sits and stays and ends up tolerated (and even archived), with admins and editors stepping around it like goose droppings. But it shouldn't stay there; it shouldn't even be there in the first place. It makes the inner workings of WP look repellent, and leaves a terrible impression on new editors. I realize this little filter won't prevent all that many actual instances, but it does announce to the community that we do not want abuse, profanity, or incivility on our project. It's important to keep that basic credo held high for all to see. SteveStrummer (talk) 20:19, 22 November 2015 (UTC)
    SteveStrummer You see such comments left because removing them is a violation of WP:TALK, where it says under "appropriately editing others' comments": "Removing harmful posts, including personal attacks, trolling and vandalism. This generally does not extend to messages that are merely uncivil; deletions of simple invective are controversial." If you make a habit of editing other editors' comments to remove invective and profanity, you might well be blocked for such editing. DES (talk) 13:36, 23 November 2015 (UTC)
    DESiegel Thanks for your concern. I know the policies quite well, but do feel free to pore over all my talkpage removals and examine them to your heart's content: they're all indefensible cases of abuse, trolling, and vandalism. Yes, I certainly found many others that I would have liked to remove, but I stayed within my remit as a non-admin. You, on the other hand, could be using that mop of yours to clean the project more effectively, instead of spending your time making undeserved ominous remarks to longtime volunteer editors. SteveStrummer (talk) 17:22, 23 November 2015 (UTC)
  • Support a trial, at the very least; then we don't have to have uninformative arguments about what may occur, and we can gather actual data. Alanscottwalker (talk) 12:27, 23 November 2015 (UTC)
  • Oppose. This is copulating ridiculous. I strongly resent being intimidated by a machine demanding political correctness in my personal choice of words. If you want to violate me have some guts and do it w/o having to misuse some obscure software (which doesn't solve the problem anyhow, in cases where there is one).--TMCk (talk) 18:35, 23 November 2015 (UTC)
I'm wondering if you would re-read your own post and see either: 1) how ineffective (dulled) your communication was, or 2) how silly it would appear to almost all people, if anyone took the time to write like that? Alanscottwalker (talk) 19:40, 23 November 2015 (UTC)
"you communication" - Too bad, isn't it?--TMCk (talk) 21:50, 23 November 2015 (UTC)

Category:User en-0[edit]

This is basically a proposal to overturn Wikipedia:Categories_for_discussion/User/Archive/September_2007#September_23, which deleted Category:User en-0; it's here, rather than at WP:DR, because the purpose of DR for a deleted page is to say that the deletion was wrong, not merely to say that I think I have found a good reason to have the deleted page back. For those of you who don't know, the babel templates transclude a category for users by their language ability; for example, Category:User en-1 is filled with users who speak limited English but more than nothing. Before this category was deleted, it was filled with userpages of people who didn't speak English, many of whom were active at other Wikipedias but not here. Comparable pages exist in some other wikis; for example, my French userpage is in fr:Catégorie:Utilisateur fr-0.

At the deletion discussion, the primary reason for deletion was that nobody needed to categorise users by a language that they don't speak, and anyway you could truthfully fill your userpage with categories for languages that you don't speak. However, I propose that en-0 be treated differently: not for the sake of humans leaving notices or seeking collaboration, but for the sake of bots and automated scripts. Quite often, we see bots and scripts leaving big notices on various grounds, and for users who don't speak English, these notices are hard to understand at best; even with machine translation, it would be difficult to understand many notices. If we had a Category:User en-0, bots and scripts could be reprogrammed to look for such a category on a userpage or user's talk page, and when finding it, they could leave a significantly simpler message. This is the first use that comes to mind, although I suppose that there are other automated uses that aren't coming to mind.

Nyttend (talk) 14:33, 13 November 2015 (UTC)

  • Huh. WP:DRVPURPOSE specifically says that overturning a prior deletion when new information has come to light is one of the reasons somebody may file a deletion review. I've seen such requests frequently on that page. Jo-Jo Eumerus (talk, contributions) 15:09, 13 November 2015 (UTC)
  • I've always seen that as being a case of something verifiably being the case now and not previously. For example, "Jo-Jo Eumerus is a youth football player" gets deleted in 2007, and then I file a DR request in 2015 because "Jo-Jo Eumerus just yesterday played his first professional match with Chelsea F.C." — the situation is completely different now, so we might as well undelete. I came here because it's a newly-suggested reason for the category, but someone could have suggested that reason before, while a suggestion that you're a professional footballer would have been obviously wrong eight years ago. Nyttend (talk) 15:14, 13 November 2015 (UTC)
  • I have restored it. After 8 years a totally new CFD would be appropriate if people don't want it. Graeme Bartlett (talk) 07:40, 14 November 2015 (UTC)
  • However, if no one wants to use it, it could be chopped. Graeme Bartlett (talk) 07:42, 14 November 2015 (UTC)
Bots and automated scripts can look for transclusions of {{User en-0}} just as easily as for Category:User en-0. And while I can certainly understand the need for a userbox indicating that a specific user doesn't speak English, I see no reason to associate these users with each other by means of a category. עוד מישהו Od Mishehu 22:15, 15 November 2015 (UTC)
They would also have to look for {{#babel:xx|en-0}} and other ways of doing the same thing. WhatamIdoing (talk) 22:55, 20 November 2015 (UTC)
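As a rough illustration of the bot use case discussed above, here is a minimal Python sketch, assuming the bot already has the raw wikitext of the user page in hand (a real bot would fetch it through the MediaWiki API or a framework such as Pywikibot); the function names and notice texts are placeholders invented for this example. It checks for any of the three en-0 declarations mentioned in this thread — the category, the {{User en-0}} userbox, and a #babel parameter — and picks a simpler message when one is found.

    import re

    # Patterns for the three ways a user page can declare en-0 (per the
    # discussion above): the bare category, the userbox template, and a
    # #babel parameter.
    EN0_MARKERS = [
        re.compile(r"\[\[\s*Category\s*:\s*User en-0\s*(\|[^\]]*)?\]\]", re.IGNORECASE),
        re.compile(r"\{\{\s*User en-0\s*(\|[^}]*)?\}\}", re.IGNORECASE),
        re.compile(r"\{\{\s*#babel\s*:[^}]*\ben-0\b[^}]*\}\}", re.IGNORECASE),
    ]

    def speaks_no_english(userpage_wikitext: str) -> bool:
        """Best-effort check of raw user page wikitext for any en-0 declaration."""
        return any(p.search(userpage_wikitext) for p in EN0_MARKERS)

    def pick_notice(userpage_wikitext: str) -> str:
        # A bot could swap in a short, simple-English (or translated) notice here.
        if speaks_no_english(userpage_wikitext):
            return "Hello. A file you uploaded has a problem. Please see: <link>."
        return "Standard full-length notice with policy links and instructions..."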

No objection to recreation - this was not a purpose for the category that occurred to anyone at the time of last CfD. WJBscribe (talk) 23:56, 15 November 2015 (UTC)

Allow all Users to Close AfD and RfD Discussions - while letting Admins still do the actual deletions[edit]

There is no possibility that this will be enacted based on the comments below. -- GB fan 16:02, 17 November 2015 (UTC)

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Currently there is a policy that prevents a non-admin from closing selected discussions in circumstances where they don't have the tools to complete the follow-up action (mainly deleting an article or redirect). While there is some logic to this, it puts admins on a higher platform than mere users, who can't be trusted to assess consensus on a delete. However, the same mere users can assess consensus to keep or relist the same discussion.

I propose that the restrictions on non-admin closure be written out of the rules as unnecessary for the following reasons:

1. Currently, the tools are provided to everyone to close discussions including:

  • G6 XfD in Twinkle, with a box for a link to the deletion discussion
  • Templates and detailed instructions for their use
  • Detailed instructions [15] and [16]
  • Accessible lists of Current and Old AfD discussions [17]

2. Practicing closing discussions is good experience for future Admins, who can establish a track record. If they make a mistake, it can be pointed out and they can learn from it.

3. The part the future Admin can't do - the actual deletion - can and should be G6 XfD'd into the admin queue with a link to the closed discussion. This function is already built into Twinkle.

4. There is no chance of any damage, because the actual deletion is still done by an Admin, who can do a quick check to confirm the close is correct. Just like a speedy or a PROD, an Admin has the final say.

5. Admins are, in theory, accountable for their closes, and so are all other users already.

Legacypac (talk) 15:18, 16 November 2015 (UTC)

So the admin would have the full responsibility for reviewing the non-admin user's close to confirm that it is correct. That seems like redoing the work. --GRuban (talk) 16:03, 16 November 2015 (UTC)
  • @Legacypac: I think this is a perennial proposal that fails because it goes against WP:NACD and WP:BADNAC; the most important point on those pages, in contrast to this proposal, is that non-admins should not close discussions whose outcome they do not have the technical ability to carry out. (The only exception I have found that contradicts this is WP:RM; the closing instructions there say, in one way or another, that non-admins can close discussions as "move" even if there is a page blocking the move, which requires a deletion.) Steel1943 (talk) 16:20, 16 November 2015 (UTC)
  • Support - we already do this with TfDs, although the process is a little different there. All an admin would have to do is determine if the close is appropriate per WP:NAC (a clear result, basically) and then either revert the close and relist the discussion, or carry out the deletion. Since NACs are only for clear results then there shouldn't be much extra admin load. NAC already advises non-admins to leave closing questionable discussions to the experience of an administrator, and non-admins who ignore that advice are frequently admonished for it, which is good because it's a learning experience for everyone. There's even a tag for requesting that non-admins don't close a discussion, but I can't remember what it is at the moment. This might actually lighten admin load by relieving admins from the time-sink of reviewing the bulk of deletion discussions which are clear results. This isn't allowing non-admins to delete, so it's actually pretty harmless. Ivanvector 🍁 (talk) 16:26, 16 November 2015 (UTC)
  • Support for AfD, Oppose for RfD. AfD requires time to assess consensus, saving admin time is good. RfDs should be simple. All the best: Rich Farmbrough, 16:36, 16 November 2015 (UTC).
  • oppose. I don't see any indication of the problem this is meant to solve. There is not a backlog or excessive workload that needs dealing with. And the current policy exists for very good reason. Not only the technical part, that an admin is needed to do the actual deletion, but the authority part, that an admin is an admin because their peers have judged them to be mature enough and have good enough judgement to handle such decisions. Changing it to let any editor close such discussions would just mean more discussions get poorly closed, end up being quickly re-run or appealed to deletion review, and rather than saving time more of everyone's time gets wasted. Absent an actual problem that needs solving, this seems like a very bad idea. --JohnBlackburnewordsdeeds 16:53, 16 November 2015 (UTC)
  • Oppose. See Wikipedia:Administrators' noticeboard/Incidents#User:Legacypac -- NAC closes as "delete" for some recent background. One of the insufficiently many things RFA usually gets right is assessing the candidate's understanding of deletion policy. The Big Bad Wolfowitz (aka Hullaballoo) (talk) 17:04, 16 November 2015 (UTC)
  • Oppose - "all users" is an invitation for socking. Also, admins are very well aware of WP:INVOLVED and its consequences. There are no such consequences for regular editors. --NeilN talk to me 17:12, 16 November 2015 (UTC)
  • @NeilN: Funny you should say that: I've actually long believed that a version of WP:INVOLVED needs to be added as a policy or guideline somewhere that applies to WP:NACs. I think it is quite odd that one does not exist (at least none that I have been able to find over the years). Steel1943 (talk) 19:04, 16 November 2015 (UTC)
  • @NeilN and Ivanvector: Right, but WP:BADNAC is an essay. I'm more referring to some sort of outlined possible list of consequences for not following WP:NACD that is a policy. I mean, as a suggestion, maybe WP:BADNAC needs a bit of polish to list consequences, then maybe get the page promoted to policy? Steel1943 (talk) 19:27, 16 November 2015 (UTC)
  • Well, what are the consequences? In one case, misunderstanding-but-good-faith editor is administered the cluebat from more experienced users, they come out more knowledgeable (hopefully), and a small amount of cleanup is done. In the other case, disruptive editor is blocked, and a small amount of cleanup is done. It's not like we can take anyone's editor bit away for having occasionally poor judgement. Ivanvector 🍁 (talk) 19:41, 16 November 2015 (UTC)
  • @Ivanvector: Technically, there is a way to take away the "editor bit" called a "block". But, that aside, maybe a mention of an "up to being blocked" consequence can be listed there. I mean, one of the worst case scenarios would be a topic ban enforced by sanctions (not counting being site banned), but now that I think of it, all of this is already outlined elsewhere. I just presently think the current way the information is set up for, let's say, an editor not familiar with Wikipedia performing their first or second edit ever as a NAC, makes it a bit difficult and maybe overwhelming for them. But, on the other hand, maybe WP:BADNAC is sufficient and maybe it should be discussed for promotion to policy. Steel1943 (talk) 19:51, 16 November 2015 (UTC)
  • Oppose - If an admin disagreed with the discussion's outcome, the close would end up being reverted, thus creating more unneeded work for them. To be totally honest, I think it's fine as it is. –Davey2010Talk 17:31, 16 November 2015 (UTC)
  • Comment I don't think this would save much admin time, because the admin would still have to assess the consensus before deleting. Admins would also still have to look through the open discussions, because other editors wouldn't always choose to do a close. However, provided that these closed-but-not-actioned discussions were properly flagged, one difference might be that in some cases the pages would be deleted sooner. This is either a good thing or not, depending on one's point of view. The total time spent by two assessors on one closure might be more. On the other hand, assessing discussions resulting in deletion that are then looked over by an admin is a safe way to gain experience for editors who later become admins, and might result in them participating more in closing discussions once they are admins. The point about socking above is a good one, although more often socks are trying to get pages kept rather than deleted. A compromise might be to allow pending changes reviewers to do this. I'm not supporting this, just mentioning it as a possibility.—Anne Delong (talk) 17:33, 16 November 2015 (UTC)
  • Comment I actually see this as necessary at some point in the future, either along the lines of TFD or with some sort of unbundling of Delete from the admin toolkit. The current admin corps is not significantly increasing, and deletions are one area that is *and* requires manual admin interaction. At the moment AFD deletes require an admin to assess consensus and press the delete button - which can be a lengthy process. (It is also by no means a sure thing that admins themselves are better at assessing consensus; in many cases they are a lot worse.) If there was a template along the lines of TFD/Speedy that could be added - the admin burden would be 'check closure has been noted correctly, click delete'. If it's contested, the usual process applies. Questions about incorrect judging of consensus etc. can be dealt with by peer review regardless of whether the person judging consensus is an admin or a plain editor. Ideally the ability to delete pages would be unbundled from the admin toolkit and handed out to editors who have a proven track record of judging consensus discussions at AFD. Only in death does duty end (talk) 17:43, 16 November 2015 (UTC)
  • Oppose as usual. The question of whether someone's judgment is good enough to be trusted to close AfD discussions is a big part of RfA -- and a big reason for many unsuccessful bids. AfD closure is just so often fraught and subject to dispute. Ideally, there would be no need for non-admin AfD closes, but we allow them for obvious keeps (and certain other instances where the need for nuanced judgment is minimal) because the load on admins is just too high. It's a sensible way to distribute the work (and I'm glad other ways to do so are being discussed within the RfA reform threads). Gray area is the biggest issue, but before getting to that there's also the matter of obvious delete closes (i.e. unanimity with good participation). The only practical benefit an NAC has in an obvious close is to save an admin the trouble of closing it -- and that's pretty minimal since an admin still has to review it before speedy deleting (and we know that sort of review doesn't always happen like it's supposed to -- and why would it? for an admin to properly evaluate a close saves hardly any time relative to what it takes to close in the first place). But the big gray area in between obvious keep/deletes is the big issue, and because it's gray, and because it's admins we entrust with the judgment to make tough calls, that gray area needs to be treated as vast. The simple fact is it's not sensible to say that two people can exercise the same form of judgment when one party is only allowed to act in one of the two available ways. It discounts things like time investment -- where, once one decides to act and becomes invested in a closure, it's not reasonable to think they'll be able to modify their judgment along the same spectra as someone who has every option available. — Rhododendrites talk \\ 17:59, 16 November 2015 (UTC)
  • DOA oppose As in dead on arrival. We keep seeing these half-baked proposals to give non-admins "moar powah" that would still require admins to be responsible for the result. This is why we have admins in the first place. They have been vetted by the community and found to have the skills and judgement necessary to assess these situations. No responsible admin would do this unless they were also satisfied that the consensus favored deletion, so this doesn't help anyone do anything; it would just be a pointless extra layer that would inevitably lead to drama when admins disagree with the closing user's findings. Beeblebrox (talk) 18:31, 16 November 2015 (UTC)
  • Oppose as this is taking no burden off the admin who (eventually) deletes the article. Kharkiv07 (T) 19:13, 16 November 2015 (UTC)
  • Oppose per Anne Delong. There is no time saving for admins because the AFD would still require thorough review in order to determine the validity of the close. And we shouldn't be looking to "save time" in deletion discussions, we should be looking to get them right. -- Euryalus (talk) 19:16, 16 November 2015 (UTC)
  • Oppose - At best, you've changed nothing as a deleting admin would still have to review to ensure consensus was reached. At worst, someone without the proper tools closing AFDs as delete would dramatically increase the odds of such articles falling through the cracks. Resolute 19:49, 16 November 2015 (UTC)
  • Oppose This would end up actually increasing the admin backlog since they would have to review the discussion anyway and determine if the close was correct and if incorrect, overturn it (with a rationale, etc). TFD is a special case where allowing NADCs effectively reduces the admin backlog since then, non-admins can take up the time-consuming task of removing the transclusions. Cenarium (talk) 20:02, 16 November 2015 (UTC)
  • Comment It is not going to work for all users, but as an unbundling tool and with the usual NAC clause (if someone gets unhappy, they may ask for an admin reassessment) it might actually work. Here, we do not need a technical tool, just a gadget marking the user as an "AfD closer" or something. --Ymblanter (talk) 06:33, 17 November 2015 (UTC)
  • Oppose as admins would have to delete as well as check the close. People that would like to close these sort of discussion should stand at RFA. Graeme Bartlett (talk) 07:52, 17 November 2015 (UTC)
  • Pile-on oppose per basically the last couple times it's been suggested. ansh666 10:36, 17 November 2015 (UTC)
  • Support in principle, oppose in actual practice. I agree with the sentiment; there is no inherent reason why any editor-in-good-standing would be incapable of assessing consensus on any discussion they were not part of. However, this has no practical effect, because there are only two outcomes, both of which make the action irrelevant.
1) Any admins who follow up would agree with the decision the non-admin made: If this is the case, the admins would have closed the discussion the same way, and the closure doesn't save any work; now two people have spent time doing an action it only needed one to do. Pointless.
2) Any admins who follow up would disagree with the decision the non-admin made: So either they don't enact the decision, which is a problem, or they override the decision, either generating unneeded drama or simply making the non-admin's action irrelevant.
In summation, I agree that we should not strictly forbid non-admins from any action that does not need the admin tools to make happen. In practice, we should also not simply make work to make work, or create a situation where processes become more complex just for the sake of complexity. So, the sentiment is entirely correct. This proposal doesn't work, however. --Jayron32 15:44, 17 November 2015 (UTC)
  • Oppose No admin should ever use their tools based on another person's interpretation of consensus. An admin would have to check the AfD so thoroughly they might as well just close it. HighInBC 15:47, 17 November 2015 (UTC)

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Should the default Math appearance preference be changed from PNG to MathML?[edit]

Hi, I just wanted to post a link here to my RFC: Should the default Math appearance preference be changed from PNG to MathML? Hungryce (talk) 20:25, 17 November 2015 (UTC)

Red false links[edit]

I propose that a lighter pink be used for false links, rather than the deeper red used now. It should make the name barely visible until the square brackets are removed. In many cases people deliberately use the red to highlight a name, making it more prominent than the name of a person acknowledged as prominent! Jzsj (talk) 20:32, 17 November 2015 (UTC)

Have you any actual examples of "people deliberately use the red to highlight a name"? In a decade on Wikipedia, I've never once seen this, other than in the sense that a redlink indicates that someone feels the topic is deserving of its own article. A link being blue doesn't mean the subject is "acknowledged as prominent"; it means the Wikipedia article on the topic has been written—there should be no difference in notability between red and blue links. (If the target of a redlink is demonstrably so non-notable that an article about the topic should never be written, then the link shouldn't exist at all and the colour ceases to be an issue.) Making links "barely visible until the square brackets are removed" is never going to happen; why would we want to make articles intentionally unreadable? Besides, Wikipedia—and wikis in general—are so well established that virtually all readers are now aware of the "blue means written, red means unwritten" principle. ‑ iridescent 20:39, 17 November 2015 (UTC)
^^ The red link is relatively significant to Wikipedia's identity -- in fact, several books and papers about Wikipedia in general or its interface in particular have gone into some detail on the subject. Also, we don't want to make any text unreadable (or hard to read). — Rhododendrites talk \\ 23:35, 17 November 2015 (UTC)
I don't even know what to say to such a wrong-headed proposal. Our policy on the subject (you have read that before proposing this, right?) already defines when it is and is not appropriate to add or remove redlinks. This "solution" to the perceived problem of strategic adding of redlinks, to make them cause eye strain instead of just unlinking them or finding an appropriate target to redirect them to, is so bad it does not merit discussion. Beeblebrox (talk) 03:42, 21 November 2015 (UTC)
Is it really necessary to be so bitter, Beeblebrox? We should expect more of an administrator. SteveStrummer (talk) 21:43, 22 November 2015 (UTC)
I reckon red links should be invisible until a page of that name is created, at which point a visible blue link appears. Theoosmond(talk)(warn) 18:40, 23 November 2015 (UTC)
Red links have the following effects:
  • indicating to people that an article for a notable subject is missing
  • making it easy for any editor to start the article (just by clicking on the link)
  • preventing new pages from being orphaned at the moment of their creation
  • causing Wikipedia to grow
Given all of those benefits, why should we make them invisible? WhatamIdoing (talk) 01:09, 24 November 2015 (UTC)