This is The Signpost Newsroom, a place where The Signpost team can coordinate with writers, both regular and occasional, and with people who have suggestions for topics to cover. See the boxes below if you have suggestions (topics for the team to write about in its regular columns) or proposals/submissions (articles you want to write, or have already written, yourself), or if you want to create a pre-formatted draft article in your userspace, with helpful links and easy-to-edit syntax. Discussion occurs both here and in the Signpost Discord.
Discussion of upcoming issues takes place on the newsroom talk page. For general feedback on The Signpost as a whole, go to our talk page. To learn more about The Signpost, see our about page.
The Signpost currently has 5462 articles, 686 issues, and 13195 pages (4310 talk and 8885 non-talk).
To suggest a topic to be covered by The Signpost, simply click the button below or post to our suggestions page manually. Examples of good topics are:
Editors who have done something extraordinary/wonderful
If you have an idea for an article you would like to write, you can submit it for review by the editorial team. You can do so by clicking the button below or by posting to our submissions page manually.
News articles should be kept relatively neutral and report on a specific piece of actual news. They can be on any topic of interest to Wikipedians, from general events to technical news.
Opinion pieces are evaluated on originality, relevance to Wikipedians, and the quality of their arguments. They should provoke thought and encourage productive discussion.
Special pieces cover things that don't fall neatly into the above two categories. If it's interesting to you, it's likely interesting to someone else as well. Check with us and we'll see what can be done!
Create a draft
To create a draft of an article in your userspace, simply paste {{subst:Wikipedia:Wikipedia Signpost/Templates/Story-preload}} into User:USERNAME/Signpost draft (replacing USERNAME with your own username).
You can also use the button below, which will preload a form that you can then save and edit. We recommend saving the preloaded form without making any edits before you start writing your article.
Note: It is possible that there are additional articles not listed below. When copyediting an issue, make sure that all of the articles in this list are accounted for.
Note: The discussions below are automatically transcluded from the newsroom talk page; to comment on a draft or submission, go there and create a section with the same name (e.g. "News and notes", "Arbitration report", etc.).
@HaeB: Just saw this: "Toxic comments are associated with reduced activity of volunteer editors on Wikipedia, PNAS Nexus, Volume 2, Issue 12, December 2023". Seems slightly interesting but also kind of bizarre. Mostly, what they do is carry out sentiment analysis on a bunch of user talk page diffs, run them through a black-box algorithm to determine their "toxicity", and then compare them to user contribs graphs. The conclusion they draw is that, since many users' contributions cease after receiving a "toxic message", "Wikipedians might reduce their contributions or abandon the project altogether, threatening the success of the platform".
I am not so sure about this. Their methodology is somewhat troubling. They go into great detail on what a shame it is that people are leaving Wikipedia due to these toxic comments, and how the community needs to take a strong stand against "toxicity", but do not actually give examples of what is or isn't a "toxic" comment. Most user warning templates, like {{uw-vandalism4}}, have pretty harsh language ("You may be blocked from editing without further warning the next time you vandalize Wikipedia").
This becomes very important when combined with the next omission, which is rather disturbing: they do not seem to have made any attempt whatsoever to determine whether the users were blocked. Neither "block" nor "ban" appears anywhere in the paper, and the topic is never mentioned; it seems like they simply assumed that a user's contributions stopping at a certain date indicated that they "abandoned the platform" (checking the block log would have been straightforward; see the sketch after this comment).
It doesn't really seem to me like the data demonstrates what they're claiming it does, at least based on the graphs they show. They've done a bunch of regressions on the data, sure, but it seems equally likely that what the data actually demonstrates is "user accounts that get warned for vandalism often stop editing afterwards", which is trivially true, and definitely isn't evidence for this:
A suggested solution to this problem has been the red-flagging of harassment and harassers (46). However, the opinion that toxic comments are negligible and should be seen as merely over-enthusiastic participation is still present among editors (25). Furthermore, various anti-harassment measures have been declined multiple times by the community, as they were seen to slow the process of content creation (57, 58). Based on our findings, we believe there is a need to reevaluate these policies, and more research attention is required to understand the impact of potential interventions.
Of course, this isn't false, but it really seems questionable to me whether any of the data in this paper is actually evidence to support it. What do you think? jp×g🗯️ 00:16, 6 December 2023 (UTC)
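To illustrate how simple the omitted check would have been, here is a minimal sketch (not from the paper, and not how the authors processed their data) of pulling a user's block log through the MediaWiki API with Python's requests library; the username is a placeholder, and a real analysis would also need to handle renamed accounts and IP editors.

<syntaxhighlight lang="python">
import requests

API = "https://en.wikipedia.org/w/api.php"

def block_log(username):
    """Return block/unblock log entries for a user on the English Wikipedia."""
    params = {
        "action": "query",
        "list": "logevents",
        "letype": "block",
        "letitle": f"User:{username}",
        "lelimit": "max",
        "format": "json",
        "formatversion": 2,
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["query"]["logevents"]

# "Example" is a placeholder username, not an account from the paper's dataset.
for entry in block_log("Example"):
    print(entry["action"], entry["timestamp"], entry.get("comment", ""))
</syntaxhighlight>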
Just quickly: I haven't read the full paper yet, so I don't have an opinion on several of the issues you mention. But the preprint version, which came out some months ago, has been on the to-do list for RR for a while already. As noted there, this comment (by a researcher from EPFL) seems worth considering: "I've found the title of this paper a bit misleading — the finding is purely correlational: there is no attempt to control for the confounders that might be present."
Now, that title was changed in the now-published peer-reviewed version, from "Toxic comments reduce the activity of volunteer editors on Wikipedia" to "Toxic comments are associated with ..." (my bolding). But evidently tons of people still interpret it as a causal claim. Earlier today, after (re)tweeting the paper from @WikiResearch, I reached out to the authors (or the two of them I was able to find on Twitter) about their thoughts on this issue; let's see if they respond.
If you want to write up a review for RR, I'd be happy to assist.
PS: The supplementary material contains some concrete examples of comments with their toxicity ratings.
The 0.72 toxicity comment is this, which I guess was discussed extensively on the mailing list (it's not in the user talk namespace, so I don't know why it's in their dataset, but maybe I'm misreading).
The paper doesn't say a whole lot about how they got these diffs: they say 57 million user talk page comments in total, but how many of those were automated messages and the like? I was trying to do some research into all the newsletters that went defunct, and it was actually rather hard to parse out massmessages from normal comments (especially from way back in the old days, when Special:MassMessage didn't exist and people would just run scripts to do it); see the filtering sketch after this comment.
They're going based on publicly viewable diffs, so some stuff may be missing (revdel or oversight); this isn't really mentioned. Maybe they have researcher roles or something and aren't affected by this(?).
These diffs go back a very long way (some as far back as 2004). But then the paper says stuff like: "A suggested solution to this problem has been the red-flagging of harassment and harassers (46). However, the opinion that toxic comments are negligible and should be seen as merely over-enthusiastic participation is still present among editors (25)." The idea, I guess, or at least the implication, is that Wikipedians have never done anything about civility, or that we think it isn't a problem, or that we need to take action because there's so much of it, et cetera -- but the evidence of this is a bunch of comments that people already got blocked for, and also some people were assholes in 2005... jp×g🗯️ 02:54, 6 December 2023 (UTC)
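To illustrate the filtering problem mentioned above, here is a rough sketch, under simplifying assumptions, of fetching a user talk page's history from the MediaWiki API and discarding likely mass deliveries. The delivery-account set and edit-summary keywords are illustrative guesses, not a complete solution, and they will miss the older, script-based deliveries entirely.

<syntaxhighlight lang="python">
import requests

API = "https://en.wikipedia.org/w/api.php"

# Crude heuristics only: the central delivery account used since 2013, plus one
# older delivery bot. Pre-MassMessage deliveries came from many ad hoc scripts,
# which is exactly why separating them cleanly is hard.
DELIVERY_ACCOUNTS = {"MediaWiki message delivery", "EdwardsBot"}
DELIVERY_KEYWORDS = ("newsletter", "message delivery", "signpost")

def talk_page_revisions(title, limit=500):
    """Fetch recent revisions of a talk page (user, edit summary, timestamp)."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "user|comment|timestamp",
        "rvlimit": limit,
        "format": "json",
        "formatversion": 2,
    }
    data = requests.get(API, params=params, timeout=30).json()
    return data["query"]["pages"][0].get("revisions", [])

def looks_automated(rev):
    """Guess whether a revision is a mass/automated delivery rather than a human comment."""
    summary = (rev.get("comment") or "").lower()
    return rev.get("user") in DELIVERY_ACCOUNTS or any(k in summary for k in DELIVERY_KEYWORDS)

# "User talk:Example" is a placeholder page name.
revs = talk_page_revisions("User talk:Example")
human = [r for r in revs if not looks_automated(r)]
print(f"{len(revs)} revisions fetched, {len(human)} kept after filtering")
</syntaxhighlight>

Even this much misses deliveries sent from regular user accounts before 2013, so any "57 million comments" figure built this way would still be noisy.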
Looking into this deeper: googling the string from the 0.99 toxicity comment gives https://storage.googleapis.com/kaggle-forum-message-attachments/300575/8842/oof_greatest_hits.xlsx which is an Excel file called "oof_greatest_hits.xlsx". In the fifth tab, "threat_ranked_high", the topmost row is this: "Please stop. If you continue to ignore our policies by introducing inappropriate pages to Wikipedia, you will be blocked." Much to think about. jp×g🗯️ 06:03, 6 December 2023 (UTC)
I am not sure about the provenance of that file, although it does seem to match up rather nicely with the data they give. But nonetheless, I have obtained developer access to the Perspective API, and I can confirm that the text of {{Uw-own3}} does indeed give 0.8039029 for ATTACK_ON_AUTHOR (above the 0.80 threshold they give). I will have to look into this in some more detail. jp×g🗯️ 08:52, 6 December 2023 (UTC)
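For anyone who wants to reproduce this kind of scoring, here is a minimal sketch of querying the Perspective API's comments:analyze endpoint from Python. The API key is a placeholder (developer access is required), and the sample text is the standard warning quoted above rather than the exact wikitext of {{Uw-own3}}, so the numbers will differ from the 0.8039029 reported here.

<syntaxhighlight lang="python">
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder
URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

# Illustrative text: a standard user warning, not any particular template's wikitext.
comment_text = (
    "Please stop. If you continue to ignore our policies by introducing "
    "inappropriate pages to Wikipedia, you will be blocked."
)

payload = {
    "comment": {"text": comment_text},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}, "ATTACK_ON_AUTHOR": {}},
}

resp = requests.post(URL, params={"key": API_KEY}, json=payload, timeout=30)
resp.raise_for_status()

# Print the summary score for each requested attribute.
for attribute, data in resp.json()["attributeScores"].items():
    print(attribute, round(data["summaryScore"]["value"], 3))
</syntaxhighlight>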
Not sure what the ultimate source of the corpus was. meta:Research:Detox/Data_Release seemed likely; it was used in a Jigsaw paper in 2017. The comments in it are from between 2001 and 2015. But one of the diffs above is from 2021(?), so I don't really know. jp×g🗯️ 09:46, 6 December 2023 (UTC)
Move these up to the appropriate position as required (e.g. adjacent to News and Notes). Copy the section header from the submission page into the |Submission= parameter so that the "Check status" button appears and works correctly.