26.10.2022 18:54 Comment on "Introducing: The Open Science Committee at our department" by Joachim Funke ("...not always solving problems": Harsche Kritik an DGPs-Statem...)
[…] That we finally have to draw consequences that change how we act seems inevitable to me. That the call for a new DGPs board is nonsense, however, seems just as unmistakable: it would mean overestimating the influence of the DGPs on how individual research is actually conducted. Here, every single researcher is called upon to reflect on his or her various roles (as author, grant applicant, supervisor, editor, reviewer, etc.) and to let actions follow from those reflections. Institutes can of course also make voluntary commitments, as has been done in Munich with the establishment of an Open Science Committee at the LMU (see here). […]

22.7.2022 10:05 Comment on "Research Software in Academic Hiring and Promotion: A proposal for how to assess it" by Benjamin Uekermann
In reply to Adina Wagner (https://www.nicebread.de/research-software-in-academic-hiring/#comment-1975). I agree with the observation on major vs. minor software releases. Indeed, good practice is to not publish major releases very often, as they might break your users' code. I would replace "Date of most recent major release" with "Date of most recent minor or major release".

25.5.2022 11:29 Comment on "Research Software in Academic Hiring and Promotion: A proposal for how to assess it" by Philipp Probst
Great idea. "cranlogs" (as well as paper citations) can be manipulated, e.g., with a script that downloads the package many times every day; this is not very difficult. So "downloads or users per month" is a very nice metric, but one should be aware that it can be gamed. The vision should be that software development is recognized in general. Michel Lang, for example, has done much more than others for the scientific community in R (https://www.statistik.tu-dortmund.de/lang.html), but he will never become a professor if he does not publish more, possibly unnecessary, papers. Best regards, Philipp
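Both comments above lean on the "downloads or users per month" criterion from the proposal. As a concrete illustration of where such numbers come from, here is a minimal R sketch using the cranlogs package, which queries the RStudio CRAN mirror logs; the package name "mlr3" is just an arbitrary example choice, not something taken from the proposal.

```r
# Minimal sketch: the "downloads per month" metric via cranlogs.
library(cranlogs)

# Daily download counts from the RStudio CRAN mirror, last month:
dl <- cran_downloads(packages = "mlr3", when = "last-month")
head(dl)        # one row per day: date, count, package
sum(dl$count)   # total downloads over the month

# As Philipp Probst notes above, this count is easy to inflate
# (a daily re-download script, CI pipelines), so treat it as a rough,
# manipulable proxy rather than an exact user count.
```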
24.5.2022 21:12 Comment on "Research Software in Academic Hiring and Promotion: A proposal for how to assess it" by SC
This is an excellent framework. I love the idea of getting credit for all of the work we do. That said, I have one large and one smaller concern (right now). 1. Large: This could be really useful in departments where software development is common, but in my experience that is the minority of psychology departments. In my department of 60+ full-time faculty, there are perhaps five of us who would ever want to use this for our own research or, maybe more importantly, would be able to navigate assessing someone else with it; there just aren't enough people who are familiar enough with any of it. So I could see the contributions being minimized or eliminated just because some people don't understand them, similar to what happens with a lot of community outreach, which ends up simply lumped into service. 2. Smaller: Some of the criteria are more fine-grained than those we use to assess articles (for better or for worse). For example, by convention first and last author are places of prestige for articles, so we often distinguish only between "lead" and "non-lead" authors. Any time there are more options, as there are here, it is more difficult for people to judge. Maybe something as simple as "major" and "minor" contributor? Or something like grant roles: PI, co-I, key personnel? But something simpler and possibly more aligned with other systems we already use.

24.5.2022 06:11 Comment on "Research Software in Academic Hiring and Promotion: A proposal for how to assess it" by "Research Software in Academic Hiring and Promotion: ..." (pingback)
[…] article was first published on R – nicebread.de, and kindly contributed to R-bloggers. (You can report issues about the content on this page here.) […]

23.5.2022 13:51 Comment on "Research Software in Academic Hiring and Promotion: A proposal for how to assess it" by Adina Wagner
Thanks for this great draft! I have a few unconnected thoughts (apologies in case I have missed how they are already addressed in the draft; I could not find Appendix A, so they may all be covered there): 1) While R may currently be the predominant language in psychology, other languages might take its place in the future or at least become more common. I think it is valid to have R-based examples, but a language-agnostic general operationalization could improve the draft. For example, a phrase like "software available in language-appropriate repositories or package indices/registries" would not restrict the examples to CRAN for R, but also make room for other registries (e.g., PyPI for Python, npm for JavaScript, JuliaPackages for Julia, GitHub for Go, etc.) and for package indices that do not exist yet. 2) I believe the number of downloads of a package is a very intuitive indication of scientific impact, but I want to note that this index can be heavily conflated by continuous-integration testing, which can increase the number of downloads significantly. In our project we performed an alternative calculation based on https://popcon.debian.org (Debian users can opt in to report which packages they install to a "popularity contest", from which one can extrapolate). In any case, getting an accurate count of usage/users is a difficult problem; as an open-source software maintainer myself, I would also dislike having to incorporate usage tracking into my software to pacify hiring committees (I don't have a solution, but wanted to voice this concern). And lastly, different packages have different installation patterns: a Python package that is reinstalled in every virtual environment of a researcher will accumulate more downloads than a system package for a compute cluster that is only upgraded once a year. 3) A different metric for impact may be a discipline-specific Technology Readiness Level (https://en.wikipedia.org/wiki/Technology_readiness_level), although I am not aware of a psychology-specific one. 4) I stumbled over the item "Date of most recent major release". I know quite a few projects (Python ones; I am not well-versed in the R universe) that took decades to arrive at their first major release, at least if we are talking about semantic versioning (SemVer 2.0.0, https://semver.org), where a major release would be, for example, 2.0.0, a minor one 0.2.0, and a patch release 0.0.2. For example, SciPy 1.0 was released almost 20 years after its initial release, MNE-Python (https://github.com/mne-tools/mne-python/) just recently released 1.0 after 13 years, pymer4 (https://github.com/ejolly/pymer4) has not reached a major release yet, etc. Focusing on major releases only would in those cases paint a wrong picture (or sabotage proper semantic versioning); a minor release may be better suited to capture a package's maintenance. Thanks for the great proposal; I hope my comments contain some relevant thoughts. :)
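Point 4) above is straightforward to operationalize once a package's release history is available. The following base-R sketch computes the "date of most recent minor or major release" from a version list; the versions and dates are invented purely for illustration (for real CRAN packages, a release history could be obtained, e.g., with pkgsearch::cran_package_history(), assuming that package fits the workflow).

```r
# Minimal sketch: "date of most recent minor or major release" under
# semantic versioning (MAJOR.MINOR.PATCH). The history below is invented.
releases <- data.frame(
  version = c("0.9.0", "0.9.1", "1.0.0", "1.0.1", "1.1.0"),
  date    = as.Date(c("2019-03-01", "2019-06-15", "2021-11-02",
                      "2022-01-20", "2022-04-30"))
)

# Extract the major.minor component of each version string:
v  <- unclass(package_version(releases$version))  # list of integer vectors
mm <- sapply(v, function(x) paste(x[1], x[2], sep = "."))

# A release counts as "minor or major" when major.minor changed relative
# to the previous release; pure patch bumps (x.y.1 -> x.y.2) are ignored.
is_bump <- c(TRUE, mm[-1] != mm[-length(mm)])
max(releases$date[is_bump])  # here: "2022-04-30" (the 1.1.0 release)
```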
23.5.2022 11:38 Research Software in Academic Hiring and Promotion: A proposal for how to assess it
In 2021, the German Psychological Society (DGPs) signed the DORA declaration. In consequence, they recently installed a task force with the goal to create a recommendation for how a responsible research assessment could be practically implemented in hiring and promotion within Continue Reading ...

17.12.2021 13:31 Comment on "Bullshit-Bingo: Wie “Testosteron” Manager beeinflusst" by Oliver C. Schultheiss
Dear Felix, a fascinating analysis, with whose conclusions I fully agree. The whole thing reminds me of a paper that I received for review more than ten years ago and that pursued a similar strategy. In that case, it was not age but time of day that was used as a proxy for testosterone level, since testosterone levels, like all kinds of other hormonal and physiological systems, follow a circadian rhythm. The conclusions consisted of similarly bold claims, which were then of course pinned entirely on testosterone. I recommended rejection back then, and the paper was indeed not published in the journal in question. But I am sure it found a home at some other journal. As a publication zombie. Best wishes, Oliver

18.11.2021 07:18 My personal reviewing policy: No more billion-dollar donations
I want to invest my reviewing work in research that is worth being reviewed. Furthermore, I do not want to increase the billion-dollar donations to premium publishers any further. When deciding whether to accept or decline a review request, I apply the following heuristics: Continue Reading ...

19.2.2021 09:47 Comment on "Introducing the p-hacker app: Train your expert p-hacking skills" by "P-hacking References – @malc_i" (pingback)
[…] Felix Schönbrodt’s blog. (n.d.). Retrieved January 23, 2020, from https://www.nicebread.de/introducing-p-hacker/ […]

19.2.2021 09:46 Comment on "Introducing the p-hacker app: Train your expert p-hacking skills" by "PRACTICAL P-VALUES – @malc_i" (pingback)
[…] tool you could try: a prototype app that trains you to be a better p-hacker! Start with the introductory blog post here and see what you can […]

31.7.2020 16:09 Comment on "Amazing fMRI plots for everybody!" by "On the origin of psychological research practices, with special regard to self-reported nostril width ..." (pingback)
[…] Figure 2. The Goldilocks principle of optimal understanding. Illustration courtesy of Dr. Lexis “Lex” Brycenet […]

4.2.2020 12:55 A (non-viral) copyleft/sharealike license for open research data
by Felix Schönbrodt & Roland Ramthun. The open availability of scientific material (such as research data, code, or other material) has often been identified as one cornerstone of a trustworthy, reproducible, and verifiable science. At the same time, the actual Continue Reading ...

17.12.2018 12:14 Differentiate the Power Motive into Dominance, Prestige, and Leadership: New Tool and Theory
This is a guest post by Felix Suessenbach. Link to the publication: https://doi.org/10.1002/per.2184. Download the preprint for free: https://psyarxiv.com/vquyh. Reproducible R scripts, codebooks, and open data for all studies: https://osf.io/uxtq2/. Download the Dominance, Prestige, and Leadership scales here: English: https://osf.io/n4pr5/; German: https://osf.io/hrnsz/. Continue Reading ...

14.11.2018 09:00 Gazing into the Abyss of P-Hacking: HARKing vs. Optional Stopping
by Angelika Stefan & Felix Schönbrodt. Almost all researchers have experienced the tingling feeling of suspense that arises right before they take a look at long-awaited data: Will they support their favored hypothesis? Will they yield interesting or even groundbreaking Continue Reading ...

25.6.2018 11:17 Hiring Policy at the LMU Psychology Department: Better have some open science track record
In 2015, the psychology department at LMU Munich for the first time announced a professorship position with an “open science statement” (see the original job description here): Our department embraces the values of open science and strives for replicable and reproducible Continue Reading ...

1.6.2017 08:24 Correcting bias in meta-analyses: What not to do (meta-showdown Part 1)
tl;dr: Publication bias and p-hacking can dramatically inflate effect size estimates in meta-analyses. Many methods have been proposed to correct for such bias and to estimate the underlying true effect. In a large simulation study, we studied which methods do Continue Reading ...

9.5.2017 08:50 Assessing the evidential value of journals with p-curve, R-index, TIVA, etc.: A comment on Motyl et al. (2017) with new data
Recently, Matt Motyl et al. (2017) posted a preprint in which they contrasted the evidential value of several journals in two time periods (2003-2004 vs. 2013-2014). The paper sparked a lot of discussion in Facebook groups [1][2], blog posts Continue Reading ...

15.2.2017 14:44 German Psychological Society fully embraces open data, gives detailed recommendations
tl;dr: The German Psychological Society developed and adopted new recommendations for data sharing that fully embrace openness, transparency, and scientific integrity. The key message is that raw data are an essential part of an empirical publication and must be openly shared. Continue Reading ...

17.1.2017 15:53 Two meanings of priors, part II: Quantifying uncertainty about model parameters
by Angelika Stefan & Felix Schönbrodt. This is the second part of “Two meanings of priors”. The first part explained the first meaning: “priors as subjective probabilities of models”. While the first meaning of priors refers to a global Continue Reading ...