Project priorities – the skeleton vs the meat

In talking to many grad students and postdocs I have found that quite a few find it difficult to prioritize experiments in their projects. Creating a tree of life for a project should help some, but still, the tree has many branches, and in principle one could start with any of them to get to the root. What I have found to be a simple solution is to divide the project into a “skeleton” and “meat”. The skeleton consists of the experiments that actually answer the key question the project is all about. Think of it this way: would the title of your paper change if the experiment came out one way or the other? If it would, this is a skeleton experiment, and it is your top priority. Continue reading

The use and abuse of metrics in science – the Polish perspective

I have just returned from a debate at the Nencki Institute about the use of metrics in science evaluation. The debate consisted of two presentations and an open discussion with the audience. The first presentation was by Prof. Leszek Kaczmarek from the Nencki, who has had a lot of experience as a referee for both Polish and European granting agencies and is actively involved in policymaking for the Polish National Science Centre. The second presentation was by Prof. Karol Życzkowski from the Jagiellonian University and the Center of Theoretical Physics, who has also served on many review committees for Polish and international grants. Those of my readers who follow research-related blogs and the usual suspects on Twitter will be familiar with the main points covered in the discussion – the hegemony of the Impact Factor and what a terrible metric of the quality of science it is, the impossible task of judging and ranking the increasingly specialized and esoteric achievements of our peers, the heterogeneity of research fields and the differences in citation practices, etc. A few of the points discussed, however, are, I think, specific to Poland, and I would like to dwell on them a little longer. Continue reading

Functions of scientific publishing

This is a really exciting time for scientific publishing. The shift from traditional paper-based journals to online distribution has had a huge impact on the industry and has led to the rise of new publishing paradigms, the most prominent among them being open access. In contrast to traditional subscription-based distribution, open access makes articles available to all interested parties and covers the publication costs by other means, most often by requiring that authors pay for publication. I have recently argued that this model forces journal editors to be somewhat less selective about what they allow to be published, because only a published article brings in money, whereas a rejected manuscript is a “waste” of the editor’s time, which is probably the most costly resource in publishing nowadays. Many have argued that the selectivity of journals, especially those at the very top of the journal prestige ladder, is in fact deleterious to science: it delays publication and favors “glammy” papers that oversell the data and bend reality to fit a nice “story”. There are definitely cases of that happening in super-prestigious journals, and there is no doubt that the review process takes a long time and is often a very frustrating experience for the authors. So should we give up on the idea of selectivity altogether? To answer that question, we first must address another, more fundamental question: what is the purpose of scientific publishing as it stands now? Continue reading

Should open access be paid for by funding agencies?

Lenny Teytelman came out with a blog post arguing that closed-access scientific publishing is anti-science, based on the premise that closed-access publishers refused to share protocols published in their journals with his open-access (but for-profit) protocol website, protocols.io.

I don’t think we can call either publishing model anti-science – publishers have to make ends meet (or make a profit), and they accomplish that in several different ways. Continue reading

Selectivity is important in research publishing

I have had an interesting discussion with Mick Watson on his blog, opiniomics, regarding open vs. closed access publishing models. Mick is a huge advocate of open access, and many of his points in support of this model are valid, but we couldn’t agree on one aspect of the open vs. closed access debate. My argument in defense of the closed-access, paywalled model of scientific publishing is that the incentives in this model are aligned so that the publisher is motivated to be selective – they will pick the best papers, the ones most likely to be read and cited by their clients, the readers. The open access model, on the other hand, forces the publisher to lower their standards – if the standards are too high, none of their prospective clients, the authors, will want to publish there. Please note the important dichotomy: in the closed access model the client is the reader; in the open access model the client is the author. Continue reading

eLife comes out with incremental “Research Advances”

Another interesting idea from the editors of eLife. After introducing “meta” peer review, in which the reviewers talk to each other and try to reach a consensus on whether the paper is suitable for publication, they are now experimenting with an idea that many of us would like very much indeed – publication of incremental advances building on a previously published research paper. There is a catch, though: the original paper must also have been published in eLife. Continue reading