As smart as we think we are, we humans are hopeless at objectivity. This struck me as I was reading about peak oil on Wikipedia. The issue is pretty serious: within the next few decades we could run out of the single most important natural resource, the one that literally and figuratively fuels our civilization. And yet people, sometimes very smart people, cannot agree on whether something ought to be done right now or whether we have centuries of uninterrupted oil supply ahead of us. Continue reading

On the 40h work week in academia

There has been a bit of talk on Twitter and among my colleagues, both in person and online, about work-life balance in academia. See, for example, this excellent post by @TheNewPI. The gist of it is: is a 40-hour work week, the norm in most other professions, feasible for academics? I think the answer is yes and no. It really depends on what your goals and priorities are. If you really want to develop an independent research program as the PI of a lab at a decent research university or institute, then I don’t think sticking to a strict 9-to-5 schedule will serve you very well. The more time (up to a certain limit) you put into your work, the better you will become and the more you will accomplish. However, I think there are a few points worth elaborating on here. Continue reading

Grieving for failed ideas

As scientists we constantly come up with ideas – solutions to more or less well-known problems in our areas of interest. The ideas may be applicable only to our specific project, or they may be general solutions to a much broader set of problems. The more specific the idea is to our area of expertise, the more likely it is to be feasible, but sometimes an outsider can see a problem from a completely new angle and come up with a novel solution that no one had thought of before. That’s why it’s sometimes good to step out of your comfort zone and try to solve problems that lie a bit outside your immediate sub-field. The flip side is that the solutions you come up with there typically have a fundamental flaw that makes them unfeasible.

I recently spent a couple of days trying to devise what I thought was an interesting strategy for addressing a relatively broad set of problems in molecular biology. It turns out that similar solutions had already been tried and work only in a very limited set of circumstances. Continue reading

Hacking the microscope

You know those super-expensive accessories that research equipment manufacturers make you pay through the nose for? A while back I bought a pretty expensive Olympus microscope. Since my budget was limited and I really wanted a best-of-the-best objective lens and a state-of-the-art camera, I had to find savings wherever I could. For that reason I did not get the filter wheel/shutter hand switch, which is a must if you want to operate the scope without the software running. Having to launch the cellSens software just to switch fluorescence filters was so frustrating that I caved in and requested a quote for the switch. Believe it or not, this plastic box with a couple of buttons costs almost $1,000. Continue reading

Can we solve research misconduct?

Just the other day I was reading a Nature opinion piece on deliberate research misconduct: how it affects the reproducibility of science, and what we can do about it. The authors proposed a number of solutions, but most of them focused on punishing the perpetrators. Punishments should be more severe, they argued, PIs should be held accountable for their trainees’ misconduct, and institutions should be forced to return money gained through research dishonesty. I’m not sure I agree with this approach. Continue reading

The gory details – experiment design guide for beginners

In the previous post I outlined a strategy that helps beginner scientists grasp the complexity of a research project – something they often struggle with. From the “tree of life” view of a research project there is a rather straightforward path to understanding the significance of each experiment they perform and interpreting its results. I recommend that every beginner scientist answer (ideally in writing) the following questions before they even touch a pipette:

  1. What is the goal of this experiment?
  2. What is the hypothesis?
  3. What is the approach?
  4. What are the experimental groups and controls?
  5. What are the expected results?

Continue reading

The big picture – project perception guide for beginner scientists

One of the things I have learned the hard way during my first year as a group leader is how easy it is to overestimate the ability of beginner scientists – graduate students, interns, undergraduates – to grasp the “big picture” of a project. Beginners are usually quite good at performing experiments and processing results, but they typically find it very hard to see where a given experiment fits in the grand scheme of their project. I would like to propose a framework that will make it easier for beginners and their mentors to stay on the same page when it comes to the big picture of their projects. Continue reading

Academaze – the awesome new book by Xykademiqz

The road of an academic scientist is long and tortuous. Many pitfalls await the hapless young researcher, so good advice is always welcome. I have been relying on Twitter and a bunch of great blogs (see my blogroll) to get my fix of do’s and don’ts in academia, and it has served me well – I now have the coveted group leader position at a major university in Poland, and I use the blogroll as a constant source of information and support in my ivory-tower woes. One of my favorite blogs has always been xykademiqz, written by a successful mid-career group leader in the physical sciences. The blog is filled with great advice and down-to-earth musings on the “human” side of research. That’s why I was really excited to hear that xykademiqz was coming out with a book – as I understand it, a collection of her most successful blog entries organized into an engaging narrative. The book is here, and I got a glimpse of it as an early-access reader. Let me tell you: it is pure gold! Continue reading

The use and abuse of metrics in science – the Polish perspective

I have just returned from a debate at the Nencki Institute about the use of metrics in science evaluation. The debate consisted of two presentations and an open discussion with the audience. The first presentation was by Prof. Leszek Kaczmarek of the Nencki Institute, who has extensive experience as a referee for both Polish and European granting agencies and is actively involved in policymaking for the Polish National Science Centre. The second was by Prof. Karol Życzkowski of the Jagiellonian University and the Center of Theoretical Physics, who has also served on many review committees for Polish and international grants. Readers who follow research-related blogs and the usual suspects on Twitter will be familiar with the main points covered in the discussion: the hegemony of the Impact Factor and why it is a terrible metric of the quality of science, the impossible task of judging and ranking the increasingly specialized and esoteric achievements of our peers, the heterogeneity of research fields and differences in citation practices, etc. A few of the points discussed, however, are specific to Poland, and I would like to dwell on them a little longer. Continue reading

Selectivity is important in research publishing

I have had an interesting discussion with Mick Watson on his blog, opiniomics, regarding open- vs. closed-access publishing models. Mick is a huge advocate of open access, and many of his points in support of this model are valid, but we couldn’t agree on one aspect of the debate. My argument in defense of the closed-access, paywalled model of scientific publishing is that its incentives are aligned so that the publisher is motivated to be selective – it will pick the best papers, the ones most likely to be read and cited by its clients, the readers. The open access model, on the other hand, pushes the publisher to lower its standards – if the standards are too high, none of its prospective clients, the authors, will want to publish there. Note the important dichotomy: in the closed access model the client is the reader; in the open access model the client is the author. Continue reading