Category Archives: Miscellaneous

Open mind, open science [Speech for Tilburg uni board]

This is a speech I gave for the board of Tilburg University on why open science is important for the future of Tilburg University (or any knowledge institute, honestly). The speech was given on March 9, 2017.

We produce loads of knowledge at this university, and yet we throw most of that knowledge away. We are throwing away taxpayers’ money, stifling scientific discovery, and hampering the curiosity of our students and society.

Research data; peer reviews; research materials; developed software; research articles — they are literally and figuratively thrown away. But these are all building blocks for new knowledge, and the more building blocks available, the more knowledge we can build. Science is a bit like Legos in that sense: more pieces allow you to build greater structures.

Students can use these building blocks to learn better — learn about the real, messy process of discovery for example. Businesses can create innovative tools for information consumption by taking the hitherto unavailable information and reshaping it into something valuable. Citizens can more readily participate in the scientific process. And last but not least, science can become more accurate and more reliable.

Researchers from different faculties and staff members from different support services see the great impact of open science, and today I call on you to make it part of the new strategic plan.

Let’s as a university work towards making building blocks readily available instead of throwing or locking them away.

Open Access and Open Data, two of these building blocks, have already been on the board’s agenda. Since 2016, all researchers at Tilburg have been mandated to make their publications freely available, and by 2024 those publications must also be free to re-use. For Open Data, plans are already in motion to make all data open as of 2018, across all faculties, as I was happy to read in a recent memorandum. So, why am I here?

Open Access and Open Data are part of the larger idea of Open Science; they are only two of many building blocks. Open science means that all pieces of research are available for anyone to use, for any purpose.

Advancing society is only possible if we radically include society in what we do. The association of Dutch universities, state secretary Sander Dekker, and 17 other organizations have underscored the importance of this when they signed the National Plan Open Science just a few weeks ago.

So I am happy to see the landscape is shifting from closed to open. However, it is happening slowly, and it will remain incomplete if we focus only on data and access. Today, I want to focus on one of the biggest problems facing science, which open science can solve: selective publication.

Researchers produce beautifully written articles that read like the script of a good movie: they set the scene with an introduction and methods, have beautiful results, and provide a happy ending in the discussion that makes us think we actually understand the world. And just like movies, not all are available in the public theatre.

But research isn’t about good and successful stories that sell; it is about accuracy. We need to see not just the successful stories but all stories, and we need to see the entire story, not just the exciting parts. Only then can we efficiently produce new knowledge and understand society.

Because good movies pretend to find effects even if there is truly nothing to find. Here, researchers investigate the relation between jelly beans and pimples.

So they start researching. Nothing.

xkcd comic "Significant"; https://www.xkcd.com/882/

xkcd comic “Significant”; https://www.xkcd.com/882/

More studies; nothing.

More studies; an effect and a good story!

More studies; nothing.

And what is shared? The good story. While there is utterly nothing to find. And this happens daily.

Researchers fool each other, and themselves; it has been shown time and time again that we researchers see what we want to see. This human bias greatly harms science.

The results are disconcerting. Psychology; cancer research; life sciences; economics — all fields have issues with providing a valid understanding of the world, to a worrying extent. This is due to researchers fooling themselves and confirming prior beliefs to produce good movies instead of being skeptical and producing accurate, good science.

So I and other members across the faculties and services say: Out with the good movies, in with the good science we can actually build on — OPEN science.

Sharing all research that is properly conducted is feasible and will increase the validity of results. Moreover, it will lead to less research waste. We could become the first university to share all our research output. All of this is based on a realistic notion of a researcher: do not evaluate results on whether they are easy to process, whether they confirm your expectations, or whether they provide a good story; evaluate them on their methods.

But please, please, members of our university, do not expect this change to open science to come easily or through magically installing a few policies!

It requires a cultural shift that branches out into the departments and even the individual researchers’ offices. Policies don’t necessarily result in behavior change.

And as a researcher, I want to empirically demonstrate that policy doesn’t necessarily result in behavioral change.

Here is my Open Access audit for this university. Even though policies have been put in place by the university board, progress is absent, and we are actually doing worse at making our knowledge available to society than in 2012. This way we will not reach ANY of the Open Access goals we have set.

Open Access audit Tilburg University; data and code: https://github.com/libscie/access-audit

In sum, let us advance science by making it open, which in turn will help us advance society. I will keep fighting for more open science, and I encourage anyone present, student or staff, to do so as well. I am here to help.

Open science is out of the box, and it won’t go back in. The question is, what are we as a university going to do with that knowledge?


DNase Treatment – Abalone Water Filters for RLO Viability

The RNA I isolated earlier today was subjected to DNase treatment using the Turbo DNA-free Kit (Invitrogen), following the manufacturer’s standard protocol.

After the DNase inactivation treatment, the RNA (~19 µL recovered from each sample) was transferred to a clear, low-profile PCR plate.

The plate layout is here (Google Sheet): 20170309_RLO_viability_DNased_RNA_plate_layout

The samples will be subjected to qPCR to assess the presence/absence of residual gDNA. The plate of DNased RNA was stored at -80C in the original box that the water filters had been stored in.

An overview of the experiment and the various treatments is viewable in the “Viability Trial 3” tab of Lisa’s spreadsheet (Google Sheet): RLO Viability & ID50


RNA Isolation – Abalone Water Filters for RLO Viability

Water filters stored at -80C in ~1mL of RNAzol RT were provided by Lisa. This is part of an experiment (and Capstone project) to assess RLO viability outside of the host.

The samples were thawed and briefly homogenized (as best I could) with a disposable plastic pestle. The samples were then processed according to the manufacturer’s protocol for total RNA isolation. Samples were resuspended in 25μL of nuclease-free water (Promega).

Immediately proceeded to DNase treatment.

The experimental samples and the various treatments are viewable in the “Viability Trial 3” tab of Lisa’s spreadsheet (Google Sheet): RLO Viability & ID50


PBS recipe

1X Phosphate Buffered Saline (PBS Buffer) Recipe

Dissolve in 800ml distilled H2O:

8g of NaCl
0.2g of KCl
1.44g of Na2HPO4
0.24g of KH2PO4

Adjust pH to 7.4 with HCl.

Adjust volume to 1L with distilled H2O.

Sterilize by autoclaving.
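
If you need a different batch size, here is a minimal sketch (in Python, using the gram amounts from the recipe above) for scaling the salts to an arbitrary final volume; the 500 mL example is only an illustration:

```python
# Gram amounts per 1 L of 1X PBS, taken from the recipe above
RECIPE_G_PER_L = {
    "NaCl": 8.0,
    "KCl": 0.2,
    "Na2HPO4": 1.44,
    "KH2PO4": 0.24,
}

def pbs_amounts(final_volume_ml: float) -> dict:
    """Return grams of each salt for the requested final volume (in mL)."""
    return {salt: grams * final_volume_ml / 1000.0
            for salt, grams in RECIPE_G_PER_L.items()}

# Example: a 500 mL batch
for salt, grams in pbs_amounts(500).items():
    print(f"{salt}: {grams:.2f} g")
```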


Manuscript Writing – More “Nuances” Using Authorea

I previously highlighted some of the issues I was having using Authorea.com as a writing platform.

As a collaborative writing platform, it also has issues.

I recently received email notifications about comments made on the manuscript. However, when visiting my manuscript on Authorea, there are no indications that any comments have been made…

As it turns out, comments are currently only viewable when using Private Browsing/Incognito modes on your browser!!!

I found this out by using the chat feature that’s built into Authorea. This feature is great, and support is pretty quick to respond.

However, Josh at Authorea suggested this bug would be resolved by the end of the day (that was on February 24th). I took the screenshots of my manuscript above today, February 28th, demonstrating that comments still don’t show up when using a browser like a normal person…

Another significant shortcoming of using Authorea as a collaborative writing platform, as it relates to comments:

  • You can’t reply to individual comments! You can only add comments in chronological order for any given section of the manuscript. For example, if multiple comments are made in the Methods section, it makes it extremely difficult to address individual comments that were made earlier, in a clear fashion. To reply to a comment, you have to type out which previous comment you’re currently addressing in the comment you’re writing to address a particular previous comment. See what I mean?

Data Received – Jay’s Coral RADseq and Hollie’s Geoduck Epi-RADseq

Jay received notice from UC Berkeley that the sequencing data from his coral RADseq was ready. In addition, the sequencing contains some epiRADseq data from samples provided by Hollie Putnam. See his notebook for multiple links that describe library preparation (indexing and barcodes), sample pooling, and species breakdown.

For quickest reference, here’s Jay’s spreadsheet with virtually all the sample/index/barcode/pooling info (Google Sheet): ddRAD/EpiRAD_Jan_16

I’ve done the following:

  • Downloaded both the demultiplexed and non-demultiplexed data.
  • Verified data integrity by generating and comparing MD5 checksums.
  • Copied the files to each of the three species folders on owl/nightingales that were sequenced (Panopea generosa, Anthopleura elegantissima, Porites astreoides).
  • Generated and compared MD5 checksums for the files in their directories on owl/nightingales.
  • Created/updated the readme files in each respective folder.
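
For illustration only, here is a minimal sketch of the kind of MD5 verification described above; it is not the code from the Jupyter notebook linked below, and the file path and expected checksum are hypothetical:

```python
import hashlib
from pathlib import Path

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 checksum of a file, reading it in 1 MB chunks."""
    md5 = hashlib.md5()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

# Hypothetical example: compare a downloaded file against the checksum
# provided by the sequencing facility.
local = md5sum(Path("demultiplexed/sample_ACTG.fastq.gz"))
expected = "d41d8cd98f00b204e9800998ecf8427e"  # placeholder value from the provider's md5 list
print("OK" if local == expected else "MISMATCH")
```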


Data management is detailed in the Jupyter notebook below. The notebook is embedded in this post, but it may be easier to view on GitHub (linked below).

Readme files were updated outside of the notebook.

Jupyter notebook (GitHub): 20170227_docker_jay_ngs_data_retrieval.ipynb


Sample Prep – Pinto Abalone Tissue/RNA for Collabs at UC-Irvine

We need to send half of each sample that we have from Sean Bennett’s Capstone project to Alyssa Braciszewski at UC-Irvine.

This is quite the project! There are ~75 samples, and about half of those are tissues (presumably digestive gland) stored in RNAzol RT. The remainder are RNA that has already been isolated. Additionally, tube labels are not always clear and there are duplicates. All of these factors meant that deciphering and processing all the samples took an entire day.

I selected only those samples whose identity I was confident of.

I aliquoted 25μL of each RNA for shipment to Alyssa.

Tissue samples were thawed and tissue was cut in half using razor blades.

Planning to send samples on Monday.

Lisa has already assembled a master spreadsheet to try to keep track of all the samples and what they are (Google Sheet): Pinto Transcriptome

Here’s the list of samples I’ll be sending to Alyssa (Google Sheet): 20170222_pinto_abalone_samples

Here are some images to detail some of the issues I had to deal with in sample ID/selection.


Symbiodinium cp23S Re-PCR

Yesterday I completed some re-do PCRs of Symbiodinium cp23S from the branching Porites samples Sanoosh worked on over the past summer. Some of the samples did not amplify at all, so I reattempted PCR on these samples (107, 108, 112, 116). Sample 105 amplified last summer but the sequence was lousy, so I redid that one too. After the first PCR, I took 1 µl of the product and diluted it 1:100 in water. I then used 1 µl of this diluted product as the template for a second round of PCR. PCR conditions were the same ones Sanoosh and I used last summer (based on Santos et al. 2002):

Reagent (volume in µl):

water: 17.2
5X Green Buffer: 2.5
MgCl2 (25 mM): 2.5
dNTP mix (10 mM): 0.6
Go Taq (5 U/µl): 0.2
primer 23S1 (10 µM): 0.5
primer 23S2 (10 µM): 0.5
master mix volume: 24
sample: 1
total volume: 25

Initial denaturing period of 1 min at 95 °C, 35 cycles of 95 °C for 45 s, 55 °C for 45 s, and 72 °C for 1 min, and a final extension period of 7 min.
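
As a quick arithmetic check, here is a minimal sketch that scales the per-reaction volumes from the table above to a given number of reactions; the 10% pipetting overage is my own assumption, not part of the original protocol:

```python
# Per-reaction master mix volumes (µl), taken from the table above
MASTER_MIX_UL = {
    "water": 17.2,
    "5X Green Buffer": 2.5,
    "MgCl2 (25 mM)": 2.5,
    "dNTP mix (10 mM)": 0.6,
    "Go Taq (5 U/µl)": 0.2,
    "primer 23S1 (10 µM)": 0.5,
    "primer 23S2 (10 µM)": 0.5,
}

def scale_master_mix(n_reactions: int, overage: float = 0.1) -> dict:
    """Scale per-reaction volumes to n_reactions plus a pipetting overage."""
    factor = n_reactions * (1 + overage)
    return {reagent: round(vol * factor, 2) for reagent, vol in MASTER_MIX_UL.items()}

# Example: 5 reactions (samples 105, 107, 108, 112, 116)
for reagent, vol in scale_master_mix(5).items():
    print(f"{reagent}: {vol} µl")
```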

Samples were then run on a 1% agarose gel for 30 min at 135 volts.

[Gel image: 021217_cp23s]

Surprisingly, the first round of PCR amplified samples 107 and 112 (note: two subsamples of each were run, one being the original extraction diluted (d) and the other being the original extraction cleaned with the Zymo OneStep PCR Inhibitor Removal Kit (c)). The cleaned samples were the ones that amplified. I believe Sanoosh had tried these cleaned samples with no success.

The second round of PCR produced faint bands for both of the 108 samples. Sample 116 still did not amplify.

I cleaned the samples with the NEB Monarch Kit and shipped them today to Sequetech. I combined the two 108 samples to ensure enough DNA for sequencing.


RAD library prep

This is a belated post for some RAD library prep I did the week of January 23rd in the Leache Lab. I followed the same ddRAD/EpiRAD protocol I used in August. Samples included mostly Porites astreoides from the transplant experiment, as well as some geoduck samples from the OA experiment and a handful of green and brown Anthopleura elegantissima. Sample metadata can be found here. The library prep sheet is here. The TapeStation report is here.

Below is the gel image from the TapeStation report showing that the size selection was successful. However, the selection produced fragments with a mean size of 519-550 base pairs, whereas the size selection in August produced ~500 bp fragments. While there will obviously be some overlap between libraries, combining samples from the two libraries may be problematic. This occurred despite identical Pippin Prep settings targeting 415-515 bp fragments. Libraries were submitted to UC Berkeley on 1/31/17 for 100 bp paired-end sequencing on the HiSeq 4000.

Library JD002_A-L



Interview Danish Psychology Association responses

Below, I copy my responses to an interview for the Danish Psychology Association; my responses follow each question. I don’t know when the article will be shared, but I am posting my responses here, licensed CC0. This is also my way of sharing the full responses, which won’t be copied verbatim into the article because they are simply too lengthy.

What do you envision that this kind of technology could do in a foreseeable future?

What do you mean by “this” kind of technology? If you mean computerized tools assisting scholars, I think there is massive potential both in the development of new tools to extract information (for example, what ContentMine is doing) and in their application. Some formidable means are already here. For example, how much time do you spend as a scholar producing your manuscript when you want to submit it? This does not need to cost half a day when there are highly advanced, modern submission managers. The same goes for submitting revisions. Additionally, annotating documents collaboratively on the Internet with hypothes.is is great fun, highly educational, and productive. I could go on and on about the potential of computerized tools for scholars.

Why do you think this kind of computerized statistical policing is necessary in the field of psychology and in science in general?

Again, what is “this kind of computerized statistical policing”? I assume you’re talking about statcheck only for the rest of my answer. Moreover, it is not policing: a spell-checker does not police your grammar, it helps you improve your grammar. statcheck does not police your reporting, it helps you improve your reporting. Additionally, I would like to reverse the question: should science not care about the precision of scientific results? With all the rhetoric going on in the USA about ‘alternative facts’, I think it highlights how dangerous it is to let go of our desire to be precise in what we do. Science’s imprecision has trickle-down effects in the policies that are subsequently put in place, for example. We put in all kinds of creative and financial effort to progress our society; why should we let it be diminished by simple mistakes that can be prevented so easily? If we agree that science has to be precise in the evidence it presents, we need to take steps to make sure it is. Making a mistake is not a problem; it is all about how you deal with it.

So far the Statcheck tool only checks whether the math behind the statistical calculations in published articles is wrong when null-hypothesis significance testing has been used; this is what you refer to as reporting errors in your article from December last year, published in Behavior Research Methods. But these findings aren’t problematic as long as the conclusions in the articles aren’t affected by the reporting errors?

They aren’t problematic? Who is the judge of whether errors aren’t problematic? If you consider just statistical significance, one in eight papers still contains such a problem. Moreover, all errors in reported results affect meta-analyses; is that not also problematic down the line? I find it a sign of hubris for any individual to say that they can determine whether something is problematic or not, when there may be many things that person doesn’t even realize can be affected. It should be open to discussion, so information about problems needs to be shared and discussed. This is exactly what I aimed to do with the statcheck reports on PubPeer for a very specific problem.

In the article in Behavior Research Methods you find that half of all published psychology papers that use NHST contain at least one p-value that is inconsistent with its test statistic and degrees of freedom, and that one in eight papers contains a grossly inconsistent p-value that may have affected the statistical conclusion. What does this mean? I’m not a mathematician.

You don’t need to be a mathematician to understand this. Say we have a set of eight research articles presenting statistical results with certain conclusions. Four of those eight will contain a reported p-value that does not match its test statistic and degrees of freedom (i.e., it is inconsistent), but the mismatch does not affect the broad strokes of the conclusion. One of those eight contains a result where the mismatch potentially nullifies the conclusion. For example, a study might conclude that a new behavioral therapy is effective at treating depression while the reported result, once recomputed, no longer supports that conclusion. That means the evidence for the therapy’s effectiveness is undermined, affecting direct clinical benefits as a result.
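
To make “inconsistent” concrete, here is a minimal sketch (statcheck itself is an R package; this is only an illustration with made-up numbers) of recomputing a two-tailed p-value from a reported t statistic and degrees of freedom:

```python
from scipy import stats

# Hypothetical APA-style report: "t(28) = 2.20, p < .01"
t_value, df = 2.20, 28

# Two-tailed p-value implied by the test statistic and degrees of freedom
recomputed_p = 2 * stats.t.sf(abs(t_value), df)
print(f"recomputed p = {recomputed_p:.3f}")  # ~0.036

# The reported "p < .01" does not match the recomputed ~.036: an inconsistency.
# Both values are still below .05, so the statistical conclusion survives;
# if the recomputed p had crossed .05, it would count as a gross inconsistency.
```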

Why are these findings important?

Science is vital to our society. Science is based on empirical evidence. Hence, it is vital to our society that empirical evidence is precise and not distorted by preventable or remediable mistakes. Researchers make mistakes; no big deal. People like to believe scientists are more objective and more precise than other humans, but we’re not. The way we build checks and balances to prevent mistakes from proliferating and propagating into (for example) policy is crucial. statcheck contributes to understanding and correcting one specific aspect of such mistakes we can all make.

Why did you decide to run the statcheck on psychology papers specifically?

statcheck was designed to extract statistical results reported as prescribed by the American Psychological Association (APA), which is one of the most standardized ways of reporting statistical results. It makes sense to apply software developed around psychology’s reporting standards to psychology.
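
To illustrate why that standardization matters for automated extraction, here is a minimal sketch (not statcheck’s actual implementation) of pulling APA-style t-test reports out of text with a regular expression; the sentence is made up:

```python
import re

# Hypothetical sentence with an APA-formatted result
text = "The groups differed significantly, t(28) = 2.20, p = .04."

# Rough pattern for reports of the form "t(df) = value, p op value"
pattern = re.compile(
    r"t\((?P<df>\d+(?:\.\d+)?)\)\s*=\s*(?P<t>-?\d+\.\d+),\s*p\s*(?P<op>[<>=])\s*(?P<p>\.\d+)"
)

for match in pattern.finditer(text):
    print(match.group("df"), match.group("t"), match.group("op"), match.group("p"))
    # 28 2.20 = .04
```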

Why do you find so many statistical errors in psychology papers specifically?

I don’t think this is a problem for psychology specifically, but more a problem of how empirical evidence is reported and how manuscripts are written.

Are psychologists not as skilled at doing statistical calculations as other scholars?

I don’t think psychologists are worse at doing statistical calculations. I think point-and-click software has made it easy for scholars to compute statistical results, but not to insert them into manuscripts reliably. Typing in those results is error prone. I make mistakes when I’m doing my finances at home, because I have to copy the numbers. I wish I had something like statcheck for my finances. But I don’t. For scientific results, I promote writing manuscripts dynamically. This means that you no longer type in the results manually, but inject the code that contains the result. This is already possible with tools such as Rmarkdown and can greatly increase the productivity of the researcher. It has saved my skin multiple times, although you still have to be vigilant for mistakes (wrong code produces wrong results).

Have you run the Statcheck tool on your own statistical NHST-testing in the mentioned article?

Yes! This was the first thing I did, way before I was running it on other papers. Moreover, I was non-selective when I started scanning other people’s papers; I apparently even made a statcheck report that got posted on PubPeer for my supervisor (see here). He laughed, because the paper was on reporting inconsistencies, and the gross inconsistency it flagged was simply an example of one in the running text. A false positive, highlighting that statcheck’s results always need to be checked by a human before concluding anything definitive.

Critics call Statcheck “a new form of harassment” and accuse you of being “a self-appointed data police”. Can you understand these reactions?

Proponents of statcheck praise it as a good service. Researchers who study how researchers conduct research have been called methodological terrorists. Any change comes with proponents and critics. Am I a self-appointed data policer? To some, maybe. To others, I am simply providing a service. I don’t chase individuals and I am not interested in that at all; I do not see myself as part of a “data police”. That people experience these reports as a reprimand highlights to me that a taboo still rests on skepticism within science. Skepticism is one of the ideals of science, so let’s aim for that.

Why do you find it necessary to send out thousands of emails to scholars around the world, informing them that their work has been reviewed and pointing out to them if they have miscalculated?

It was not necessary; I thought it was worthwhile. Why do some scholars find it necessary to e-mail a colleague about their thoughts on a paper? Because they think it is worthwhile and can help them or the original authors. Those were exactly my intentions in teaming up with PubPeer and posting those 50,000 statcheck reports.

Isn’t it necessary and important for ethical reasons to be able to make a distinction between deliberate miscalculations and miscalculations by mistake when you do this kind of statcheck?

If I were making accusations of gross incompetence towards the original authors, such a distinction would clearly be needed. But I did not make accusations at all. I simply stated the information available, without any normative or judgmental statements. Mass-scale post-publication peer review of course brings ethical problems with it, which I carefully weighed before I started posting statcheck reports with the PubPeer team. The formulation of these reports was discussed within our group, and we all agreed this was worthwhile to do.

As a journalist I can write and publish an article with one or two factual errors. This doesn’t mean the article isn’t of a generally high journalistic standard or that its content isn’t of great relevance for the public. Couldn’t you make the same argument about a scientific article? And when you catalogue these errors online, aren’t you at risk of whipping up a storm in a teacup and turning everybody’s eyes away from the actual scientific findings?

Journalists and scholars are playing different games; an offside in football is not a problem in tennis, and the comparison between journalists and scholars seems similar to me. I am not saying that an article is worthless if it contains an inconsistency; I am just saying that it is worth looking at before building new research lines on it. Psychology has wasted millions and millions of euros/dollars/pounds on chasing ephemeral effects that are totally unreasonable, as several replication projects have highlighted in recent years. Moreover, I think the general opinion of science will only improve if we are more skeptical and critical of each other instead of trusting findings based on reputation, historical precedent, or the ease with which we can assimilate them.
