Interview responses for the Danish Psychology Association

Below, I copy my responses to an interview for the Danish Psychology Association. My responses are in italic. I don’t know when the article will be shared, but I am posting my responses here, licensed CC0. This is also my way of sharing the full responses, which won’t be copied verbatim into an article because they are simply too lengthy.

What do you envision that this kind of technology could do in a foreseeable future?

What do you mean by “this” kind of technology? If you mean computerized tools assisting scholars, I think there is massive potential in both the development of new tools to extract information (for example, what ContentMine is doing) and in their application. Some formidable means are already here. For example, how much time do you spend as a scholar producing your manuscript when you want to submit it? This does not need to cost half a day when there are highly advanced, modern submission managers. The same goes for submitting revisions. Additionally, annotating documents collaboratively on the Internet with hypothes.is is great fun, highly educational, and productive. I could go on and on about the potential of computerized tools for scholars.

Why do you think this kind of computerized statistical policing is necessary in the field of psychology and in science in general?

Again, what is “this kind of computerized statistical policing”? I assume you’re talking about statcheck only for the rest of my answer. Moreover, it is not policing: a spell-checker does not police your grammar, it helps you improve your grammar. statcheck does not police your reporting, it helps you improve your reporting. Additionally, I would like to reverse the question: should science not care about the precision of scientific results? With all the rhetoric going on in the USA about ‘alternative facts’, it highlights how dangerous it is to let go of our desire to be precise in what we do. Science’s imprecision has trickle-down effects, for example in the policies that are subsequently put in place. We put all kinds of creative and financial effort into progressing our society; why should we let that be diminished by simple mistakes that can be prevented so easily? If we agree that science has to be precise in the evidence it presents, we need to take steps to make sure it is. Making a mistake is not a problem; it is all about how you deal with it.

So far the statcheck tool only checks whether the math behind the statistical calculations in published articles is wrong when null-hypothesis significance testing has been used. This is what you refer to as reporting errors in your article from December last year, published in Behavior Research Methods. But these findings aren’t problematic as long as the conclusions in the articles aren’t affected by the reporting errors?

They aren’t problematic? Who is the judge of whether errors aren’t problematic? If you consider just statistical significance, one in eight papers still contains such a problem. Moreover, all errors in reported results affect meta-analyses; is that not also problematic down the line? It shows hubris for any individual to claim they can determine whether something is problematic or not, when many things can be affected that that person does not even realize. It should be open to discussion, so information about problems needs to be shared and discussed. This is exactly what I aimed to do with the statcheck reports on PubPeer for one very specific problem.

In the article in Behavior Research Methods you find that half of all published psychology papers that use NHST contained at least one p-value that was inconsistent with its test statistic and degrees of freedom, and that one in eight papers contained a grossly inconsistent p-value that may have affected the statistical conclusion. What does this mean? I’m not a mathematician.

You don’t need to be a mathematician to understand this. Say we have a set of eight research articles presenting statistical results with certain conclusions. Four of those eight will contain a reported result whose numbers do not match up (i.e., an inconsistency) but whose broad conclusions are unaffected. One of those eight contains a result whose numbers do not match up in a way that potentially nullifies the conclusion. For example, suppose a study concludes that a new behavioral therapy is effective at treating depression, but the result underlying that conclusion is grossly inconsistent. The evidence for the therapy’s effectiveness is then undermined, which affects direct clinical benefits as a result.

Why are these findings important?

Science is vital to our society. Science is based on empirical evidence. Hence, it is vital to our society that empirical evidence is precise and not distorted by preventable or remediable mistakes. Researchers make mistakes; no big deal. People like to believe scientists are more objective and more precise than other humans, but we’re not. The way we build checks and balances to prevent mistakes from proliferating and propagating into (for example) policy is crucial. statcheck contributes to understanding and correcting one specific aspect of such mistakes we can all make.

Why did you decide to run the statcheck on psychology papers specifically?

statcheck was designed to extract statistical results reported as prescribed by the American Psychological Association. It is one of the most standardized ways of reporting statistical results. It makes sense to apply software developed on standards in psychology to psychology.

Why do you find so many statistical errors in psychology papers specifically?

I don’t think this is a problem for psychology specifically, but more a problem of how empirical evidence is reported and how manuscripts are written.

Are psychologists not as skilled at doing statistical calculations as other scholars?

I don’t think psychologists are worse at doing statistical calculations. I think point-and-click software has made it easy for scholars to compute statistical results, but not to insert them into manuscripts reliably. Typing in those results is error-prone. I make mistakes when I’m doing my finances at home, because I have to copy the numbers. I wish I had something like statcheck for my finances. But I don’t. For scientific results, I promote writing manuscripts dynamically. This means that you no longer type in the results manually, but insert the code that generates the result, as sketched below. This is already possible with tools such as R Markdown and can greatly increase the productivity of the researcher. It has saved my skin multiple times, although you still have to be vigilant for mistakes (wrong code produces wrong results).
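A minimal sketch of what this looks like in an R Markdown manuscript (the data set and object names here are invented for illustration):

````
```{r}
# run the analysis once; the sentence below pulls its numbers from this object
t_res <- t.test(score ~ group, data = my_data)
```

We found a difference between groups,
*t*(`r round(t_res$parameter, 1)`) = `r round(t_res$statistic, 2)`,
*p* = `r format.pval(t_res$p.value, digits = 3)`.
````

If the data change, re-knitting the document updates every reported statistic automatically; nothing is retyped, so nothing can be mistyped.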

Have you run the Statcheck tool on your own statistical NHST-testing in the mentioned article?

Yes! This was the first thing I did, way before I was running it on other papers. Moreover, I was non-selective when I started scanning other people’s papers; I apparently even made a statcheck report that got posted on PubPeer for my supervisor (see here). He laughed, because the paper was on reporting inconsistencies and the gross inconsistency was simply an example of one in the running text. A false positive, highlighting that statcheck’s results always need to be checked by a human before concluding anything definitive.

Critics call Statcheck “a new form of harassment” and accuse you of being “a self appointed data police”. Can you understand these reactions?

Proponents of statcheck praise it as a good service; critics call researchers who study how researchers conduct research “methodological terrorists.” Any change comes with proponents and critics. Am I a self-appointed data policer? To some, maybe. To others, I am simply providing a service. I don’t chase individuals and I am not interested in that at all; I do not see myself as part of a “data police.” That people experience these reports as a reprimand highlights to me that a taboo still rests on skepticism within science. Skepticism is one of the ideals of science, so let’s aim for that.

Why do you find it necessary to send out thousands of emails to scholars around the world informing them that their work has been reviewed and point out to them if they have miscalculated?

It was not necessary; I thought it was worthwhile. Why do some scholars find it necessary to e-mail a colleague about their thoughts on a paper? Because they think it is worthwhile and can help them or the original authors. Exactly my intention in teaming up with PubPeer and posting those 50,000 statcheck reports.

Isn’t it necessary and important for ethical reasons to be able to make a distinction between deliberate miscalculations and miscalculations by mistake when you do this kind of statcheck?

If I were making accusations of gross incompetence towards the original authors, such a distinction would clearly be needed. But I did not make accusations at all. I simply stated the information available, without any normative or judging statements. Mass-scale post-publication peer review of course brings ethical problems with it, which I carefully weighed before I started posting statcheck reports with the PubPeer team. The formulation of these reports was discussed within our group, and we all agreed this was worthwhile to do.

As a journalist I can write and publish an article with one or two factual errors. This doesn’t mean the article isn’t of a generally high journalistic standard or that its content isn’t of great relevance to the public. Couldn’t you make the same argument about a scientific article? And when you catalogue these errors online, aren’t you at risk of blowing up a storm in a teacup and turning everybody’s eyes away from the actual scientific findings?

Journalists and scholars are playing different games. An offside in football is not a problem in tennis, and the comparison between journalists and scholars seems similar to me. I am not saying that an article is worthless if it contains an inconsistency; I am saying that the inconsistency is worth looking at before building new research lines on it. Psychology has wasted millions and millions of euros/dollars/pounds/etc. chasing ephemeral effects that are totally unreasonable, as several replication projects have highlighted in recent years. Moreover, I think the general opinion of science will only improve if we are more skeptical and critical of each other, instead of trusting findings based on reputation, historical precedent, or the ease with which we can assimilate them.

Data Management – SRA Submission of Ostrea lurida GBS FASTQ Files

Prepared a Sequence Read Archive (SRA) submission for archiving our Olympia oyster genotype-by-sequencing (GBS) data in NCBI. This is in preparation for submission of the manuscript we’re putting together.

I followed my outline/guideline for navigating the SRA submission process, as it’s a bit of a pain in the neck. Glad my notes were actually useful!

The following two files are currently being uploaded via FTP; the process will take about 3hrs, as each file is ~18GB in size:

 

They are being submitted under the following accession numbers (note: a final accession number will be provided once this is publicly available; I will update this post when that happens):

Blue Carbon Initiative: mitigating climate change in the oceans

Listened to this webinar, “Blue Carbon Ecosystems – what’s included, what’s not and why,” by Jennifer Howard and Ariana Sutton-Grier about the Blue Carbon Initiative and climate change mitigation, to understand how terrestrial carbon sequestration rates compare with those in marine ecosystems. This talk focused mostly on the coastal marine habitats with the best potential: mostly in-air plants (mangroves, marshes) and also seagrass beds. Here is a video recording:

https://youtu.be/o_QBBFXn_mE

and some of my notes/screengrabs:

 

Their paper has a more detailed table with equations, sources, etc.

link to paper — Clarifying the role of coastal and marine systems in climate mitigation

For comparison, human activities put about 7000 million Mg C yr^-1 into the atmosphere.

Or 7 Gt/yr.

Mitigation potential (and feasibility)

Mangroves and marshes

Range of uptake rates

  • vegetative carbon: 8-126 tons/ha
  • soil carbon: 250-280 tons/ha
  • ratio shows that 2-30 times more carbon is in soil than in vegetation
  • sequestration rate: 2.1-2.6 tons/ha/yr

Global loss of these habitats is ~1.5-2% per year, so half of the habitat will be gone in 35-50 years
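The arithmetic behind that figure, as a quick sketch (treating the loss rate as constant exponential decay):

```r
# half-life implied by a 1.5-2% annual habitat loss rate
round(log(2) / c(0.02, 0.015), 1)   # 34.7 and 46.2 years
```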

Coral

Long-term effect of calcification in coral reefs: a slight net source of CO2, but could reverse if ocean acidification increases dramatically

Dissolution process takes CO2 out of water column.

Take home message: no red arrow, meaning coral isn’t a climate change mitigation option

Kelp

Most kelp gets consumed or degrades in days, months, or at most a couple years

So, it’s a temporary carbon pool, not a long term storage option

Phytoplankton

Only 0.1% sinks to ocean floor for long-term storage, but the area is HUGE compared to the geographic extent of the coastal ecosystems.

BUT, it’s not in the running due to policy issues that have been (and are) challenging…

  • who manages, owns carbon sequestered, etc.
  • ethics of seeding with bioengineered cultures (didn’t even mention fertilization)

Screengrab from webinar

Marine fauna

Calcifiers (e.g. pteropods), krill, teleost fish (their feces contain CaCO3, but this doesn’t sequester carbon; it only affects the alkalinity gradient)

My related notes from beyond the webinar:

What is sequestration rate for terrestrial ecosystems (temperate vs tropical forests)?

The Blue Carbon Initiative web site says of marine/coastal ecosystems:

“These ecosystems store up to 10 times more carbon — called “blue carbon” — per hectare than terrestrial forests, and degradation of these ecosystems accounts for up to 19% of carbon emissions from global deforestation.”

Why not great (baleen+sperm) whales as long-term sequestration?  Estimate potential using listed species population assessments and historical baselines?

If we protect and enhance their habitat to maximize their growth rate, how many new whales could be added to the global marine ecosystem each year?

Back of the envelope:

Let’s say blue whales are ~100,000 individuals below carrying capacity (it’s likely closer to 200,000) and each adult whale constitutes a sink of ~100 Mg of carbon (assuming they’re mostly lipid-rich blubber = hydrocarbons).  If the extant population (currently <10,000 whales growing at 8%/yr) manages to add 1,000 new blue whales per year, that would sequester 100,000 Mg/yr.  Maybe multiply that by 10 for other baleen whale species (blue, fin, right, humpback, sei, grey, bowhead, Bryde, minke, Omura/Eden, sperm) of similar mass that are similarly below carrying capacity and we’re at 1 million Mg/yr.  That estimate might be high or low by a factor of ~10 given the uncertainties.  Either way, 1 million Mg C yr^-1 is within the range of interesting numbers listed in the table…

Assuming recovery of the globe’s baleen whale populations takes 100 years, we could expect over that time period an increase in the amount of carbon stored in living whales of about 100 million Mg C. That’s comparable to the biomass of all phytoplankton (0.5-2.4 billion Mg C, according to their paper and its citations). Then there’s the flux (from dead adult whales) into the deep sea, where it would be sequestered for order 100 years…
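A quick sketch of that back-of-the-envelope math (every input is one of the rough assumptions stated above, not a measured value):

```r
# whale-carbon back-of-the-envelope, using the assumptions above
new_whales_per_yr  <- 1000   # assumed blue whales added per year
carbon_per_whale   <- 100    # assumed Mg C locked up per adult whale
species_multiplier <- 10     # scale up for other large whale species
years              <- 100    # assumed recovery horizon

annual_flux <- new_whales_per_yr * carbon_per_whale * species_multiplier
annual_flux           # 1e+06 Mg C/yr
annual_flux * years   # 1e+08 Mg C stored in living whales after a century
```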

It looks like famous folks have already pondered most of this:

Email announcement:

Blue Carbon Ecosystems – what’s included, what’s not and why

WHEN: TODAY, Wednesday, February 8, 2017, 2pm – 3pm EST

REGISTER online at https://attendee.gotowebinar.com/register/6381612047463199747

 

Webinar Summary:

With increasing recognition of the role natural systems have in climate mitigation, where should management initiatives focus? While forests have historically had the spotlight of such efforts, coastal wetland ecosystems are now considered important and effective long-term carbon sinks. This attention to “blue carbon” habitats has sparked interest in including other marine systems, such as coral reefs, phytoplankton, kelp forests, and marine fauna.

In this webinar, authors of a recently published paper – Clarifying the role of coastal and marine systems in climate mitigation (Frontiers in Ecology and the Environment, Feb. 2017) – analyze the scientific evidence and potential management role of several coastal and marine ecosystems to determine which should be prioritized within current climate mitigation strategies and policies. Findings can assist decision-makers and conservation practitioners to understand where management actions can have additional carbon benefits.

Presenters: Dr. Jennifer Howard and Dr. Ariana Sutton-Grier

Dr. Jennifer Howard is the Marine Climate Change Director at Conservation International. Prior to accepting her current position, she was an AAAS Science and Technology Policy Fellow, where she served two years at the National Oceanic and Atmospheric Administration’s (NOAA) National Marine Fisheries Service. While at NOAA, Jennifer co-led and coordinated the development of the Ocean and Marine Resources in a Changing Climate Technical Input Report to the National Climate Assessment and coordinated the Interagency Working Group for Ocean Acidification.

 

Dr. Ariana Sutton-Grier is an ecosystem ecologist with expertise in wetland ecology and restoration, biodiversity, biogeochemistry, climate change, and ecosystem services. Dr. Sutton-Grier is a research faculty member at the University of Maryland in the Earth System Science Interdisciplinary Center and is also the Ecosystem Science Adviser for the National Ocean Service at NOAA.  She holds Honors Bachelor degrees from Oregon State University in Environmental Science and International Studies and a doctoral degree from Duke University in Ecology.

 

Moderator: Stefanie Simpson, Blue Carbon Program Manager, Restore America’s Estuaries (and paper co-author)

 

This free webinar is hosted by Restore America’s Estuaries.

Goals – February 2017

First goal is to be the first person in the lab to post their goals each month. Props to one of our new grad students, Yaamini Venkataraman, for beating me this month!

Next goal is to dominate this year’s Pub-a-thon. I’m working on two different manuscripts, this one and this one, but I still think I can win this!

Stuff that got tackled from last month’s goals:

Freezer organization – This has happened, albeit without much effort on my part. Many thanks to the Big Cheese and [Grace for tackling this project](https://genefish.wordpress.com/2017/01/28/80-organization)!

Data Management Plan – Some progress has been made on this. I improved the instructions on the DMP a bit, but the master spreadsheet around which the DMP revolves (Nightingales) is still in a massive state of flux and needs a lot of attention.

Sequencing data handling – Thanks to Sean for making a serious dent in automating this. He wrote an R script to handle this sort of thing. I’m not entirely sure if he’s done testing it, but it seems to work so far. Next will be incorporating usage instructions for this R script into the DMP so that others can utilize it. On that note, I need to figure out where Sean is keeping this script (I can’t seem to locate it in his notebook).

Curriculum Testing – Determination of Most Useful Concentration of Sodium Carbonate Solution

After evaluating whether or not dry ice would be effective to trigger a noticeable change in pH in a solution, I determined which concentration(s) of sodium carbonate (Na2CO3) would be most useful for demonstration and usage within the curriculum. Previously, I used a 1M Na2CO3 solution and the universal pH indicator showed no change in color. What I want is a color change, but one that takes place at a noticeably slower rate than in the other solutions that are demonstrated/tested; this will show how sodium carbonate acts as a buffer against CO2-acidification.

Additionally, I tested the difference in the rate of pH change between Instant Ocean and sodium chloride (NaCl). The reason for testing this is to demonstrate that salt water (i.e. sea water, ocean water) isn’t just made up of salt. It’s likely that many students simply think of the ocean as salt water and have not considered that the makeup of sea water is much more complex.

Finally, I performed these tests in larger volumes than I did previously to verify that the larger volumes will slow the rate of pH change, thus increasing the time it takes for the universal pH indicator to change color, making it easier to see/monitor/time.

Instant Ocean mix (per mfg’s recs): 0.036g/mL (36g/L)

For the NaCl solution, I used the equivalent weight (36g) that was used to make up the Instant Ocean solution.
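For reference, the arithmetic behind these solutions as a quick sketch (the only outside number is the molar mass of Na2CO3, ~105.99 g/mol; everything else comes from the values above):

```r
# grams of Na2CO3 needed for each 1 L test solution
molar_mass <- 105.99                      # g/mol for Na2CO3
conc       <- c(0.1, 0.01, 0.001)         # mol/L
setNames(round(conc * 1 * molar_mass, 3), # mass = M * V * molar mass
         paste0(conc, "M"))               # 10.599, 1.060, 0.106 g

# Instant Ocean at the manufacturer's 0.036 g/mL, scaled to 1 L
0.036 * 1000                              # 36 g; NaCl matched at 36 g
```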

 

Results:

  • Use of 0.001M Na2CO3 is passable, but because it’s a diprotic base, the pH indicator didn’t progress lower than ~pH 6.0 in my limited tests. Adding more dry ice (or using an even more dilute solution) is an option to drive the pH lower.
  • The comparison between salt water and Instant Ocean will work well as a demonstration to introduce the concept that sea water is more complex than just being salty.
  • Using 1L volumes works well to slow the color changes of the universal pH indicator to improve the ability of the students to observe and measure the rate of color change.

The table below summarizes what I tested.

| SOLUTION | VOLUME (mL) | DRY ICE (g) | TIME | OBSERVATIONS |
| --- | --- | --- | --- | --- |
| 0.1M Na2CO3 | 1000 | 3.0 | | No color change. Dry ice gone. |
| 0.01M Na2CO3 | 1000 | 3.3 | | No color change. Dry ice gone. |
| 0.001M Na2CO3 | 1000 | 3.3 | ~20s | Dry ice gone, but final color indicated a pH ~6.0. |
| Instant Ocean | 1000 | 3.3 | 3m | Initial color change noticeable within 10s; full color change after ~3m |
| NaCl | 1000 | 3.0 | instant | Immediate, complete color change. |
| Tap H2O | 1000 | 3.3 | 3m | pH started @ ~7.5. Full color change took place. |

Manuscript Writing – The “Nuances” of Using Authorea

I’m currently trying to write a manuscript covering our genotype-by-sequencing data for the Olympia oyster using the Authorea.com platform and am encountering some issues that are a bit frustrating. Here’s what’s happening (and the ways I’ve managed to get around the problems).

PROBLEM: Authorea spits out a browser-crashing “unresponsive script” message (actually, lots and lots of them; clicking “Stop script” or “Continue” just results in additional messages) in Firefox (haven’t tried any other browsers). This renders the browser inoperable and I have to force quit. It doesn’t happen all of the time, so it’s hard to pinpoint what triggers this.

SOLUTION: Edit documents in Git/GitHub. I have my Authorea manuscript linked to a GitHub repo, which allows me to write without using Authorea.com. This is how I’ll be doing my writing the majority of the time anyway, but I would like to use Authorea.com to insert and manage citations…

PROBLEM: Authorea remains in a perpetual “saving…” state after inserting a citation. It also renders the page strangely, with HTML <br></br> tags (see the “Methods” section in the screen cap below).

SOLUTION: Type additional text somewhere, anywhere. This is an OK solution, but is particularly annoying if I just want to go through and add citations and have no intentions of doing any writing.

PROBLEM: Multi-author citations don’t get formatted with “et al.” By default, Authorea inserts all citations using the following LaTeX format:

\cite{Elshire_2011}

Result: (Elshire 2011).

This is a problem because this reference has multiple authors and should be written as: (Elshire et al., 2011).

SOLUTION: Change citation format to:

\citep{Elshire_2011}

Other citation formatting options can be found here (including multiple citations within one set of parentheses, and referring in-text author name with only publication year in parentheses):

How to add and manage citations and references in Authorea
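If Authorea follows natbib conventions here (the inline commands above suggest it does, though I’d treat that as an assumption), the common variants look like this; `Another_2012` is a made-up key for illustration:

```latex
\citep{Elshire_2011}               % (Elshire et al., 2011)
\citet{Elshire_2011}               % Elshire et al. (2011)
\citep{Elshire_2011,Another_2012}  % multiple refs in one set of parentheses
\citealp{Elshire_2011}             % Elshire et al., 2011 (no parentheses)
```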

PROBLEM: When a citation no longer exists in the manuscript, it still persists in the bibliography.

SOLUTION: A known bug with no current solution. Currently, I have to delete them from the bibliography by hand (or, maybe, figure out a way to do it programmatically)…
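Since the manuscript is mirrored to a GitHub repo as LaTeX, a small script could do the pruning. A rough sketch in R (the file names are hypothetical, it assumes each BibTeX entry starts with `@` at the beginning of a line, and the .bib should be backed up first):

```r
# collect the citation keys actually used in the manuscript source
tex  <- paste(readLines("manuscript.tex"), collapse = "\n")
hits <- regmatches(tex, gregexpr("\\\\cite[a-z]*\\{[^}]+\\}", tex))[[1]]
keys_used <- unique(trimws(unlist(
  strsplit(gsub("\\\\cite[a-z]*\\{|\\}", "", hits), ","))))

# split the bibliography into entries and keep only the cited ones
bib     <- paste(readLines("bibliography.bib"), collapse = "\n")
entries <- strsplit(bib, "(?m)^@", perl = TRUE)[[1]][-1]
keys    <- sub("^[^{]*\\{([^,]+),.*", "\\1", entries)
writeLines(paste0("@", entries[keys %in% keys_used]),
           "bibliography_pruned.bib")
```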

PROBLEM: Cannot click-and-drag some references from Mendeley (haven’t tested other reference managers) without getting an error. To my knowledge, the BibTeX is valid, as it appears to be the same formatting as other references that can be inserted via the click-and-drag method. There are some references it won’t work for…

SOLUTION: Use the search bar in the citation insertion dialogue box. Not as convenient and slows down the workflow for citation insertion, but it works…

Chinook salmon and southern resident killer whales occupy similar depths in the Salish Sea

New paper by UW colleagues entitled “Interpreting vertical movement behavior with holistic examination of depth distribution: a novel method reveals cryptic diel activity patterns of Chinook salmon in the Salish Sea” shows some results from Vemco receivers I deployed in the San Juan Islands. Young adult Chinook favor depths less than ~30 meters, with some seasonal variability in their diel activity patterns. Overall, they go deeper and vary more in depth at night.

Dive profiles for two Salish Sea Chinook salmon during the summer and fall.

Interestingly, according to a report to NOAA/NWFSC by Baird et al., 2003 (STUDIES OF FORAGING IN “SOUTHERN RESIDENT” KILLER WHALES DURING JULY 2002: DIVE DEPTHS, BURSTS IN SPEED, AND THE USE OF A “CRITTERCAM” SYSTEM FOR EXAMINING SUB-SURFACE BEHAVIOR), SRKWs spend >97% of their time at depths of less than 30 m.

This suggests any future deployment of horizontal echosounders should aim to ensonify a depth range centered on ~25m (e.g. 5-45m or 10-40 m).  Compared to the estimated orientation and surveyed depth range of our 2008-9 salmon-SRKW echosounder pilot studies, we may want to measure inclination more carefully to (a) center the survey on the mean summertime depth range of Chinook and (b) avoid ping reflections from surface waves, boats, and bubbles (which may have confused interpretations of targets >100 m from the transducer).  Here’s my diagram for the situation in 2008-9 in which we were centered on 15 m and ensonified a maximum depth range of ~0-30m (in other words, we may have been aiming a little high):

Screen grab from the 2009 ASA presentation showing echosounder geometry
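As a sanity check on the geometry, here is a rough sketch; every parameter is an assumption for illustration (a transducer ~1 m below the surface, a ~7° conical beam, and whatever down-tilt centers the beam at 25 m depth at 100 m slant range), not a value from our actual deployments:

```r
# depths of the beam's upper edge, center, and lower edge at slant range r (m)
beam_depths <- function(r, tilt_deg, beamwidth_deg = 7, xdcr_depth = 1) {
  half   <- beamwidth_deg / 2
  angles <- tilt_deg + c(-half, 0, half)   # degrees below horizontal
  xdcr_depth + r * sin(angles * pi / 180)
}

# down-tilt that centers the beam at 25 m depth at 100 m slant range
tilt <- asin((25 - 1) / 100) * 180 / pi    # ~13.9 degrees
round(beam_depths(100, tilt), 1)           # ~19.0, 25.0, 30.9 m coverage
```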

Curriculum Testing – Viability of Using Dry Ice to Alter pH

Ran some basic tests to get an idea of how well (or poorly) the use of dry ice and universal indicator would be for this lesson.

Instant Ocean mix (per mfg’s recs): 0.036g/mL

Universal Indicator (per mfg’s recs): 15μL/mL

Played around a bit with different solution volumes, different dry ice amounts, and different Universal Indicator amounts.

| Indicator Vol (mL) | Solution | Solution Vol (mL) | Dry Ice (g) | Time to Color Change (min) | Notes |
| --- | --- | --- | --- | --- | --- |
| 3 | Tap H2O | 200 | 1.5 | <0.5 | |
| 3 | Tap H2O | 200 | 0.5 | >5 | Doesn’t trigger full color change and not much bubbling (not very exciting) |
| 5 | Tap H2O | 1000 | 12 | <1 | |
| 3 | Instant Ocean | 200 | 1.5 | <0.5 | Begins at higher pH than just tap water. Full color change is slower than just tap water, but still too quick for timing. |
| 2 | 1M Na2CO3 | 200 | 5 | >5 | No color change and dry ice fully sublimated. |
| 2 | 1M Tris Base | 200 | 5 | >5 | No color change and dry ice fully sublimated. |
| 2 | Tap H2O + 20 drops 1M NaOH | 200 | 5 | 2.75 | ~Same color as Na2CO3 and Tris Base solutions to begin. Dry ice gone after ~5m and final pH color is ~6.0. |

Summary

  • Universal Indicator amount doesn’t have an effect. It’s solely needed for ease-of-viewing color changes. Use whatever volume is desired to facilitate easy observations of color changes.
  • Larger solution volumes should be used in order to slow the rate of pH change, so that it’s easier to see differences in rates of change between different solutions.
  • 1M solutions of Na2CO3 and Tris Base have too much buffering capacity and will not exhibit a decrease in pH (i.e. color change) from simply using dry ice. May want to try out different dilutions.
  • Use of water + NaOH to match starting color of Na2CO3 and/or Tris Base is a good way to illustrate differences in buffering capacity to students.
  • Overall, dry ice will work as a tool to demonstrate effect(s) of CO2 on pH of solutions!

Some pictures (to add some zest to this entry):

False claims of copyright and STM

Recently, I have become interested in the issue of false claims of copyright (i.e., copyfraud) in publishing. I just wrote to the publishers’ association (STM) to ask what their perspective on copyfraud is and whether they condone such behavior by their members. Read my letter here. I will update this blog when I get a response.

An example of copyfraud is this index page from The Lancet, published in 1823. Let’s assume copyright for this index page was actively registered and that it received protection under copyright legislation (copyright was not automatic before the 1886 Berne Convention). That would mean the duration of copyright would have to be at least 192 years for this claim to be valid! Even under the current situation, copyright does not last that long for organizations (if I am correct, it is around ~120 years).

Regretfully, it is easy for a rightsholder to legally pursue someone who violates their copyright, but when someone falsely claims to be a rightsholder, the public cannot fight back in the same way. This is an inherent asymmetry of power in copyright. The World Intellectual Property Organization (WIPO) does not seem to provide a way to easily report potential copyfraud, and I would like to call on them to make this possible. Opening up a way to reliably report it would at least allow everyone to get a better view of how often copyfraud occurs. Even better, pass legislation that empowers the public to fight back against copyfraud.

Copyfraud is a widespread problem that occurs not only with old works, but also with, for example, works by U.S. federal employees, which are uncopyrightable under United States federal law (17 U.S. Code § 105). Recent articles by the 44th President, Barack Obama, have had copyright falsely claimed on them, and yet all we can do is ask nicely that the copyright notice be removed.

DNA extraction: Anthopleura elegantissima

In the interest of comparing methylation levels between symbiotic states in Anthopleura elegantissima, I extracted DNA from three zooxanthellate and three zoochlorellate individuals. These were anemones that were collected last summer at Point Lawrence, Orcas Island, and had been residing in indoor sea tables at Shannon Point since then. For each specimen, I excised part of the tentacle crown with scissors and deposited the tissue directly into a microfuge tube. I opted to freeze the tissue in the -80ºC freezer, since earlier attempts with fresh tissue did not seem as effective (i.e., the tissue seemed resistant to lysis). After a day or two in the freezer, I pulled the samples out, rinsed them with PBS, and proceeded with the Qiagen DNeasy protocol. After adding proteinase K, I used a small pestle to homogenize the sample. An overnight lysis period at 56ºC was used. DNA was eluted via two passes with 50 µl AE buffer (100 µl total volume). To further clean the DNA, samples were subjected to ethanol precipitation using this protocol. Samples were re-eluted in 50 µl AE buffer.

To assess DNA quantity and quality, samples were tested with the Qubit BR DNA assay, followed by electrophoresis on a 1% agarose gel in 1X TBE at 135 V for 25 min.

| sample | ng/µl | ng/µl post-dilution (100 µl buffer) | total DNA (ng) |
| --- | --- | --- | --- |
| Ae-B-1 | 256 | 128 | 12800 |
| Ae-B-2 | 183 | 91.5 | 9150 |
| Ae-B-3 | 304 | 152 | 15200 |
| Ae-G-1 | 163 | 81.5 | 8150 |
| Ae-G-2 | 166 | 83 | 8300 |
| Ae-G-3 | 274 | 137 | 13700 |
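As a quick check of the yield arithmetic (total DNA is just the post-dilution concentration times the 100 µl final volume; all numbers are from the table above):

```r
# total DNA (ng) = post-dilution concentration (ng/ul) * 100 ul final volume
conc_post <- c("Ae-B-1" = 128, "Ae-B-2" = 91.5, "Ae-B-3" = 152,
               "Ae-G-1" = 81.5, "Ae-G-2" = 83,  "Ae-G-3" = 137)
conc_post * 100   # 12800 9150 15200 8150 8300 13700
```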

Ae_DNA_011017