Category Archives: Miscellaneous

DNA Isolation & Quantification – Geoduck larvae metagenome filter rinses

Isolated DNA from two of the geoduck hatchery metagenome samples Emma delivered on 20180313 to get an idea of what type of yields we might get from these.

  • MG 5/15 #8
  • MG 5/19 #6

As mentioned in my notebook entry upon receipt of these samples, I’m a bit skeptical that we’ll get any sort of recovery, based on how the samples were preserved.

Isolated DNA using DNAzol (MRC, Inc.) in the following manner:

  1. Added 1mL of DNAzol to each sample; mixed by pipetting.
  2. Added 0.5mL of 100% ethanol; mixed by inversion.
  3. Pelleted DNA 5,000g x 5mins @ RT.
  4. Discarded supernatants.
  5. Washed pellets (not visible) with 1mL 75% ethanol by dribbling it down the side of the tubes.
  6. Pelleted DNA 5,000g x 5mins @ RT.
  7. Discarded supernatants and dried pellets for 5mins.
  8. Resuspended DNA in 20uL of Buffer EB (Qiagen).

Samples were quantified using the Roberts Lab Qubit 3.0 with the Qubit High Sensitivity dsDNA Kit (Invitrogen).

5uL of each sample were used.


As expected, neither sample yielded any detectable DNA.

Will discuss with Steven what should be done with the remaining samples.

Progress Report – Titrator

I’ll begin this entry with a TL;DR (because it’s definitely a very long read):

  • Sample weight (i.e. volume) appears to have an effect on total alkalinity (TA) determination, despite the fact that sample weight is taken into account when calculating TA.

  • Replicates are relatively consistent.

  • Our TA measurements of CO2 Reference Materials (CRMs) do not match TA info supplied with CRMs.

  • Conclusions?

    • The only thing that actually matters is consistent replicates.
    • Use 50g (i.e. 50mL) sample weights – this will greatly conserve reagents (CRMs, acid)
    • Calculate offset from CRMs or just report our TA measurements and the corresponding CRM TA value(s)?
    • Ask Hollie Putnam what she thinks.

With that out of the way, here’s a fairly lengthy overview of what has been done to date with the titrator. In essence, this is a cumulative notebook entry of all the entries I should have been doing on a daily/weekly basis (I actually feel much shame for neglecting this – it’s a terrible practice and an even worse example for other lab members).

Teaser: there are graphs!

Anyway, I’ve spent a lot of time getting our titrator, protocols, and scripts to a point where we can not only begin collecting real data, but also actually analyze the data in a semi-automated way.

More recently, I’ve finally started taking some measurements to assess the consistency of the actual titrator and stumbled across an interesting observation that may (or may not) have an impact on how we proceed with sample/data handling.


The Titrator SOP is the primary protocol. It encompasses setting up/shutting down the titrator, use of the LabX software needed for recording/exporting data from the titrator, and how to implement the necessary scripts to handle the exported data. It is still in the early stages.

In theory, the SOP should be rather straightforward, but due to the sensitivity involved with these measurements, the SOP needs to carefully address how to set things up properly, provide a means for documenting startup/shutdown procedures, and provide troubleshooting assistance (e.g. how to empty/remove burette if/when air bubbles develop).

Overall, it’s a bit of a beast, albeit an important one, but I’ve put it on the back burner in order to focus my efforts/time on getting to the point of being able to collect data from the titrator and feel confident that we’re getting good readings.

Once I get to that point and am able to begin running samples, I’ll be able to dedicate more time to fleshing out the SOP, including adding pictures of all the components.


parsing_TA_output.R is the script that has consumed the majority of my titrator-related time in the last couple of weeks. It is fully functional (it only requires manual entry of the exported LabX data file location). However, I’m hoping to eventually automate this as well – i.e. when a new LabX export file appears, this script will execute automatically. I won’t be spending much time on that aspect of the script for the time being.

This has been my highest priority. Without having this script in a usable state, it has been a MAJOR slog to manually retrieve the appropriate data necessary to use in TA determination.

This also has been my biggest challenge with the titrator process. Here are just some of the hurdles I’ve had to deal with in putting this script together:

  • “learning” R
  • handling “dynamic” titrator output data
    • this is not an easy task for a non-programmer!
    • the output data is of differing numbers of rows from sample to sample, so the script had to be able to handle this aspect automatically
    • making the script “flexible” (i.e. no “magic numbers”) to handle any number of samples without the user having to manually modify the script
    • making the script “flexible” by operating on column names instead of column numbers, since column numbers were/are not constant, depending on changes to the script
  • calculations resulting from two-part titration
    • still not sure if I ever would’ve figured this out if I hadn’t taken the intro computer science class at UW a couple of years ago!
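For what it’s worth, the column-name strategy isn’t R-specific. Here’s a minimal shell/awk sketch of the same idea – the file name, column name, and values are made up for illustration and do not reflect the actual LabX export format:

```shell
# Build a tiny throwaway CSV standing in for an exported data file.
printf 'Sample,Volume,mV\nA,1.5,210\nB,2.0,215\n' > titration_demo.csv

# Find the target column by matching its HEADER NAME on the first row,
# then print that column for all data rows. No hard-coded column number.
awk -F',' -v col="Volume" '
  NR == 1 { for (i = 1; i <= NF; i++) if ($i == col) target = i; next }
  { print $target }
' titration_demo.csv
```

Because the column is located by name at run time, the one-liner keeps working even if columns are added or reordered upstream – the same reason the R script avoids “magic numbers.”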

Despite all of this, I also feel like it’s one of my biggest accomplishments! It’s super satisfying to have this script functioning with virtually no user input required!

pH_calibration_check.R is still a work in progress, but is easily usable. Currently, it still has some hard-coded values (row numbers) in it for parsing data, but that should be easy to fix after what I went through with the TA parsing script!

Eventually, these two scripts will work in tandem, with the pH_calibration_check script exporting data to a daily “log” file, which the parsing_TA_output script will use to read-in the necessary pH data.

TA_calculation.R will calculate the TA values, but currently requires fully manual data entry. It desperately needs attention and will likely be my primary focus in the immediate future, due to the need to have TA values for actual samples, as well as daily quality control checks (e.g. verify CRM measurements look OK before measuring actual samples).


Consistency checks with Instant Ocean
Instant Ocean Tests

I ran nine replicates of Instant Ocean (36g/L in deionized water) at two different sample weights/volumes (50g, 75g) to make sure the titrator was producing consistent results.

Here’s the R Studio Project folder with all the data/scripts used to gather the data and produce the plots:

TA values were determined using the seacarb R package. I used a salinity of 35 (the seacarb default value?), but salinity has not actually been determined for this batch of Instant Ocean.

Sample Volume    Mean TA    Standard Deviation
50mL             669.4      11.14
75mL             645.0      11.96
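For reference, the mean/SD summary is straightforward to reproduce at the command line. The nine values below are made-up placeholders, NOT the actual replicate measurements (those live in the R Studio project):

```shell
# Mean and sample standard deviation of a set of replicate TA values.
printf '%s\n' 660 675 668 682 655 671 679 663 672 |
awk '{ sum += $1; sumsq += $1 * $1; n++ }
     END {
       mean = sum / n
       sd   = sqrt((sumsq - n * mean * mean) / (n - 1))   # sample SD (n-1)
       printf "mean = %.1f  sd = %.2f\n", mean, sd
     }'
```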

The first thing I noticed was the low TA values when using Instant Ocean. I expected these to be more similar to sea water, but the Instant Ocean hasn’t been aerated, so maybe that could account for the low TA values. Regardless, this shouldn’t be too much of an issue, since I only wanted to use this to see if we were getting consistent measurements.

The second thing I noticed was the difference in TA values between the 50mL and 75mL samples. This is/was odd, as sample weight is taken into account with the seacarb package.

So, I decided to explore this a bit further, using the CRMs that we have. I felt that this would provide more informative data regarding measurement accuracy (i.e. do our measurements match a known value?), in addition to further evaluation of the effects of sample volume on TA determination.

Consistency checks with CRMs
CRM Tests

I ran five replicates of CRM Batch 168 (PDF) at three different sample weights/volumes (50g, 75g, 100g) to make sure the titrator was producing consistent results and to evaluate how accurate our measurements are.

Here’s the R Studio Project folder with all the data/scripts used to gather the data and produce the plots:

Here’s a bunch of graphs to consider:

Sample     Mean TA    Standard Deviation
CRM 168    2071.47    NA
50mL       2259.66    7.35
75mL       2236.22    19.96
100mL      2226.73    14.49

First thing to notice is that all sample measurements, regardless of volume, produce a TA value that is ~10% higher than what the CRM is certified to be. I’ve previously discussed this with Hollie and she’s indicated that there are two options:

  1. Calculate an offset relative to what the CRM is supposed to be and apply this offset to any sample measurements.

  2. Do not determine offset and just report calculated values, while providing CRM info.
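If we go with option 1, the arithmetic is trivial. Here’s a sketch using the CRM Batch 168 certified value and our 50mL mean from the table above; the raw sample TA is a made-up number for illustration only:

```shell
# Option 1 sketch: derive an offset from the certified CRM value
# and subtract it from a raw sample measurement.
awk 'BEGIN {
  certified  = 2071.47   # CRM Batch 168 certified TA
  measured   = 2259.66   # our mean TA for the 50mL CRM replicates
  offset     = measured - certified
  raw_sample = 2300.00   # hypothetical uncorrected sample TA
  printf "offset = %.2f\ncorrected = %.2f\n", offset, raw_sample - offset
}'
```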

The next thing that I noticed is the 50mL (g) samples produced the most consistent measurements.

There also seems to be a pattern where fluctuations in TA values across replicates are mirrored by changes in weight for each corresponding replicate.

Finally, although it isn’t explicitly addressed, there is a time element in play here: the higher the sample number, the longer that sample sat in the sample changer before titration. Oddly, the 75mL and 100mL samples suggest there could be an effect on samples as they sit (e.g. sample evaporation prior to titration), but those two volumes show opposing trends, while the 50mL samples don’t seem to suffer from any sort of time-related changes…

The Wrap Up

Whew! We made it! I’ll wait to get some feedback from lab members and Hollie before cranking through all of Hollie’s samples, but I feel pretty good about proceeding with a 50mL sample volume. If we decide to calculate an offset later on, it should only be a relatively minor tweak to our script.

Next up, figure out a way to pull out all of Hollie’s salinity data for the samples I’m going to measure and incorporate that into the TA_calculation.R script.

Titrator Setup – Functional Methods & Data Exports

I’ve been working on getting our T5 Excellence titrator (Mettler Toledo) with Rondolino sample changer (Mettler Toledo) set up and operational.

A significant part of the setup process is utilizing the LabX Software (Mettler Toledo, v.8.0.0). The software is vastly overpowered (i.e. overly complicated) for the nature of our work. As such, it’s been quite the struggle to get the titrator to do what we need it to do.

We’ve received a great deal of help and insight from Dr. Hollie Putnam (Univ. of Rhode Island). She provided us with a LabX Method that was previously used by some of her colleagues. In addition, she provided us with an SOP from those colleagues that helps describe how to physically operate the titrator and a corresponding workflow for data collection. Of course, she’s also helped immensely with understanding the entire process of the chemistry to how to process samples to implementing various quality control checks to ensure we’ll get the most accurate data we can.

After some significant struggles with getting the method to work properly, I contacted Mettler Toledo and the technician helped me modify the method that Hollie passed along to us to run properly.

Basic Functionality Fixes

The updated method resolves the following issues we were having (see annotated screenshot below the lists for more detailed overview of method changes):

  • Method only ends titration at specified maximum volume instead of specified potential
  • Method exits with Sample State = “Not OK”
  • Method fails to calculate acid Consumption at end of titration

I also modified the method to bring it up to spec with the Dickson Guide to Best Practices for Ocean CO2 Measurements – SOP3b:

  • Changed method template to use Endpoint (EP) titration instead of Equivalence Point (EQP) titration
  • Implemented initial titration step to pH=3.5 (200mV)
  • Added degassing step to match Dickson specs (increase stir speed and duration)
  • Improved titration precision by changing titration rate to automatically adjust relative to set endpoint values
  • Set correct voltages for pH=3.5 (200mV) & pH=3.0 (228.57mV); no need to adjust prior to running method

I recommend opening the image below in a new tab in order to be able to read all of the annotations.

So, all of that stuff makes the method actually run according to the Dickson specs. It also fixes the Consumption calculation. Although this aspect of the method is totally unnecessary (the consumption calculation could easily be integrated into downstream analysis), it feels good to have fixed the issue and to have learned how that aspect of the LabX software functions.

Data Export Fixes

We recently acquired the necessary license to unlock the ability to export data.


This is a terrible practice implemented by Mettler Toledo, btw. I’m particularly annoyed because I specifically asked about data exports when I spoke with the sales rep when initiating the purchase. He neglected to mention that I’d need to purchase an additional license. I wasted a lot of time and hair pulling before I learned that this feature was locked by design and that I needed this additional license.

Anyway, the struggles continued even after activating the license. I was not able to export any data – every attempt at specifying a directory in which to save data failed.

Mettler Toledo informed me that it has to do with Windows Services Permissions.

To change permissions to allow data export:

  1. Search Windows for “Services”
  2. Open “Services”
  3. Right-click on “LabXHostService”
  4. Select “Properties”
  5. Click “Log On” tab.
  6. Click “Local System Account” radio button.
  7. Check “Allow service to interact with desktop” box.
  8. Restart computer.
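For what it’s worth, the same permissions change can presumably be scripted from an elevated command prompt instead of clicking through the Services GUI. I have not tested this against our LabX install, so treat it as an untested sketch (the service name comes from step 3 above):

```
sc config LabXHostService obj= LocalSystem type= own type= interact
```

A restart (step 8) would presumably still be needed for the change to take effect.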

Now, the data gets automatically exported to my desired directory as soon as a LabX task is finished!!

Data Management & Informational Resources

As we are very close to beginning to actually collect data with the titrator, I realized that all this data needs to go somewhere. Additionally, people need someplace to find out how to use all of this stuff (equipment, software, etc.).

I created a new GitHub repo: RobertsLab/titrator

Please feel free to look through it and post any ideas to the Issues section of the repo.

In my mind, this will be a “master” data repository for all measurements conducted on the titrator. All daily pH calibration data should get pushed to this repo. Any sample titration data should also end up on this repo. Basically, all the raw data coming off the machine each time it is used should end up in this repo. I think this will reduce data fragmentation (e.g. I perform measurements on a subset of samples one day and put the data in my folder on Owl. Then Grace performs measurements on the remaining samples and uploads those data to her folder on Owl. Now, the complete data set for an experiment is split between two different locations, making it difficult to find.)

Although this single data repository approach won’t eliminate fragmentation (it can’t be avoided since the Rondolino sample changer can only hold nine samples), I think it will be beneficial to know that all the data is in a single location.

This repo will also be a resource for SOPs and troubleshooting. A detailed SOP is currently in development (being modified from the SOP Hollie originally sent us) which will detail daily startup/shutdown procedures, running scripts to process data, and guides on how to evaluate quality control procedures at various points throughout the titration process.

Finally, it will also contain the necessary scripts that we develop for data grooming and analysis.

What’s Left?


We only need one final physical component to actually begin collecting data. After reviewing the Dickson SOP3b and speaking with Hollie, it turns out we need an aquarium pump and rotameter (acquired this week) to sparge our samples; this aids in degassing prior to the titration from pH=3.5 to pH=3.0. Just waiting on tubing and tubing adapters for the rotameter. Once we have this, I should be able to start powering through samples.

Initial testing of the titrator (even without the full physical setup in place) shows highly consistent, reproducible measurements – both with pH calibration values and with total alkalinity (TA) determinations on Instant Ocean seawater. As such, I’m confident that I won’t have very much testing left to do once I can start bubbling air into the samples.


Although these things are not necessary in order to start acquiring data, they will be necessary for performing TA calculations and, eventually, for analyzing and reporting TA calculations in near real-time (the end goal is to have this titrator set up in a wet lab where water TA can be determined on the spot, as opposed to samples being stored and analyzed at a later time).

A to-do list is outlined here: RobertsLab/titrator/issues/1

Software Install – 10x Genomics Supernova on Mox (Hyak)

Steven asked me to install Supernova (by 10x Genomics) on our Mox node.

First, need to install a dependency: bcl2fastq2
Followed Illumina bcl2fastq2 manual (PDF)

Logged into Mox and initiated a Build node:

srun -p build --time=1:00:00 --pty /bin/bash

Install bcl2fastq2 dependency


cd /gscratch/srlab/tmp
export TMP=/gscratch/srlab/tmp/
export SOURCE=${TMP}/bcl2fastq
export BUILD=${TMP}/bcl2fastq2.20-build
export INSTALL_DIR=/gscratch/srlab/programs/bcl2fastq-v2.20
cd ${TMP}
tar -xvzf bcl2fastq2-v2.20.0.422-Source.tar.gz
mkdir ${BUILD}
cd ${BUILD}
chmod ugo+x ${SOURCE}/src/configure
chmod ugo+x ${SOURCE}/src/cmake/bootstrap/
${SOURCE}/src/configure --prefix=${INSTALL_DIR}
make install

Install Supernova 2.0.0

Supernova install directions

cd /gscratch/srlab/programs
wget -O supernova-2.0.0.tar.gz ""
tar -xzvf supernova-2.0.0.tar.gz
rm supernova-2.0.0.tar.gz
cd supernova-2.0.0
supernova-cs/2.0.0/bin/supernova sitecheck > sitecheck.txt
supernova-cs/2.0.0/bin/supernova upload sitecheck.txt
srun -p srlab -A srlab --time=2:00:00 --pty /bin/bash
/gscratch/srlab/programs/supernova-2.0.0/supernova testrun --id=tiny

OK, looks like the test run finished successfully.

DNA Quantification – MspI-digested Crassostrea virginica gDNA

Quantified the two MspI-digested DNA samples for the Qiagen project from earlier today with the Qubit 3.0 (ThermoFisher).

Used the Qubit dsDNA Broad Range (BR) Kit (ThermoFisher).

Used 1μL of DNA from each sample (including undigested gDNA from the initial isolation on 20171211).


Quantification (Google Sheet): 20180111_qubit_DNA_MspI_virginica

Yields are good and are sufficient for submission to Qiagen:

MspI_virginica_01 – 53.4ng/μL (1335ng; 89% recovery after phenol/chloroform/EtOH precip)
MspI_virginica_02 – 31.0ng/μL (775ng; ~52% recovery after phenol/chloroform/EtOH precip)
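The recovery numbers above are just concentration × elution volume (25μL of Buffer EB), relative to the 1.5μg digestion input. A quick awk check of the arithmetic:

```shell
# Recovery check: ng/uL concentration x 25 uL elution, vs. 1500 ng input.
awk 'BEGIN {
  input_ng = 1500; elution_ul = 25
  printf "MspI_virginica_01: %.0f ng (%.0f%% recovery)\n", 53.4 * elution_ul, 53.4 * elution_ul * 100 / input_ng
  printf "MspI_virginica_02: %.0f ng (%.0f%% recovery)\n", 31.0 * elution_ul, 31.0 * elution_ul * 100 / input_ng
}'
```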

Phenol:Chloroform Extractions and EtOH Precipitations – MspI Digestions of C.virginica DNA from Earlier Today

The two MspI restriction digestions from earlier today for our project with Qiagen were subjected to phenol:chloroform cleanup and subsequent ethanol precipitations.

Phenol:chloroform clean up procedure:

  1. Added equal volume (50μL) of phenol:chloroform:IAA (25:24:1) to each sample.

  2. Vortexed.

  3. Centrifuged 5mins, 16,000g at room temperature.

  4. Transferred aqueous phase (top layer) to clean 0.5mL snap-cap PCR tube.

  5. Added equal volume of chloroform (50μL) to aqueous phase.

  6. Vortexed.

  7. Centrifuged 5mins, 16,000g at room temperature.

  8. Transferred aqueous phase (top layer) to clean 0.5mL snap-cap PCR tube.

Performed ethanol precipitation on both samples according to lab protocol.

Resuspended precipitated DNA in 25μL Buffer EB (Qiagen).

Will quantify with Qubit 3.0.

Restriction Digestion – MspI on Crassostrea virginica gDNA

Digested two 1.5μg aliquots of Crassostrea virginica gDNA isolated 20171211, as part of the project we’re doing with Qiagen.

Digestion reactions:

Component                    Volume(μL)
DNA (1.5μg)                  25.7
10x CutSmart Buffer (NEB)    5.0
Water                        17.3
MspI (NEB)                   2.0
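As a quick sanity check, the component volumes sum to the intended 50μL reaction volume:

```shell
# Digestion reaction volume check: components should total 50 uL.
awk 'BEGIN { printf "total = %.1f uL\n", 25.7 + 5.0 + 17.3 + 2.0 }'
```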

MspI info:

  • NEB R0106T (100,000U/mL; rec’d 20171214)

Reactions were carried out in 0.5mL snap-cap PCR tubes and incubated for 15mins @ 37°C in a PTC-200 thermal cycler (MJ Research), no heated lid.

Samples will be subjected to a phenol:chloroform extraction for cleanup.

DNA Quantification – C.virginica MBD-enriched DNA

Quantified Crassostrea virginica MBD-enriched DNA from earlier today for Qiagen project.

Used the Qubit 3.0 (ThermoFisher) and the Qubit dsDNA Broad Range (BR) Kit (ThermoFisher).

Used 1uL of template DNA.


Quantification Spreadsheet (Google Sheet): 20180110_qubit_dsDNA_BR_MBD_virginica

Both samples had decent yields and have usable quantities for Qiagen (they wanted ~300ng from each sample):

virginica_MBD_01 – 18.3ng/uL (457.5ng = 5.7% methylated DNA capture)

virginica_MBD_02 – 19.6ng/uL (490ng = 6.1% methylated DNA capture)
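The capture percentages above are simply yield divided by the 8μg (8000ng) of input DNA per sample. A quick check of the arithmetic (the notebook rounds these to 5.7% and 6.1%):

```shell
# Methylation capture check: yield (ng) / 8000 ng input, as a percentage.
awk 'BEGIN {
  input_ng = 8000
  printf "virginica_MBD_01: %.3f%%\n", 457.5 * 100 / input_ng
  printf "virginica_MBD_02: %.3f%%\n", 490.0 * 100 / input_ng
}'
```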

Will store @ -20C until next week so that we’re not shipping so close to the weekend (shipping address is in Germany).

MBD Enrichment – Crassostrea virginica Sheared DNA Day 3

Continued MBD enrichment of C.virginica DNA from yesterday for Qiagen project.

Followed the MethylMiner Methylated DNA Enrichment Kit (Invitrogen) manufacturer’s protocol for input DNA amounts of 1–10ug (I am using 8ug in each of two samples).

Since the protocol has two elution steps that are each saved separately from each other for each sample, I did the following to combine the two elution fractions into a single sample:

  • Pelleted one elution fraction from each sample
  • Discarded supernatant from pelleted sample
  • Transferred second elution fraction to the pellet from the first elution fraction
  • Pelleted second elution fraction

The rest of the ethanol precipitation procedure was followed per the manufacturer’s protocol.

Final pellets were resuspended in 25μL of Buffer EB (Qiagen) and stored temporarily on ice for quantification.

MBD Enrichment – Crassostrea virginica Sheared DNA Day 2

Continued MBD enrichment for C.virginica and Qiagen project from yesterday.

Followed the MethylMiner Methylated DNA Enrichment Kit (Invitrogen) manufacturer’s protocol for input DNA amounts of 1–10ug (I am using 8ug in each of two samples).

Performed a single, high-salt elution.

Samples were precipitated O/N @ -80C.