Tag Archives: Pacific oyster

Samples Received – Triploid Crassostrea gigas from Nisbet Oyster Company

Received a bag of Pacific oysters from Nisbet Oyster Company.

Four oysters were shucked and the following tissues were collected from each:

  • ctenidia
  • gonad
  • mantle
  • muscle

Utensils were cleaned and sterilized in a 10% bleach solution between oysters.

Tissues were held briefly on wet ice and then stored at -80C in Rack 2, Column 3, Row 1.


Morphometrics – Crassostrea gigas OA Selection Bags Using ImageJ

Due to some sort of data mishandling, morphometric data previously collected for thousands (seriously, THOUSANDS) of Pacific oysters in 2014 was found to be incorrect. Unfortunately, there’s not enough of a “paper trail” to backtrack and pinpoint what went wrong or where, so the errors couldn’t simply be corrected. Essentially, they all had to be re-measured!

The one good thing is that all of the oysters were photographed at the time of sampling (along with a ruler), which allows us to go back and measure them.

I re-measured them all using the free imaging software ImageJ.

Oyster measurements taken were length and width. For length, the oyster was measured from the hinge to the leading edge of the shell, staying as close to the theoretical center line of the oyster as possible while still capturing the two points furthest from each other. The width was measured at the apparent widest part of the oyster, as close to perpendicular to the length measurement line as possible.

Each image with the measurement lines was saved as a .tif file, with “measured” appended to the filename. Additionally, each image produced a corresponding Excel file named CgOA_measurements_bag_info.xls, where “bag_info” contains information about the oysters in that set.

The pixel scale for each image was set using a 100mm (10cm) span on the ruler included in the image. Images were greatly enlarged when setting the scale to improve accuracy. Some images did not contain a ruler; for those, the scale was set using the length of a weigh boat: 89mm (8.9cm). Weigh boat dimensions were taken from the manufacturer’s specs: VWR Cat#89106-768 (8.9cm x 8.9cm x 2.5cm). Files corresponding to these sets of measurements are appended with “no_ruler” in the filename. The sample sets measured in this fashion were oyster bags:

  • 492
  • 530

The measured images and the individual Excel files were uploaded to the following Dropbox location: Dropbox/Friedman Lab/Carolyn Lab/Manuscripts/2016/Cg OA selection/Data/Sam DATA.

Data from the individual files was aggregated in the following spreadsheet in Dropbox: Dropbox/Friedman Lab/Carolyn Lab/Manuscripts/2016/Cg OA selection/Data/Sam DATA/files to merge/Cg OA selection 9mo sampling All 3 sites_survival data_ FOR SAM to add L and W data.xlsx

Data is still missing (i.e. no labelled image file was present) for the following oysters:

  • 458 21-37
  • 486 1-20
  • 556 20-45
  • 588 1-19

Here’s a quick summary of the amount of data I gathered. In a separate post, I’ll detail how I used ImageJ not only to measure the samples, but also to create a more reproducible record of data acquisition, so we can better follow the “paper trail”: who acquired the data, how they acquired it, and how others can easily review it. That way, once all this data is transposed to some master spreadsheet, it will still be granularly accessible for any future troubleshooting that might be needed.

So, what did this work produce and how did I determine this information?

Using Bash (i.e. command line in Terminal):

Count the number of image files analyzed (i.e. saved) by ImageJ:

ls -1 *.tif | wc -l



Count the number of spreadsheet files produced by ImageJ:

ls -1 CgOA*.xls | wc -l


Well, there’s an odd discrepancy. These two numbers should match. If anything were off, I’d expect the images to outnumber the Excel files; that would indicate I went through the measuring process but neglected to save the data. Instead, this suggests there’s an “extra” Excel file. It’s possible I saved an image to a different location by accident. Will look into this…
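One way to hunt for the mismatch is to check each spreadsheet for a matching image. This is just a sketch: it assumes the measured .tif filenames contain the same “bag_info” text as their CgOA_measurements_*.xls counterparts, which may not hold for the real files.

```shell
# Hypothetical check: flag spreadsheets with no matching "measured" image.
# Assumes the .tif names embed the same bag info as the .xls names --
# adjust the patterns if the real naming differs.
for f in CgOA_measurements_*.xls; do
    bag="${f#CgOA_measurements_}"   # strip the prefix
    bag="${bag%.xls}"               # strip the extension
    ls *"${bag}"*.tif >/dev/null 2>&1 || echo "No image matching: $f"
done
```

Any spreadsheet printed by this loop would be a candidate for the “extra” file.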

Count the number of measurements taken. This will be a two-step process.

First, aggregate all the data from the individual data files into a single file:

for i in CgOA*.xls; do awk 'NR>1' "$i" >> all_measures.csv; done

The code above uses a for loop to look at each Excel file (files beginning with “CgOA” and ending with “.xls”). Each file ($i) is passed to the program awk, which concatenates/appends the contents of the file (excluding the header line; NR>1) to a new file (all_measures.csv).
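To make the NR>1 behavior concrete, here’s a toy run on two fabricated two-line files standing in for the real ImageJ outputs (filenames and values are made up):

```shell
# Toy demonstration of the header-stripping (NR>1) step:
printf 'Label\tLength\noyster1\t85.2\n' > demo_a.xls
printf 'Label\tLength\noyster2\t91.7\n' > demo_b.xls
# Append each file, minus its header line, to a combined file:
for i in demo_*.xls; do awk 'NR>1' "$i" >> all_measures_demo.csv; done
cat all_measures_demo.csv   # only the data rows remain; both headers dropped
```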

Next, count the number of lines (i.e. measurements) in the all_measures.csv file:

wc -l < all_measures.csv
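Since each oyster contributes two rows (one length, one width), a rough oyster count is half the line count. This assumes no stray or missing rows, so treat it as a ballpark figure:

```shell
# Approximate oyster count: two measurements per oyster.
measurements=$(wc -l < all_measures.csv)
echo $(( measurements / 2 ))
```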


Whoa! That’s pretty remarkable. Over 5000 individual measurements were recorded (length and width for each oyster). That means there were over 2600 oysters!!!

Hopefully we won’t have to re-measure these guys a third time!


Dissection – Frozen Geoduck & Pacific Oyster

We’re working on a project with Washington Department of Natural Resources’ (DNR) Micah Horwith to identify potential proteomic biomarkers in geoduck (Panopea generosa) and Pacific oyster (Crassostrea gigas). One aspect of the project is determining how best to sample juveniles of both species to minimize changes in the proteome of ctenidia tissue during sampling. Generally, live animals are shucked, tissue is dissected, and then the tissue is “snap” frozen. However, Micah’s crew will be collecting animals from wild sites around Puget Sound and, because of the remote locations and the means of collection, will have limited tools and time to perform this type of sampling. Time is a significant component that will have great impact on the proteomic status of each individual.

As such, Micah and crew wanted to try a different sampling approach that would help preserve the state of the proteome at the time of collection. They collected juveniles of both species and “snap” froze them in the field in a dry ice/ethanol bath, in hopes of best preserving the ctenidia proteome status. I’m attempting to dissect out the frozen ctenidia tissue from both types of animals and am reporting on the success/failure of this preservation/sampling protocol.

To test this, I transferred animals (contained in baggies) from the -80C to dry ice. Utensils and weigh boats were cooled on dry ice.



Quick summary: This method won’t work, and I think sampling will have to take place in the field.

The details of why this won’t work (along with images of the process) are below.


The first issue with this sampling method (worth noting because I believe dry ice/ethanol baths will be used even with a different sampling methodology) is that the ethanol in the dry ice bath at the time of animal collection can wash off baggie labels. Notice in the screenshot below that the label on the geoduck baggie (the baggie on the left) has, for all intents and purposes, completely washed off:



Starting with C.gigas, opening the animal was relatively easy. Granted, the animal had become brittle, but access to, and identification of, tissues ended up being pretty easy:





However, dissecting out just ctenidia is a lost cause. The mantle and the ctenidia are, essentially, fused together in a frozen block through the oyster. Although the image below might look like part of the shell, it is not. It is strictly a chunk of frozen ctenidia/mantle tissue:




The geoduck were even more difficult. In fact, I couldn’t even manage to remove the soft tissue from the shell (for the uninitiated, there are two geoduck in the image below). I only managed to crush most of the tissue contained within the shell, making it even more impossible (if that’s possible) to identify and dissect out the ctenidia:





SRA Submission – Individual Transcriptomic Profiles of C.gigas Before & After Heat Shock

RNA-seq experiment conducted by Claire in 2013.

She sampled mantle tissue from three adult oysters, allowed them to recover from the sampling (one week?) and then subjected those same oysters to a 1hr heat shock at 40C and collected mantle tissue from them again.

As this is our first Sequence Read Archive (SRA) submission in many years, I decided to start with these samples: the small number (6) from the Illumina sequencing we had done made the submission manageable.

An overview of the basic SRA submission process is here.

The current status can be seen in the screen cap below. Current release date is set for a year from now, but will likely bump it up. Need Steven to review the details of the submission (BioProject, Experiment descriptions, etc.) before I initiate the public release. Will update this post with the SRA number once we receive it.

Here’s the list of files uploaded to the SRA:


SRA Accession: SRP072251


Data Received – Bisulfite-treated Illumina Sequencing from Genewiz

Received notice the sequencing data was ready from Genewiz for the samples submitted 20151222.

Download the FASTQ files from Genewiz project directory:

wget -r -np -nc -A "*.gz" ftp://username:password@ftp2.genewiz.com/Project_BS1512183
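After a recursive FTP download like this, it’s worth confirming none of the archives were truncated in transit. This check is my addition, not part of the original workflow; gzip -t verifies each file without writing anything to disk:

```shell
# Optional integrity check on the downloaded gzip files:
# gzip -t exits non-zero if an archive is corrupt or incomplete.
for f in *.fastq.gz; do
    gzip -t "$f" 2>/dev/null || echo "Corrupt or incomplete: $f"
done
```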

Since two species were sequenced (C.gigas & O.lurida), the corresponding files are in the following locations:




In order to process the files, I needed to identify just the FASTQ files from this project and save the list of files to a bash variable called “bsseq”:

bsseq=$(ls | grep -E '^[0-9]{1,2}_' | grep -v "2bRAD")


  • This initializes a variable called “bsseq” to the output of the command following the equals sign.
$(ls | grep -E '^[0-9]{1,2}_' | grep -v "2bRAD")
  • This lists (ls) all files and pipes (|) them to the grep command, which (using extended regular expressions, -E) finds files beginning with (^) one or two digits ([0-9]{1,2}) followed by an underscore (_), then pipes those results (|) to a second grep command, which excludes (-v) any results containing the text “2bRAD”.
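A quick way to sanity-check the filter is to feed it a made-up mixed listing (these names are hypothetical; only the first should survive both greps):

```shell
# Toy check of the filename filter:
printf '%s\n' 1_ATCACG_L001_R1_001.fastq.gz 3_2bRAD_L001.fastq.gz readme.md \
  | grep -E '^[0-9]{1,2}_' | grep -v "2bRAD"
# -> 1_ATCACG_L001_R1_001.fastq.gz
```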


1_ATCACG_L001_R1_001.fastq.gz 1NF11 O.lurida
2_CGATGT_L001_R1_001.fastq.gz 1NF15 O.lurida
3_TTAGGC_L001_R1_001.fastq.gz 1NF16 O.lurida
4_TGACCA_L001_R1_001.fastq.gz 1NF17 O.lurida
5_ACAGTG_L001_R1_001.fastq.gz 2NF5 O.lurida
6_GCCAAT_L001_R1_001.fastq.gz 2NF6 O.lurida
7_CAGATC_L001_R1_001.fastq.gz 2NF7 O.lurida
8_ACTTGA_L001_R1_001.fastq.gz 2NF8 O.lurida
9_GATCAG_L001_R1_001.fastq.gz M2 C.gigas
10_TAGCTT_L001_R1_001.fastq.gz M3 C.gigas
11_GGCTAC_L001_R1_001.fastq.gz NF2_6 O.lurida
12_CTTGTA_L001_R1_001.fastq.gz NF_18 O.lurida


I wanted to add some information about the project to the readme file, like total number of sequencing reads generated and the number of reads in each FASTQ file.

Here’s how to count the total number of reads generated in this project:

totalreads=0; for i in $bsseq; do linecount=`gunzip -c "$i" | wc -l`; readcount=$((linecount/4)); totalreads=$((readcount+totalreads)); done; echo $totalreads

Total reads = 138,530,448

C.gigas reads: 22,249,631

O.lurida reads: 116,280,817
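The two per-species counts can be cross-checked against the overall total with shell arithmetic:

```shell
# Sanity check: per-species read counts should sum to the reported total.
echo $(( 22249631 + 116280817 ))   # 138530448
```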

Code explanation:

totalreads=0;
  • Creates a variable called “totalreads” and initializes its value to 0.
for i in $bsseq;
  • Initiates a for loop to process the list of files stored in the $bsseq variable. The FASTQ files have been compressed with gzip and end with the .gz extension.
do linecount=
  • Creates a variable called “linecount” that stores the result of the following command:
`gunzip -c "$i" | wc -l`;
  • Unzips the file ($i) to stdout (-c) instead of actually uncompressing it. This is piped to the word count command with the line flag (wc -l) to count the number of lines in the file.
readcount=$((linecount/4));
  • Divides the value stored in linecount by 4, because an entry for a single Illumina read comprises four lines. This value is stored in the “readcount” variable.
totalreads=$((readcount+totalreads));
  • Adds the readcount for the current file to totalreads.
done;
  • Ends the for loop.
echo $totalreads
  • Prints the value of totalreads to the screen.
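As a quick check of the lines/4 logic, here’s a toy run on a fabricated two-read FASTQ file (filename and reads are made up):

```shell
# Build a gzipped FASTQ containing exactly two 4-line read entries:
printf '@r1\nACGT\n+\nIIII\n@r2\nTGCA\n+\nIIII\n' | gzip > toy.fastq.gz
# Count lines via stdout decompression, then divide by 4:
linecount=$(gunzip -c toy.fastq.gz | wc -l)
echo $(( linecount / 4 ))   # 2
```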

Next, I wanted to generate a list of the FASTQ files and corresponding read counts, and append this information to the readme file.

for i in $bsseq; do linecount=`gunzip -c "$i" | wc -l`; readcount=$((linecount/4)); printf "%s\t%s\n\n" "$i" "$readcount" >> readme.md; done

Code explanation:

for i in $bsseq; do linecount=`gunzip -c "$i" | wc -l`; readcount=$((linecount/4));
  • Same for loop as above that calculates the number of reads in each FASTQ file.
printf "%s\t%s\n\n" "$i" "$readcount"
  • This formats the printed output. The "%s\t%s\n\n" portion prints the value of $i as a string (%s), followed by a tab (\t), followed by the value of $readcount as a string (%s), followed by two consecutive newlines (\n\n) to provide an empty line between entries. See the readme file linked above to see how the output looks.
>> readme.md; done
  • This appends the result from each loop to the readme.md file and ends the for loop (done).
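Here’s a one-off run of the printf format with hypothetical values, showing the tab-separated line followed by a blank line:

```shell
# Demonstrate the readme output format with made-up values:
i="1_ATCACG_L001_R1_001.fastq.gz"
readcount=1000000
printf "%s\t%s\n\n" "$i" "$readcount"
```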



Illumina Methylation Library Quantification – BS-seq Oly/C.gigas Libraries

Re-quantified the libraries completed yesterday using the Qubit 3.0 dsDNA HS (high sensitivity) assay, because the library concentrations were too low for the standard broad range kit.


Qubit Quants and Library Normalization Calcs: 20151222_qubit_illumina_methylation_libraries

Sample Concentration (ng/μL)
1NF11 2.42
1NF15 1.88
1NF16 2.74
1NF17 2.54
2NF5 2.72
2NF6 2.44
2NF7 2.38
2NF8 1.88
M2 2.18
M3 2.56
NF2_6 2.5
NF_18 2.66


Things look pretty good. The TruSeq DNA Methylation Library Kit (Illumina) suggests that the libraries produced should end up with concentrations >3ng/μL, but we have plenty of DNA here to make a pool for running on the HiSeq2500.
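The actual normalization calcs are in the linked spreadsheet, but a rough pooling sketch can be done in the shell. This example (my addition, with an arbitrary 10ng-per-library target) computes the volume of each library needed to contribute equal mass to the pool, using the Qubit concentrations above; awk handles the floating-point division:

```shell
# Hypothetical equal-mass pooling: uL of each library for 10 ng input.
awk 'BEGIN {
  n = split("2.42 1.88 2.74 2.54 2.72 2.44 2.38 1.88 2.18 2.56 2.5 2.66", conc, " ")
  split("1NF11 1NF15 1NF16 1NF17 2NF5 2NF6 2NF7 2NF8 M2 M3 NF2_6 NF_18", name, " ")
  for (i = 1; i <= n; i++) printf "%s\t%.2f uL\n", name[i], 10 / conc[i]
}'
```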


Illumina Methylation Library Construction – Oly/C.gigas Bisulfite-treated DNA

Took the bisulfite-treated DNA from 20151218 and made Illumina libraries using the TruSeq DNA Methylation Library Kit (Illumina).

Quantified the completed libraries using the Qubit 3.0 dsDNA BR Kit (ThermoFisher).

Evaluated the DNA with the Bioanalyzer 2100 (Agilent) using the DNA 12000 assay. Illumina recommended using the High Sensitivity assay, but we don’t have access to that so I figured I’d just give the DNA 12000 assay a go.

SampleName IndexNumber BarCode



Library Quantification (Google Sheet): 20151221_quantification_illumina_methylation_libraries

Test Name Concentration (ng/μL)
1NF11 Out of range
1NF15 2.14
1NF16 2.74
1NF17 2.64
2NF5 2.92
2NF6 Out of range
2NF7 2.42
2NF8 2.56
M2 Out of range
M3 2.1
NF2_6 2.38
NF2_18 Out of range


I used the Qubit’s BR (broad range) kit because I wasn’t sure what concentrations to expect. I need to use the high sensitivity kit to get a better evaluation of all the samples’ concentrations.



Bioanalyzer Data File (Bioanalyzer 2100): 2100_20expert_DNA_2012000_DE72902486_2015-12-21_16-58-43.xad


Ha! Well, looks like you definitely need to use the DNA High Sensitivity assay for the Bioanalyzer to pick up anything. Although, I guess you can see a slight hump in most of the samples at the appropriate size (~300bp); you just have to squint. ;)


Bioanalyzer – Bisulfite-treated Oly/C.gigas DNA

Following the guidelines of the TruSeq DNA Methylation Library Prep Guide (Illumina), I ran 1μL of each sample on an RNA Pico 6000 chip on the Seeb Lab’s Bioanalyzer 2100 (Agilent) to confirm that bisulfite conversion from earlier today worked.


Data File 1 (Bioanalyzer 2100): 2100 expert_Eukaryote Total RNA Pico_DE72902486_2015-12-18_21-05-04.xad

Data File 2 (Bioanalyzer 2100): 2100 expert_Eukaryote Total RNA Pico_DE72902486_2015-12-18_21-42-55.xad



Firstly, the ladder failed to produce any peaks. I’m not sure why this happened. Possibly it wasn’t denatured? Seems unlikely, but the next time I run the Pico assay, I’ll denature the ladder aliquot prior to running.

Overall, the samples look as they should (see image from TruSeq DNA Methylation Kit manual below), albeit some are a bit lumpy.


Bisulfite Treatment – Oly Reciprocal Transplant DNA & C.gigas Lotterhos DNA for BS-seq

After confirming that the DNA available for this project looked good, I performed bisulfite treatment on the following gDNA samples:

  • 1NF11
  • 1NF15
  • 1NF16
  • 1NF17
  • 2NF5
  • 2NF6
  • 2NF7
  • 2NF8
  • NF2_6
  • NF2_18
  • M2
  • M3

Sample names break down like this:


1NF# (Oysters outplanted at Fidalgo Bay)

1 = Fidalgo Bay outplants

NF = Fidalgo Bay broodstock origination

# = Sample number


2NF# (Oysters outplanted at Oyster Bay)

Same as above, but:

2 = Oyster Bay outplants


NF2_# (Oysters grown in Oyster Bay; DNA provided by Katherine Silliman)

NF2 = Fidalgo Bay broodstock origination, family #2

# = Sample number


M2/M3 = C.gigas from Katie Lotterhos


Followed the guidelines of the TruSeq DNA Methylation Library Prep Guide (Illumina).

Used the EZ DNA Methylation-Gold Kit (ZymoResearch) according to the manufacturer’s protocol with the following changes/notes:

  • Used 100ng DNA (per Illumina recs; Zymo recommends at least 200ng for “optimal results”).
  • Thermal cycling was performed in 0.5mL thin-wall tubes in a PTC-200 (MJ Research) using a heated lid.
  • Centrifugations were performed at 13,000g.
  • Desulphonation incubation was 20mins.

DNA quantity calculations are here (Google Sheet): 20151218_oly_bisulfite_calcs

Samples were stored @ -20C. Will check samples via Bioanalyzer before proceeding to library construction.