More RNA extractions

Continued RNA extractions of Anthopleura elegantissima samples using the Qiagen RNeasy kit. Further information on these samples can be found in this post. QIAshredder columns were again used for sample homogenization, and the Qubit RNA HS assay was used for quantification. Here are the results:

Species            Sample   RNA (ng/ul)   Total vol (ul)   Notes
A. elegantissima   A4-001   96            40
A. elegantissima   A5-416   59            40
A. elegantissima   A6-B22   >100          40
A. elegantissima   H4-415   >100          40
A. elegantissima   H1-518   >100          40
A. elegantissima   H2-019   >100          40

Gunzip – Trimmed Illumina Geoduck HiSeq Genome Sequencing Data

In preparation to run SparseAssembler, I needed to gunzip the trimmed, gzipped FASTQ files from 20180401.

Ran the following slurm script on our Mox node:


#!/bin/bash
## Job Name
#SBATCH --job-name=20180404_geoduck_gunzip
## Allocation Definition
#SBATCH --account=srlab
#SBATCH --partition=srlab
## Resources
## Nodes (We only get 1, so this is fixed)
#SBATCH --nodes=1
## Walltime (days-hours:minutes:seconds format)
#SBATCH --time=30-00:00:00
## Memory per node
#SBATCH --mem=500G
##turn on e-mail notification
#SBATCH --mail-type=ALL
#SBATCH --mail-user=samwhite@uw.edu
## Specify the working directory for this job
#SBATCH --workdir=/gscratch/scrubbed/samwhite/illumina_geoduck_hiseq/20180328_trim_galore_illumina_hiseq_geoduck

for i in /gscratch/scrubbed/samwhite/illumina_geoduck_hiseq/20180328_trim_galore_illumina_hiseq_geoduck/*.gz; do
    filename="${i##*/}"        # strip the directory path
    no_ext="${filename%%.*}"   # strip all extensions (e.g. .fastq.gz)
    # Decompress to a new .fastq file, leaving the original .gz intact.
    gunzip < "$i" > "${no_ext}.fastq"
done

Results:

This crashed shortly after initiating the run (~30 mins later). I received the following email notification:

SLURM Job_id=155940 Name=20180404_geoduck_gunzip Failed, Run time 00:30:40, NODE_FAIL

It did not generate a slurm output file, nor any gunzipped files. Will contact UW IT…

UPDATE 20180404

Weirdly, about an hour after the crash, I received the following email indicating the job had started again (I did not resubmit it, btw):

SLURM Job_id=155940 Name=20180404_geoduck_gunzip Began, Queued time 00:02:29

The job completed about 3 hrs later.

DNA Isolation & Quantification – Geoduck larvae metagenome filter rinses

This is another attempt to isolate DNA from two more of the geoduck hatchery metagenome samples Emma delivered on 20180313.

The previous attempt, using DNAzol, did not yield any DNA.

I isolated DNA from the following two samples:

  • MG 5/19 #4
  • MG 5/26 #4

I used the DNA Stool Kit (Qiagen), following the “Stool Human DNA” protocol with the following changes:

  • Incubated @ 95°C for 5 mins after the initial addition of Buffer ASL. This is a lysis step that might help increase yields (see the “Stool Pathogen Detection” protocol).
  • Did not add InhibitEX Tablet. Deemed unnecessary, since these weren’t stool samples.
  • Eluted in 50μL of Buffer AE

I opted to follow the “Stool Human DNA” protocol, as it processes a larger portion of the initial sample, compared to the “Stool Pathogen Detection” protocol (600μL vs. 200μL).

Samples were quantified using the Roberts Lab Qubit 3.0 with the Qubit High Sensitivity dsDNA Kit (Invitrogen).

10μL of each sample were used.

Results:

Neither sample yielded any detectable DNA. Will discuss with Steven.

Titrations – Yaamini’s Seawater Samples

All data is deposited in the following GitHub repo:

Sample sizes: ~50g

LabX Method:

Daily pH calibration data file:

Daily pH log file:

Titrant batch:

CRM Batch:

Daily CRM data file:

Sample data file(s):

See metadata file for sample info (including links to master samples sheets):

Titrations – Yaamini’s Seawater Samples

All data is deposited in the following GitHub repo:

Sample sizes: ~50g

LabX Methods:

Daily pH calibration data file:

Daily pH log file:

Titrant batch:

CRM Batch:

Daily CRM data file:

Sample data file(s):

See metadata file for sample info (including links to master samples sheets):

TrimGalore!/FastQC/MultiQC – Illumina HiSeq Genome Sequencing Data Continued

The previous attempt at this was interrupted by a random glitch with our Mox HPC node.

I removed the last files processed by TrimGalore!, just in case they were incomplete. I updated the slurm script to process only the remaining files that had not been processed when the Mox glitch happened (including the files I deemed “incomplete”).

As in the initial run, I kept the option in TrimGalore! to automatically run FastQC on the trimmed output files.

TrimGalore! slurm script: 20180401_trim_galore_illumina_geoduck_hiseq_slurm.sh

MultiQC was run locally once the files were copied to Owl.

Results:

Job completed on 20180404.

Trimmed FASTQs: 20180328_trim_galore_illumina_hiseq_geoduck/

MD5 checksums: 20180328_trim_galore_illumina_hiseq_geoduck/checksums.md5

  • MD5 checksums were generated on Mox node and verified after copying to Owl.
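The generate-then-verify pattern behind that bullet can be sketched as follows. Everything here is a stand-in demo: the paths are placeholders, not the real Mox scratch or Owl locations, and the FASTQ content is fabricated.

```shell
#!/bin/bash
# Hypothetical sketch: generate checksums next to the files on the
# source system, copy data plus manifest, then re-check on the
# destination. Paths and file contents are made up for the demo.
src="/tmp/md5_demo_src"
dest="/tmp/md5_demo_dest"
mkdir -p "$src" "$dest"
printf '@read1\nACGT\n+\nIIII\n' > "$src/sample_R1.fastq"

# Generate the manifest on the source side.
( cd "$src" && md5sum *.fastq > checksums.md5 )

# Copy the data along with the checksum manifest.
cp "$src"/sample_R1.fastq "$src"/checksums.md5 "$dest"/

# Verify on the destination; md5sum -c exits non-zero on any mismatch.
( cd "$dest" && md5sum -c checksums.md5 )
```

Verifying with the same manifest that traveled with the files is what catches a corrupted or truncated copy.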

Slurm output file: 20180401_trim_galore_illumina_geoduck_hiseq_slurm.sh

TrimGalore! output: 20180328_trim_galore_illumina_hiseq_geoduck/20180404_trimgalore_reports/

FastQC output: 20180328_trim_galore_illumina_hiseq_geoduck/20180328_fastqc_trimmed_hiseq_geoduck/

MultiQC output: 20180328_trim_galore_illumina_hiseq_geoduck/20180328_fastqc_trimmed_hiseq_geoduck/multiqc_data/

MultiQC HTML report: 20180328_trim_galore_illumina_hiseq_geoduck/20180328_fastqc_trimmed_hiseq_geoduck/multiqc_data/multiqc_report.html

Trimming completed and the FastQC results look much better than before.

Will proceed with full-blown assembly!

Data Received – Crassostrea virginica MBD BS-seq from ZymoResearch

Received the sequencing data from ZymoResearch for the Crassostrea virginica gonad MBD DNA that was sent to them on 20180207 for bisulfite conversion, library construction, and sequencing.

Gzipped FASTQ files were:

  1. downloaded to Owl/nightingales/C_virginica
  2. MD5 checksums verified
  3. MD5 checksums appended to the checksums.md5 file
  4. readme.md file updated
  5. nightingales Google Sheet updated
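Steps 2 and 3 above can be sketched like this. The directory and the vendor manifest name are hypothetical (the real data lives on Owl), and the FASTQ content is fabricated for the demo.

```shell
#!/bin/bash
# Hypothetical sketch of checksum verification and bookkeeping for a
# newly received file. Directory and manifest names are made up.
dest="/tmp/nightingales_demo"
mkdir -p "$dest"
cd "$dest"
printf '@read1\nACGT\n+\nIIII\n' | gzip > zr2096_1_s1_R1.fastq.gz

# A vendor-supplied manifest would normally arrive with the data;
# here we fabricate one so the demo is self-contained.
md5sum zr2096_1_s1_R1.fastq.gz > vendor.md5

# Step 2: verify the downloaded file (non-zero exit on any mismatch).
md5sum -c vendor.md5

# Step 3: append the verified checksums to the cumulative manifest.
cat vendor.md5 >> checksums.md5
```

Appending only after a successful `md5sum -c` keeps the cumulative checksums.md5 trustworthy.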

Here’s the list of files received:

zr2096_10_s1_R1.fastq.gz
zr2096_10_s1_R2.fastq.gz
zr2096_1_s1_R1.fastq.gz
zr2096_1_s1_R2.fastq.gz
zr2096_2_s1_R1.fastq.gz
zr2096_2_s1_R2.fastq.gz
zr2096_3_s1_R1.fastq.gz
zr2096_3_s1_R2.fastq.gz
zr2096_4_s1_R1.fastq.gz
zr2096_4_s1_R2.fastq.gz
zr2096_5_s1_R1.fastq.gz
zr2096_5_s1_R2.fastq.gz
zr2096_6_s1_R1.fastq.gz
zr2096_6_s1_R2.fastq.gz
zr2096_7_s1_R1.fastq.gz
zr2096_7_s1_R2.fastq.gz
zr2096_8_s1_R1.fastq.gz
zr2096_8_s1_R2.fastq.gz
zr2096_9_s1_R1.fastq.gz
zr2096_9_s1_R2.fastq.gz

Here’s the sample processing history:

RNA extraction – coral and anemone samples

I did some further RNA extractions of Porites astreoides and Anthopleura elegantissima samples using the Qiagen RNeasy kit. Further information on these samples can be found in the previous post. I again used QIAshredder columns for sample homogenization, and used the Qubit RNA HS assay for quantification. Here are the results:

Organism           Sample        RNA (ng/ul)   Sample vol (ul)   Notes
A. elegantissima   A4-483        >100          40
A. elegantissima   M4-414        <20           40
A. elegantissima   H1-400        >100          40
A. elegantissima   H6-120        >100          40
A. elegantissima   M3-461        >100          40                lowest amount of starting material
A. elegantissima   A2-438        >100          40
P. astreoides      past7-cbc     75            40
P. astreoides      past7-home    99            40
P. astreoides      past14-cbc    >100          40
P. astreoides      past14-home   >100          40
P. astreoides      past5-cbc     67            40
P. astreoides      past5-home    >100          40

I may do some dilutions later.
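If dilutions are needed, the volumes come straight from C1V1 = C2V2. A quick sketch (the concentrations and target volume here are made up for illustration, not taken from the table above):

```shell
#!/bin/bash
# C1V1 = C2V2 dilution math: how much stock RNA and nuclease-free water
# to combine for a target concentration. Numbers are hypothetical.
stock_conc=99    # ng/ul of the stock RNA
target_conc=50   # ng/ul desired after dilution
final_vol=20     # ul total volume after dilution

awk -v c1="$stock_conc" -v c2="$target_conc" -v v2="$final_vol" 'BEGIN {
    v1 = (c2 * v2) / c1            # ul of stock RNA needed
    printf "stock: %.2f ul, water: %.2f ul\n", v1, v2 - v1
}'
```

For these example numbers, that works out to ~10.1 µl of stock brought up to 20 µl with water.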

FastQC/MultiQC – Illumina HiSeq Genome Sequencing Data

Since running SparseAssembler seems to be working and actually able to produce assemblies, I’ve decided I’ll try to beef up the geoduck genome assembly with the rest of our existing genomic sequencing data.

Yesterday, I transferred our BGI geoduck data to our Mox node and ran it through FastQC.

I transferred our Illumina HiSeq data sets (*NMP*.fastq.gz) to our Mox node (/gscratch/scrubbed/samwhite/illumina_geoduck_hiseq). These were part of the Illumina-sponsored sequencing project.

I verified the MD5 checksums (not documented) and then ran FastQC, followed by MultiQC.

FastQC slurm script: 20180328_fastqc_illumina_geoduck_hiseq_slurm.sh

This was followed with MultiQC (run locally, after copying the FastQC output to Owl).
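The FastQC-then-MultiQC pass can be sketched like this. Directory names and the thread count are placeholders (the actual FastQC run used the slurm script above, and MultiQC ran locally), and both tools must be on the PATH.

```shell
# Sketch only -- paths and thread count are placeholders. FastQC writes
# one report per FASTQ file; MultiQC then aggregates all of those
# reports into a single HTML summary.
fastqc --threads 8 --outdir fastqc_out/ /path/to/*NMP*.fastq.gz

# Run after copying fastqc_out/ to wherever MultiQC runs locally.
multiqc --outdir multiqc_out/ fastqc_out/
```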

Results:

FastQC output: 20180328_illumina_hiseq_geoduck_fastqc

MultiQC output: 20180328_illumina_hiseq_geoduck_fastqc/multiqc_data

MultiQC HTML report: 20180328_illumina_hiseq_geoduck_fastqc/multiqc_data/multiqc_report.html

Well, lots of fails. There are high levels of “Per Base N Content” (these are only warnings, but we haven’t received data with these warnings before). Also, they all fail the “Overrepresented sequences” analysis.

I’ll run these through TrimGalore! (probably twice), and see how things change.

TrimGalore!/FastQC/MultiQC – Illumina HiSeq Genome Sequencing Data

Previous FastQC/MultiQC analysis of the geoduck Illumina HiSeq data (NMP.fastq.gz files) revealed a high level of overrepresented sequences, high levels of Per Base N Content, failure of Per Sequence GC Content, and a few other bad things.

Ran these through TrimGalore! on our Mox HPC node.

Added an option in TrimGalore! to automatically run FastQC on the trimmed output files.

TrimGalore! slurm script: 20180328_trim_galore_illumina_geoduck_hiseq_slurm.sh
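The trimming step with the built-in FastQC pass might look like this. The file names are placeholders (see the linked slurm script for the actual invocation), and Trim Galore! must be on the PATH.

```shell
# Sketch only -- input file names are placeholders. --fastqc is the
# option mentioned above: it runs FastQC on the trimmed output
# automatically. --paired keeps R1/R2 reads in sync during trimming.
trim_galore --paired --fastqc --output_dir trimmed/ \
    sample_R1.fastq.gz sample_R2.fastq.gz
```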

Results:

Slurm output file: slurm-153098.out

I received a job status email on 20180330:

SLURM Job_id=153098 Name=20180328_trim_galore_geoduck_hiseq Failed, Run time 1-17:22:47, FAILED, ExitCode 141

The slurm output file didn’t indicate any errors, so I restarted the job and contacted UW IT to see if I could get more info.

UPDATE

Here’s their response:

04/02/2018 9:13 AM PDT – Matt

Hi Sam,

Your job died because of a networking hiccup that caused GPFS (/gscratch filesystem and such) to expel the node from the GPFS cluster. It’s a symptom of a known ongoing network issue that we’re actively working with Lenovo/Intel/IBM. Things like this aren’t happening super frequently, but enough that we recognized something was wrong and started investigating with vendors. Unfortunately, your job was unlucky and got bitten by it.

So, in short, you or your job didn’t do anything wrong. If you haven’t already (and if it is possible for your use case), we would highly recommend building in some sort of periodic state-preserving behavior (and a method to “resume”) into your longer-running jobs. Jobs can unexpectedly die for any number of reasons, and it is nice not to lose days of compute progress when that happens.

-Matt

Well, okay then.
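Matt’s suggestion about resumable jobs maps directly onto something like the gunzip loop from earlier: skip outputs that already exist, and write through a temp file so a crash never leaves a partial result behind. A minimal sketch (the directory is a placeholder for the real scratch path, and the input file is fabricated for the demo):

```shell
#!/bin/bash
# Resumable decompression: finished outputs are skipped on restart, and
# each output is written to a .tmp file first so an interrupted run
# never leaves a partial .fastq that would be wrongly skipped later.
work_dir="/tmp/gunzip_resume_demo"   # placeholder for the scratch dir
mkdir -p "$work_dir"
printf '@read1\nACGT\n+\nIIII\n' | gzip > "$work_dir/sample.fastq.gz"

for i in "$work_dir"/*.gz; do
    filename="${i##*/}"
    no_ext="${filename%%.*}"
    out="$work_dir/${no_ext}.fastq"
    [ -s "$out" ] && continue     # already done in a previous run; skip
    # Only move the temp file into place once gunzip fully succeeds.
    gunzip < "$i" > "${out}.tmp" && mv "${out}.tmp" "$out"
done
```

Rerunning the script after a crash then only redoes the files that never finished, instead of days of completed work.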