# Transcriptome Assembly – Olympia oyster RNAseq Data with Trinity

Used all of our current oly RNAseq data to assemble a transcriptome using Trinity.

Trinity was run on our Mox HPC node.

Reads were trimmed using the built-in version of Trimmomatic with the default settings.

SBATCH script:

Despite the naming conventions, this job was submitted to the Mox scheduler on 20180912 and finished on 20180913.

After job completion, the entire folder was gzipped using an interactive node (the following method of gzipping is SUPER fast, btw):

tar -c 20180827_trinity_oly_RNAseq | pigz > 20180827_trinity_oly_RNAseq.tar.gz
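
For the record, unpacking goes the same way, with pigz handling the decompression; a minimal sketch, assuming pigz is on the PATH:

# Decompress with pigz and unpack the tar stream
pigz -dc 20180827_trinity_oly_RNAseq.tar.gz | tar -x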

#### RESULTS:

Output folder:

Trinity assembly (FastA):

Next up, I’ll follow up on this GitHub issue and get some bedgraphs generated.
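
For reference, one common way to generate bedgraphs from alignments is with BEDTools; a minimal sketch, assuming a position-sorted BAM file (file names here are placeholders, and the actual approach will follow whatever gets decided in that issue):

# Compute per-base coverage in BedGraph format from a sorted BAM
bedtools genomecov -ibam sorted_alignments.bam -bga > coverage.bedgraph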

# Transcriptome Assembly – Geoduck RNAseq data

Used all of our current geoduck RNAseq data to assemble a transcriptome using Trinity.

Trinity was run on our Mox HPC node. Specifically, I had to use a single node with 500GB of RAM; Trinity could not run with much less than that. Initially, I attempted to run with two nodes, but our smaller node (120GB) ended up limiting the available RAM (the system only uses the RAM available on the smallest node; it cannot pool RAM across nodes or dynamically shift work to the node with more RAM when needed), and Trinity consistently crashed due to memory limitations.

Reads were trimmed using the built-in version of Trimmomatic with the default settings.

SBATCH script:

Due to the huge number of input files, I won’t post the entire script contents here. Instead, here’s a snippet of the script showing the commands used to start the Trinity run:


#!/bin/bash
## Job Name
#SBATCH --job-name=20180829_trinity
## Allocation Definition
#SBATCH --account=srlab
#SBATCH --partition=srlab
## Resources
## Nodes
#SBATCH --nodes=1
## Walltime (days-hours:minutes:seconds format)
#SBATCH --time=30-00:00:00
## Memory per node
#SBATCH --mem=500G
##turn on e-mail notification
#SBATCH --mail-type=ALL
#SBATCH --mail-user=samwhite@uw.edu
## Specify the working directory for this job
#SBATCH --workdir=/gscratch/scrubbed/samwhite/20180827_trinity_geoduck_RNAseq

# Load Python Mox module for Python module availability

# Document programs in PATH (primarily for program version ID)

date >> system_path.log
echo "" >> system_path.log
echo "System PATH for $SLURM_JOB_ID" >> system_path.log echo "" >> system_path.log printf "%0.s-" {1..10} >> system_path.log echo${PATH} | tr : \n >> system_path.log

# Run Trinity
/gscratch/srlab/programs/trinityrnaseq-Trinity-v2.8.3/Trinity \
--trimmomatic \
--seqType fq \
--max_memory 500G \
--CPU 28


Despite the naming conventions, this job was submitted to the Mox scheduler on 20180829 and finished on 20180901.

After job completion, the entire folder was gzipped (the following method of gzipping is SUPER fast, btw):

tar -c 20180827_trinity_geoduck_RNAseq | pigz > 20180827_trinity_geoduck_RNAseq.tar.gz

#### RESULTS:

Output folder:

Trinity assembly (FastA):

Next up, I’ll get some annotations going by running through TransDecoder and blastx.
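
For reference, the general shape of those steps looks like this; a minimal sketch, assuming the Trinity assembly FastA and a pre-formatted SwissProt BLAST database (file and database names are placeholders, not the final commands):

# Identify long ORFs within the Trinity transcripts
TransDecoder.LongOrfs -t Trinity.fasta

# Predict likely coding regions from those ORFs
TransDecoder.Predict -t Trinity.fasta

# Annotate transcripts via blastx against SwissProt (tabular output)
blastx \
-query Trinity.fasta \
-db swissprot \
-outfmt 6 \
-evalue 1e-5 \
-num_threads 28 \
> blastx_output.tab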

# FastQC/MultiQC/TrimGalore/MultiQC/FastQC/MultiQC – O.lurida WGBSseq for Methylation Analysis

I previously ran this data through the Bismark pipeline and followed up with MethylKit analysis. MethylKit analysis revealed an extremely low number of differentially methylated loci (DML), which seemed odd.

Steven and I met to discuss and compare our different variations on the analysis and decided to try out different tweaks to evaluate how they affect analysis.

I did the following tasks:

1. Looked at original sequence data quality with FastQC.

2. Summarized FastQC analysis with MultiQC.

3. Trimmed data using TrimGalore!, trimming 10bp from the 5′ end of reads (the Bismark docs recommend 8bp).

4. Summarized trimming stats with MultiQC.

5. Looked at trimmed sequence quality with FastQC.

6. Summarized FastQC analysis with MultiQC.

This was run on the Univ. of Washington High Performance Computing (HPC) cluster, Mox.

Mox SBATCH submission script has all details on how the analyses were conducted:
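
For reference, the core commands follow this general pattern; a minimal sketch, assuming paired-end FastQ files (file names, directories, and thread counts are placeholders; see the linked script for the actual invocations):

# Make output directories
mkdir -p fastqc_raw trimgalore_out fastqc_trimmed

# 1-2) FastQC on raw reads, summarized with MultiQC
fastqc --threads 28 --outdir fastqc_raw/ *.fastq.gz
multiqc --outdir fastqc_raw/ fastqc_raw/

# 3-4) TrimGalore!, clipping 10bp from the 5' end of each read, then MultiQC
trim_galore --paired --clip_R1 10 --clip_R2 10 --output_dir trimgalore_out/ sample_R1.fastq.gz sample_R2.fastq.gz
multiqc --outdir trimgalore_out/ trimgalore_out/

# 5-6) FastQC on trimmed reads, summarized with MultiQC
fastqc --threads 28 --outdir fastqc_trimmed/ trimgalore_out/*_val_*.fq.gz
multiqc --outdir fastqc_trimmed/ fastqc_trimmed/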

##### RESULTS

Output folder:

Raw sequence FastQC output folder:

Raw sequence MultiQC report (HTML):

TrimGalore! output folder (trimmed FastQ files are here):

Trimming MultiQC report (HTML):

Trimmed FastQC output folder:

Trimmed MultiQC report (HTML):

# Mox – Over quota: Olympia oyster genome annotation progress (using Maker 2.31.10)

##### UPDATE 20180730

Per this GitHub Issue, Steven has cleared out files.

Additionally, it appears that my job has just continued from where it was stuck. Very nice!

ORIGINAL POST IS BELOW

Well, this is an issue. Checked in on job progress and noticed that we’ve exceeded our storage quota on Mox:

Will have to post an issue on GitHub to help get unnecessary files cleaned out. Kind of shocking that we’re using >6TB of space already…

# Mox – Olympia oyster genome annotation progress (using Maker 2.31.10)

#### TL;DR – It appears to be continuing where it left off!

I decided to spend some time to figure out what was actually happening, as it’s clear that the annotation process is going to need some additional time to run and may span an additional monthly maintenance shutdown.

This is great, because, otherwise, this will take an eternity to actually complete (particularly because we’d have to move the job to run on one of our lab’s computers – which pale in comparison to the specs of our Mox nodes).

However, it’s a bit shocking that this is taking this long, even on a Mox node!

I started annotating the Olympia oyster genome on 20180529. Since then, the job has been interrupted twice by monthly Mox maintenance (which happens on the second Tuesday of each month). Additionally, when this happens, the SLURM output file is overwritten, making it difficult to assess whether Maker continues where it left off or starts over from scratch.

Anyway, here’s how I deduced that the program is continuing where it left off.

1. I figured out that it produces a generic feature format (GFF) file for each contig.

2. Decided to search for the first contig GFF and look at its last-modified date. This would tell me whether it was newly generated (i.e., on the date the job was restarted after the maintenance shutdown) or whether it was old. Additionally, if more than one of these files existed, then I’d also know that Maker was just starting at the beginning and writing data to a different location.
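
The check itself can be done with find; a minimal sketch (the output directory name is taken from the Maker warning below; the exact command isn't preserved in this post):

# Locate any copies of the first contig's GFF and show their modification times
find 20180529_oly_annotation_01.maker.output -name "Contig0.gff" -exec ls -lh {} \;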

This shows:

1. Only one copy of Contig0.gff exists.

3. Check the slurm output file for info.
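
A quick way to do this is to grep the SLURM output file; a minimal sketch (the output file name is a placeholder):

# Pull any Maker warnings, plus one line of context, from the SLURM output
grep -A1 "MAKER WARNING" slurm-*.out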

This reveals this important piece of info:

MAKER WARNING: The file 20180529_oly_annotation_01.maker.output/20180529_oly_annotation_01_datastore/AC/68/Contig215522//theVoid.Contig215522/0/Contig215522.0.all.rb.out
did not finish on the last run

All of these taken together lead me to confidently conclude that Maker is not restarting from the beginning and is, indeed, continuing where it left off. WHEW!

Just for kicks, I also ran a count of GFF files to see where this stands so far:
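
A minimal sketch of that count (exact command not preserved here):

# Count the GFF files generated so far
find 20180529_oly_annotation_01.maker.output -name "*.gff" | wc -l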

Wow! 622,010 GFFs!!!

Finally, for posterity, here’s the SLURM script I used to submit this job, back in May! I’ll have all of the corresponding genome files, proteome files, transcriptome files, etc. on one of our servers once the job completes.


#!/bin/bash
## Job Name
#SBATCH --job-name=20180529_oly_maker_genome_annotation
## Allocation Definition
#SBATCH --account=srlab
#SBATCH --partition=srlab
## Resources
## Nodes
#SBATCH --nodes=1
## Walltime (days-hours:minutes:seconds format)
#SBATCH --time=30-00:00:00
## Memory per node
#SBATCH --mem=500G
##turn on e-mail notification
#SBATCH --mail-type=ALL
#SBATCH --mail-user=samwhite@uw.edu
## Specify the working directory for this job
#SBATCH --workdir=/gscratch/srlab/sam/outputs/20180529_oly_maker_genome_annotation

## Establish variables for more readable code

### Path to Maker executable
maker=/gscratch/srlab/programs/maker-2.31.10/bin/maker

### Path to Olympia oyster genome FastA file
oly_genome=/gscratch/srlab/sam/data/O_lurida/oly_genome_assemblies/jelly.out.fasta

### Path to Olympia oyster transcriptome FastA file
oly_transcriptome=/gscratch/srlab/sam/data/O_lurida/oly_transcriptome_assemblies/Olurida_transcriptome_v3.fasta

### Path to Crassotrea gigas NCBI protein FastA
gigas_proteome=/gscratch/srlab/sam/data/C_gigas/gigas_ncbi_protein/GCA_000297895.1_oyster_v9_protein.faa

### Path to Crassostrea virginica NCBI protein FastA
virginica_proteome=/gscratch/srlab/sam/data/C_virginica/virginica_ncbi_protein/GCF_002022765.2_C_virginica-3.0_protein.faa

## Create Maker control files needed for running Maker
$maker -CTL

## Store path to options control file
maker_opts_file=./maker_opts.ctl

## Create combined proteome FastA file
touch gigas_virginica_ncbi_proteomes.fasta
cat "$gigas_proteome" >> gigas_virginica_ncbi_proteomes.fasta
cat "$virginica_proteome" >> gigas_virginica_ncbi_proteomes.fasta

## Edit options file
### Set paths to O.lurida genome and transcriptome.
### Set path to combined C. gigas and C. virginica proteomes.
## The % symbol sets the delimiter sed uses for its arguments.
## Normally, the delimiter used in most examples is a slash ("/").
## But, we need to expand the variables into full paths containing slashes, which screws up sed.
## Thus, the % symbol is used instead (it could be any character that is NOT present in the expanded variable; it doesn't have to be "%").
sed -i "/^genome=/ s% %$oly_genome %" "$maker_opts_file"
sed -i "/^est=/ s% %$oly_transcriptome %" "$maker_opts_file"
sed -i "/^protein=/ s% %gigas_virginica_ncbi_proteomes.fasta %" "$maker_opts_file"

## Run Maker
### Set basename of files and specify number of CPUs to use
$maker \
-base 20180529_oly_annotation_01 \
-cpus 24


# Read Mapping – Olympia oyster 2bRAD Data with Bowtie2 (on Mox)

Per Steven’s request, mapped our Olympia oyster 2bRAD data.

Mapped to:

This was run on our Mox computing node.

The script is far too long to paste here, due to the sheer number of input files. However, here’s a snippet showing the command and options that were used:


/gscratch/srlab/programs/bowtie2-2.3.4.1-linux-x86_64/bowtie2 \
--no-unal \
--score-min L,16,1 \
--local \
-L 16 \
-x /gscratch/srlab/sam/data/O_lurida/oly_genome_assemblies/20180515_oly_bowtie2_pbjelly_sjw_01_genome_index/pbjelly_sjw_01_ref \
-U


See the linked Slurm script above for the entire thing.
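
For context, the index passed to -x would have been built beforehand with bowtie2-build; a minimal sketch, assuming the PBJelly genome FastA as input (this invocation is not part of the linked script):

# Build a Bowtie2 index from the genome assembly; the second argument
# is the index basename referenced by -x above
/gscratch/srlab/programs/bowtie2-2.3.4.1-linux-x86_64/bowtie2-build \
jelly.out.fasta \
pbjelly_sjw_01_ref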

#### RESULTS:

Output folder:

SAM file (104GB)

Mapping summary:


729797535 reads; of these:
  729797535 (100.00%) were unpaired; of these:
    273989476 (37.54%) aligned 0 times
    310581308 (42.56%) aligned exactly 1 time
    145226751 (19.90%) aligned >1 times
62.46% overall alignment rate


# Assembly & Stats – SparseAssembler (k95) on Geoduck Sequence Data > Quast for Stats

Had a successful assembly with SparseAssembler at k101, but figured I’d tweak the kmer setting, throw it in the queue, and see how it compares; minimal effort/time needed.

Initiated an assembly run using SparseAssembler on our Mox HPC node on all of our geoduck genomic sequencing data:

Kmer size set to 95.

Slurm script: 20180423_sparse_assembler_kmer95_geoduck_slurm.sh
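
For reference, SparseAssembler takes space-separated key/value arguments; a minimal sketch of this kind of run (the program path, genome-size estimate, and file names are placeholders; see the linked script for the actual invocation):

# k = kmer size; g = sparseness factor; LD 0 = build the kmer graph fresh;
# GS = estimated genome size in bp; each "f" adds an input FastQ file
/gscratch/srlab/programs/SparseAssembler \
LD 0 \
k 95 \
g 15 \
NodeCovTh 1 \
EdgeCovTh 0 \
GS 2200000000 \
f reads_01.fastq \
f reads_02.fastq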

After the run finished, I copied the files to our server (Owl) and then ran Quast on my computer to gather some assembly stats, using the following command:


/home/sam/software/quast-4.5/quast.py \
-t 24 \
--labels 20180423_sparse_k95 \
/mnt/owl/Athaliana/20180423_sparseassembler_kmer95_geoduck/Contigs.txt


##### Results:

SparseAssembler output folder: 20180423_sparseassembler_kmer95_geoduck/

SparseAssembler assembly (FastA; 15GB): 20180423_sparseassembler_kmer95_geoduck/Contigs.txt

Quast output folder: quast_results/results_2018_05_10_15_04_07

Quast report (HTML): quast_results/results_2018_05_10_15_04_07/report.html

I’ve embedded the Quast HTML report below, but it may be easier to view by using the link above.

Well, it’s remarkable how different this is from the previous SparseAssembler run with the k101 setting!

This assembly doesn’t have a single contig >50,000bp, while the previous one has four contigs over that threshold!

Definitely shows what a large impact the kmer setting in assembly software can have on the final assembly!

# Assembly – SparseAssembler (k 111) on Geoduck Sequence Data

Continuing to try to find the best kmer setting to use with SparseAssembler after the last attempt failed due to a kmer size that was too large (k 131, which happens to be outside the max kmer size [127] for SparseAssembler), I re-ran SparseAssembler with an arbitrarily selected kmer size < 131 (picked k 111).

The job was run on our Mox HPC node.

##### Results:

Output folder:

Slurm output file:

This failed with the following error message:

Error! K-mer size too large!

Well, this is disappointing. Not entirely sure why this is the case, as it’s below the max kmer setting for SparseAssembler. However, I’m not terribly surprised, as this happened previously (only using NovaSeq data) with a kmer setting of 117.

I’ve posted an issue on the SparseAssembler GitHub page; we’ll see what happens.

# Assembly – SparseAssembler (k 131) on Geoduck Sequence Data

After some runs with kmergenie, I’ve decided to try re-running SparseAssembler using a kmer setting of 131.

The job was run on our Mox HPC node.

##### Results:

Output folder:

Slurm output file:

This failed with the following error message:

Error! K-mer size too large!

Looking into this, it’s because the maximum kmer size for SparseAssembler is 127! Doh!

It’d be nice if the program checked that setting first, before processing all the data files…

A bit disappointing, but I’ll give this a go with a lower kmer setting and see how it goes.

# Kmer Estimation – Kmergenie (k 301) on Geoduck Sequence Data

Continuing the quest for the ideal kmer size to use for our geoduck assembly.

The previous two runs with kmergenie using the diploid setting were no good.

So, this time, I simply increased the maximum kmer size to 301 and left all other settings as default. I’m hoping this is large enough to produce a smooth curve, with a maximal value that can be determined from the output graph.

The job was run on our Mox HPC node.
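
For reference, kmergenie reads a plain-text file listing the input FastQ paths; a minimal sketch of a run with the max kmer raised to 301 (file names and output prefix are placeholders):

# -k sets the largest kmer size to evaluate; -o sets the output prefix
kmergenie geoduck_fastq_list.txt -k 301 -o kmergenie_k301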

##### Results:

Output folder:

Slurm output file:

Kmer histogram (HTML) reports:

Well, the graph is closer to what we’d expect, in that it appears to reach a peak; but after that plateau, we see a sharp dropoff, as opposed to a gradual decline mirroring the left half. Not entirely sure what the implications of this are, but I’ll go ahead and run SparseAssembler using a kmer size of 131 and see how it goes.