Tag Archives: olympia oyster

Manuscript Writing – Submitted!

Boom!

 

Here are some useful links:

data records repo-URL: https://osf.io/j8rc2/
draft repo-URL: https://github.com/kubu4/paper_oly_gbs
draft: https://www.authorea.com/users/4974/articles/149442
preprint (Overleaf): https://www.overleaf.com/read/mqbbvmwxhncg
preprint (PDF): https://osf.io/cdj7m/


Manuscript – Oly GBS 14 Day Plan

For Pub-a-thon 2017, Steven has asked us to put together a 14-day plan for our manuscripts.

My manuscript is accessible in three locations:

Current: Overleaf for final editing/formatting before submission to Scientific Data.
Archival: Authorea for initial writing/comments.
Archival: GitHub for initial writing/issues.

Additionally, I have established a data repository with a Digital Object Identifier (DOI) at the Open Science Framework (OSF).

Here’s what I have going on:

Day 1

  • Convert .xls data records to .csv to see if they will render in the OSF repo (see the conversion sketch below this list).
  • Assemble figure: phylogenetic tree.
  • Add figure to manuscript.
  • Deal with any minor edits.
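
For the .xls-to-.csv conversion mentioned above, something like this would handle it in bulk. It’s only a sketch: it assumes csvkit (with its Excel dependencies) is installed, and the filenames are placeholders rather than the actual data record files.

# Convert every Excel data record in the current directory to CSV (csvkit's in2csv).
for xls in *.xls; do
    in2csv "$xls" > "${xls%.xls}.csv"
done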

Day 2

  • Assemble figure: Puget Sound map.
  • Add figure to manuscript.
  • Deal with any minor edits.

Day 3

  • Submit? Depends on Steven’s availability to finish the Background & Summary and write up the Abstract.

Data Received – Olympia oyster PacBio Data

Back in December 2016, we sent off Ostrea lurida DNA to the UW PacBio sequencing facility. This is an attempt to fill in the gaps left by the BGI genome sequencing project.

See the GitHub Wiki dedicated to this project for an overview of the UW PacBio sequencing.

I downloaded the data to http://owl.fish.washington.edu/nightingales/O_lurida/20170323_pacbio/ using the required browser plugin, Aspera Connect. Technically, saving the data to a subfolder within a given species’ data folder goes against our data management plan (DMP) for high-throughput sequencing data, but the sequencing output is far different from what we normally receive from an Illumina run. Instead of just FASTQ files, we received the following from each PacBio SMRT cell we had run (we had 10 SMRT cells run):

├── Analysis_Results
│   ├── m170211_224036_42134_c101073082550000001823236402101737_s1_X0.1.bax.h5
│   ├── m170211_224036_42134_c101073082550000001823236402101737_s1_X0.2.bax.h5
│   ├── m170211_224036_42134_c101073082550000001823236402101737_s1_X0.3.bax.h5
│   └── m170211_224036_42134_c101073082550000001823236402101737_s1_X0.bas.h5
├── filter
│   ├── data
│   │   ├── control_reads.cmp.h5
│   │   ├── control_results_by_movie.csv
│   │   ├── data.items.json
│   │   ├── data.items.pickle
│   │   ├── filtered_regions
│   │   │   ├── m170211_224036_42134_c101073082550000001823236402101737_s1_X0.1.rgn.h5
│   │   │   ├── m170211_224036_42134_c101073082550000001823236402101737_s1_X0.2.rgn.h5
│   │   │   └── m170211_224036_42134_c101073082550000001823236402101737_s1_X0.3.rgn.h5
│   │   ├── filtered_regions.fofn
│   │   ├── filtered_subread_summary.csv
│   │   ├── filtered_subreads.fasta
│   │   ├── filtered_subreads.fastq
│   │   ├── filtered_summary.csv
│   │   ├── nocontrol_filtered_subreads.fasta
│   │   ├── post_control_regions.chunk001of003
│   │   │   └── m170211_224036_42134_c101073082550000001823236402101737_s1_X0.1.rgn.h5
│   │   ├── post_control_regions.chunk002of003
│   │   │   └── m170211_224036_42134_c101073082550000001823236402101737_s1_X0.3.rgn.h5
│   │   ├── post_control_regions.chunk003of003
│   │   │   └── m170211_224036_42134_c101073082550000001823236402101737_s1_X0.2.rgn.h5
│   │   ├── post_control_regions.fofn
│   │   └── slots.pickle
│   ├── index.html
│   ├── input.fofn
│   ├── input.xml
│   ├── log
│   │   ├── P_Control
│   │   │   ├── align.cmpH5.Gather.log
│   │   │   ├── align.plsFofn.Scatter.log
│   │   │   ├── align_001of003.log
│   │   │   ├── align_002of003.log
│   │   │   ├── align_003of003.log
│   │   │   ├── noControlSubreads.log
│   │   │   ├── summaryCSV.log
│   │   │   ├── updateRgn.noCtrlFofn.Gather.log
│   │   │   ├── updateRgn_001of003.log
│   │   │   ├── updateRgn_002of003.log
│   │   │   └── updateRgn_003of003.log
│   │   ├── P_ControlReports
│   │   │   └── statsJsonReport.log
│   │   ├── P_Fetch
│   │   │   ├── adapterRpt.log
│   │   │   ├── overviewRpt.log
│   │   │   └── toFofn.log
│   │   ├── P_Filter
│   │   │   ├── filter.rgnFofn.Gather.log
│   │   │   ├── filter.summary.Gather.log
│   │   │   ├── filter_001of003.log
│   │   │   ├── filter_002of003.log
│   │   │   ├── filter_003of003.log
│   │   │   ├── subreadSummary.log
│   │   │   ├── subreads.subreadFastq.Gather.log
│   │   │   ├── subreads.subreads.Gather.log
│   │   │   ├── subreads_001of003.log
│   │   │   ├── subreads_002of003.log
│   │   │   └── subreads_003of003.log
│   │   ├── P_FilterReports
│   │   │   ├── loadingRpt.log
│   │   │   ├── statsRpt.log
│   │   │   └── subreadRpt.log
│   │   ├── master.log
│   │   └── smrtpipe.log
│   ├── metadata.rdf
│   ├── results
│   │   ├── adapter_observed_insert_length_distribution.png
│   │   ├── adapter_observed_insert_length_distribution_thumb.png
│   │   ├── control_non-control_readlength.png
│   │   ├── control_non-control_readlength_thumb.png
│   │   ├── control_non-control_readquality.png
│   │   ├── control_non-control_readquality_thumb.png
│   │   ├── control_report.html
│   │   ├── control_report.json
│   │   ├── filter_reports_adapters.html
│   │   ├── filter_reports_adapters.json
│   │   ├── filter_reports_filter_stats.html
│   │   ├── filter_reports_filter_stats.json
│   │   ├── filter_reports_filter_subread_stats.html
│   │   ├── filter_reports_filter_subread_stats.json
│   │   ├── filter_reports_loading.html
│   │   ├── filter_reports_loading.json
│   │   ├── filtered_subread_report.png
│   │   ├── filtered_subread_report_thmb.png
│   │   ├── overview.html
│   │   ├── overview.json
│   │   ├── post_filter_readlength_histogram.png
│   │   ├── post_filter_readlength_histogram_thumb.png
│   │   ├── post_filterread_score_histogram.png
│   │   ├── post_filterread_score_histogram_thumb.png
│   │   ├── pre_filter_readlength_histogram.png
│   │   ├── pre_filter_readlength_histogram_thumb.png
│   │   ├── pre_filterread_score_histogram.png
│   │   └── pre_filterread_score_histogram_thumb.png
│   ├── toc.xml
│   └── workflow
│       ├── P_Control
│       │   ├── align.cmpH5.Gather.sh
│       │   ├── align.plsFofn.Scatter.sh
│       │   ├── align_001of003.sh
│       │   ├── align_002of003.sh
│       │   ├── align_003of003.sh
│       │   ├── noControlSubreads.sh
│       │   ├── summaryCSV.sh
│       │   ├── updateRgn.noCtrlFofn.Gather.sh
│       │   ├── updateRgn_001of003.sh
│       │   ├── updateRgn_002of003.sh
│       │   └── updateRgn_003of003.sh
│       ├── P_ControlReports
│       │   └── statsJsonReport.sh
│       ├── P_Fetch
│       │   ├── adapterRpt.sh
│       │   ├── overviewRpt.sh
│       │   └── toFofn.sh
│       ├── P_Filter
│       │   ├── filter.rgnFofn.Gather.sh
│       │   ├── filter.summary.Gather.sh
│       │   ├── filter_001of003.sh
│       │   ├── filter_002of003.sh
│       │   ├── filter_003of003.sh
│       │   ├── subreadSummary.sh
│       │   ├── subreads.subreadFastq.Gather.sh
│       │   ├── subreads.subreads.Gather.sh
│       │   ├── subreads_001of003.sh
│       │   ├── subreads_002of003.sh
│       │   └── subreads_003of003.sh
│       ├── P_FilterReports
│       │   ├── loadingRpt.sh
│       │   ├── statsRpt.sh
│       │   └── subreadRpt.sh
│       ├── Workflow.details.dot
│       ├── Workflow.details.html
│       ├── Workflow.details.svg
│       ├── Workflow.profile.html
│       ├── Workflow.rdf
│       ├── Workflow.summary.dot
│       ├── Workflow.summary.html
│       └── Workflow.summary.svg
├── filtered_subreads.fasta.gz
├── filtered_subreads.fastq.gz
├── m170211_224036_42134_c101073082550000001823236402101737_s1_X0.metadata.xml
└── nocontrol_filtered_subreads.fasta.gz

That’s 20 directories and 127 files – for a single SMRT cell!
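
For anyone who wants to check that tally, a couple of find commands from inside a single SMRT cell folder (folder name as in the example below) will do it:

# Count directories and files under one SMRT cell folder.
find 170210_PCB-CC_MS_EEE_20kb_P6v2_D01_1/ -mindepth 1 -type d | wc -l    # directories
find 170210_PCB-CC_MS_EEE_20kb_P6v2_D01_1/ -type f | wc -l                # files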

Granted, there is the familiar FASTQ file (filtered_subreads.fastq), which is what will likely be used for downstream analysis, but it’s hard to decide how to manage this data under the guidelines of our current DMP. It’s possible we might separate the data files from the numerous other files (which are, essentially, metadata), but we need to decide which file type(s) (e.g. .h5 files, .fastq files) will serve as the data files people rely on for analysis. So, for the time being, this is how the data will be stored.
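
If we do end up splitting the data files out from the metadata, one possible approach would be to symlink the filtered subread files from each SMRT cell folder into a single flat directory and leave everything else where it is. This is only a sketch; the directory names are placeholders and no decision has been made:

# Collect the filtered subread FASTQ files from every SMRT cell folder
# into one directory via symlinks, leaving the per-cell metadata in place.
# Run from within 20170323_pacbio/; directory names are placeholders.
mkdir -p fastq_data
for f in */filtered_subreads.fastq.gz; do
    ln -s "$(pwd)/$f" "fastq_data/$(dirname "$f")_filtered_subreads.fastq.gz"
done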

I’ll update the readme file to reflect the addition of the top-level folders (e.g. ../20170323_pacbio/170210_PCB-CC_MS_EEE_20kb_P6v2_D01_1/).

I’ll also update the GitHub Wiki.


Data Management – SRA Submission Oly GBS Batch Submission

An earlier attempt at submitting these files failed.

I re-uploaded the failed files (indicated in my previous notebook entry linked above) and tried again.

 

It failed again: a file was flagged as missing, despite having been successfully uploaded just minutes before.

I re-uploaded that “missing” file and tried again.

This time, it succeeded (and there was no end-of-stream error for the 1SN_1A file)!

Will post here with the SRA accession number once it goes live!

 


Data Management – SRA Submission Oly GBS Batch Submission Fail

I had previously submitted the two non-demultiplexed genotype-by-sequencing (GBS) files provided by BGI to the NCBI Sequence Read Archive (SRA).

Recently, Jay responded to an issue I had posted on the GitHub repo for the manuscript we’re writing about this data.

He noticed that the SRA no longer wants “raw data dumps” (i.e. the type of submission I made before). So, that means I had to prepare the demultiplexed files provided by BGI to actually submit to the SRA.

Last week, I uploaded all 192 of the files via FTP. It took over 10 hours.

(FTP tips: use ftp -i to start the client with per-file prompts disabled; then use open ftp.address.IP to connect; then use mput with wildcards to upload multiple files. A sketch of the full session is below.)
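
Here’s roughly what that session looks like. The host name, credentials, and upload directory are placeholders (the real values come from the SRA submission portal), so treat this as a sketch rather than the exact commands I ran:

ftp -i                                    # -i disables per-file prompts so mput runs unattended
ftp> open ftp.address.IP                  # SRA upload host (placeholder)
ftp> user my_sra_account                  # credentials from the submission portal (placeholder)
ftp> binary                               # binary mode for .fq.gz files
ftp> cd uploads/my_submission_folder      # preassigned upload directory (placeholder)
ftp> mput *.fq.gz                         # upload all 192 demultiplexed FASTQ files
ftp> bye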

Today, I created a batch BioSample submission:

 

 

 

Initiated the submission process (Ummm, this looks like it’s going to take a while…):

 

 

 

Aaaaaand, it failed:

 

 

It seems like the FTP transfer failed at some point, as there’s nothing about those seven files that would distinguish them from the remaining 185 files. Further support for an FTP failure is that the 1SN_1A_1.fq.gz error message makes it sound like only part of the file got transferred.

I’ll retrieve those files from our UW Google Drive (since their original home on Owl is still down) and get them transferred to the SRA.
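
Before re-uploading, it’s worth a quick check that the copies pulled down from Google Drive aren’t themselves truncated; gzip can test the archives directly. The filenames below are placeholders based on the files flagged in the failed submission:

# Test gzip integrity before re-uploading to the SRA.
# A truncated transfer shows up as an "unexpected end of file" error.
for f in 1SN_1A_1.fq.gz 1SN_1A_2.fq.gz; do
    gzip -t "$f" && echo "$f OK" || echo "$f FAILED integrity check"
done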


Computing – Oly BGI GBS Reproducibility; fail?

OK, so things have improved since the last attempt at getting this BGI script to run and demultiplex the raw data.

I played around with the index.lst file format (based on the error I received last time, it seemed like a good possibility that the file formatting was incorrect) and actually got the script to run to completion! Granted, it took over 16 hours (!!), but it completed!

See the Jupyter notebook link below.

 

Results:

Well, although the script finished and kicked out all the demultiplexed FASTQ files, the contents of the FASTQ files don’t match the original set of demultiplexed files (the read counts differ between these results and the BGI files). I’m not entirely sure if this is to be expected or not, since the script allows for a single nucleotide mismatch when demultiplexing. Is it possible that the mismatch could be interpreted slightly differently each time this is run? I’m not certain.
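
For what it’s worth, the read-count comparison itself is easy to script. Something like the following tallies reads per file (four lines per FASTQ record); the directory names are placeholders for the BGI output and my re-run output, and gunzip would need dropping if the files aren’t gzipped:

# Tally reads in each demultiplexed FASTQ file (4 lines per record).
# Directory names are placeholders for the BGI files and my re-run output.
for f in bgi_split/*.fq.gz my_split/*.fq.gz; do
    printf "%s\t%d\n" "$f" $(( $(gunzip -c "$f" | wc -l) / 4 ))
done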

Theoretically, you should get the same results every time…

Maybe I’ll re-run this again over the weekend and see how the results compare to this run and the original BGI demultiplexing…

Jupyter notebook (GitHub): 20170314_docker_Oly_BGI_GBS_demultiplexing_reproducibility.ipynb

 



Computing – Oly BGI GBS Reproducibility Fail (but, less so than last time)…

Well, my previous attempt at reproducing the demultiplexing that BGI performed was an exercise in futility. BGI got back to me with the following message:

 

Hi Sam,

We downloaded it and it seems fine when compiling. You can compile it with the below command under Linux system.

tar -zxvf ReSeqTools_XXX.tar.gz ; cd iTools_Code; chmod 775 iTools ; ./ iTools -h

 

I gave that a whirl and got the following message:

Error opening terminal: xterm

Some internet searching got me sucked into a useless black hole about 64-bit systems running 32-bit programs and enabling the 64-bit kernel on Mac OS X 10.7.5 (Lion), since it’s not enabled by default, and on and on. In the end, I can’t seem to enable the 64-bit kernel on my Mac Pro, likely due to hardware limitations related to the graphics card and/or displays that are connected.

Anyway, I decided to try getting this program installed again, using a Docker container (instead of trying to install locally on my Mac).
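
The Docker side of it is nothing special. Something along these lines is what a session looks like (the image name and mount path are placeholders; the actual commands are in the Jupyter notebook linked below):

# Start an interactive container with the current directory mounted at /data.
# Image name and paths are placeholders, not the actual setup.
docker run -it --rm -v "$(pwd)":/data -w /data ubuntu:16.04 /bin/bash
# ...then unpack the iTools tarball and run it from inside the container.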

 

Results:

It didn’t work again, but for a different reason! Despite the instructions in the readme file provided with iTools, you don’t actually need to run make; all that has to be done is unzipping the tarball! However, despite figuring this out, the program fails with the following error message: “Warming : sample double in this INDEX Files. Sample ID: OYSzenG1AAD96FAAPEI-109; please renamed it diff” (note: this is copied/pasted – the spelling errors are not mine). So, I think there’s something wrong with the formatting of the index file that BGI provided me with.
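
Since the error reads like a duplicated sample ID, a quick check of index.lst for repeats seems like the obvious next step. A sketch, assuming the sample ID is the first whitespace-delimited column (I don’t actually know the column layout BGI used):

# Print any sample IDs that appear more than once in index.lst.
# Assumes the ID is in the first column; the real layout is unconfirmed.
awk '{print $1}' index.lst | sort | uniq -d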

I’ve contacted them for more info.

See the Jupyter notebook linked below to see what I tried.

Jupyter notebook (GitHub): 20170314_docker_Oly_BGI_GBS_demultiplexing_reproducibility.ipynb


Computing – Oly BGI GBS Reproducibility Fail

Since we’re preparing a manuscript that relies on BGI’s manipulation/handling of the genotype-by-sequencing data, I attempted to reproduce the demultiplexing steps that BGI used in order to perform the SNP identification/genotyping on these samples.

The key word in the above sentence is “attempted.” Ugh, what a massive waste of time it turned out to be. I’ve contacted BGI to get some help on this.

In the meantime, here’s a brief (actually, not as brief as I’d like) rundown of my struggles.

The demultiplexing software that BGI used is something called “iTools”, which is bundled in this GitHub repo: Reseqtools

To demultiplex, they ran a script called split.sh.

The script seems fairly straightforward. Here is what it contains:

iTools Fqtools splitpool \
-InFq1 160123_I132_FCH3YHMBBXX_L4_OYSzenG1AAD96FAAPEI-109_1.fq.gz \
-InFq2 160123_I132_FCH3YHMBBXX_L4_OYSzenG1AAD96FAAPEI-109_2.fq.gz \
-Index index.lst \
-Flag enzyme.txt \
-MisMatch \
-OutDir split

It tells the iTools program to use the Fqtools tool “splitpool” to operate on a pair of gzipped FASTQ files. It also uses an index file (index.lst), which contains all the barcodes needed to identify and separate the individual samples that were combined prior to sequencing.

The first bump in the road is the -Flag enzyme.txt portion of the code. BGI did not provide me with this file. I recently asked them to send it to me (or its contents, since I suspected it was only a single-line text file). They sent me the contents of the file:

CAGC
CTGC

The next problem is that neither of those two sequences is the recognition site for the enzyme that was (supposedly) used, ApeKI. The recognition site for ApeKI is GCWGC.

Regardless, I decided to see if I could reproduce the demultiplexing using the info they’d provided me.

I cloned the Reseqtools repo, changed into the Reseqtools/iTools directory, and typed make.

This resulted in an error informing me that it could not find boost/spirit/core.hpp.

I tracked down the Boost library junk, downloaded the newest version and untarred it in /usr/local/bin.

Tried to run make in the Reseqtools/iTools directory and got the same error. Realized iTools might not be searching the system $PATH (this turned out to be correct), so I moved the contents of the Boost folder into the iTools directory, ran make, and got the same error. Turns out, the newest version of Boost doesn’t have that core.hpp file any more. Looking at the iTools documentation, iTools was built around Boost 1.44. OMG…

Downloaded Boost 1.44 and went through the same steps as above. This eliminated the missing core.hpp error!
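
For the record, the sequence of commands was roughly the following. This is a sketch from memory; the download URL and paths should be treated as placeholders:

# Rough reconstruction of the Boost 1.44 attempt (URL and paths are placeholders).
wget http://downloads.sourceforge.net/boost/boost_1_44_0.tar.gz
tar -zxvf boost_1_44_0.tar.gz
cp -r boost_1_44_0/boost Reseqtools/iTools/    # put the Boost headers where iTools looks for them
cd Reseqtools/iTools
make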

But, of course, it led to another error:

"Threading support unavaliable: it has been explicitly disabled with BOOST_DISABLE_THREADS"

That was related to something with newer versions of the GCC compiler (this is, essentially, built into the computer; it’s not worth trying to install/use old versions of GCC) trying to work with old versions of Boost. Found a patch for a config file here: libstdcpp3.hpp.patch

I made the appropriate edits to the file as shown in that link, ran make, and it almost worked!

The current error is:

./src/Variants/soapsv-v1.02/include.h:15:16: fatal error: gd.h: No such file or directory

I gave up and contacted BGI to see if they can get me a functional version of iTools…


FASTQC – Oly BGI GBS Raw Illumina Data Demultiplexed

Last week, I ran the two raw FASTQ files through FastQC. As expected, FastQC detected “errors”. These errors are due to the presence of adapter sequences, barcodes, and the use of a restriction enzyme (ApeKI) in library preparation. In summary, it’s not surprising that FastQC was not pleased with the data, because it expects a “standard” library prep that’s already been trimmed and demultiplexed.

However, just for comparison, I ran the demultiplexed files through FastQC. The Jupyter notebook is linked below (GitHub).
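
For reference, the FastQC call itself is simple. A sketch (the output directory, thread count, and file glob are placeholders rather than the exact command in the notebook):

# Run FastQC on the demultiplexed, gzipped FASTQ files.
mkdir -p fastqc_demultiplexed
fastqc --outdir fastqc_demultiplexed --threads 8 *.fq.gz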

Results:

Pretty much the same, but with slight improvements due to removal of adapter and barcode sequences. The restriction site still leads FastQC to report errors, which is expected.

Links to all of the FastQC output files are at the bottom of the notebook.

Jupyter notebook (GitHub): 20170306_docker_fastqc_demultiplexed_bgi_oly_gbs.ipynb


FASTQC – Oly BGI GBS Raw Illumina Data

In getting things prepared for the manuscript we’re writing about the Olympia oyster genotype-by-sequencing data from BGI, I felt we needed a FastQC analysis of the raw data (since these two files are what we submitted to the NCBI Sequence Read Archive) to support the Technical Validation section of the manuscript.

The Jupyter notebook I used to run the FastQC analysis on the two files is linked below (GitHub).

 

Results:

Well, I realized that running FastQC on the raw data might not reveal anything too helpful. The reason for this is that the adapter and barcode sequences are still present on all the reads. This will lead to over-representation of these sequences in all of the samples, which, in turn, will skew FastQC’s interpretation of the read qualities. For comparison, I’ll run FastQC on the demultiplexed data provided by BGI and see what the FastQC report looks like on trimmed data.

However, I’ll need to discuss with Steven whether or not providing the FastQC analysis is worthwhile as part of the “technical validation” aspect of the manuscript. I guess it can’t hurt to provide it, but I’m not entirely sure that the FastQC report provides any real information regarding the quality of the sequencing reads that we received…

 

Jupyter notebook (GitHub): 20170301_docker_fastqc_nondemultiplexed_bgi_oly_gbs.ipynb
