Blue Carbon Initiative: mitigating climate change in the oceans

Listened to this webinar, “Blue Carbon Ecosystems – what’s included, what’s not and why,” by Jennifer Howard and Ariana Sutton-Grier about the Blue Carbon Initiative and climate change mitigation, to understand how carbon sequestration rates in marine ecosystems compare with terrestrial ones.  The talk focused mostly on the coastal marine habitats with the best potential: emergent vegetation (mangroves, marshes) and seagrass beds.  Here is a video recording

and some of my notes/screengrabs:


Their paper has a more detailed table with equations, sources, etc.

link to paper — Clarifying the role of coastal and marine systems in climate mitigation

For comparison, human activities put about 7,000 million Mg C yr^-1 into the atmosphere.

Or 7 Gt/yr.

Mitigation potential (and feasibility)

Mangroves and marshes

Range of uptake rates

  • vegetative carbon: 8-126 tons/ha
  • soil carbon: 250-280 tons/ha
  • ratio: 2-30 times more carbon is in soil than in vegetation
  • sequestration rate: 2.1-2.6 tons/ha/yr

Global loss of these habitats is ~1.5-2% per year, so half of the remaining habitat would be gone in 35-50 years
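A quick check of that arithmetic: a constant fractional loss compounds like exponential decay, so these rates give half-lives of roughly 34-46 years, consistent with the ballpark above:

```python
import math

# Half-life under a constant fractional loss r: (1 - r)^t = 0.5  =>  t = ln(0.5) / ln(1 - r)
for r in (0.015, 0.02):
    t_half = math.log(0.5) / math.log(1 - r)
    print(f"{r:.1%}/yr loss -> half the habitat gone in ~{t_half:.0f} years")
```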


Long-term effect of calcification in coral reefs: a slight net source of CO2, but could reverse if ocean acidification increases dramatically

Dissolution process takes CO2 out of water column.

Take home message: no red arrow, meaning coral isn’t a climate change mitigation option


Most kelp gets consumed or degrades in days, months, or at most a couple years

So, it’s a temporary carbon pool, not a long term storage option


Only 0.1% sinks to ocean floor for long-term storage, but the area is HUGE compared to the geographic extent of the coastal ecosystems.

BUT, it’s not in the running due to policy issues that have been (and are) challenging…

  • who manages, owns carbon sequestered, etc.
  • ethics of seeding with bioengineered cultures (didn’t even mention fertilization)

Screengrab from webinar

Marine fauna

Calcifiers (e.g. pteropods), krill, and teleost fish (whose feces contain CaCO3, which doesn’t sequester carbon but does affect the alkalinity gradient)





My related notes from beyond the webinar:

What is sequestration rate for terrestrial ecosystems (temperate vs tropical forests)?

The Blue Carbon Initiative web site says of marine/coastal ecosystems:

“These ecosystems store up to 10 times more carbon — called “blue carbon” — per hectare than terrestrial forests, and degradation of these ecosystems accounts for up to 19% of carbon emissions from global deforestation.”

Why not great (baleen+sperm) whales as long-term sequestration?  Estimate potential using listed species population assessments and historical baselines?

If we protect and enhance their habitat to maximize their growth rate, how many new whales could be added to the global marine ecosystem each year?

Back of the envelope:

Let’s say blue whales are ~100,000 individuals below carrying capacity (it’s likely closer to 200,000) and each adult whale constitutes a sink of ~100 Mg of carbon (assuming they’re mostly lipid-rich blubber = hydrocarbons).  If the extant population (currently <10,000 whales growing at 8%/yr) manages to add 1,000 new blue whales per year, that would sequester 100,000 Mg/yr.  Maybe multiply that by 10 to account for the other large whale species (fin, right, humpback, sei, grey, bowhead, Bryde’s, minke, Omura/Eden, sperm) of similar mass that are similarly below carrying capacity and we’re at 1 million Mg/yr.  That estimate might be high or low by a factor of ~10 given the uncertainties.  Either way, 1 million Mg C yr^-1 is within the range of interesting numbers listed in the table…

Assuming recovery of the globe’s baleen whale populations takes 100 years, we could expect over that time period an increase in the amount of carbon stored in living whales of about 100 million Mg C.  That’s comparable to the biomass of all phytoplankton (0.5-2.4 billion Mg C, according to their paper and its citations).  Then there’s the flux (from dead adult whales) into the deep sea, where it would be sequestered for on the order of 100 years…
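The same back-of-envelope as a script, where every number is just the rough assumption stated above:

```python
carbon_per_whale_Mg = 100     # assumed carbon sink per adult large whale (Mg C)
new_blues_per_year = 1_000    # hypothetical net blue whale additions per year
species_multiplier = 10       # ~10 comparable large-whale species (rough)
recovery_years = 100          # assumed recovery horizon

annual_sink_Mg = carbon_per_whale_Mg * new_blues_per_year * species_multiplier
standing_stock_Mg = annual_sink_Mg * recovery_years
print(annual_sink_Mg)         # 1000000 Mg C/yr
print(standing_stock_Mg)      # 100000000 Mg C stored in living whales after a century
```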

It looks like famous folks have already pondered most of this:




Email announcement:

Blue Carbon Ecosystems – what’s included, what’s not and why



Wednesday, February 8, 2017, 2pm – 3pm EST

REGISTER online at


Webinar Summary:

With increasing recognition of the role natural systems have in climate mitigation, where should management initiatives focus? While forests have historically had the spotlight of such efforts, coastal wetland ecosystems are now considered important and effective long-term carbon sinks. This attention to “blue carbon” habitats has sparked interest in including other marine systems, such as coral reefs, phytoplankton, kelp forests, and marine fauna.

In this webinar, authors of a recently published paper – Clarifying the role of coastal and marine systems in climate mitigation (Frontiers in Ecology and the Environment, Feb. 2017) – analyze the scientific evidence and potential management role of several coastal and marine ecosystems to determine which should be prioritized within current climate mitigation strategies and policies. Findings can assist decision-makers and conservation practitioners to understand where management actions can have additional carbon benefits.



Dr. Jennifer Howard and Dr. Ariana Sutton-Grier

Dr. Jennifer Howard is the Marine Climate Change Director at Conservation International. Prior to accepting her current position, she was an AAAS Science and Technology Policy Fellow where she served two years at the National Oceanic and Atmospheric Administration’s (NOAA) National Marine Fisheries Service. While at NOAA, Jennifer co-led and coordinated the development of the Ocean and Marine Resources in a Changing Climate Technical Input Report to the National Climate Assessment and coordinated the Interagency Working Group for Ocean Acidification.


Dr. Ariana Sutton-Grier is an ecosystem ecologist with expertise in wetland ecology and restoration, biodiversity, biogeochemistry, climate change, and ecosystem services. Dr. Sutton-Grier is a research faculty member at the University of Maryland in the Earth System Science Interdisciplinary Center and is also the Ecosystem Science Adviser for the National Ocean Service at NOAA.  She holds Honors Bachelor degrees from Oregon State University in Environmental Science and International Studies and a doctoral degree from Duke University in Ecology.


Moderator: Stefanie Simpson, Blue Carbon Program Manager, Restore America’s Estuaries (and paper co-author)


This free webinar is hosted by Restore America’s Estuaries.

Chinook salmon and southern resident killer whales occupy similar depths in the Salish Sea

New paper by UW colleagues entitled “Interpreting vertical movement behavior with holistic examination of depth distribution: a novel method reveals cryptic diel activity patterns of Chinook salmon in the Salish Sea” shows some results from Vemco receivers I deployed in the San Juan Islands. Young adult Chinook favor depths less than ~30 meters, with some seasonal variability in their diel activity patterns. Overall, they go deeper, and their depths vary more, at night.

Dive profiles for two Salish Sea Chinook salmon during the summer and fall.

Interestingly, according to a report to NOAA/NWFSC by Baird et al., 2003 (STUDIES OF FORAGING IN “SOUTHERN RESIDENT” KILLER WHALES DURING JULY 2002: DIVE DEPTHS, BURSTS IN SPEED, AND THE USE OF A “CRITTERCAM” SYSTEM FOR EXAMINING SUB-SURFACE BEHAVIOR), SRKWs spend >97% of their time at depths of less than 30 m.

This suggests any future deployment of horizontal echosounders should aim to ensonify a depth range centered on ~25m (e.g. 5-45m or 10-40 m).  Compared to the estimated orientation and surveyed depth range of our 2008-9 salmon-SRKW echosounder pilot studies, we may want to measure inclination more carefully to (a) center the survey on the mean summertime depth range of Chinook and (b) avoid ping reflections from surface waves, boats, and bubbles (which may have confused interpretations of targets >100 m from the transducer).  Here’s my diagram for the situation in 2008-9 in which we were centered on 15 m and ensonified a maximum depth range of ~0-30m (in other words, we may have been aiming a little high):

Screen grab from the 2009 ASA presentation showing echosounder geometry
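To make the aiming trade-off concrete, here is a rough sketch of the depth interval a tilted beam ensonifies at a given range; the transducer depth, tilt, and beamwidth below are invented for illustration, not our actual 2008-9 settings:

```python
import math

def ensonified_depths(xducer_depth_m, tilt_deg, half_beam_deg, range_m):
    """Depth interval covered at a given range by a tilted conical beam.
    Angles are measured downward from horizontal; all numbers illustrative."""
    shallow = xducer_depth_m + range_m * math.sin(math.radians(tilt_deg - half_beam_deg))
    deep = xducer_depth_m + range_m * math.sin(math.radians(tilt_deg + half_beam_deg))
    return shallow, deep

# e.g. a transducer at 15 m aimed ~3 degrees down with a +/-7 degree beam, at 200 m range
print(ensonified_depths(15, 3, 7, 200))
```

Centering the covered interval on ~25 m then becomes a matter of solving for the tilt that puts the beam axis at that depth at the range of interest.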



Paperpile vs Zotero for bibliographic citations in scientific coauthorship

Having recently felt some limitations of my $96/year Overleaf Pro account (both in collaborative writing and bibliographic citation) while preparing my recent PeerJ publication, I found myself last week pushing a new group of collaborators to try co-authoring within Google Docs (rather than their preferred, old-school exchange of .docx files).  It quickly became apparent that using Zotero and/or exported bibtex files to insert citations and build a bibliography in Docs wasn’t going to be simple (for me or the Word users).  One method of inserting from bibtex in Google Docs looked clunky to say the least and hadn’t been updated for about a year.

Instead of trying to hack Zotero and Google Docs together, I found glowing recent reviews of Paperpile — a bibliographic manager that is tightly integrated with Google Docs and Drive through a cloud-based app and the Paperpile Chrome extension.  I decided to give the free trial of Paperpile a go.  It felt natural because a few months ago I (somewhat reluctantly) took the plunge and migrated from the open-source world of Thunderbird and Firefox to the proprietary world of Gmail and Chrome.  Part of that was driven by some slight frustration with having to keep the Zotero stand-alone program running when collecting references via the Zotero extension for Chrome.  I was also motivated by the realization that I was paying $60/year for 6 GB of Zotero storage when Paperpile personal costs $36/yr and uses the 15 GB of free Google Drive storage.

I signed up using my Google account and immediately found myself invited to pile on some papers.  How pleasantly surprising to find that importing references (and attached PDFs) from Zotero was an option in the “Add Papers” button’s dropdown menu!  In the time it took me to make a cup of coffee (15 minutes), the app had imported the shared portion (~1/3) of my 4,000-reference Zotero library, PDFs included.  To get it to import my private group library I had to follow the instructions for changing the default permissions on the Zotero API key that Paperpile uses.


I’ll report on how editing in Docs with Paperpile goes next week, but I’m immediately struck by two benefits of the GUI.  First, when you search in Google Scholar you get an indication of whether or not a search result is already in your Paperpile.  That’s a feature I’ve requested of the Zotero devs, but it hasn’t yet materialized.  I was surprised to see that quite a few references were missing, even from searches of keywords and topics about which I think of myself as an expert!  Second, in the Paperpile app it is a breeze to select multiple references (e.g. a suite you’ve just collected) and organize them: first by dragging and dropping them into a folder, and then by dragging tags onto the group.  In comparison, I could open the shared library in the Zotero stand-alone and know that any references I collected with the Zotero Chrome extension would be added there, but sometimes I forgot to check and the references ended up in some random folder of my general Zotero library.  I could fix that by dragging and dropping, but sometimes it was confusing and files got deleted from multiple folders inadvertently.  Tagging in Zotero is clunky at best: I haven’t discerned how to tag multiple files at once, so I just go through them one at a time — a big time sink when I collect a bunch from behind paywalls via a trip to the local University library.

I have to mention that Overleaf is trying to build better collaborative tools, but most of their work is still in beta and there hasn’t been a ton of evolution in the last year that I’ve used it.  Here’s a screengrab of where they are heading with their comment insertion (OK in Rich Text, but messy in LaTeX), as well as their History and Revisions tab (recent activity; timeline; and the old labeled versions).

We’ll see whether they can hold a candle to Google Docs’ simultaneous editing, conversational views of comments, and change notifications…




With our ship source level paper (pre-print) finally revised in Overleaf and accepted by PeerJ, it’s time to get ready to help translate the manuscript (which ended up on the long side) into simpler formats to convey our key findings to the public and press.  My first impulse was to use this as an opportunity to do something I’ve long-desired for my teaching: create a table and/or info-graphic that communicates the (very confusing) difference between underwater and in-air decibel scales.

A more urgent goal for today — with press phone calls/interviews looming — is distilling some simple facts from the paper and articulating them without scientific jargon.  These same highlights and summary statistics are destined for a blog post at Beam Reach that will convey the key findings, potential implications — both for killer whale communication and echolocation masking, and for mitigation of underwater noise pollution from ships.

How “loud” is ship noise?

Build tables that extract key results from the paper and contextualize them for the public, with attention to the confusion about underwater vs in-air decibels. First, I confirmed via DOSITS that we subtract 62 dB when converting underwater decibels to in-air decibels (NOAA’s underwater acoustics page is not clear [and maybe wrong] on this topic!?). (Erbe, 2010) could also be improved by being explicit about how to do and derive this conversion…
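As a sanity check on that 62 dB figure: the offset decomposes into a reference-pressure term (20 log10 of 20 µPa / 1 µPa) plus an acoustic-impedance term, here using nominal values of ρc ≈ 1.5×10^6 Pa·s/m for seawater and ≈ 415 Pa·s/m for air:

```python
import math

ref_term = 20 * math.log10(20)            # 20 uPa vs 1 uPa reference pressures: ~26.0 dB
imp_term = 10 * math.log10(1.5e6 / 415)   # seawater vs air characteristic impedance: ~35.6 dB
print(round(ref_term + imp_term, 1))      # 61.6 dB, conventionally rounded to 62

def underwater_to_in_air(db_re_1uPa):
    """Rough in-air-equivalent level (dB re 20 uPa) from an underwater level."""
    return db_re_1uPa - 62
```

Subtracting 62 dB from a level in dB re 1 µPa thus gives a rough in-air-equivalent level in dB re 20 µPa.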

Ship source levels (broadband)


(Underwater broadband source levels in dB re 1 microPa @ 1 m; in-air equivalents in dB re 20 microPa @ 1 m; familiar noise sources of equivalent intensity from various Googled decibel scale images.)

  • “Max” (mean+4σ): 201 underwater / 138 in air (jet take-off)
  • Max of center 95% (mean+2σ): 187 / 124 (jack hammer @ 1 m)
  • Mean of all ships in study: 173 ± 7 / ~110 (rock concert, or earbuds at full volume, or auto horn)
  • Max of center 95% of container ships (loudest class, mean+2σ): 186 / 124 (jack hammer @ 1 m)
  • Mean of container ships (loudest class): 178 ± 4 / 116 (ambulance siren)
  • Mean of military ships (one of quietest classes): 161 ± 10 / 99 (chain saw)

Received levels (broadband)

Of course, the killer whales are typically at least 100-1,000 meters from these ships, and we measured frequency-independent spreading rates at our study site of about -18 log(R), so we should present a table of received levels that depicts what ship noise experience is typical for SRKWs…

(Underwater broadband received levels in dB re 1 microPa; in-air equivalents in dB re 20 microPa; familiar noise sources of equivalent intensity.)

  • Typical background level at Lime Kiln:
  • Typical ship passing Lime Kiln:
  • “Loud” ship passing Lime Kiln:
  • 1 m from “loudish” ship: 175 / 113 (rock concert)
  • 10 m from “loudish” ship (175-18): 157 / 95 (lawn mower)
  • 100 m from “loudish” ship (175-18*2): 139 / 77 (busy street)
  • 1,000 m from “loudish” ship (175-18*3): 121 / 59 (vacuum cleaner)
  • 10,000 m from “loudish” ship (175-18*4): 103 / 41
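The distance entries above just apply the measured spreading rate to the 175 dB “loudish” source level; a minimal sketch:

```python
import math

SL = 175                                  # "loudish" ship source level, dB re 1 uPa @ 1 m
for R in (1, 10, 100, 1_000, 10_000):
    RL = SL - 18 * math.log10(R)          # our measured ~18 log(R) spreading loss
    print(f"{R:>6} m: {RL:.0f} dB re 1 uPa")
```

This reproduces the 175 / 157 / 139 / 121 / 103 dB sequence of underwater received levels.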

Broadband levels associated with the quantiles

Email with Val and Wikipedia remind us that Gaussian distributions hold 95% of their values in a range of the mean +/- twice the standard deviation (s.d., or “sigma”, or σ).  Here’s a comparison that Val ran down in Portland:

> quantile(data_1_3_BB_noNA$ddB_SL_Emp, probs = c(0.025,0.05,0.25,0.5,0.75,0.95,0.975))
    2.5%       5%      25%      50%      75%      95%    97.5%
155.6082 160.8810 169.6775 174.4650 178.3650 183.5545 184.8073
> mean(data_1_3_BB_noNA$ddB_SL_Emp)
[1] 173.5007
> sd(data_1_3_BB_noNA$ddB_SL_Emp)
[1] 7.403251

Mean - 2 sigma:
> 173.5 - 2*7.4
[1] 158.7      # compare with 155.6 (2.5% quantile above)

Mean + 2 sigma:
> 173.5 + 2*7.4
[1] 188.3      # about 3 dB above the 97.5% quantile above

So the 2*sigma estimates in the table are close enough, but may be slight (~3 dB) over-estimates of the broadband levels associated with the 97.5% quantile…
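A quick synthetic check of the mean ± 2σ rule of thumb, drawing normal data with the paper’s mean and s.d. (pure Python; our real, slightly skewed source-level data will deviate a bit, as the quantile comparison above shows):

```python
import random, statistics

random.seed(42)
sample = [random.gauss(173.5, 7.4) for _ in range(100_000)]
mu, sd = statistics.mean(sample), statistics.stdev(sample)
frac_inside = sum(mu - 2*sd <= x <= mu + 2*sd for x in sample) / len(sample)
print(round(frac_inside, 3))   # ~0.954 for truly Gaussian data
```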

Erbe, C. (2010). Underwater acoustics: Noise and the effects on marine mammals. Pocketbook, printed by JASCO Applied Sciences, Brisbane, QLD, Australia.


Ship noise paper progress: spectral figures

Back on 3/4/14, Val and I decided to bounce this strategy off Jason: rather than using only night-time data (to exclude the 50 kHz spike and hump we see in the 95% — and, to a lesser extent, 75% — quantiles of ship and background RL), present quantiles for the whole population and then underlay a “nighttime-only” 95% quantile curve to show how the 50 kHz humps go away. If he balks (or reviewers do down the line), then we’d roughly halve the data set by using only nighttime data.

Working with Val in Seattle on final details of receive level figure:

  • Hz with sci notation in x-axis, rather than kHz (for ship noise audience)
  • Increase horizontal grid lines to make it easier to read off values and differences between background and ship levels.
  • Work out how to get a legend in ggplot (involves “melting” data series into single column of data with variable name in adjacent column) and position it inside the plot
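The “melting” step has a direct analogue outside R, for what it’s worth; here is a minimal pandas sketch (column names invented for illustration) of reshaping wide data to long so a plotting library can draw one line per series and build the legend from the variable-name column:

```python
import pandas as pd

# Wide format: one column per series (names are made up for illustration)
wide = pd.DataFrame({
    "freq_hz": [100, 1000, 10000],
    "background_db": [95, 90, 80],
    "ship_db": [130, 125, 110],
})

# Long format: a single value column plus a column naming the series --
# exactly what ggplot2's melt step accomplishes in R
long = wide.melt(id_vars="freq_hz", var_name="series", value_name="level_db")
print(long)
```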

We sketched out a Tufte-ian solution to the ship population source level plot that will show per-Hz, 1/12-octave band, and 1/3-octave band levels.

Scott’s sketch of a plot comparing spectrum, 1/12-, and 1/3-octave levels of ship noise.

About a month later, we’ve got that figure finalized and are working on the last piece of the paper: whether there’s anything interesting to say about acoustic outliers within a particular class of ships.  Here are notes I took as I visually analyzed plots Val made of noise spectrum levels for each of our ship classes.  Each plot has 25, 50, and 75% quantiles from the population within that class, as well as clouds of data points for each ship measurement made within that class.

Scanning through the DropBox folder called SL_by_Class, here are some highlights:

1) First a data processing sanity check: What are the diagonal features/artifacts (with slope of ~10 dB/decade) showing up in the densest of data point clouds in the 1/12 (but not 1/3!) octave level plots, e.g.–

ddB_abs_vs_ 1_12_octave band _ Bulk carrier __.png
ddB_abs_vs_ 1_12_octave band _ Container ship __.png

See this example at HF end in this zoomed screen grab  —
HF artifacts?

and a more worrying example at LF, because of the reflection of the slope in the 1/12-octave levels, along with what may be steps in the population distribution of levels near 25 Hz and 45 Hz:

2) Here is a list of per Hz (with absorption) png files that show *really* interesting outliers which I’ve grouped into a couple “categories of interest” —

ddB_abs_vs_ hz _ Bulk carrier __.png
1 super-high, 2 high, and 1 low outliers at HF
The super-high and unusually low outliers define a huge range from lowest to highest at uppermost HFs — about 60 dB for bulk carriers!
Other classes have less variability, e.g. Cargo ships and tankers ~30 dB, Container and military ships ~40 dB, Vehicle carriers ~45 dB, Tugs ~50 dB
For perspective, Ross (Ross, 1976) shows a plot of WWII sub cavitation inception raising levels in a 10 kHz-30 kHz band by 50 dB as the speed of a sub:
(at 7m) increased from ~3kt to ~5kt…
(at 16m) increased from ~4kt to ~6kt…
(at 91m) increased from ~8kt to 12kt.

ddB_abs_vs_ hz _ Cargo __.png
1 high outlier at HF

ddB_abs_vs_ hz _ Container ship __.png
2 similarly high outliers at HF

ddB_abs_vs_ hz _ Tanker __.png
1 upper outlier at HF shows some very interesting wiggles, akin to those that Hildebrand captured from the Hanjin
Here they are looking very wiggly (in the no-absorption, per-Hz version of the plot):

ddB_abs_vs_ hz _ Tug __.png
2 high outliers at HF, one or both of which have some wiggles

ddB_abs_vs_ hz _ Vehicle carrier __.png
1 particularly high outlier

3) Other random nifty observations

Different fisheries boats show cool peaks near common transducer frequencies (e.g. 38 and 50 kHz):

There was more variability in HF levels for military boats than I expected.  I wonder if this is different ship/prop designs or different speeds — maybe a good class in which to look for correlations?

Ross, D. (1976). Mechanics of underwater noise. Pergamon Press.

Open source & access scientific publication workflow

As we prepare to submit our paper on ship noise in orca habitat I have been seeking an optimal way in which to publish our work. As an independent researcher working for a small non-profit, my publication workflow priorities (in order) were: open access, low costs to authors, easy with opensource tools (e.g. not requiring a Word document!), facilitation of long-range collaboration (my co-authors are not co-located), allowance of supporting materials in diverse formats (e.g. audio files of outlier ships), and open peer review (because I am tired of reviewing without getting any response or credit).

While most of my colleagues have historically published in bioacoustic, biological, or environmental journals, most of those are either closed access or only offer open access after extracting a pretty penny from the author(s) or after a ~5-year waiting period. For example, the Journal of the Acoustical Society of America is closed access (membership costs ~$100/yr) and PLOS ONE charges $1,350/article.  It’s a work in progress, but I started building a comparison matrix of open access and other journals that might appeal to marine scientists or bioacousticians.

After a lot of searching and reading about projects like Liberating Research (open peer review) and ShareLaTeX, I homed in on PeerJ as a potential publisher.  Compared to F1000 and eLife, PeerJ has a clearly defined, progressive, and most-affordable revenue model.  They get rave reviews after only a year of operation, and they are used by my alma mater, Stanford University.

Then I discovered they were promoting themselves by offering free publication for submissions during the month of March!  A quick email to PeerJ asking if a paper on ship noise in an endangered species’ habitat was within their scope (“Biological Sciences, Medical Sciences, and Health Sciences”) got a positive response within a day.  (Whether and when they will expand their scope to include all science is an intriguing question, but for now they seem to be keeping it simple and/or avoiding competition with long-standing entities like arXiv…)

So, now the question was how to move from writing the manuscript (in LibreOffice with the Zotero plugin) to submitting it, first as a PeerJ PrePrint and then to the PeerJ journal.  The simplest route was to just continue in LibreOffice and export the paper as a PDF file.  Given PeerJ’s new policy of accepting a single PDF for reviewer use, this would work for both the pre-print and the first round of a submission to the PeerJ journal.  If the article was accepted, individual files for each figure and table would need to be provided.

The only problems with this R & GMT figures & tables -> LibreOffice+ZoteroPlugin -> PDF -> PeerJs workflow were that: (1) collaborative writing on the final draft was not going to be as easy as working in something like a Google document that handles simultaneous editing; and (2) the PDF generated by LibreOffice wasn’t going to look as elegant as a PDF generated from a LaTeX editor.  (I’m still spoiled by how great my dissertation looked — thanks to Eric Anderson for the LaTeX inspiration back then!)

This longing for on-line collaboration with remote colleagues and for not having to adjust figure size and table dimensions in LibreOffice anymore sent me towards ShareLatex until I noticed that PeerJ was promoting WriteLaTeX!  Most importantly, WriteLaTeX was offering a PeerJ template and “single-click” submission from WriteLatex to PeerJ pre-prints and/or the PeerJ journal.  Full functionality at WriteLatex will cost ~$39/year, but I think it may be worth it to have all the requisite files transferred over to PeerJ automagically…

So now the most promising workflow looks like: R & GMT figures & tables -> WriteLaTeX + .bib file from Zotero -> PDF -> PeerJs.  Unfortunately, I’m getting bogged down in how to export BibTeX or BibLaTeX with the citekeys that I know and love, but it looks like there is light at the end of the tunnel…

Basic R Studio and ggplot2 syntax

While I continue to try to get Sage to underpin the analysis stages of my open scientific workflow, a LOT of my colleagues and cohort use R these days.  So this week I installed R Studio and ggplot2 for OSX.

Here are notes from my first foray into plotting histograms.

First I took some date/time data in a Libre Office spreadsheet, converted it to decimal hour of day to keep things simple, renamed column headers to be simple single words (e.g. underscores to connect words), and then saved the active sheet as a tab-delimited .csv file (in this case named 2012-human.csv).

> lk2012<-read.table("/Users/scott/Documents/beamreach/research/pubs-in-progress/02a - ship SL/r-scott/2012-human.csv",header=T,sep="\t")
> # lk2011 and lk2013 were read in the same way from their own .csv files
> png("/Users/scott/Documents/beamreach/research/pubs-in-progress/02a - ship SL/r-scott/lk-human-hr-hists.png")
> par(mfrow= c(3, 1))
> hist(lk2011$hour,breaks=24)
> hist(lk2012$hour,breaks=24)
> hist(lk2013$hour,breaks=24)
> dev.off()  # close the device so the .png file is actually written

That reads the .csv file into lk2012, then prints this 3 panel plot of hour histograms —

Histograms of the hours in which reports were made to the Salish Sea hydrophone network's Google-spreadsheet-based observation log in 2011, 2012, & 2013.

More on ggplot2 as I learn it… but know that it is making most of the plots for our current paper in progress regarding underwater noise from Salish Sea ships.

High frequency noise in the Salish Sea

After registering for ORCID and figshare, I worked with Val today in Seattle on trying to understand some of the subtleties in our ship and background quantile curves.  In particular, there are some peaks — both broad and narrow — in the 75th and 95th % quantiles from ~20-96kHz that may represent signals from sources other than commercial ships.  If they are being generated by “outlier” ships, then we need to put some effort into understanding which ships are the source(s) and what mechanisms may be generating the sound.

Here’s a grab of the HF part of Val’s preliminary figure (thanks to R and ggplot2) —

HF ship (blue) and background (black) 5,25,50,75,95% quantiles

There is a clear spike at 50 kHz that Jason thinks is coming from depth sounders and/or fish-finders mounted on recreational fishing boats that happen to be nearby when a ship passes our study site and gets recorded.  Though our ship recordings are only 2 seconds long, it’s possible that a nearby fishing boat could generate a ping during that interval because most sounders and finders have ping rates of 1-5 pulses per second.  The alternative is that depth sounders on the commercial ships themselves are responsible for this peak.

To test those two hypotheses, Val wrote an R script that goes through our data set and looks for strong cross-correlations between a template (a power spectrum with a good strong peak at 50 kHz) and the rest of the ship recordings.  We looked through the resulting graphs for high correlation coefficients and visual similarity between the spectra.  Noting the month and time of day of these spectra, we found that all but one occurred during daylight hours and during the peak of the bottomfish & salmon fishing seasons (March-October).

Histogram of month of year in which ship spectra contain a prominent peak at 50 kHz.


Histogram of time of day at which ship spectra contain a prominent peak at 50 kHz.

Recreational fishing depth sounders and fish-finders?

The bulk of these recordings were made during daylight hours.  Even the sample taken in the very early morning (5:50 a.m.) occurred during daylight; sunrise on that August morning was at 5:38.  One possible exception is the recording made at 18:50; it occurred on November 1, 2012, when sunset was at 17:51…

Because of the prevalence of daylight recordings, we presume that the more likely source of most of the 50 kHz pings is depth sounders or fish-finders mounted on recreational fishing vessels.  These boats are often observed drifting or trolling off of the west side of San Juan Island, particularly during the summer months.  They frequently operate very near shore, putting them directly over or within a 500 meter radius of our hydrophone.


Commercial ship depth sounders

The one nighttime spectrum is also anomalous seasonally, occurring in January when odds are very good no one is out fishing in Haro Strait at 9:20 p.m.  Although we’re finding a great variety of pulse widths and rates among the spectra with 50 kHz peaks, this one stands out as having a clear first arrival and subsequent reverberant bottom-bounces.  It also has a slow pulse rate: about 1 pulse every 1.5 seconds (~0.67 pulses/sec).

MMSI 636091891 waveform and spectrogram from 1/1/12 at 21:21 showing 50 kHz pings.

Waveform and spectrogram showing 50 kHz ping sequence.

The TDOA is 0.1 seconds, which corresponds to a path-length difference of about 150 m.  That’s the approximate depth of Haro Strait midway between the northbound shipping lane and Lime Kiln.  So in this case, the pings may indeed have come from the ship.  Here’s the sound file in case you want to load it into Audacity and listen to the pings; try playback at 0.01x-0.13x speed.  This ship is a 334 m container ship named the Vancouver Express.
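The path-length arithmetic behind that claim, for the record (1500 m/s is the nominal sound speed in seawater):

```python
sound_speed_m_s = 1500            # nominal seawater sound speed
tdoa_s = 0.1                      # measured lag between direct and bottom-bounce arrivals
print(sound_speed_m_s * tdoa_s)   # 150.0 m path-length difference
```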

Since Val included in the cross-correlation assessment only ships that had transited our study site at least 10 times, the other strong evidence that pings are made by a ship rather than a nearby boat would be repetitions of similar ping sequences each time the ship passes.  This assumes that it is unlikely that the same type of recreational fishing sounder would be present during each transit of a particular cargo ship.  On the contrary, if the ship’s depth sounder is heard on one transit, we ought to detect that same sounder on other transits of the same ship.

Vancouver Express 636091891 transit on 2-11-12 at 11:50

Vancouver Express 636091891 transit on 9-7-11 at 6:47

Below is another fascinating example of ship-generated noise, including 50 and 79 kHz pulses that may have come from the ship, or some other vessel(s).

372649000 7-8-12 showing 79 & 50 kHz pings and cavitation noise to 96 kHz.

Next will come the tough part: excluding from our ship noise data set the recordings that have spectra contaminated by fishing boat pingers, system noise, and other non-ship sources…  Thankfully, Val is a programming wizard so this won’t take long!

Revisiting GMT, Perl, and Proj

The time has come to make a site map for our ship source level paper.  I always love this stage of a project — map making — because it still takes me back to some of the geekiest open source Unix stuff my PhD advisor, Russ McDuff, taught me: using public bathymetric data in Generic Mapping Tools (GMT) and Proj within Perl to make awesome maps of the ocean for free!  It’s always a chore to get all these tools working within your Li/Un/*ix operating system, but below is an example of what they can do from this working directory

In my latest attempt (within an account), I had to remember (again) the syntax and commands for setting path variables…

  1. edit .bashrc to include a path to your proj and GMT binaries
  2. export PATH=~/bin:~/usr/local:~/usr/local/bin:$PATH
  3. execute: . .bashrc to reinitialize the PATH variable
  4. check PATH with env
  5. check all is well with which proj
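A minimal sketch of those steps as they'd run in a live session (assuming the proj and GMT binaries live under ~/bin or ~/usr/local/bin):

```shell
# Steps 2-5 above, run non-destructively in the current session
# (step 1 would add the export line to ~/.bashrc instead):
export PATH=~/bin:~/usr/local/bin:$PATH
env | grep '^PATH='                              # confirm the new entries
command -v proj || echo "proj not on PATH yet"   # same check as `which proj`
```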

Here’s the perl script for making the (above) rough draft map of Haro Strait and the San Juan Islands.


#`gmtset X_AXIS_LENGTH 8.5i X_AXIS_LENGTH 11i`;
#`gmtset ANOT_FONT_SIZE 8 ANOT_OFFSET 0.037`;
#$gxscl=4; $gyscl=4;
#`gmtset GLOBAL_X_SCALE $gxscl  GLOBAL_Y_SCALE $gyscl`;

# Haro Strait region
#$prj=6;        # inches/degree
$lonmin=-123.5; $lonmax=-123; $latmin=48.25; $latmax=48.75;
#$xoff=2.5;  $yoff=4;
$xoff=2;  $yoff=2;
#$xoff=0;  $yoff=0;



`psbasemap $proj $bound -K -P -V -X$xoff -Y$yoff $tick > $out_ps`;
`grdimage $basemap -C$cptfile $proj $bound -P -O -K -V >> $out_ps`;
`grdcontour $basemap -C50 $proj $bound -W.05p -Q10 -P -O -K -V >> $out_ps`;
#`grdcontour $basemap -Cfig1cnt $proj $bound -W1p/255/255/255 -Q10 -P -O -K -V >> $out_ps`;
#`grdcontour $basemap -C10 $proj $bound -W.05p -Q10 -P -O -K -V >> $out_ps`;
#`grdcontour $basemap -C20 -A100f3 $proj $bound -W1p -Q10 -P -O -K -V >> $out_ps`;
#`grdcontour $basemap -C500 $proj $bound -W$cntpen -Q10 -P -O -K -V >> $out_ps`;
#`grdcontour $basemap -C0 $proj $bound -W$cntpen -Q10 -P -O -K -V >> $out_ps`;
#`grdcontour $basemap -C2000 -Af3 $proj $bound -W0.5p/50/50/50 -Q10 -P -O -K -V >> $out_ps`;

#`pscoast $proj $bound -Dh -A9 -W -G255/255/255 -S240/240/240 -N1 -N2 -O -K -V >> $out_ps`;
#`pscoast $proj $bound -Dh -A9 -W -N1 -N2 -O -K -V >> $out_ps`;
# Land grey:
#`pscoast $proj $bound -Dh -A20 -W0.5p/10/10/10 -G200/200/200 -O -K -V >> $out_ps`;
`pscoast $proj $bound -Dh -A20 -W0.5p/10/10/10 -O -K -V >> $out_ps`;

# Label pertinent features...
labels");">
open (WORDS, ">labels");
print (WORDS "-123.75  48.445    $wfont 0 0 2 Victoria\n");
#print (WORDS "-124.690  48.245    $nodefont 0 0 TL Neah Bay\n");
#print (WORDS "-122.750  48.060    $nodefont 0 0 TR Marine Science Center\n");
#print (WORDS "-123.300  48.550    $nodefont 0 0 BR Orcasound lab\n");
print (WORDS "-123.065  48.525    $nodefont 0 0 TL Lime Kiln\n");
#print (WORDS "-122.700  48.425    $nodefont 0 0 TL State Park\n");
print (WORDS "-122.700  47.635    $nodefont 0 0 TC Seattle\n");
close (WORDS);   # flush the file before pstext (or rm) touches it
#`pstext -Jm -R -G0/0/0 -Dj$xtoff/$ytoff -O -K -V < labels >> $out_ps`;
# No background rectangles
#`pstext -Jm -R -G0/0/0 -O -K -V < labels >> $out_ps`;
# Background rectangles with rounded corners: [ -W[color,][o|O|c|C[pen]]
#`pstext -Jm -R -G0/0/0 -W255/255/255,Othick,255/0/0 -O -K -V < labels >> $out_ps`;
`rm labels`;

# Put symbols on nodes
nodes.lonlat")">
open(SYM, ">nodes.lonlat") || die "Can't open nodes.lonlat!\n";
# print SYM "-124.675  48.390\n"; # Neah Bay
# print SYM "-122.750  48.125\n"; # Port Townsend
# print SYM "-123.200  48.550\n"; # Orcasound
print SYM "-123.150  48.500\n"; # Lime Kiln
# print SYM "-122.350  47.600\n"; # Seattle
close(SYM);   # flush before psxy reads the file
`psxy nodes.lonlat -Jm -R -P -Sc0.5/0/0/0 -G255/0/0 -V -O -K >> $out_ps`;

#Put more symbols on...
`psxy catscradle.lonlat -Jm -R -P -Sc0.05/0/0/0 -G255/0/0 -V -O -K >> $out_ps`;

# Good offset if using WS lat/lon labels
#$locyoff=1.8; $locxoff=2.8;
$locyoff=0.75; $locxoff=0.75;
#`psbasemap -Jm0.2 -R-135/-105/35/55 -O -K -X$locxoff -Y$locyoff -P -Ba10f5/a5f5WeSn >> $out_ps`;
#`psbasemap -Jm0.2 -R-135/-105/35/55 -O -K -X$locxoff -Y$locyoff -P -Ba10f5/a5f5wesn >> $out_ps`;
`psbasemap -Jm2.5 -R-125/-122/47/48.5 -O -K -X$locxoff -Y$locyoff -P -Ba10f5/a5f5wesn >> $out_ps`;
# plot the coast line
#`pscoast -Jm -R -O -K -W -G210/180/140 -S135/206/250 -N1 -N2 >> $out_ps`;
#`pscoast -Jm -R -Di -A200 -O -K -W -G216/242/254 -S148/191/139 -N1 -N2 -V >> $out_ps`;
# Seattle Aquarium Kiosk colors
#`pscoast -Jm -R -Di -A200 -O -K -W -G172/208/165 -S186/232/254 -N1 -N2 -V >> $out_ps`;
`pscoast -Jm -R -Di -A200 -O -K -W -G200/200/200 -S255/255/255 -N1 -N2 -V >> $out_ps`;

# Put symbols on key cities
cities.lonlat")">
open(SYM, ">cities.lonlat") || die "Can't open cities.lonlat!\n";
# print SYM "-123.45 48.45\n";
close(SYM);   # flush before psxy reads the file
`psxy cities.lonlat -Jm -R -P -Sc0.5/0/0/0 -G0/0/0 -V -O -K >> $out_ps`;

# Small box around the zoomed area
# (Perl doesn't evaluate arithmetic inside a quoted string, so compute the
# corners before printing them.)
$lonboxmax=$lonmax-$marg; $lonboxmin=$lonmin+$marg;
$latboxmax=$latmax+$marg; $latboxmin=$latmin-$marg;
open(BOX, ">magbox.lonlat") || die "Can't open magbox.lonlat!\n";
print BOX "$lonboxmax   $latboxmax\n";
print BOX "$lonboxmax   $latboxmin\n";
print BOX "$lonboxmin   $latboxmin\n";
print BOX "$lonboxmin   $latboxmax\n";
print BOX "$lonboxmax   $latboxmax\n";
close(BOX);
`psxy magbox.lonlat -Jm -R -P -W4p/0/0/0 -V -O >> $out_ps`;
`rm magbox.lonlat`;

`convert $out_ps $out_nm.png`;

Late night notes on noise from ship propeller cavitation

In Val’s most recent plots of noise levels received from passing ships at our Lime Kiln study site, we’re seeing interesting peaks at high frequency (>20 kHz) that vary between ship classes. Most prominently, there is a peak near or just above ~40 kHz for many ship types that is sometimes quite narrow-band, other times really broad (+/- ~10 kHz).

What spectral patterns should we expect for modern commercial ships, particularly at frequencies between 10 kHz and 100 kHz?

Thus begins another dive into the ship noise literature… revealing:

  1. Merchant ship propeller diameter (in meters) is about equal to the ship's length in meters divided by 25 (Gray & Greeley, 1980).

    From Gray and Greeley, 1980

  2. Mean blade-rate frequency varies a bit with ship length, but basically for big ships a typical frequency is about 8 Hz.

    Blade rate frequency for ~200m ships (from Gray and Greeley, 1980)

  3. The blade-rate frequency shows up because cavitation noise is maximized when a propeller blade tip passes through the low velocity and pressure region of the wake field (the top of their rotation on single screw ships).  There is also a lot of energy at many harmonics of the blade-rate frequency.  Similar patterns of peaks are created by engines (firing rate harmonics) and generators (generator harmonics).
  4. The following plots from Arveson and Vendittis (2000) show that underwater noise data collected within ~600 m of a 173 m long coal carrier contain these numerous harmonics.
    Blade, engine, and generator harmonics

      1. All those peaks, plus additional broadband noise from the collapse of the cavities, add up to create spectra (e.g. the 1/3-octave levels below) which have led to the common characterization of shipping noise as having most of its energy at low frequencies (1-500 Hz).  As Arveson and Vendittis put it: "In addition to the blade rate harmonic series, cavitation generates a wideband spectrum due to the chaotic collapse of cavities. This spectrum has a broad, high-level 'hump' centered at about 55 Hz, followed by a continuum that decreases by 6 dB per octave on a constant-bandwidth plot, or 3 dB per octave as seen on a 1/3-octave plot."

        1/3 octave levels vs ship speeds

  5. What is commonly overlooked is the rise in high-frequency noise that occurs with increases in ship engine RPM (and, correspondingly, ship speed).  Note that the increase in noise between the highest two RPM levels at 30 kHz is 2-4x the increase at 30 or 300 Hz.
  6. The authors note: “Above the cavitation inception speed the shape and peak frequency of the wideband cavitation spectrum do not change appreciably with ship speed. Only the overall level changes; it increases smoothly with speed according to 104 log (rpm), or about 31 dB per double speed.”
  7. While these spectra are interesting, they are not showing much structure above 20 kHz…
  8. At least for the cruise ships going 10 knots and measured at 500 yards by (Kipple, 2002) there are no dramatic peaks in the 10-40 kHz range, nor any indication that there is a peak near 40 kHz.

    Cruise ship 1/3-octave spectra
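To tie a few of these numbers together, here's a quick sketch of the rules of thumb above; the 200 m single-screw ship with a 96 rpm shaft and 5-bladed propeller is a hypothetical example, not a measured vessel:

```python
import math

def propeller_diameter_m(ship_length_m: float) -> float:
    """Rule of thumb from Gray & Greeley (1980): D ~= L / 25."""
    return ship_length_m / 25.0

def blade_rate_hz(shaft_rpm: float, n_blades: int) -> float:
    """Fundamental blade-rate frequency: shaft rate (Hz) times blade count."""
    return shaft_rpm / 60.0 * n_blades

def cavitation_level_increase_db(rpm_ratio: float) -> float:
    """Arveson & Vendittis (2000): broadband level scales as 104*log10(rpm)."""
    return 104.0 * math.log10(rpm_ratio)

# A hypothetical 200 m single-screw merchant ship:
print(propeller_diameter_m(200))                  # 8.0 m propeller
print(blade_rate_hz(96, 5))                       # 8.0 Hz blade rate
print(round(cavitation_level_increase_db(2), 1))  # doubling rpm -> 31.3 dB
```

The 8 Hz blade rate matches the "typical frequency is about 8 Hz" figure in item 2, and the ~31 dB per speed doubling matches the Arveson and Vendittis quote in item 6.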

Gray, L. M., & Greeley, D. S. (1980). Source level model for propeller blade rate radiation for the world’s merchant fleet. The Journal of the Acoustical Society of America, 67(2), 516–522. doi:10.1121/1.383916
Arveson, P. T., & Vendittis, D. J. (2000). Radiated noise characteristics of a modern cargo ship. The Journal of the Acoustical Society of America, 107(1), 118–129. doi:10.1121/1.428344
Kipple, B. (2002). Southeast Alaska Cruise Ship Underwater Acoustic Noise (Technical Report No. NSWCCD-71-TR-2002/574) (p. 92). Naval Surface Warfare Center – Detachment Bremerton.