# Ship noise paper progress: spectral figures

Back on 3/4/14, Val and I decided to bounce this strategy off Jason: rather than using only night-time data (to exclude the 50 kHz spike and hump we see in the 95% and, to a lesser extent, 75% quantiles of ship and background RL), present quantiles for the whole population and then underlay a "nighttime-only" 95% quantile curve to show how the 50 kHz humps go away. If he balks (or reviewers do down the line), then we roughly halve the data set by using only nighttime data.

Working with Val in Seattle on final details of receive level figure:

• Hz with sci notation on the x-axis, rather than kHz (for the ship noise audience)
• Increase horizontal grid lines to make it easier to read off values and differences between background and ship levels.
• Work out how to get a legend in ggplot (involves "melting" the data series into a single column of values with the variable name in an adjacent column) and position it inside the plot
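
The "melt" step converts one column per series into a single value column plus a series-name column, which is the shape ggplot needs to build a legend automatically. Here is a minimal sketch of the same reshaping in Python/pandas (illustrative column names and values only; our actual figures are made in R):

```python
import pandas as pd

# Hypothetical wide-format receive levels: one column per series.
wide = pd.DataFrame({
    "freq_hz": [100, 1000, 10000],
    "ship_db": [110.0, 105.0, 95.0],
    "background_db": [90.0, 85.0, 80.0],
})

# "Melt" into long format: a single level column with the series name
# alongside.  In ggplot, mapping the series column to colour
# (aes(colour = variable)) is what produces the legend, and
# theme(legend.position = c(x, y)) then places it inside the plot area.
long = wide.melt(id_vars="freq_hz", var_name="series", value_name="level_db")
print(long)
```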

We sketched out a Tufte-ian solution to the ship population source level plot that will show per-Hz, 1/12-octave band, and 1/3-octave band levels.

Scott’s sketch of a plot comparing spectrum, 1/12-, and 1/3-octave levels of ship noise.

About a month later, we’ve got that figure finalized and are working on the last piece of the paper: whether there’s anything interesting to say about acoustic outliers within a particular class of ships.  Here are notes I took as I visually analyzed plots Val made of noise spectrum levels for each of our ship classes.  Each plot has 25, 50, and 70% quantiles from the population within that class, as well as clouds of data points for each ship measurement made within that class.

Scanning through the DropBox folder called SL_by_Class, here are some highlights:

1) First, a data processing sanity check: what are the diagonal features/artifacts (with a slope of ~10 dB/decade) showing up in the densest of the data point clouds in the 1/12-octave (but not 1/3-octave!) level plots, e.g. –

ddB_abs_vs_ 1_12_octave band _ Bulk carrier __.png
and
ddB_abs_vs_ 1_12_octave band _ Container ship __.png

See this example at the HF end in this zoomed screen grab —

and a more worrying example at LF, because of the reflection of the slope in the 1/12-octave levels, along with what may be steps in the population distribution of levels near 25 Hz and 45 Hz –
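
One plausible mechanism for the ~10 dB/decade slope (my assumption, not yet confirmed against our processing code): converting per-Hz levels to proportional bands adds a 10·log10(bandwidth) term, and for 1/12-octave bands the bandwidth grows in proportion to centre frequency, so that term alone rises exactly 10 dB per decade. Any quantisation in the underlying per-Hz levels would then be sheared into diagonals of that slope. A quick check of the arithmetic (assuming base-2 band definitions):

```python
import math

def band_correction_db(f, n=12):
    """10*log10 of the bandwidth of a 1/n-octave band centred at f,
    assuming base-2 band edges at f * 2**(+/- 1/(2n))."""
    bandwidth = f * (2 ** (1 / (2 * n)) - 2 ** (-1 / (2 * n)))
    return 10 * math.log10(bandwidth)

# Because bandwidth is proportional to f, the band-level correction
# climbs exactly 10 dB per decade of centre frequency.
slope = band_correction_db(10_000.0) - band_correction_db(1_000.0)
print(round(slope, 2))  # 10.0
```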

2) Here is a list of per Hz (with absorption) png files that show *really* interesting outliers which I’ve grouped into a couple “categories of interest” —

ddB_abs_vs_ hz _ Bulk carrier __.png
1 super-high, 2 high, and 1 low outliers at HF
The super-high and unusually low outliers define a huge range from lowest to highest at uppermost HFs — about 60 dB for bulk carriers!
Other classes have less variability, e.g. Cargo ships and tankers ~30 dB, Container and military ships ~40 dB, Vehicle carriers ~45 dB, Tugs ~50 dB
For perspective, Ross (1976) shows a plot of WWII sub cavitation inception raising levels in a 10 kHz-30 kHz band by 50 dB as the speed of a sub:
(at 7m) increased from ~3kt to ~5kt…
(at 16m) increased from ~4kt to ~6kt…
(at 91m) increased from ~8kt to 12kt.

ddB_abs_vs_ hz _ Cargo __.png
1 high outlier at HF

ddB_abs_vs_ hz _ Container ship __.png
2 similarly high outliers at HF

ddB_abs_vs_ hz _ Tanker __.png
1 upper outlier at HF shows some very interesting wiggles, akin to those that Hildebrand captured from the Hanjin
Here they are looking very wiggly (in the no-absorption, per-Hz version of the plot) –

ddB_abs_vs_ hz _ Tug __.png
2 high outliers at HF, one or both of which have some wiggles

ddB_abs_vs_ hz _ Vehicle carrier __.png
1 particularly high outlier

3) Other random nifty observations

Different fisheries boats show cool peaks near common transducer frequencies (e.g. 38 and 50 kHz):

There was more variability in HF levels for military boats than I expected.  I wonder if this is different ship/prop designs or different speeds — maybe a good class in which to look for correlations?

Ross, D. (1976). Mechanics of underwater noise. Pergamon Press.

# Open source & access scientific publication workflow

As we prepare to submit our paper on ship noise in orca habitat, I have been seeking an optimal way to publish our work. As an independent researcher working for a small non-profit, my publication workflow priorities (in order) were: open access; low costs to authors; ease of use with open-source tools (e.g. not requiring a Word document!); facilitation of long-range collaboration (my co-authors are not co-located); allowance for supporting materials in diverse formats (e.g. audio files of outlier ships); and open peer review (because I am tired of reviewing without getting any response or credit).

While most of my colleagues have historically published in bioacoustic, biological, or environmental journals, most of those journals are either closed access or offer open access only after extracting a pretty penny from the author(s), or after a ~5-year waiting period. For example, the Journal of the Acoustical Society of America is closed access (membership costs ~$100/yr) and PLOS ONE charges $1350/article. It’s a work in progress, but I started building a comparison matrix of open access and other journals that might appeal to marine scientists or bioacousticians.

After a lot of searching and reading about projects like Liberating Research (open peer review) and ShareLaTeX, I homed in on PeerJ as a potential publisher. Compared to F1000 and eLife, PeerJ has a clearly defined, progressive, and most affordable revenue model. They get rave reviews after only a year of operation, and they are used by my alma mater, Stanford University.

Then I discovered they were promoting themselves by offering free publication for submissions during the month of March! A quick email to PeerJ asking if a paper on ship noise in an endangered species’ habitat was within their scope (“Biological Sciences, Medical Sciences, and Health Sciences”) got a positive response within a day. (Whether and when they will expand their scope to include all science is an intriguing question, but for now they seem to be keeping it simple and/or avoiding competition with long-standing entities like arXiv…)

So, now the question was how to move from writing the manuscript (in LibreOffice with the Zotero plugin) to submission, first as a PeerJ PrePrint and then to the PeerJ journal. The simplest route was to just continue in LibreOffice and export the paper as a PDF file. Given PeerJ’s new policy of accepting a single PDF for reviewer use, this would work for both the pre-print and the first round of a submission to the PeerJ journal. If the article was accepted, individual files for each figure and table would need to be provided.

The only problems with this R & GMT figures & tables -> LibreOffice + Zotero plugin -> PDF -> PeerJ workflow were that: (1) collaborative writing on the final draft was not going to be as easy as working in something like a Google document that handles simultaneous editing; and (2) the PDF generated by LibreOffice wasn’t going to look as elegant as a PDF generated from a LaTeX editor. (I’m still spoiled by how great my dissertation looked — thanks to Eric Anderson for the LaTeX inspiration back then!)

This longing for on-line collaboration with remote colleagues, and for never again adjusting figure sizes and table dimensions in LibreOffice, sent me towards ShareLaTeX until I noticed that PeerJ was promoting WriteLaTeX! Most importantly, WriteLaTeX was offering a PeerJ template and “single-click” submission from WriteLaTeX to PeerJ pre-prints and/or the PeerJ journal. Full functionality at WriteLaTeX will cost ~$39/year, but I think it may be worth it to have all the requisite files transferred over to PeerJ automagically… So now the most promising workflow looks like: R & GMT figures & tables -> WriteLaTeX + .bib file from Zotero -> PDF -> PeerJ. Unfortunately, I’m getting bogged down in how to export BibTeX or BibLaTeX with the citekeys that I know and love, but it looks like there is light at the end of the tunnel…

# Refining ship noise paper & figures

Working through figures that Val generated in PDX while we are both back in Seattle (12-6pm). Trying to get figures updated with the latest data set (northbound only, as our spreading experiment only extended into the northbound lanes).

For comparison with our passenger vessel class, we digitized a few plots from (Kipple, 2002) and (Kipple & Kollars, 2004). Then we put the ship source spectral data in a Google spreadsheet.

Learned you can insert an OLE object into a LibreOffice document, including a spreadsheet (linked if you like) or a chart. Managing tables is a pain in LibreOffice, especially adjusting column widths.

Also learned that there is a serious LibO bug related to caching images in documents. It probably explains why I was experiencing horrendous lags and jumps when trying to scroll through the figure section of the ship source level paper. It seems to have helped to bump up all memory allocations by about an order of magnitude in the LibO Preferences.

Kipple, B. (2002). Southeast Alaska Cruise Ship Underwater Acoustic Noise (Technical Report No. NSWCCD-71-TR-2002/574) (p. 92). Naval Surface Warfare Center – Detachment Bremerton. Retrieved from http://www.nps.gov/glba/naturescience/upload/CruiseShipSoundSignaturesSEAFAC.pdf

Kipple, B., & Kollars, R. (2004). Volendam Underwater Acoustic Levels (Technical Report) (p. 7). Naval Surface Warfare Center – Detachment Bremerton.

# Dictation & opensource science within OSX Mavericks

In part because I’m deciding whether it’s worth it for my kids (7 & 10) to learn to type, I ended up spending some time yesterday and today learning about the state of speech recognition software. I was also motivated by noticing (almost simultaneously) how much faster Google voice search (on an iPhone 4s) returns text than does Siri (on the same phone).

As we were discussing the value of typing (e.g. doctors typing notes while talking with patients, or scientists composing papers without disturbing their lab mates), my wife mentioned that Dragon is sometimes used by dictating doctors. This led me to realize that if I updated my MacBook Pro from Lion to Mavericks (for free), I’d gain access to a built-in speech recognition engine — one that is apparently available from within any OSX application.

Side-by-side tests suggested that Dragon for OSX, though more expensive ($160 in late 2013), was better for long and technical narrations than the Mavericks dictation. However, the idea that intrigued me was being able to dictate portions of my daily science writing (like blog posts or paper paragraphs) on my laptop in the same way that I use Siri to generate (at least drafts of) texts and short emails on my phone.

After upgrading OSX (to 10.9), I found the Dictation and Speech pane in System Preferences. I initiated the 785 MB download that enables offline dictation (in near-real time, not in laggy 30-second chunks).

OSX 10.9 (Mavericks) dictation setup

This is what the OSX Maverick’s dictation system delivered before I edited the end of the preceding paragraph:

Dictating works well in Thunderbird you can do for addresses in subject lines.  Dictating a name into Google scholar is much faster than typing it but until my contacts are imported there are a lot of spelling mistakes.  Dictating in zero Taro isn’t very useful because most fields are short phrases or single word tags.  Speech recognition really shines in blog posts and social media updates because the tone can be is in for mall as a conversation.  Dictating the text of an academic paper doesn’t work so well because it’s hard to talk with the requisite clarity and for multi.

Along the way, I learned a little about the sad story of smart scientists losing out to avaricious and incompetent business people, and having their intellectual property end up (most probably) behind this amazing Apple software, and definitely within Dragon.  As an open source advocate, I’d strongly recommend boycotting Dragon and going with any other solution that isn’t such a shameful rip-off of the inventors.

# Southern resident growth rate ambiguity

About a month ago Orca Relief issued press releases related to the population dynamics of southern resident killer whales.  I spent a little while then marveling at how their new leader Bruce Stedman seemed to have cherry picked data with which to support a claim that the population was in precipitous decline.  This morning I took some time to revisit those plots in light of the equally disturbing statistics and graphs presented during the orca-salmon workshops conducted by Will Stelle over the last couple years (live blogs with audio and/or video here — workshop 2; workshop 3).

Basically, somebody who is statistically-talented and politically-independent needs to clarify for the conservation and stewardship community the state of the Southern Resident killer whale (SRKW) population.  The same person or entity should re-interpret the data whenever we get an updated census (from the Center for Whale Research [CWR] which — as an aside — recently put its NOAA-funded photos and census data behind a membership paywall).

The main misleading aspects of the Orca Relief plots (see below) are that they only went back to 1997 (though we have data back to 1971) and they focused on decreases in juvenile and reproductive females that were drawn from maxima in the time series, rather than being fit to the full range of data.  This is an incomplete statistical presentation, and one that is potentially misleading, just as using stacked bar graphs can cause confusion in any science presentation.  Below are two examples.

Female SRKWs of (undefined) reproductive age by pod.  Stacked bar graphs make it difficult to see and compare trends in the K and L populations; only the J and total population trends can be easily discerned.

Juvenile female SRKW population time series as presented by Orca Relief.  The rate of decline from 2001 to present is an overestimate of decline compared with one made using 1997 to present, or a best linear fit to all available data.  More importantly, why have the previous, available 22 years of data been excluded?
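
To illustrate how anchoring a trend at a peak year exaggerates decline relative to a least-squares fit over the full series, here is a sketch with made-up, census-like numbers (illustrative only, not the actual CWR data):

```python
import numpy as np

# Hypothetical census-like counts for 1997-2013 (illustrative only).
years = np.arange(1997, 2014)
counts = np.array([20, 22, 21, 24, 26, 24, 23, 22, 23,
                   21, 22, 20, 21, 22, 20, 21, 20])

# Slope measured from the series maximum (2001 here) to the last year,
# i.e. the "pick the peak" approach.
peak = int(np.argmax(counts))
peak_slope = (counts[-1] - counts[peak]) / (years[-1] - years[peak])

# Least-squares slope over all available years.
fit_slope = np.polyfit(years, counts, 1)[0]

# The peak-anchored slope (-0.5/yr) is far steeper than the fitted
# trend (about -0.14/yr) for the very same data.
print(peak_slope, round(fit_slope, 2))
```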

For comparison, here is a plot of all CWR census data (1971-2012) by pod as presented in the 2012 Puget Sound Partnership’s orca vital sign:

CWR SRKW population data by pod presented by the PSP.

This shows how potentially misleading it is for an organization like Orca Relief to showcase trends using only data since 1997!  If we use 1971 as the baseline, J and K pod appear to have grown slowly, while L pod’s growth is positive but lower.  It also is apparent that the population dynamics of L pod, more than J or K, drive the sudden changes in the total population, which is growing on average over the whole time series.  As the 2012 PSP’s orca vital sign reports:

“…as of July 1st 2013, the size of the population was 82 individuals, down by four whales relative to the 2010 baseline reference of 86 whales… Although there has been no progress made since 2010, the population has been growing, albeit slowly at about 1% per year, over the longer term (1979 to 2010).”

The stories run by Q13FOX, the San Juan Islander and Island Guardian and picked up by KOMO news didn’t concern themselves with such statistical details.  However, a later piece in the San Juan Islander about reproductive (mature) male SRKWs quoted NOAA’s Eric Ward who argued that the recent decline in mature males is part of the population’s historic fluctuations, not a sign of imminent collapse as suggested by Orca Relief in the Island Guardian.  Both pointed out the good news: that juvenile male populations have been rising recently.

Mature male time series portrayed by Orca Relief (1997-2013)

Juvenile male increase graphed by Orca Relief

Eric Ward’s plot of mature male SRKW population (1971-2013)

As an aside, I was impressed that the San Juan Islander provided references to some of Eric Ward’s papers.  What wasn’t referenced though, were the many discussions of SRKW population trends in the 3 NOAA-sponsored workshops on “The Effects of Salmon Fisheries on Southern Resident Killer Whales.”  In the third workshop, it became clear that Ward presented estimates of population growth rates for the SRKWs (basically +1% annual growth rate) which were at odds with estimates made by his Canadian modeler-analog Antonio Velez-Espino (basically -1% annual growth rate).   (Velez-Espino chose to exclude pre-1987 data, stating that his choice — a compromise between extent and quality of data — was most representative of the current population ~25 years or one generation back.)

So, is the SRKW population in slow decline or slow recovery?!  How concerned should we be with the latest downturn in the overall population?  What does it mean for the recovery process if the population continues to fail to meet the recovery plan goal of maintaining 2.3% annual growth over a period of 14 years?
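
For scale, a quick compound-growth sketch (my arithmetic, using the 2010 baseline of 86 whales quoted above) shows what the recovery goal implies relative to the observed long-term rate:

```python
# Recovery-plan goal: 2.3% annual growth sustained over 14 years,
# starting from the 2010 baseline of 86 whales.
baseline = 86
goal = baseline * 1.023 ** 14      # population implied by the goal
longterm = baseline * 1.01 ** 14   # the ~1%/yr long-term rate, for contrast

# The goal implies roughly 118 whales after 14 years, versus about 99
# at the observed ~1%/yr long-term growth rate.
print(round(goal), round(longterm))
```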

The final report from the orca-salmon workshops (Hilborn et al., 2012) has some exemplary figures that shed a little more light on the situation, particularly the trends in the population of juvenile females (J pod increasing, L pod decreasing). Note that the break points between Orca Relief’s and the Science Panel’s juvenile/young and breeding/reproductive age classes are different. Nevertheless, it’s worth focusing on the juvenile and reproductive female age groups because they have a strong influence in population models like Velez-Espino’s. (He stated in the final workshop that the greatest increase to the modeled population growth rate comes from increasing survival and fecundity of young reproductive females.) Below are figures for “young” (<21 year old) and “young and reproductive” (<43 year old) populations (not directly comparable to the age groups chosen by Orca Relief, but similar).

Young (<21 year old) SRKW population trends and sex ratios.

Young and reproductive SRKW population trends

So, out of time again, I’m left wondering: should we be panicking as the strange year of 2013 draws to a close?  What do you think?

References:

Hilborn, R., Cox, S. P., Gulland, F. M. D., Hankin, D. G., Hobbs, N. T., Schindler, D. E., & Trites, A. W. (2012). The Effects of Salmon Fisheries on Southern Resident Killer Whales: Final Report of the Independent Science Panel (p. 87). NMFS & DFO. Retrieved from http://www.nwr.noaa.gov/publications/protected_species/marine_mammals/cetaceans/killer_whales/esa_status/kw-chnk-final-rpt.pdf
# ResearchGate converging with Libre peer review?

Before I Mendeleted, one thing that always pissed me off about Mendeley was that they provided no tools for being notified when a private group member added a reference to a shared library.  Getting an email or text when a valued colleague or collaborator publishes a paper is the main way I stay on top of the literature these days.  Sometimes the notification comes directly from the author (often via a topical email listserv like MarMam).  Other times it is in a personal email from a co-author, or in a Tweet from a colleague (Rob Williams and Erin Ashe of Oceans Initiative are good at this).

Given this aspect of my scientific workflow, I’ve been very pleased that ResearchGate has been reliably sending me email announcements when the members I follow add publications to their account.  While some of these notifications aren’t so useful because the publications are dated (due to new members uploading their old work), many notifications have led me to recent publications that I’d not yet read.  Here’s an example from this morning’s email:

Thunderbird view of ResearchGate email announcing new pubs from colleagues I follow

I clicked on view, cut and pasted the title into Google, and happily found an open-access version of the article published by Hindawi.com.  I read it, found some errors, and wished I could ask the lead author a couple of questions.  Hindawi offered no way to comment (or even contact the publisher), so I went back to ResearchGate and noticed the discuss button:

Note the discuss button in the lower left corner…

I clicked on that and was able to ask a question (not always the best way to start a discussion, but ok).  It took me a while to figure out how to get the title worded right (and under the character limit), how to pose multiple questions, and what tags to add to the discussion topic, but this is how it came through:

I’ll report back if I get any response, but I’m pleased that ResearchGate made posing some questions possible, at least.  This is far from what I’d prefer to be able to do: mark-up the paper directly (or at least my version of it) as I read it in a way that the author could easily understand and respond to (in the form of corrections or improvements to the paper).

Of course, this is what projects like Libre aspire to do…  Which leads me to my next task this morning: exploring what appears to be the launch of Libre!  At last their home page has changed; in fact, it’s re-branded as Liberating Research.  Now that’s a movement I can join!

Unfortunately, it looks like the new site is just a framework for what’s to come.  Although the navigational header looks promising —

— those links to (tools for?) Authors, Articles, Bibliographies, and Disciplines all lead to the same old “Sign up” page for those wanting to hear when the site actually launches.  Here’s looking forward to the wedding of scientific social networks like ResearchGate with improvements in peer review like those offered (very soon!?) by Libre.

Just stumbled upon a new “word” in the Urban Dictionary that I plan to use often in this notebook and beyond.  It’s a daily experience for me to get news of a new science article — via email lists like MarMam or in a Tweet from a colleague — only to click on the link and run into a $20-40 paywall.  I usually scan the abstract, add the doc in Zotero, and tag it with “ref2get” so that when I’m next on the University of Washington campus I can download the PDF.

Since I work in my home office most days, it typically takes from 2-4 weeks before I manage to access the PDF.  Now, in that interim period, I can simplify any response I might have made if the article had been open-access by simply typing —

bp;dr

It’s really an acronym for “behind paywall; didn’t read.”  I’m particularly looking forward to using it in face-to-face conversations with colleagues who still publish in closed-access journals, or in audience discussions at the next ASA conference.  I predict it will become the openscientist’s version of my kids’ favorite finger symbol: W(hatever)!

Cynthiakane.com Whatever

# Towards open science at Beam Reach

Herein begins my first open science notebook.  Inspired by my friend, Eli Holmes, who shared her open notebook during the recent Federal furlough, I started reading about open notebooks and decided that maintaining one would fit into my on-going efforts to refine and open my scientific workflow.

As 2013 ends I’ll be working on three projects.  Collaborating with Val Veirs and Jason Wood, I will finish a paper on the underwater noise made by ships within the summer habitat of southern resident killer whales.  I will draft a synopsis of sound (noise and signals) in the Salish Sea for the Encyclopedia of Puget Sound.  And I will wrap up a salmon tracking project in the San Juan Islands that was organized by Tom Quinn (UW) and Kurt Fresh (NOAA/NWFSC) with Anna Kagley and others.  These projects will supplement the teaching and research (wiki) and blogging I conduct through my non-profit, the Beam Reach Marine Science and Sustainability School.

Along the way, I expect to continue exploring tools that boost my scientific productivity and support my long-term goal of opening science.  I am writing in Open Office and maintaining my references, bookmarks, and notes with Zotero, while keeping an eye on tools like ShareLaTeX.  I use GMT to make maps and we are using R to generate figures for the ship source level paper.  While surveying our open access publication options, we are watching the release of Libre (this November!?) in the hopes of engaging its open peer-review process.