Tag Archives: owl

Computing – Owl Partially Restored

Heard back from Synology and they indicated I should click the “Repair” option to fix the System Partition Failed error message seen previously.

I did that and our data is now accessible again. However, all of the user account info, scheduled tasks (e.g. Glacier backups, notebook backup script), IP configurations, mail configurations, etc. have been reset.

I downloaded/installed the various packages needed to have the server accessible via the web and configured the IP address settings.

Have a note out to Synology to see if the configurations can be restored somehow. Once I hear back, we’ll get user accounts re-established.

Below is a chronological set of screen caps of the repair process:


Our data is still here! This is before performing the “Repair” operation, btw. It seems it just required some time to re-populate the directory structure.





Still getting a “degraded” error message, but all drives appear normal. However, Disk 3 in the DX513 is not showing; a possible cause for the “degraded” status?





Set up manual IP settings by expanding the “LAN 1” connection.

Troubleshooting – Synology NAS (Owl) Down After Update

TL;DR – Server didn’t recover after firmware update last night. “Repair” is an option listed in the web interface, but I want to confirm with Synology what will happen if/when I click that button…

The data on Owl is synced here (Google Drive): UW Google Drive

However, not all of Owl was fully synced at the time of this failure, so it seems like a decent amount of data is not accessible. Inaccessible data is mostly from individual user directories.

All high-throughput sequencing is also backed up to Amazon Glacier, so we do have all of that data.


Here is what happened, in chronological order:


  1. Updated DSM via web interface in “Update & Restore”. Did NOT perform manual install.
  2. System became inaccessible via web interface and Synology Assistant.
  3. The physical unit showed blue, flashing power light and green flashing LAN1 light.
  4. No other lights were illuminated (this includes no lights for any of the drive bays).
  5. The attached expansion unit (DX513) showed steady blue power light, steady green lights on all drive bays, and steady green eSATA light.
  6. I powered down both units via the DS1812+ power button.
  7. I turned on both units via the DS1812+ power button.
  8. Both units returned to their previous status and were still inaccessible via the web interface and Synology Assistant.
  9. I powered down both units via the DS1812+ power button.
  10. I removed all drives from both units.
  11. I turned on both units via the DS1812+ power button.
  12. I connected to the DS1812+ via Synology Assistant. A message indicated “No Hard Disk Found on 1812+”.
  13. I powered down both units via the DS1812+ power button.
  14. I added a single HDD to the DS1812+.
  15. I turned on both units via the DS1812+ power button.
  16. I connected to the DS1812+ via Synology Assistant. I was prompted to install the latest DSM. I followed the steps and created a new admin account. Now the system shows 7 drives in the DS1812+ with the message: “System Partition Failed; Healthy”. Disk 1 shows a “Normal” status; this is the disk I added in Step 14 and used to re-install DSM. Additionally, the system shows one unused disk in the DX513.
  17. I powered down both units via the web interface.
  18. I removed Disk 1 from DS1812+.
  19. I turned on both units via the DS1812+ power button.
  20. The DS1812+ returned to its initial state as described in Step 3.
  21. I powered down both units via the DS1812+ power button.
  22. I returned Disk 1 to its bay.
  23. I turned on both units via the DS1812+ power button.
  24. There’s an option to “Repair” the volume, but I’m not comfortable doing so until I discuss the ins and outs of this with Synology. Submitted a tech support ticket with Synology.

Below are pictures of the entire process, for reference.


Server status when I arrived to lab this morning.


Pulled the HDDs from both units, in an attempt to be able to connect via Synology Assistant.


Units w/o HDDs.


Removing the HDDs made the server detectable via Synology Assistant, but it indicates “Not installed” in the “Status” column…


Successfully connected, but the DS1812+ indicates no HDDs installed.



Added a single HDD back to the DS1812+. Notice, the drive light is green and the “Status” light is amber. This is actually an improvement over what I saw when I arrived.


Added back a single HDD to the DS1812+ and now have this setup menu.


I’m prompted to install the Synology DSM.


Installing DSM. This “Formatting system partition” message might be related to the eventual error message that I see (“System Partition Failed”) after this is all set up…









Prompted to create an admin account. This does not bode well, since this is behaving like a brand new installation (i.e. no record of the previous configuration, users, etc.).


Continuing set up.


All set up…



Added all the HDDs back and detected via Synology Assistant.


This shows that there are no other users – i.e. previous configuration is not detected.


After putting all the HDDs back in, got this message after logging in.


Looking at the Storage info in DSM; seems bad.



Physically, the drives all look fine (green lights on all drive bays), despite the indication in the DSM about “System Partition Failed” for all of them (except Disk 1). The expansion unit’s bay lights are actually all green, but the drives were being actively read when the picture was taken (i.e. the lights were flashing), so the image didn’t capture all of them as green. The amber light on the expansion unit reflects what was seen in the DSM – the middle drive is “Not initialized”. Note, the drive is actually inserted, but the handle has been released.


This is how I left the system. Notice that after rebooting, the expansion unit no longer shows that “Not initialized” message for Disk 3. Instead, Disk 3 is now detected as installed, but not used…


Data Management – Download Final BGI Genome & Assembly Files

We received info to download the final data and genome assembly files for geoduck and Olympia oyster from BGI.

In total, the downloads took a little over three days to complete!

The notebook detailing how the files were downloaded is below, but it should be noted that I had to strip the output cells: the output from the download command made the file too large to upload to GitHub, and the notebook file was so large that it would constantly crash any browser/computer it was opened in. So, the notebook below is here for posterity.

Jupyter Notebook: 20161206_docker_BGI_genome_downloads.ipynb
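Stripping output cells can be done programmatically, since a .ipynb file is just JSON. Here’s a minimal sketch of the idea using a tiny throwaway demo notebook (the demo file and its contents are invented for illustration; the real notebook is the BGI download notebook linked above):

```shell
# Create a tiny demo notebook containing one code cell with output.
# (This demo file is hypothetical; it just mimics the nbformat v4 layout.)
cat > demo.ipynb <<'EOF'
{"cells": [{"cell_type": "code", "execution_count": 1,
            "source": ["print('hi')"],
            "outputs": [{"output_type": "stream", "name": "stdout", "text": ["hi\n"]}],
            "metadata": {}}],
 "metadata": {}, "nbformat": 4, "nbformat_minor": 2}
EOF

# Strip outputs and execution counts from every code cell so the
# .ipynb file shrinks enough to commit to GitHub.
python3 - demo.ipynb <<'PY'
import json, sys

path = sys.argv[1]
with open(path) as f:
    nb = json.load(f)

for cell in nb.get("cells", []):
    if cell.get("cell_type") == "code":
        cell["outputs"] = []
        cell["execution_count"] = None

with open(path, "w") as f:
    json.dump(nb, f, indent=1)
PY
```

Newer versions of nbconvert can do the same thing directly with a clear-output option, but the JSON approach works anywhere Python is available.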


Data Management – Modify Eagle/Owl Cloud Sync Account

Re-examining our backup options for our two Synology servers (Eagle & Owl), I realized that they were both backing up to just my account on UW’s unlimited Google Drive storage.

The desired backup was to go to our shared UW account, so that others in the lab would have access to the backups.

Strangely, I could not add the shared UW account (srlab) to my list of Google accounts. In order to verify the shared UW account with Google, I had to connect to the servers’ web interfaces in a private browsing session and then I was able to provide the correct user account info/permissions.

Anyway, it’s all going to our shared UW account now.


Data Management – Synology Cloud Sync to UW Google Drive

After a bit of a scare this weekend (Synology DX513 expansion unit no longer detected two drives after a system update – resolved by powering down the system and rebooting it), we revisited our approach for backing up data.

Our decision was to utilize UW Google Drive, as it provides unlimited storage space!

Synology has an available app for syncing data to/from Google Drive, so I set up both Owl (Synology DS1812+) and Eagle (Synology DS413) to sync all of their data to a shared UW Google Drive account. This should provide a functional backup solution for the massive amounts of data we’re storing, and it will simplify tracking what is backed up where. Now, instead of wondering whether certain directories are backed up via CrashPlan, Backblaze, or Time Backup to another Synology server, we know that everything is backed up to Google Drive.






Server HDD Failure – Owl

Noticed that Owl (Synology DS1812+ server) was beeping.

I also noticed, just like the last time we had to replace a HDD in Owl, that I didn’t receive a notification email… As it turns out, this time no notification email was received because I had changed my UW password, and we use my UW account for authorizing usage of the UW email server through Owl. So, the emails Owl’s been trying to send have failed because the authorization password was no longer valid… Yikes!

Anyway, I’ve updated the password on Owl for using the UW email servers and swapped out the bad drive with a backup drive we keep on hand for just such an occasion. See the first post about this subject for a bit more detail on the process of swapping hard drives.


Unfortunately, the dead HDD is out of warranty; however, we already have another backup drive on hand.


Below are some screen caps of today’s incident:








Notice the empty slot in the graphical representation of the disk layout, as well as the “Available Slots” showing 1.





After replacing the HDD (but before the system has rebuilt the new HDD), the empty slot is now represented as a green block and the “Available Slots” is now zero and “Unused Disks” is now 1.


Data Management – High-throughput Sequencing Data

We’ve had a recent influx of sequencing data, which is great, but it created a bit of a backlog documenting what we’ve received.

I updated our Google Sheet (Nightingales) with the geoduck genome sequencing data from BGI, the Olympia oyster genome sequencing data from BGI, and the MBD bisulfite sequencing data from ZymoResearch.

I also fixed the “FileLocation” column by replacing the “HYPERLINK” function with “CONCATENATE”.

Google Sheet: Nightingales


After updating the Nightingales Google Sheet, I updated the corresponding Google Fusion Table (also called Nightingales).

To update the Fusion Table, you have to do the following:

  • Delete all rows in the Nightingales Google Fusion Table (Edit > Delete all rows)
  • Import data from the Nightingales Google Spreadsheet (File > Import more rows…)

Fusion Table: Nightingales

At initial glance, the Fusion Table appears the same as the Google Sheet. However, if you follow the link to the full Fusion Table, it offers some unique ways to visually explore the data contained in the Fusion Table.


After that I decided to deal with the fact that many of the directories on Owl (http://owl.fish.washington.edu/nightingales/) lack readme files and subsequent information about the sequencing files in those folders.

So, I took an inordinate amount of time to write a script that would automate as much of the process as I could think of.

The script is here (GitHub): https://github.com/kubu4/Scripts/blob/master/bash/ngs_automator.sh

The goal of the script is to perform the following:

  • Identify folders that do not have readme files.

  • Identify folders that do not have checksum files.

  • Create readme files in those directories lacking readme files.

  • Append the directory path to each new readme file.

  • Append sequencing file names and corresponding read counts to the new readme files.
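The core of that logic can be sketched roughly like this. Note this is a simplified stand-in for ngs_automator.sh, not its actual contents; the readme.md/checksums.md5 file names, the *.fastq pattern, and the demo directories are all assumptions for illustration:

```shell
#!/usr/bin/env bash
# Simplified sketch of the readme/checksum automation described above.
# File names (readme.md, checksums.md5) and the *.fastq pattern are
# assumptions, not necessarily the real script's conventions.
set -euo pipefail

# Demo data: two sequencing directories, one already holding a readme.
base=$(mktemp -d)
mkdir -p "$base/run1" "$base/run2"
printf '@r1\nACGT\n+\nIIII\n' > "$base/run1/sample_R1.fastq"
printf '@r1\nACGT\n+\nIIII\n@r2\nTTTT\n+\nIIII\n' > "$base/run2/sample_R1.fastq"
touch "$base/run2/readme.md"

for dir in "$base"/*/; do
  # Create a checksum file where one is missing.
  if [ ! -e "${dir}checksums.md5" ]; then
    (cd "$dir" && md5sum ./*.fastq > checksums.md5)
  fi
  # Create a readme where one is missing, recording the directory path
  # plus each FASTQ name and its read count (FASTQ records are 4 lines).
  if [ ! -e "${dir}readme.md" ]; then
    {
      echo "Directory: $dir"
      for fq in "$dir"*.fastq; do
        reads=$(( $(wc -l < "$fq") / 4 ))
        echo "$(basename "$fq") ${reads} reads"
      done
    } > "${dir}readme.md"
  fi
done

cat "$base/run1/readme.md"
```

Directories that already have a readme (like run2 in the demo) are left untouched, which makes the script safe to re-run over the whole nightingales tree.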

Will run the script. Hope it works…

Data Storage – Synology DX513

Running a bit low on storage on Owl (Synology DS1812+) and we will be receiving a ton of data in the next few months, so we purchased a Synology DX513. It’s an expansion unit designed specifically for seamlessly expanding our existing storage volume on Owl.

Installed 5 x 8TB Seagate HDDs and connected to Owl with the supplied eSATA cable.

Now, we just need to wait (possibly days) for the full expansion to be completed.

Uninterruptible Power Supplies (UPS)

A new UPS we installed this week for our qPCR machine (Opticon2 – BioRad) to handle power surges and power outages doesn’t seem to be working properly. With the qPCR machine (and computer and NanoDrop1000) plugged into the “battery” outlets on the UPS, this is what happens when the Opticon goes through a heating cycle:

The UPS becomes overloaded when the Opticon is in a heating cycle.


And, sometimes, that results in triggering a fault, shutting everything off in the middle of a qPCR run:

Fault message indicating unit overload.


This is supremely lame because having a battery backup is a great way to prevent the qPCR machine from shutting off when a power outage occurs!


I switched the Opticon (and computer and NanoDrop1000) to the outlets that are solely for surge protection. Check out what happens when I run the qPCR machine now:

Opticon plugged in to surge protection outlet while in heating cycle. Notice that output load is 0%.


So, I guess we’ll settle for at least having the surge protection aspect of things.


While handling this UPS issue, I realized that our two Synology servers have a built-in UPS monitor. So, I connected a USB cable from each UPS to the server plugged into it and enabled UPS shutdown in the Synology DiskStation Manager (DSM):






Now, both Synology units will enter Safe Mode when the UPS they’re connected to reaches a low battery status. This will help minimize data loss/corruption during the next extended power outage we experience.

Server HDD Failure – Owl

We had our first true test of the Synology RAID redundancy with our Synology 1812+ server (Owl). One of the hard drives (HDD) failed. All of the other drives were fine, the data was intact and we had a new replacement HDD on hand. However, there was one shortcoming: no email notification of the drive failure. Luckily, the Synology server is next to Steven’s office and he could hear an audible beeping alerting him to the fact that something was wrong. In any case, the email notifications have been fixed and a replacement hard drive was added to the system. Here’s how these things were accomplished.


Fix email notifications

The system was previously set to use Steven’s Comcast SMTP server. Sending a test email from Owl failed, indicating authentication failure. I changed this to use the University of Washington’s email server for outgoing messages. Here’s how…

In the Synology Disk Station Manager (DSM):

Control Panel > Notifications

  • Service provider: Custom SMTP Server
  • SMTP server: smtp.washington.edu
  • SMTP port: 587
  • Username: myUWnetID@uw.edu
  • Password: myUWpassword

An interesting note: there’s a “Push Service” tab in the “Notifications” window. This allows Synology to send notification emails when the server has an issue, without the SMTP settings shown above, which may not be easy to find and/or understand for a given email service provider. The “Push Service” appears to be much simpler and more user-friendly to set up.


Hot Swap HDD

We’ve kept a backup HDD on hand for just this occasion, so the HDD failure wasn’t too concerning. Here are the steps I followed to swap the HDD and have the Synology system initialize/build the new HDD:


Remove the dead HDD and put the new HDD in.



Initialize/build/repair the new HDD.

In Synology DSM:

Storage Manager > Volume

Notice that there should be eight drives listed, but since one has died, only seven are shown:








That’s it! Easy breezy!

I’ve checked with Seagate on the dead HDD and it is still under warranty. Will get that returned and also purchase a new backup drive to have on hand.