Category Archives: Computer Servicing


Computing – Owl Partially Restored

Heard back from Synology and they indicated I should click the “Repair” option to fix the System Partition Failed error message seen previously.

I did that and our data is now accessible again. However, all the user account info, scheduled tasks (e.g. Glacier backups, notebook backup script), IP configurations, mail configurations, etc. have all been reset.

I downloaded/installed the various packages needed to have the server accessible via the web and configured the IP address settings.

Have a note out to Synology to see if the configurations can be restored somehow. Once I hear back, we’ll get user accounts re-established.

Below is a chronological set of screen caps of the repair process:

 

Our data is still here! This is before performing the “Repair” operation, btw. It seems it just required some time to re-populate the directory structure.

 

 

 

 

Still getting a “degraded” error message, but all drives appear normal. However, Disk 3 in the DX513 is not showing; this could be the cause of the “degraded” status.

 

 

 

 

Set up manual IP settings by expanding the “LAN 1” connection.


Troubleshooting – Synology NAS (Owl) Down After Update

TL;DR – Server didn’t recover after firmware update last night. “Repair” is an option listed in the web interface, but I want to confirm with Synology what will happen if/when I click that button…

The data on Owl is synced here (Google Drive): UW Google Drive

However, not all of Owl was fully synced at the time of this failure, so it seems like a decent amount of data is not accessible. Inaccessible data is mostly from individual user directories.

All high-throughput sequencing is also backed up to Amazon Glacier, so we do have all of that data.
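Since Glacier retrievals are asynchronous, confirming what is actually in a vault means requesting an inventory job. Below is a hedged sketch using the AWS CLI; the vault name is a placeholder, and the block is guarded so it only runs the calls when working credentials are configured.

```shell
# Sketch of confirming what's stored in Glacier with the AWS CLI.
# "sequencing-backups" is a PLACEHOLDER vault name; "--account-id -" tells
# the CLI to use the account tied to the configured credentials.
VAULT_NAME="sequencing-backups"

if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
    aws glacier list-vaults --account-id -
    # Inventory retrieval is asynchronous; the job can take hours to complete.
    aws glacier initiate-job --account-id - --vault-name "$VAULT_NAME" \
        --job-parameters '{"Type": "inventory-retrieval"}'
else
    echo "aws CLI not configured; commands shown for reference only"
fi
```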

 

Here is what happened, in chronological order:

 

  1. Updated DSM via web interface in “Update & Restore”. Did NOT perform manual install.
  2. System became inaccessible via web interface and Synology Assistant.
  3. The physical unit showed blue, flashing power light and green flashing LAN1 light.
  4. No other lights were illuminated (this includes no lights for any of the drive bays).
  5. The attached expansion unit (DX513) showed steady blue power light, steady green lights on all drive bays, and steady green eSATA light.
  6. I powered down both units via the DS1812+ power button.
  7. I turned on both units via the DS1812+ power button.
  8. Both units returned to their previous status and were still inaccessible via the web interface and Synology Assistant.
  9. I powered down both units via the DS1812+ power button.
  10. I removed all drives from both units.
  11. I turned on both units via the DS1812+ power button.
  12. I connected to the DS1812+ via Synology Assistant. A message indicated “No Hard Disk Found on 1812+”.
  13. I powered down both units via the DS1812+ power button.
  14. I added a single HDD to the DS1812+.
  15. I turned on both units via the DS1812+ power button.
  16. I connected to the DS1812+ via Synology Assistant. I was prompted to install the latest DSM. I followed the steps and created a new admin account. Now the system shows 7 drives in the DS1812+ with a message: “System Partition Failed; Healthy”. Disk 1 shows a “Normal” status; this is the disk that I used to re-install DSM in Step 14. Additionally, the system shows one unused disk in the DX513.
  17. I powered down both units via the web interface.
  18. I removed Disk 1 from DS1812+.
  19. I turned on both units via the DS1812+ power button.
  20. The DS1812+ returned to its initial state as described in Step 3.
  21. I powered down both units via the DS1812+ power button.
  22. I returned Disk 1 to its bay.
  23. I turned on both units via the DS1812+ power button.
  24. There’s an option to “Repair” the volume, but I’m not comfortable doing so until I discuss the ins and outs of this with Synology. Submitted a tech support ticket with Synology.
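DSM builds its volumes on Linux md RAID, so once SSH access is available, the array state behind messages like “System Partition Failed” can be inspected directly on the NAS. A minimal sketch (guarded so it degrades gracefully on non-Linux hosts):

```shell
# Sketch: inspect the md RAID state underlying a DSM volume. Intended to be
# run via SSH on the NAS itself; guarded for non-Linux hosts.
MDSTAT=/proc/mdstat

if [ -r "$MDSTAT" ]; then
    # "[UUUU]" means all members are up; an underscore (e.g. "[UU_U]") marks
    # a missing/failed member, which DSM reports as a degraded volume.
    cat "$MDSTAT"
else
    echo "$MDSTAT not present (not a Linux md host)"
fi
```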

Below are pictures of the entire process, for reference.

 

Server status when I arrived at the lab this morning.

 

Pulled the HDDs from both units, in an attempt to be able to connect via Synology Assistant.

 

Units w/o HDDs.

 

No HDDs in units made the server detectable via Synology Assistant, but it indicates “Not installed” in the “Status” column…

 

Successfully connected, but the DS1812+ indicates no HDDs installed.

 

 

Added a single HDD back to the DS1812+. Notice that the drive light is green and the “Status” light is amber. This is actually an improvement over what I saw when I arrived.

 

Added back a single HDD to the DS1812+ and now have this setup menu.

 

I’m prompted to install the Synology DSM.

 

Installing DSM. This “Formatting system partition” message might be related to the eventual error message that I see (“System Partition Failed”) after this is all set up…

 

 

 

 

 

 

 

 

Prompted to create an admin account. This does not bode well, since this is behaving like a brand new installation (i.e. no record of the previous configuration, users, etc.).

 

Continuing set up.

 

All set up…

 

 

Added all the HDDs back and detected via Synology Assistant.

 

This shows that there are no other users – i.e. previous configuration is not detected.

 

After putting all the HDDs back in, got this message after logging in.

 

Looking at the Storage info in DSM; seems bad.

 

 

Physically, the drives all look fine (green lights on all drive bays), despite the indication in the DSM about “System Partition Failed” for all of them (except Disk 1). The expansion unit’s bay lights are actually all green, but were actively being read at the time of picture (i.e. flashing) so the image didn’t capture all of them being green. Amber light on expansion unit reflects what was seen in the DSM – the middle drive is “Not initialized”. Note, the drive is actually inserted, but the handle has been released.

 

This is how I left the system. Notice that after rebooting, the expansion unit no longer shows that “Not initialized” message for Disk 3. Instead, Disk 3 is now detected as installed, but not used…

 


Hard Drive Replacement – Microscope Computer (Dell Optiplex GX620)

Dan noticed that the computer wouldn’t boot, so I looked into it a bit. When attempting to boot, the hard drive (HDD) was making a clicking noise; this is never a good sign.

I replaced the HDD with a clone of the existing (now dead) HDD that I had created back on 20150422 and everything is mostly back to normal.

What hasn’t returned to normal is the usage of Dropbox. Sometime this summer, Dropbox stopped supporting Windows XP and no longer allows usage of the Dropbox app on Windows XP computers. For the time being, this means that all files saved on this computer should be uploaded to Dropbox via a web browser.

Saving files to the Dropbox folder that still exists on this computer will NOT sync! That means they will NOT be backed up.

To resolve this issue, we would need to upgrade to Windows 7. Once I obtain a new backup HDD to create a new clone, I’ll attempt to upgrade this computer to Windows 7. The main reservation I have about this is that the two key pieces of software installed on this computer (Nikon Elements and SPOT) are extremely old and may not function on a newer Windows version. But, I guess we won’t know until we try!
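The clone itself can be made with dd. In the sketch below the device names are placeholders (confirm the real names with lsblk before touching actual disks); the runnable demo clones a small file-backed image so the command shape can be exercised safely.

```shell
# Cloning a drive with dd. On real hardware this would be something like
#   dd if=/dev/sdX of=/dev/sdY bs=64K conv=noerror,sync
# where sdX/sdY are PLACEHOLDERS -- confirm device names with lsblk first.
# The demo below uses file-backed images so it is safe to run anywhere.
SRC_IMG="$(mktemp)"
DST_IMG="$(mktemp)"

head -c 1048576 /dev/urandom > "$SRC_IMG"   # 1 MiB stand-in for the source disk

# conv=noerror,sync keeps going past read errors, padding bad blocks with zeros
dd if="$SRC_IMG" of="$DST_IMG" bs=64K conv=noerror,sync 2>/dev/null

cmp -s "$SRC_IMG" "$DST_IMG" && echo "clone verified"
```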

Below are images of the steps I took to replace the dead HDD:

 

 

 

 

 

 

 


Computer Management – Additional Configurations for Reformatted Xserves

Sean got the remaining Xserves configured to run independently from the master node of the cluster they belonged to and installed OS X 10.11 (El Capitan).

The new computer names are Ostrich (formerly node004) and Emu (formerly node002).

 

He enabled remote screen sharing and remote access for them.

Sean also installed a working hard drive on Roadrunner and got that back up and running.

I went through this morning and configured the computers with some other changes (some for my user account, others for the entire computer):

  • Renamed computers to reflect just the corresponding bird name (hostnames had been labeled as “bird name’s Xserve”)

  • Created srlab user accounts

  • Changed srlab user accounts to Standard instead of Administrative

  • Created steven user account

  • Turned on Firewalls

  • Granted remote login access to all users (instead of just Administrators)

  • Installed Docker Toolbox

  • Changed power settings to start automatically after power failure

  • Added computer name to login screen via Terminal:

sudo defaults write /Library/Preferences/com.apple.loginwindow LoginwindowText "TEXT GOES HERE"

  • Changed computer HostName via Terminal so that Terminal displays the computer name:

sudo scutil --set HostName "TEXT GOES HERE"
  • Installed Mac Homebrew (I don’t know if installation of Homebrew is “global” – i.e. installs for all users)

  • Used Mac Homebrew to install wget

  • Used Mac Homebrew to install tmux
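Most of the per-machine steps above can be scripted. Here is a hedged consolidation (macOS-only, so it is guarded to no-op elsewhere); the computer name is a placeholder for the bird name assigned to each Xserve.

```shell
# Consolidated sketch of the setup steps above. COMPUTER_NAME is a
# placeholder; on the real machines it was the bird name (Ostrich, Emu, ...).
COMPUTER_NAME="Ostrich"

if [ "$(uname)" = "Darwin" ]; then
    sudo scutil --set HostName "$COMPUTER_NAME"
    sudo defaults write /Library/Preferences/com.apple.loginwindow \
        LoginwindowText "$COMPUTER_NAME"
    sudo systemsetup -setremotelogin on    # allow SSH logins
    sudo pmset -a autorestart 1            # restart automatically after power failure
    brew install wget tmux                 # Homebrew installs into a shared prefix
else
    echo "macOS-only commands; shown for reference"
fi
```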


Data Management – Modify Eagle/Owl Cloud Sync Account

Re-examining our backup options for our two Synology servers (Eagle & Owl), I realized that they were both backing up to just my account on UW’s unlimited Google Drive storage.

The desired backup was to go to our shared UW account, so that others in the lab would have access to the backups.

Strangely, I could not add the shared UW account (srlab) to my list of Google accounts. In order to verify the shared UW account with Google, I had to connect to the servers’ web interfaces in a private browsing session and then I was able to provide the correct user account info/permissions.

Anyway, it’s all going to our shared UW account now.

 

SELECT GOOGLE DRIVE AS THE SYNC PROVIDER:

 

 

 

 

SHARED UW ACCOUNT IS NOT A CHOICE:

 

 

TRY “ADD ACCOUNT”:

 

BUT “ADD ACCOUNT” DOESN’T WORK (DROP-DOWN MENU DOESN’T OFFER SRLAB AS A CHOICE):

 

 

 

REPEAT STEPS, BUT CONNECT TO SYNOLOGY VIA PRIVATE BROWSING SESSION AND IT’S GOOD TO GO:

 

 

SET LOCAL AND REMOTE FOLDERS:

 

 

CONFIRMATION THAT IT’S SET UP:

 

 

AND, IT’S RUNNING:

 


Data Management – Synology Cloud Sync to UW Google Drive

After a bit of a scare this weekend (Synology DX513 expansion unit no longer detected two drives after a system update – resolved by powering down the system and rebooting it), we revisited our approach for backing up data.

Our decision was to utilize UW Google Drive, as it provides unlimited storage space!

Synology has an available app for syncing data to/from Google Drive, so I set up both Owl (Synology DS1812+) and Eagle (Synology DS413) to sync all of their data to a shared UW Google Drive account. This should provide a functional backup solution for the massive amounts of data we’re storing, and it will simplify tracking what is backed up where. Now, instead of wondering whether certain directories are backed up via CrashPlan, Backblaze, or Time Backup to another Synology server, we know that everything is backed up to Google Drive.
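One limitation of trusting a single sync target is silent corruption. A simple hedge is to keep a checksum manifest alongside the data so the Google Drive copy can be spot-checked later. A runnable sketch, where scratch paths stand in for a real shared folder:

```shell
# Sketch: build an md5 manifest for a share before it syncs, so the cloud
# copy can be spot-checked later. Scratch paths stand in for a real share.
DATA_DIR="$(mktemp -d)"
echo "ACGTACGT" > "$DATA_DIR/sample.fastq"

cd "$DATA_DIR"
# Exclude the manifest itself so it doesn't end up hashing its own partial contents
find . -type f ! -name 'manifest.md5' -exec md5sum {} + | sort -k2 > manifest.md5

# Re-running this check against the downloaded copy verifies integrity
md5sum -c --quiet manifest.md5 && echo "all checksums OK"
```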

 

 

 

 

 


Server HDD Failure – Owl

Noticed that Owl (Synology DS1812+ server) was beeping.

I also noticed, just like the last time we had to replace an HDD in Owl, that I didn’t receive a notification email… As it turns out, this time no notification email was received because I had changed my UW password, and we use my UW account to authorize usage of the UW email server through Owl. So the emails Owl has been trying to send have failed because the authorization password was no longer valid… Yikes!

Anyway, I’ve updated the password on Owl for using the UW email servers and swapped out the bad drive with a backup drive we keep on hand for just such an occasion. See the first post about this subject for a bit more detail on the process of swapping hard drives.

 

Unfortunately, the dead HDD is out of warranty; however, we already have another backup drive on hand.

 

Below are some screen caps of today’s incident:

 

 

 

 

 

 

 

Notice the empty slot in the graphical representation of the disk layout, as well as the “Available Slots” showing 1.

 

 

 

 

After replacing the HDD (but before the system has rebuilt the new HDD), the empty slot is now represented as a green block and the “Available Slots” is now zero and “Unused Disks” is now 1.

 


Docker – VirtualBox Defaults on OS X

I noticed a discrepancy between what system info is detected natively on Roadrunner (Apple Xserve) and what was being shown when I started a Docker container.

Here’s what Roadrunner’s system info looks like outside of a Docker container:

 

However, here’s what is seen when running a Docker container:

 

 

It’s important to notice that the Docker container is only seeing 2 CPUs. Ideally, the Docker container would see that this system has 8 cores available. By default, however, it does not. To remedy this, the user has to adjust settings in VirtualBox. VirtualBox is virtual machine software that gets installed with the Docker Toolbox for OS X. Docker actually runs inside a VirtualBox virtual machine, but this is not really transparent to a beginner Docker user on OS X.

To change the way VirtualBox (and, in turn, Docker) can access the full system hardware, you must launch the VirtualBox application (if you installed Docker using Docker Toolbox, you should be able to find this in your Applications folder). Once you’ve launched VirtualBox, you’ll have to turn off the virtual machine that’s currently running. Once that’s been accomplished, you can make changes and then restart the virtual machine.
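The same change can also be made from the command line instead of the VirtualBox GUI, using docker-machine and VBoxManage (both installed by Docker Toolbox). Note that "default" is the usual name of the VM that Docker Toolbox creates and is an assumption here.

```shell
# CLI alternative to the GUI steps below. "default" is the usual name of the
# VM that Docker Toolbox creates; adjust if yours is named differently.
VM_NAME="default"

if command -v docker-machine >/dev/null 2>&1; then
    docker-machine stop "$VM_NAME"
    VBoxManage modifyvm "$VM_NAME" --cpus 8 --memory 24576   # 24 GB, in MiB
    docker-machine start "$VM_NAME"
    docker-machine ssh "$VM_NAME" nproc   # should now report 8 CPUs
else
    echo "docker-machine not installed; commands shown for reference"
fi
```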

 

Shut down the VirtualBox machine before you can make changes:

 

Here are the default CPU settings that VirtualBox is using:

 

 

Maxed out the CPU slider:

 

 

 

Here are the default RAM settings that VirtualBox is using:

 

 

 

Changed RAM slider to 24GB:

 

 

 

Now, let’s see what the Docker container reports for system info after making these changes:

 

Looking at the CPUs now, we see it has 8 listed (as opposed to only 2 initially). I think this means that Docker now has full access to the hardware on this machine.

This situation is a weird shortcoming of Docker (and/or VirtualBox). Additionally, I think this issue might only exist on the OS X and Windows versions of Docker, since they require the installation of the Docker Toolbox (which installs VirtualBox). I don’t think Linux installations suffer from this issue.
