Replacing a disk in a ZFS mirror

Well, it paid off. Last week, I noticed that one of the disks in my NAS had failed. And of all the disks, it was the one with the family photos. So setting up the ZFS mirror saved my bacon. Below is what I saw.

 admin@nas:~# zpool status
  pool: files
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
        Run 'zpool status -v' to see device specific details.
  scan: scrub repaired 0 in 14h27m with 0 errors on Tue Apr 16 15:27:09 2013
config:

        NAME        STATE     READ WRITE CKSUM
        files       DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c12d1   ONLINE       0     0     0
            c13d1   UNAVAIL      0     0     0

errors: No known data errors

As you can see, c13d1 had failed. If I had really been fancy, I would have had a spare disk in the system configured as a hot spare for the mirror, and ZFS would have swapped out the bad drive for the spare automatically. But I didn't go that route. So off to the store I went and got a replacement.
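
For what it's worth, adding a hot spare after the fact only takes one command. A rough sketch, assuming the spare disk shows up as c14d1 (a made-up device name for this example):

zpool add files spare c14d1

Once a spare is attached to the pool, Solaris can kick it in automatically when a disk in the mirror faults, which is exactly the hands-off swap I was describing.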

Once I got the new drive, I shut down the system, replaced the drive, and booted back up. The error message was still present, as expected, since the new disk had no ZFS content on it yet. All I had to do was issue the following command.

zpool replace files c13d1
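
One thing to note: this short form works because the new disk came up under the same device name as the old one. If the replacement had shown up under a different name, the two-argument form of zpool replace is the one to use (c14d1 here is just a stand-in):

zpool replace files c13d1 c14d1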

From there, ZFS resilvers the new drive into the mirror. Depending on how big the drives are, it can take a while.


admin@nas:~# zpool status files
  pool: files
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function in a degraded state.
action: Wait for the resilver to complete.
        Run 'zpool status -v' to see device specific details.
  scan: resilver in progress since Sat May  4 22:25:34 2013
    4.92G scanned out of 1.50T at 18.2M/s, 23h54m to go
    2.41G resilvered, 0.32% done
config:

        NAME             STATE     READ WRITE CKSUM
        files            DEGRADED     0     0     0
          mirror-0       DEGRADED     0     0     0
            c12d1        ONLINE       0     0     0
            replacing-1  UNAVAIL      0     0     0
              c13d1/old  UNAVAIL      0     0     0
              c13d1      OFFLINE      0     0     0  (resilvering)
			  

And once it was done, the mirror was back to normal.

admin@nas:~# zpool status files
  pool: files
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
  scan: resilvered 1.49T in 9h56m with 0 errors on Mon May  6 09:49:31 2013
config:

        NAME        STATE     READ WRITE CKSUM
        files       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c12d1   ONLINE       0     0     0
            c13d1   ONLINE       0     0     0
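
One more thing worth noticing in that last output: the status line says the pool is on an older on-disk format. If I didn't care about the pool being readable by older software, the standard upgrade command would bring it up to the current version (I haven't run it here, this is just for reference):

zpool upgrade files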

ZFS Server Greatness

As I mentioned previously, I like to run a PC as a server in my home to centralize all the files, instead of having documents, pictures, videos, etc., spread out on various PCs.  But as time goes on and the amount of files being collected grows, you really start to panic about losing them, especially when you have years of digital photos.  So I went in search of a solution for safe storage.  For a while, I used a first-generation Drobo.  Great product.  You can swap new drives in and out to easily grow the amount of space available.  But since I was running Linux and trying to use it in a server setup, after a year or so it just wasn't working for me. Then I found the breakthrough of a lifetime: the ZFS file system.  It makes traditional hardware RAID look like a joke for ease of setup, especially for what I needed.

If you haven't heard of ZFS, watch this great video I found that explains it all well and got me up and going in no time at http://blogs.oracle.com/video/entry/becoming_a_zfs_ninja.  But when it comes to ZFS, you're limited in your choice of operating systems. ZFS is a technology from what was then Sun Microsystems Solaris, now Oracle Solaris.  When I first started using it, I started with OpenSolaris as the operating system.  A great operating system with a great community.  Since then, I have migrated to Oracle Solaris 11 Express.

Setting up a ZFS file system was really easy.  What I needed was to have a pair of drives act as a mirror so that I would have a complete copy of the files on each drive.  As an example, here is how easy it was.

1. First, you’ll need to figure out what Solaris calls your drive. You can find this out by using the format command.

root@nas:~# format
Searching for disks...

AVAILABLE DISK SELECTIONS:
       0. c11d0 < cyl 19454 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@11/ide@0/cmdk@0,0
       1. c11d1 <ST320082-         3ND2FLA-0001-186.31GB>
          /pci@0,0/pci-ide@11/ide@0/cmdk@1,0
       2. c12d1 <WDC WD20-  WD-WMAZA339211-0001-1.82TB>
          /pci@0,0/pci-ide@11/ide@1/cmdk@1,0
       3. c13d0 <ST320082-         5ND0KM1-0001-186.31GB>
          /pci@0,0/pci-ide@14,1/ide@0/cmdk@0,0
       4. c13d1 <WDC WD20-  WD-WMAZA320422-0001-1.82TB>
          /pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0

2. Look at the output and you will see, at the start of each entry, an ID like c13d1. This is the name to remember.  So to create my mirror with disks c13d1 and c12d1, I did the following.

zpool create files mirror c13d1 c12d1

That's it.  A new ZFS pool called 'files' was created using the two disks in a mirrored setup.  It also automatically mounted the pool as /files in the file system, already "formatted" as a ZFS file system.  How easy was that?
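
If you want to double-check the result, the usual status commands will show the new pool and its mirror layout:

zpool status files
zpool list files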

Now, the other part of ZFS that I like, but which is hard to explain, is the ZFS dataset.  It's kind of like a partition, but not bound to a particular size; still, it's easiest to think of it like a partition.  The reason I wanted this was to keep track of the amount of space certain files were taking up. So within my 'files' zpool, I created several datasets, such as one for photos, another for videos, another for general files, etc.  Each dataset within a zpool can take up as much disk space as it needs, but shares the space with the other datasets in the zpool.

So to create one of these datasets, as an example, I will create a dataset for my videos under the files zpool.

zfs create files/video

Now I have a new dataset within 'files' for videos.  What's really interesting is that you can nest these datasets; there's a small example of that further down. So, as I mentioned, I created several datasets for my different data.

root@nas:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
files             1.09T   715G    34K  /files
files/Audio       3.66G   715G  3.66G  /files/Audio
files/Pictures    23.1G   715G  23.1G  /files/Pictures
files/files       34.8G   715G  34.8G  /files/files
files/gallery     1.92G   715G  1.92G  /files/gallery
files/home        28.1G   715G  28.1G  /files/home
files/mail        3.26G   715G  3.26G  /files/mail
files/video       865G    715G   865G  /files/video

Looking at the output, you will notice that the AVAIL space is identical for each dataset.  This is because they are all sharing the space within the zpool 'files'.  So they look like partitions but aren't fixed to a size.  And now I can see that my photos are taking up 23.1 GB of space.
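
And about the nesting I mentioned earlier: datasets can live inside other datasets, and each level shows up as its own directory under the pool's mount point. A quick sketch (files/video/movies is a made-up name, not one of my actual datasets):

zfs create files/video/movies

That would get mounted at /files/video/movies by default and would show up in zfs list right under files/video.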

Well, this is the first post of my adventures with ZFS.  If you haven't had a chance, watch the video I linked to above to see how fun and easy it is to use.