ZFS Server Greatness

As I mentioned previously, I like to run a PC as a server in my home to centralize all the files instead of having documents, pictures, videos, etc., spread out on various PCs.  But as time goes on and the amount of files being collected grows, you really start to panic about losing files, especially when you have years of digital photos.  So I went in search of a solution for safe storage.  For a while, I used a first-generation Drobo.  Great product.  You can swap drives in and out to easily grow the amount of space available.  But as I was running Linux and trying to use it in a server setup for a year or so, it just wasn't working for me.  Then I found the breakthrough of a lifetime: the ZFS file system.  It makes traditional hardware RAID look like a joke for ease of setup, especially for what I needed.

If you haven't heard of ZFS, watch this great video I found that explains it all and got me up and going in no time: http://blogs.oracle.com/video/entry/becoming_a_zfs_ninja.  But when it comes to ZFS, you're limited in your choice of operating systems.  ZFS started as a technology in Sun Microsystems' Solaris, now Oracle Solaris.  When I first started using it, I started with OpenSolaris as the operating system.  A great operating system with a great community.  Since then, I have migrated to Oracle Solaris 11 Express.

Setup of a ZFS file system was really easy.  What I needed was a pair of drives acting as a mirror so I would have a complete copy of the files on each drive.  As an example, here is how easy it was.

1. First, you’ll need to figure out what Solaris calls your drive. You can find this out by using the format command.

root@nas:~# format
Searching for disks...

AVAILABLE DISK SELECTIONS:
       0. c11d0 < cyl 19454 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@11/ide@0/cmdk@0,0
       1. c11d1 <ST320082-         3ND2FLA-0001-186.31GB>
          /pci@0,0/pci-ide@11/ide@0/cmdk@1,0
       2. c12d1 <WDC WD20-  WD-WMAZA339211-0001-1.82TB>
          /pci@0,0/pci-ide@11/ide@1/cmdk@1,0
       3. c13d0 <ST320082-         5ND0KM1-0001-186.31GB>
          /pci@0,0/pci-ide@14,1/ide@0/cmdk@0,0
       4. c13d1 <WDC WD20-  WD-WMAZA320422-0001-1.82TB>
          /pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0

2. Look at the output and you will see an ID like c13d1 at the start of each entry. This is the name to remember.  So to create my mirror with disks c13d1 and c12d1, I did the following:

zpool create files mirror c13d1 c12d1

That's it.  A new ZFS pool called 'files' was created using the two disks in a mirrored setup.  The pool is also automatically mounted as /files in the file system and is "formatted" as a ZFS file system.  How easy was that?
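If you want to double-check what you just created, ZFS can report on the pool itself.  These two commands aren't from my original notes, just standard ZFS, and they show the pool's health and its overall capacity:

zpool status files
zpool list files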

Now, the other part of ZFS that I like, but that is hard to explain, is a ZFS dataset.  It's kind of like a partition, but not bound to a particular size; still, it's easiest to think of it like a partition.  The reason I wanted this was to keep track of the amount of space certain files were taking up.  So within my 'files' zpool, I created several datasets, such as one for photos, another for videos, another for general files, etc.  Each dataset within a zpool can take up as much disk space as it needs, but it shares that space with the other datasets in the zpool.

As an example of creating one of these datasets, here is the one I created for my videos under the 'files' zpool.

zfs create files/video

Now I have a new dataset within 'files' for videos.  What's really interesting is that you can nest these datasets; here is a quick sketch of what that could look like.
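These nested names are just hypothetical examples (my own layout is flat, as you'll see in a moment), but each nested dataset would show up in zfs list with its own usage:

zfs create files/video/movies
zfs create files/video/tv

So as I mentioned, I created several datasets for my different kinds of data, and here is what mine actually look like: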

root@nas:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
files             1.09T   715G    34K  /files
files/Audio       3.66G   715G  3.66G  /files/Audio
files/Pictures    23.1G   715G  23.1G  /files/Pictures
files/files       34.8G   715G  34.8G  /files/files
files/gallery     1.92G   715G  1.92G  /files/gallery
files/home        28.1G   715G  28.1G  /files/home
files/mail        3.26G   715G  3.26G  /files/mail
files/video       865G    715G   865G  /files/video

As you can see, the AVAIL space is identical for each dataset.  This is because they are all sharing the space within the zpool 'files'.  So they look like partitions but aren't fixed to a size.  Now I can see that my photos are taking up 23.1 GB of space.
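If you ever do want a dataset to behave more like a fixed-size partition, ZFS lets you put a quota on it.  I haven't needed this myself, so take the 50G figure as a made-up example rather than something from my setup, but the commands are simply:

zfs set quota=50G files/Pictures
zfs get quota files/Pictures

That would cap the Pictures dataset at 50 GB while everything else keeps sharing the rest of the pool.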

Well, this is the first post of my adventures with ZFS.  If you haven't had a chance, watch the video that I linked to above to see how fun and easy it is to use.
