I've put these notes on the web for my own reference; I hope they are of use to other people...

Not OpenBSD!

I've been using OpenBSD for years; it's probably the most secure publicly available OS in the world.
But... its support for new features lags behind other OSs. So for ZFS support I moved to FreeBSD.

The background...

I've got quite a large amount of data that I keep on a home server, a mixture of music/videos/photos. I shoot about 2TB of data per year, and try to aim for about 18 months between server upgrades.
I've used hardware RAID solutions before, but really don't like all my data being tied to one particular brand/model of RAID board.
So I looked at the various software RAID solutions. I experimented with a few, but ZFS seemed to do everything I wanted and more. The FreeBSD implementation looked stable, and FreeBSD and OpenBSD are pretty similar to maintain; in fact I used FreeBSD before I moved to OpenBSD. So I decided to try ZFS on FreeBSD.
I used a spare machine and did some experiments. It was _much_ easier than creating the software RAID on OpenBSD!

Phase 1 = Dec-2012:

I used an ASUS C60M1-I motherboard, with 8GB of RAM.
It's got 6 SATA-III (6Gbit/sec) ports, to which I connected:
0: 1.5TB boot drive
1..5: 3TB data drives (In RAID-Z1, so 15TB of discs = 12TB storage)

I booted the FreeBSD install from a USB stick and installed it from there, so I didn't need a CD-ROM or anything else like that.
I used FreeBSD 9.0, which needed some patches to support the Realtek 8111F network device, but with those applied, everything came up OK.
I created the ZFS partition on dedicated discs with:
zpool create myraid raidz1 ada1 ada2 ada3 ada4 ada5
(see later about using GPT partitions)

This created a 12TB RAID array, mounted as /myraid. Yes, it's that simple.
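A quick sanity check on the new pool (a minimal sketch; the exact numbers reported will vary):
zpool status myraid   # all five discs should show ONLINE in a single raidz1 vdev
zpool list myraid     # shows the pool's raw size and health
df -h /myraid         # confirms the filesystem is mounted, with the usable (post-parity) size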
I then set up Samba and the shares, copied the data to the new array, and that was it for a while.
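For reference, a share definition along these lines is all Samba needs; a minimal sketch, with example share name, path and user (the config file location depends on the Samba version, e.g. /usr/local/etc/smb4.conf for the Samba 4.x ports):
# minimal smb4.conf sketch (names and paths are examples)
[global]
   workgroup = WORKGROUP
   server string = Home server
# one section per share; this exports a directory on the ZFS pool
[media]
   path = /myraid/media
   read only = no
   valid users = roger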

Phase 2 = Dec-2013:

One year on, and everything was running fine, but I was running low on space. When ZFS has less than 10% free space, it really slows down. I figured it was time to build a bigger array.
Most of the time the dual-core 1GHz CPU was OK, but ZFS likes RAM, and 8GB is OK but probably not enough if things grow much more. (I've since tested 2x8GB sticks, and it does give 16GB usable on the C60M1-I, despite what the spec sheet says.)
I wanted to use more drives, so I looked around for a reasonably priced motherboard/CPU with lots of SATA connectors and several PCI-e slots.

The upgrade

For a reasonable price, I found the Gigabyte F2A88X-D3H. This is a full-size ATX motherboard with 5 PCI-e slots and 2 PCI slots. It has 4 DDR3 memory slots and 8 SATA-III (6Gb/s) ports on the motherboard.
I installed an A4-5300 CPU, with 2x8GB + 2x4GB RAM, giving 24GB usable. When the 45W TDP CPUs come out, I'll consider downgrading to one of them.
Drives:
0: 2TB boot
1..7: Mix of 3TB and 4TB data drives

This time I used GPT partitioning:
gpart create -s gpt ada1
gpart add -t freebsd-zfs -a 4K ada1
{repeat for each drive in the array}
zpool create myraid raidz2 ada1p1 ada2p1 ada3p1 ada4p1 ada5p1 ada6p1 ada7p1

The "-a 4K" forces the partition to be 4K-aligned, which improves performance with newer drives that internally use a 4K sector size.
Note: I later learned that earlier versions of FreeBSD default to 512-byte data blocks, so for performance you ought to set ashift=12 (or bigger) to make sure the block size is a multiple of 4K as well.
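If you're creating a pool from scratch, there are two common ways to force ashift=12 (shown here with the same example device names). On FreeBSD 10.1 and later there is a sysctl:
sysctl vfs.zfs.min_auto_ashift=12   # new vdevs will use at least 4K blocks
On older releases, the usual workaround was the gnop trick:
gnop create -S 4096 ada1p1          # fake a 4K-sector device on top of one partition
zpool create myraid raidz2 ada1p1.nop ada2p1 ada3p1 ada4p1 ada5p1 ada6p1 ada7p1
zpool export myraid
gnop destroy ada1p1.nop             # remove the fake device
zpool import myraid                 # re-imports on the real partition, keeping ashift=12
Since ashift is fixed per-vdev at creation, from the largest sector size among its members, one gnop device is enough.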

At this point:
Server: Gigabyte F2A88X-D3H motherboard with 2x3TB + 5x4TB drives in a RAID-Z2 configuration (5 data + 2 parity), giving 15TB storage (limited to 5x3TB by the smallest drive in the array).
Backup: Asus C60M1-I motherboard with 5x3TB drives in a JBOD configuration (5 data), giving 15TB storage.

Phase 3 = Feb-2014:

I upgraded to the FreeBSD 10.0 release build, and "zpool status" warned about the block size being 512 bytes on drives with 4K sectors. I only found out about the block size issue after I'd created and filled the array, so there wasn't much I could do.
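One quick way to check what a pool actually got (ashift=9 means 512-byte blocks, ashift=12 means 4K):
zdb -C myraid | grep ashift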
Maybe this was a good time to upgrade the last two drives to 4TB. With two new 4TB drives I'd have enough disk space to copy the data off the array, reformat the array (with a 4K block size), then copy the data back.

This is where the full-size ATX motherboard is great. It's got 8x SATA-III on-board, but for the backup/restore I'd need a lot more. With 5 PCI-e and 2 PCI slots, there are lots of options. I used a pair of cheap 2-port SATA-II PCI-e cards and an old 4-port SATA-I PCI card to connect the extra drives. During the backup/restore there were 15 SATA drives connected.
I copied the 13TB to the new 2x4TB drives plus a few other drives I took from another machine. After the copy I verified the MD5 sums of the new copy against the old copy and the backup, resolving the few issues that were found.
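The checksumming can be as simple as this sketch, using FreeBSD's md5 (the paths are examples, and it's the idea rather than my exact commands):
cd /myraid && find . -type f -exec md5 -r {} + | sort -k 2 > /tmp/array.md5
cd /backup && find . -type f -exec md5 -r {} + | sort -k 2 > /tmp/copy.md5
diff /tmp/array.md5 /tmp/copy.md5   # any output is a file that differs, or is missing on one side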
I formatted the array in exactly the same way, but with the FreeBSD 10.0 release build instead of FreeBSD 9.x. With the gpart partitions 4K-aligned, this time it automatically used a 4K block size to create the array.
I then copied all the data back to the array, verified the MD5 sums, did one final "zpool scrub", then removed the backup drives.
I then replaced the 2x3TB drives with 2x4TB (one at a time, letting the array rebuild between each). With all of the drives at 4TB, the array now has 20TB usable space.
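Each swap was the same routine, roughly this (ada1 is an example device name):
gpart create -s gpt ada1        # partition the new 4TB drive, 4K-aligned as before
gpart add -t freebsd-zfs -a 4K ada1
zpool replace myraid ada1p1     # start the resilver onto the new drive
zpool status myraid             # wait for the resilver to finish before the next swap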
Server: Gigabyte F2A88X-D3H motherboard with 7x4TB drives in a RAID-Z2 configuration (5 data + 2 parity), giving 20TB storage.
Backup: Asus C60M1-I motherboard with 6x3TB drives in a JBOD configuration (5 data + 1 data/boot), giving 17.5TB storage.

Phase 4 = Jul-2015:

With the previous server I was seeing some errors reported; about once per month I would get an error, which was usually recovered. After I got an error that couldn't be recovered (so I had to restore a file from the backup server), I figured it was time to look at ECC RAM.
Looking at the logs, there were no disk errors being reported, so I suspected that one of the sticks of RAM was failing. So I searched for motherboards with ECC RAM support and 10+ SATA ports.
I found a board with everything I needed: the ASRock C2750D4I.
The features that I really liked: ECC RAM support and 12 on-board SATA ports.
There is also a quad-core CPU version available, the C2550D4I; it's about £60 less, but it was out of stock when I was ordering.
So I got the C2750D4I, with 4x 8GB ECC DIMMs, and replaced the board in the server.
Problems: During the installation I was copying some data around, so I added a Digifusion PES340A SATA card. It's based on the Marvell 9215 chipset, and seems stable under FreeBSD. It's a PCIe-2.0 x1 card, so the bus peak performance is 500MBytes/sec, but HDDs don't often sustain 125MBytes/sec, so it's not really a problem. Once the data was copied, I moved it all into a new case. I used the Fractal Design Node 804, which has 8 hanging HDD bays plus 2 more mounting points, for a total of 10x 3.5" drives.
I added some extra fans, and removed their joke of a fan controller (it's a switch that selects 5V, 7V or 12V for the fan power).

FreeBSD packages

After installation, I do a "portsnap fetch extract" to get the up-to-date ports list. Then I usually install my standard set of ports, building each from the /usr/ports tree with these commands:
make config-recursive
make X11=off install
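As a concrete example, building one port that way looks like this (the port path is just an example, and directory names change as versions move on):
cd /usr/ports/net/samba36       # example port directory
make config-recursive           # answer all the option dialogs up front
make X11=off install            # then the build and install run unattended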


The Roger Writes series

I research and dabble with lots of things, and figured that if I write my notes up here I can quickly reference them; sometimes they are useful to others too!
Here is what I have so far:





This page was last updated on Thursday, 17-Aug-2023 13:03:14 BST
