Of late, my NAS, which has admittedly gone through so many upgrades it's ridiculous, has started getting really squirrelly. The last clean install was right around the release of OMV 3.0, which was Debian Jessie based; it's currently on 4 (Stretch), with 5 looming on the horizon. Additionally, I've sort of broken networking on it. Combine that with OMV 5 moving from XML to SaltStack for everything, plus other undesired changes coming to OMV (like Btrfs as the default and restricted alternate filesystem options), and I've decided to look at a custom setup.

I'm a ZFS user, so ZFS interop was critical to me. That automatically leans me towards Ubuntu: DKMS is no longer a requirement there, it's close to Debian, most of my VMs are already running Ubuntu, and so on. The downside is no latest-and-greatest ZFS. I could do 19.04, but I'd rather just wait until spring for the finalized 20.04 release and rework then; most of the latest-and-greatest ZFS features would require rebuilding the pool anyhow.

The other bugaboo, though, was conversion. Taking something that grew organically and turning it into something that can survive a move is never, ever fun. (Unless you're a masochist, or a homelabber - but I repeat myself.) I'd been slowly, accidentally working my way towards this end anyway, partly because OMV 4 ditched a lot of media management plugins in favor of using its (mind you, technically unofficial) Docker plugin to manage those services as containers. The unofficial part is fair, since all of those media plugins were unofficial as well.

So I had my media management stack running in Docker, along with a couple of Pi-holes for experimenting, a UniFi Controller for the APs, and Portainer to play with. All of it was implemented through the OMV Docker plugin, which is to say "using docker run statements", and none of it was easily retrievable at first.

Quite some time ago I came across Assaf Lavie's runlike container, so I was able to get the run commands back out of the running containers, and later I found rekcod by nexdrew, which does much the same thing. That lets me recreate the originals, but it doesn't make them CLEAN. And, of course, I'd heard about docker-compose, which seemed ideal. Looking around, I found composerize.com, a simple website that converts a docker run command into a docker-compose file. Sweet, this should be quick!
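For anyone wanting to do the same, this is roughly what the extraction step looks like ("sonarr" here is just a stand-in for whichever container you're converting, not necessarily my stack):

```bash
# Reconstruct the original docker run command from a running container.
# runlike runs as a container itself and inspects via the Docker socket.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    assaflavie/runlike sonarr

# rekcod does the same job from npm, if you'd rather not pull an image.
npx rekcod sonarr
```

Pasting the resulting docker run line into composerize.com then spits out the equivalent docker-compose service definition.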

24 hours later...

What's emerged from the rabbit hole I fell down is a single document that, soup to nuts, will set up all of my Docker containers, create the networks they live on, create Docker volumes out of existing ZFS datasets (using mount --bind trickery), and stand the entire media stack up with one command. I'm currently working on the management containers, though none of those are actually critical except for a single Pi-hole.
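The volume trick deserves a quick sketch. Docker's local volume driver can do the bind mount for you, so a named volume can point straight at an existing ZFS dataset (the dataset path and volume name here are placeholders, not my actual layout):

```bash
# Create a named Docker volume that's really a bind mount onto an
# existing ZFS dataset. The local driver's type=none/o=bind options
# are the same trick as a manual mount --bind.
docker volume create \
    --driver local \
    --opt type=none \
    --opt o=bind \
    --opt device=/tank/media/tv \
    media_tv
```

The same options go under driver_opts in the compose file's top-level volumes: section, which is how one docker-compose file can stand everything up in one shot.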

I'm decidedly not done yet: I've discovered .env files, which will let me set strings and other variables in one place (incredibly useful for my config folders, among other things). I've had to force myself to stop with some of this; the last stretch was me figuring out how I want to set up Traefik to run everything in place of the custom, highly janky, and exceptionally difficult-to-duplicate HAProxy I have running on my pfSense box. I plan to work on it, but it's not required for reinstalling the OS.
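As a sketch of the .env idea (the variable names are just ones I'd pick, nothing docker-compose mandates): the file sits next to docker-compose.yml, and Compose substitutes its values into ${VAR} references automatically.

```bash
# .env lives in the same directory as docker-compose.yml; Compose reads
# it automatically. Variable names here are illustrative.
cat > .env <<'EOF'
CONFIG_ROOT=/tank/appdata
MEDIA_ROOT=/tank/media
TZ=America/New_York
PUID=1000
PGID=1000
EOF
```

With that in place, a service's volumes line can read ${CONFIG_ROOT}/sonarr:/config instead of a hard-coded path, which is exactly what I want for the config folders.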

Oh, yeah, the OS! I grabbed a spare Supermicro board I had kicking around, set it up on my highly technical test bench (a cardboard box on top of an open 1U chassis so I can borrow its power connections), hooked up a spare 240GB SSD, nuked it, and installed Ubuntu 18.04. Then I installed ZFS, Samba, NFS, Webmin (not the vulnerable version, thanks), Docker, Docker Compose, and that's about it. Testing is limited, but the docker-compose file is on there, so once that SSD is mounted in the existing server, it should just be docker-compose up and we're done.
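For the record, the package side of that install was roughly the following (standard Ubuntu 18.04 package names; Webmin comes from its own repo rather than the Ubuntu archive, and "tank" is a placeholder pool name):

```bash
# ZFS, file sharing, and the container runtime, all from the Ubuntu repos.
sudo apt update
sudo apt install -y zfsutils-linux samba nfs-kernel-server \
    docker.io docker-compose

# Once the SSD is in the real server with the data disks attached:
# import the existing pool, then stand the whole stack up.
sudo zpool import tank
docker-compose up -d
```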

For those wondering what hardware this is going on, it's my existing NAS: a Supermicro CSE-836 chassis with an expander backplane, and a Supermicro X9DBL-3F motherboard with 2x E5-2420v2 CPUs and 64GB of ECC RDIMMs (2x16GB, 4x8GB). The chassis bays hold 4x 10TB Seagate IronWolf Pros, 4x 8TB shucked WD Easystores (functionally WD Reds, though only one of them has an actual Red label), and 4x 5TB Seagate 5900RPM desktop drives. Each set of four is a RAIDZ1 vdev, and all three vdevs are in the same pool. There's also a Dell H200 connected to an HP SAS Expander with a single external port feeding a JBOD chassis that I'm pretty sure is SAS1, holding a random assortment of 2TB disks I'm not actually using at the moment.
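If you were building that pool from scratch rather than growing it a vdev at a time like I did, the layout would look something like this (pool name and device paths are placeholders, not my actual disk IDs):

```bash
# Three four-disk RAIDZ1 vdevs in a single pool. In reality the pool
# grew one vdev at a time via zpool add; device names are illustrative.
sudo zpool create tank \
    raidz1 /dev/disk/by-id/ata-10TB-{1,2,3,4} \
    raidz1 /dev/disk/by-id/ata-8TB-{1,2,3,4} \
    raidz1 /dev/disk/by-id/ata-5TB-{1,2,3,4}
```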

Plenty more to come on this - after all, I need to install the new SSD into the hardware it's going to live in - but I wanted to get this documented now before I forget again. (In fact, I took a break from that to... set up an entire public blog for the sole purpose of documenting this move, among others. DigitalOcean + one-click images made that painless though.)