I recently reached a satisfying level of configuration on my new Helios64 ARM NAS, so I think it's time to put it into words, both for my own memory and to share with the world.
If you haven't read the previous part about the hardware, it may well interest you, but it's not required to follow this post.

TL;DR: Debian Bullseye + ZFS + Docker Compose + a bunch of services, and everything runs fine!

OS

The Helios64 being quite recent, support is not yet fully upstreamed in U-Boot or in the kernel.
That means only a small set of distributions currently support this NAS; the official options are Armbian builds of Debian 10 (Buster) and Ubuntu 20.04 (Focal).

For fun, and because I wanted to give it a shot, I tried putting Fedora on it, only to realise at the end of the installation that it was not supported yet. Why the installer booted at all is still a mystery; I didn't take the time to really investigate, but I suspect the bootloader I had flashed beforehand helped a lot.

As I'm really comfortable with Debian, I chose Buster for the first real install and simply followed the official guide, which is easy and clear.

Sadly, at the time there were some difficulties installing ZFS on Buster, so I quickly migrated to Bullseye (the current Debian testing) and got everything running well on it. As it's only for personal use, being a bit bleeding-edge is no problem, and so far this Debian testing installation has been very stable!
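For reference, getting ZFS on Bullseye roughly boils down to enabling the contrib component and pulling the DKMS packages (you also need the headers matching your running kernel so the module can build; on Armbian the headers package is named differently than on stock Debian):

# echo "deb http://deb.debian.org/debian bullseye main contrib" >> /etc/apt/sources.list
# apt update
# apt install zfs-dkms zfsutils-linux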

Storage

As I spoiled in the previous section, the storage runs on ZFS for multiple reasons that are beyond the scope of this article (check out the Internet on this filesystem if you've never heard of it!).

Hardware and redundancy

I'm really far from being an expert on all the HDDs available on the market, so I basically purchased some 4TB Seagate IronWolf drives, mainly because I've seen a few of them running for quite some time at work and they have good reviews on the Internet.

It took me quite some time to think about the redundancy layout, but I finally settled on RAID-Z2: with two disks' worth of parity, a single disk failure still leaves some redundancy, so I can keep using the NAS with confidence while waiting for the replacement disk to arrive. Besides, with five 4TB HDDs that grants me about 12TB of usable space, which is more than enough for my personal use!
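For the record, creating such a pool is a single command. Here is a minimal sketch, assuming the five drives are addressed by their /dev/disk/by-id names (the IDs below are placeholders, and ashift=12 simply matches the drives' 4K sectors):

# zpool create -o ashift=12 storage raidz2 \
    /dev/disk/by-id/ata-ST4000VN008-SERIAL1 \
    /dev/disk/by-id/ata-ST4000VN008-SERIAL2 \
    /dev/disk/by-id/ata-ST4000VN008-SERIAL3 \
    /dev/disk/by-id/ata-ST4000VN008-SERIAL4 \
    /dev/disk/by-id/ata-ST4000VN008-SERIAL5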

Poor man's benchmarks

I knew ZFS has a reputation for eating all the available RAM, and I had already experienced it on other machines, so I was a bit worried about its performance on this 4GB NAS. I therefore quickly ran some poor man's benchmarks with dd to get a rough idea of what to expect:

  • Writing 4GB with 1MB block size
# dd if=/dev/zero of=test bs=1M count=4000 conv=fsync status=progress
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 16.5515 s, 253 MB/s
  • Writing 4GB with 10MB block size
# dd if=/dev/zero of=test bs=10M count=400 conv=fsync
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 15.2559 s, 275 MB/s
  • Reading 4GB
# dd if=test of=/dev/null
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 89.0503 s, 47.1 MB/s
  • Writing 20GB with 1MB block size
# dd if=/dev/zero of=test bs=1M count=20000 conv=fsync
20971520000 bytes (21 GB, 20 GiB) copied, 75.329 s, 278 MB/s
  • Reading 20GB
# dd if=test of=/dev/null
20971520000 bytes (21 GB, 20 GiB) copied, 464.902 s, 45.1 MB/s
  • Copying 20GB
# dd if=test of=test2 conv=fsync
20971520000 bytes (21 GB, 20 GiB) copied, 1378.97 s, 15.2 MB/s

As always with ZFS, the results are a bit surprising, but they are easily explained once you know that ZFS verifies checksums on every read. Most importantly, even if the performance is not the best I've seen, it's largely sufficient to get my home services up and running!
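Speaking of RAM, the ARC can be monitored and, should it ever get out of hand on a 4GB machine, capped at runtime or persistently. A quick sketch, with a purely illustrative 1GiB limit (not a value I actually run):

# grep -E "^(size|c_max)" /proc/spl/kstat/zfs/arcstats
# echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max
# echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs.conf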

Partitioning? Voluming? Datasetting? Filesysteming?

As with most modern filesystems, the notion of partition tends to disappear in favour of things like logical volumes, or, in the case of ZFS, datasets, which are far more flexible and reliable.

As it's fairly easy to roll back, I went to the extreme of making one dataset per logical entity I have to store. This gives me a shitload of "filesystems" (the most common kind of ZFS dataset), but that's not really a problem; on the contrary, it lets me quickly monitor used storage without firing up du or ncdu, and it leaves room for some fine quota tuning in the future if need be.
The icing on the cake is ZFS's built-in NFS server: a simple zfs set sharenfs=on storage/data/tvshows lets Kodi access the TV shows without hassle!
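Creating a dataset, sharing it over NFS and putting a quota on it are all one-liners; a quick example based on my layout (the 1T quota is just an illustration, not a value I actually enforce):

# zfs create storage/data/tvshows
# zfs set sharenfs=on storage/data/tvshows
# zfs set quota=1T storage/data/tvshows
# zfs get used,quota storage/data/tvshows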

# zfs list
NAME                              USED  AVAIL     REFER  MOUNTPOINT
storage                          2.05T  8.38T      170K  /storage
storage/backup                   36.3G  8.38T      170K  /storage/backup
storage/backup/bep               36.3G  8.38T     36.3G  /storage/backup/bep
storage/config                   2.40G  8.38T     3.80M  /storage/config
storage/config/jackett           4.12M  8.38T     4.12M  /storage/config/jackett
storage/config/jellyfin          1.43G  8.38T     1.43G  /storage/config/jellyfin
storage/config/nextcloud          634M  8.38T      634M  /storage/config/nextcloud
storage/config/pihole            44.6M  8.38T     44.6M  /storage/config/pihole
storage/config/radarr             236M  8.38T      236M  /storage/config/radarr
storage/config/sonarr            49.8M  8.38T     49.8M  /storage/config/sonarr
storage/config/swag              24.3M  8.38T     24.3M  /storage/config/swag
storage/config/transmission      2.51M  8.38T     2.51M  /storage/config/transmission
storage/config/wireguard          263K  8.38T      263K  /storage/config/wireguard
storage/data                     2.01T  8.38T      213K  /storage/data
storage/data/downloads           35.7G  8.38T     35.7G  /storage/data/downloads
storage/data/movies               627G  8.38T      627G  /storage/data/movies
storage/data/music                185G  8.38T      185G  /storage/data/music
storage/data/nextcloud            380G  8.38T      380G  /storage/data/nextcloud
storage/data/postgres_nextcloud   157M  8.38T      157M  /storage/data/postgres_nextcloud
storage/data/tvshows              834G  8.38T      834G  /storage/data/tvshows

Deploying some services

A new installation is the perfect time to rethink how every service is deployed. That was especially true as my previous server had been installed back in 2013, when I was still a second-year student.
I've learned a lot since then!

Long story short: the awesome guys at Linuxserver.io maintain Docker images for most of the common services, and even when something is missing, it's very hard to find a project that doesn't ship its own Dockerfile.
With docker-compose being a nice and easy way to manage the containers, I could have a short and consistent configuration in no time.

As the whole configuration is overall pretty boring, here is my docker-compose.yml, which you just have to drop into /storage/config if you reproduced the filesystems mentioned above.

Every service basically runs on its own network, with its own system user, so that each one is at least somewhat isolated from the others. It's pretty basic, but I think it's sufficient for my personal usage (the excerpt below shows the pattern for a single service).
The only really interesting bit concerns the seedbox and the VPN, which will make a fine post of its own but won't be covered in this article.
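To illustrate that pattern, here is a trimmed-down excerpt for a single service, in the spirit of my real file rather than a verbatim copy (the UID/GID, timezone and network name are illustrative):

version: "3"

services:
  radarr:
    image: linuxserver/radarr
    environment:
      - PUID=1001   # dedicated system user for this service
      - PGID=1001
      - TZ=Europe/Paris
    volumes:
      - /storage/config/radarr:/config
      - /storage/data/movies:/movies
      - /storage/data/downloads:/downloads
    networks:
      - radarr      # one network per service
    restart: unless-stopped

networks:
  radarr: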

With all the currently running services (Pi-hole, Nextcloud with its PostgreSQL, SWAG, Radarr, Sonarr, Jackett, WireGuard, and Transmission), I have about 500MB of RAM free, and everything performs quite well.
The load usually sits around 1.2 and sometimes jumps above 2 when playing a movie. Of course, heavy operations in Nextcloud or the media libraries may be slower than on a traditional x86 NAS, but since they remain fairly uncommon and are not part of daily usage, I'm fine with it!

Hosting a Borg repository too

As a side note, it's worth mentioning that the NAS also handles nightly backups for another server, and that the server side of Borg behaves perfectly well, usually running its 45GB archive job in less than 10 minutes.
Here is last night's report, for those who want more precise stats:

Time (start): Fri, 2020-12-11 04:38:56
Time (end):   Fri, 2020-12-11 04:46:54
Duration: 7 minutes 58.36 seconds
Number of files: 204638
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:               45.59 GB             39.67 GB            239.98 MB
All archives:              422.20 GB            366.55 GB             39.10 GB

                       Unique chunks         Total chunks
Chunk index:                  184323              1986320
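For completeness, the NAS side of this needs very little: Borg installed, the dedicated dataset shown earlier (storage/backup/bep), and an SSH key on the client restricted to borg serve. Something along these lines in the backup user's authorized_keys (the user name, host name and key are placeholders):

command="borg serve --restrict-to-path /storage/backup/bep",restrict ssh-ed25519 AAAA...placeholder backup@otherserver

The client then simply targets a repository under ssh://backup@nas/storage/backup/bep.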

Speaking of backups, the Helios64 does not yet have an off-site backup, but rest assured that it's planned and will be detailed in a future blog post.

Conclusion

The Helios64 has been running fine for two weeks now, without noticeable performance issues (Nextcloud taking a few seconds to wake up after idling for a while doesn't count!), and overall I'm super happy with it!

Both the hardware and the software please me very much, and the configuration through a single docker-compose.yml file is so painless that I won't be going back to a classic, bloated system anytime soon.

See you in the next post detailing the VPN and seedbox setup!

Oh, and by the way, the IAE was an awesome event, thank you CIG!