For my own memory:
- sixdimensionalarray/esxidown: A shell script to shutdown VMware ESXi host servers
- Spearfoot/utility-scripts-for-freenas-and-vmware-esxi: All-In-One utility scripts for FreeNAS and VMware ESXi
- When supported by the guest OS in your VM, always use the paravirtualised adapters: VMXNET3 for network and PVSCSI for disk (see the first sketch after this list):
- [Archive.is] My Dream System (I think) | FreeNAS Community
- via: [Archive.is] BUILD – ESXi Home Server + FreeNAS | FreeNAS Community, which has very interesting scripting posts pointing to the above GitHub repositories on pages [A]2/[A]3/[A]4.
- “If you plan to use the storage for ESXi VMs, your only viable option is mirrors. So either go RAIDZ2 and don’t use the storage for VMs, or go with mirrors and pay the 50% penalty for redundancy.” (both layouts are sketched after this list)
- FreeNAS on ESXi:
- [WayBack] FreeNAS vs NAS4Free – FreeNAS – Open Source Storage Operating System
- [WayBack] ZFS/ESXi All-In-One, Part 2; [WayBack] ZFS/ESXi All-In-One, Part 1
- [WayBack] VM on ZFS in ESXi? : homelab
- [WayBack] Best Hard Drives for ZFS Server (Updated 2017) | b3n.org
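As a quick check for the VMXNET3/PVSCSI advice above, a minimal sketch that inspects a VM’s .vmx file for the virtual adapter types; the datastore path and VM name are assumptions, adjust them to your environment.

```sh
# Minimal sketch: verify a VM uses the paravirtualised adapters (assumed .vmx path).
# Run in the ESXi shell; expected values when the guest OS supports them:
#   ethernet0.virtualDev = "vmxnet3"
#   scsi0.virtualDev     = "pvscsi"
grep -E '^(ethernet0|scsi0)\.virtualDev' /vmfs/volumes/datastore1/myvm/myvm.vmx
```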
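And for the mirrors-versus-RAIDZ2 quote above, a minimal sketch of the two pool layouts; pool and disk names are made up.

```sh
# Striped mirror vdevs: better random IOPS for VM storage,
# at the cost of 50% of raw capacity going to redundancy.
zpool create tank mirror da0 da1 mirror da2 da3

# RAIDZ2: better space efficiency, but poor random IOPS,
# so less suitable as backing storage for ESXi VMs.
zpool create tank raidz2 da0 da1 da2 da3
```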
ZFS, dedupe and RAM:
- [WayBack] Het grote ZFS topic (The big ZFS topic) – Opslagtechnologie (Storage technology) – GoT
- [WayBack] Reddit – Why are FreeNAS vs NAS4Free RAM Requirements for ZFS different? : DataHoarder
- [WayBack] ZFS dedupe (again): Is memory usage dependent on physical (deduped, compressed) data stored or on logical used? – Super User
- [WayBack] reddit – ZFS dedup on a pure backup server — RAM requirements considering l2arc : zfs
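Related to the RAM-usage questions above, a minimal sketch of how I would estimate dedup memory needs before enabling it; `zdb -S` simulates dedup on an existing pool without changing data, and the ~320 bytes per DDT entry figure is the commonly quoted rule of thumb, so treat the numbers (pool name, data size, block size) as assumptions.

```sh
# Simulate deduplication on an existing pool and print a DDT histogram;
# the last line shows the estimated dedup ratio.
zdb -S tank

# Rule-of-thumb RAM estimate: ~320 bytes of dedup table per block.
# Example: 10 TiB of data at an average 64 KiB block size (assumed numbers):
echo "$(( 10 * 1024 * 1024 * 1024 / 64 * 320 / 1024 / 1024 / 1024 )) GiB of DDT"   # -> 50 GiB
```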
ZFS, FreeBSD, ZoL (ZFS on Linux) and SSDs:
- Via [WayBack] Solved – ZFS with only one ssd | The FreeBSD Forums
- [WayBack] How I Learned to Stop Worrying and Love RAIDZ | Delphix (backed with plenty of tables and graphs)
- TL;DR: Choose a RAID-Z stripe width based on your IOPS needs and the amount of space you are willing to devote to parity information.
- Guidance on a choice between:
- best performance on random IOPS
- best reliability
- best space efficiency
- A misunderstanding of this overhead has caused some people to recommend using “(2^n)+p” disks, where p is the number of parity “disks” (i.e. 2 for RAIDZ-2), and n is an integer. These people would claim that for example, a 9-wide (2^3+1) RAIDZ1 is better than 8-wide or 10-wide. This is not generally true. The primary flaw with this recommendation is that it assumes that you are using small blocks whose size is a power of 2. While some workloads (e.g. databases) do use 4KB or 8KB logical block sizes (i.e. recordsize=4K or 8K), these workloads benefit greatly from compression. At Delphix, we store Oracle, MS SQL Server, and PostgreSQL databases with LZ4 compression and typically see a 2-3x compression ratio. This compression is more beneficial than any RAID-Z sizing. Due to compression, the physical (allocated) block sizes are not powers of two; they are odd sizes like 3.5KB or 6KB. This means that we cannot rely on any exact fit of (compressed) block size to the RAID-Z group width.
- If you are using RAID-Z with 512-byte sector devices with recordsize=4K or 8K and compression=off (but you probably want compression=lz4): use at least 5 disks with RAIDZ1; use at least 6 disks with RAIDZ2; and use at least 11 disks with RAIDZ3.
- To summarize: Use RAID-Z. Not too wide. Enable compression.
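A minimal sketch of that “enable compression” advice; the dataset name and record size are made up, and `compressratio` is how you would later check what LZ4 actually buys you on your data.

```sh
# Enable LZ4 compression on a dataset (only affects newly written data):
zfs set compression=lz4 tank/db

# For an 8 KiB database workload, match the record size to the DB block size:
zfs set recordsize=8K tank/db

# Check the achieved compression ratio later:
zfs get compressratio tank/db
```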
- (Unarchived) FreeNAS All SSDs? – Hardware / Build a PC – Level1Techs Forums
- [WayBack] ZFS on all-sdd storage | iXsystems Community
I wouldn’t worry so much about the cost of the drives if you have to replace them in a few years. They’re constantly getting bigger and cheaper. If you really need to replace them in 3 years it’s not going to be the end of the world. Just think, a 256GB SSD can be purchased for about $100 today and 3 years ago the same drives were like $400+. To boot, they are faster than they were 3 years ago.
It’s quite possible that by the time you need to be worried about buying replacement drives for your pool you’ll be able to buy a single drive that can hold 1/2 your pool’s data for $100.
Don’t fret it. Buy the SSDs and be happy. Tell your boss you did the analysis and all is well. Just don’t buy those TLC drives. Those seem very scary for ZFS IMO.
…
There are some companies that have forked ZFS and set it up as you describe (separate vdevs for metadata using high-endurance SLC NAND) but there’s nothing like that in OpenZFS at the moment.
- [WayBack] ZFS with SSDs: Am I asking for a headache in the near future? | Proxmox Support Forum
- [WayBack] SSD Over-provisioning using hdparm – Thomas-Krenn-Wiki
- [WayBack] Optimize SSD Performance – Thomas-Krenn-Wiki
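For the hdparm over-provisioning link: a minimal sketch of what the Thomas-Krenn approach boils down to, i.e. shrinking the drive’s visible capacity via a Host Protected Area so the SSD controller has extra spare area. The device name, native sector count and the 10% reservation are assumptions; changing the HPA is destructive to anything stored on the reserved sectors, so double-check before running it.

```sh
# Show the current and native maximum sector count of the drive:
hdparm -N /dev/sdX

# Reserve ~10% of a drive with (for example) 1000215216 native sectors
# by limiting the visible area to 90% of that (assumed numbers):
hdparm -Np900193694 --yes-i-know-what-i-am-doing /dev/sdX
```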
openSUSE related:
- [WayBack] How To Install ZFS On Linux
- [WayBack] openSUSE software: ZFS packages
- [WayBack] Getting Started · zfsonlinux/zfs Wiki · GitHub
- [WayBack]
I’ve been using it on Leap 42.2 for a while with no problems. Generally I’d guess it should work fine on Leap since it uses a stable LTS kernel.
On Tumbleweed you’d have problems if the kernel updates before the zfs on linux project has released an update version supporting that kernel. So tumbleweed (or any rolling distribution) + zfs certainly seems rather flaky to me.
…
What appeals to me is the ZFS scrub feature. Nothing else really seems to guard against bit-rot / corruption quite like this.
…
- [WayBack] LEAP 15.0 So why every kernel update messes ZFS?
- [WayBack]
It’s not compatible with the new kernel yet. https://github.com/zfsonlinux/zfs/releases
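What the “every kernel update messes ZFS” issue usually comes down to is the out-of-tree kernel module not having been rebuilt for (or not yet supporting) the new kernel. A minimal sketch of what I would check after a kernel update, assuming a DKMS-based zfs package; the pool name is an assumption.

```sh
# Check whether the zfs module has been built for the running kernel:
uname -r
dkms status

# If it is missing, rebuild all DKMS modules for the current kernel:
dkms autoinstall

# Then load the module and re-import the pool:
modprobe zfs
zpool import tank
```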
Samba/CIFS related:
- [WayBack] Samba · zfsonlinux/zfs-auto-snapshot Wiki · GitHub
- [WayBack]
- [WayBack] The Grey Blog: Accessing ZFS Snapshots From Windows With “Previous Versions” (Solaris)
- [WayBack] How to access ZFS snapshots over SMB | iXsystems Community
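The “Previous Versions” trick from the links above is Samba’s shadow_copy2 VFS module pointed at the `.zfs/snapshot` directory. A minimal sketch of the share definition, assuming snapshots are named like `snap-2019-01-01-0000`; the share name and path are made up, and `shadow:format` has to match whatever naming scheme your snapshot tool (e.g. zfs-auto-snapshot) actually uses.

```sh
# Append an assumed share definition to smb.conf; adjust the path, share name
# and shadow:format to your snapshot naming scheme before using it.
cat >> /etc/samba/smb.conf <<'EOF'
[tank]
   path = /tank/data
   vfs objects = shadow_copy2
   shadow:snapdir = .zfs/snapshot
   shadow:sort = desc
   shadow:format = snap-%Y-%m-%d-%H%M
   shadow:localtime = yes
EOF

# Reload Samba so the new share is picked up:
smbcontrol all reload-config
```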
–jeroen