Setup RawDR Active/Active HA Storage with ZFS and NFS Shared Storage
RawDR live HA Failover and live move of a Zpool between 2 physical servers
Setup RawDR HA Storage on free Citrix XenServer on 2 Hetzner Dedicated PX92
RawDR HA behavior caused by hardware reset of a Hetzner Server running Citrix XenServer
RawDR Max Performance Hyper-Converged Setup on 2 Hetzner PX92 with free Citrix XenServer
Raw + D + R [rɔːdia:(r)]
We are specialists in Citrix technologies, therefore we are constantly in touch with all directly and indirectly dependent technologies such as storage, network, security, databases, etc. We have a deep understanding of the VDI-related data flow. So RawDR was built from the perspective of a VDI architect, not from the perspective of a storage architect, as the two have fundamentally different needs.
You may either ask your IT partner for commercial support or find a partner for commercial support on our partner website, which will be released on 31 January 2019. In the meantime, you and your IT partner may also contact us directly with technical questions; afterwards, technical support will only be available through our partners.
An enterprise storage solution must ensure that data is stored on non-volatile storage, which normally means HDDs, SSDs or NVMe modules. All-flash solutions use only SSDs or NVMe modules with some RAID protection. This flash memory must be enterprise-capable (meaning data consistency, error correction, power-loss protection) and costs around 5-10x more than normal desktop/workstation flash memory. A hybrid solution combines the strengths of HDDs and cheap flash memory: HDDs deliver good sequential write speeds (100-250 MB/s, while horribly slow at 4k random writes at <2 MB/s) with high capacities, and a read cache does not need to be enterprise-ready or protected against power loss. The ZFS write algorithm for the cache implicitly preserves the module and delivers very high endurance. Data quality is always protected by the checksumming of ZFS. So RawDR delivers the read performance of the RAM and NVMe modules (each delivering 100,000-200,000 random 4k read IOPS) and write performance at HDD sequential write level, while taking advantage of the high capacities and low costs of HDDs.
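The hybrid layout described above maps directly onto standard ZFS pool topology. As a minimal sketch of the underlying concept (RawDR sets this up for you; the pool and device names below are placeholders, not RawDR's actual configuration):

```shell
# Illustrative only: a hybrid pool with an HDD mirror for capacity,
# an NVMe read cache (L2ARC) and a separate log device for sync writes.
# Device names (sdb, sdc, nvme0n1, nvme1n1) are placeholders.
zpool create tank mirror /dev/sdb /dev/sdc \
      cache /dev/nvme0n1 \
      log /dev/nvme1n1

# The read cache needs no redundancy: ZFS verifies every block it
# returns against its checksum, so a failing cache module is detected.
zpool status tank
```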
While ZFS is capable of online deduplication and you could enable it at dataset level within RawDR, it costs a lot of memory and write performance. So, in general, we do not recommend using deduplication for virtual machines with RawDR. If this disappoints you, please keep in mind that most space is saved by thin provisioning (also used by RawDR). In combination with LZ4 compression you get good space savings. Deduplication works great in test labs and presentations, but you may be disappointed by the dedup savings after one year of a real workload. Dedup of zero blocks can also be achieved by compression. Regarding dedup, you can see a big difference between installing the VDIs by cloning an image and installing them via deployment solutions (e.g. SCCM's Task Sequence).
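Compression is enabled per dataset with standard ZFS commands, and ZFS also reports the ratio actually achieved. A short sketch (the dataset name `tank/vms` is an example, not a RawDR default):

```shell
# Enable LZ4 compression on a dataset; runs of zeros compress to almost
# nothing, which covers much of what deduplication would otherwise save.
zfs set compression=lz4 tank/vms

# Check the achieved savings after some real workload has run.
zfs get compressratio tank/vms
```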
The peak performance of any storage solution that has not reached its limits depends only on latency. While hyper-converged solutions suggest minimal latency, because the storage controllers reside on the same physical hardware as the VMs, the latency can be significantly higher than that of a storage system located several kilometers away. In most cases the software switches within the hypervisor increase the latency significantly. You may verify this thesis by pinging from one VM to another VM, both sitting on the same hardware, and comparing the result with a ping between two physical servers. To sum it up, you should always keep an eye on the latency.
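The ping comparison above can be done in a minute (the addresses are placeholders for your own hosts):

```shell
# Run inside a VM: round trip to another VM on the SAME physical host,
# which passes through the hypervisor's software switch.
ping -c 100 -q 192.0.2.10

# Run on a physical server: round trip to a second physical server.
# Compare the average (avg) round-trip times of the two runs.
ping -c 100 -q 192.0.2.20
```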
The biggest problem is measuring performance. Almost every storage benchmark tool can only test local hardware and is not designed for virtual infrastructures. Most of them write one large file on the filesystem, against which all subsequent tests then run. This is an I/O pattern which will never occur within virtual environments, so such tests cannot give any meaningful feedback.
Beyond that, what are you going to benchmark? The caches, or do you want to completely bypass the caches? Both options are unrealistic. Do you want to benchmark peaks, or do you want to benchmark a steady load?
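If you do benchmark, a mixed, parallel, steady-state pattern comes at least closer to a VDI workload than one large sequential file. A hedged sketch using the common `fio` tool (the path, runtime, block size and read/write mix are example parameters, not a RawDR recommendation):

```shell
# A steady 4k random read/write mix across several parallel jobs,
# bypassing the page cache (--direct=1). /mnt/sr1 is a placeholder
# for a mounted storage repository.
fio --name=vdi-mix --directory=/mnt/sr1 --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k \
    --ioengine=libaio --iodepth=16 --numjobs=4 \
    --direct=1 --time_based --runtime=120 --group_reporting
```

Even then, interpret the numbers with care: a two-minute run tells you little about cache behavior after a year of real workload.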
To sum it up: RawDR gets the maximum performance out of your hardware and gives you the maximum protection (checksumming, incremental snapshotting). But a local M.2 NVMe module may outperform most enterprise solutions, because it does not care about data protection.
A Reaper VM takes the vDisks from the local storage of the hypervisor and delivers them via iSCSI to the Grinders, either as single disks or as striped disks.
A RawDR Grinder runs one or multiple Zpools. A Zpool mirrors the storage delivered by the Reapers or by other iSCSI targets (e.g. LUNs of your existing storage solutions). Within this Zpool you can create ZFS datasets, which are then exported via NFS to the hypervisors' Storage Repositories or Datastores. Each ZFS dataset can be snapshotted separately, and you can set different attributes on each of them (e.g. quotas, compression, etc.). Each Zpool has a separate HA IP, so the RawDR Inspector can move the Zpool or, in the case of a Grinder failure, fail the Zpool over to another working Grinder. The RawDR Grinder is also responsible for the read caching and mirrors the volatile data that has not yet been saved on the Reapers.
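Conceptually, what a Grinder does corresponds to standard ZFS operations. A minimal sketch for illustration only (RawDR automates all of these steps; the pool name, dataset name, iSCSI device paths and sizes are placeholders):

```shell
# Mirror two iSCSI-backed disks (one per Reaper) into a Zpool:
zpool create vmpool mirror \
      /dev/disk/by-path/ip-10.0.0.11:3260-iscsi-example-lun-0 \
      /dev/disk/by-path/ip-10.0.0.12:3260-iscsi-example-lun-0

# One dataset per Storage Repository, each with its own attributes:
zfs create vmpool/sr1
zfs set compression=lz4 vmpool/sr1
zfs set quota=500G vmpool/sr1

# Export the dataset via NFS and snapshot it independently:
zfs set sharenfs=on vmpool/sr1
zfs snapshot vmpool/sr1@nightly
```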
The RawDR Inspector is responsible for the availability of a Zpool on a Grinder and takes steps to fail over to another Grinder in the case of an outage. To avoid data corruption or a split brain, the Inspector may fence the failed Grinder from the RawDR storage layer.
The Inspector also keeps a copy of any volatile data of the Zpools that has not yet been saved to the Reapers.
For the RawDR Reapers we recommend 2 vCPUs and 512 MB RAM. For the RawDR Inspector we recommend 2 vCPUs and 4 GB RAM. For the Grinders it depends on the storage mission: for dedicated VMs, we recommend 4 vCPUs and all the RAM you can spare, at least 8 GB. If you do not have much RAM, you should consider buying NVMe modules and giving them to the Grinders.