Looking for the silver bullet that’s not there? My (and Datrium) response to the Data Locality debacle
Recently there’s been an exchange between industry luminaries on data reduction, data locality, and data protection. Howard Marks wrote a thoughtful piece <here> that expands on VSAN’s approach to data locality. Josh Odgers, the brilliant blogger now at Nutanix, responded with some notes <here> that, in my opinion, struggled to close the issue. Both seem to be reaching for a silver bullet that isn’t there.
The objective of this blog post is to demonstrate that Data Locality is essential for enhancing application performance, and to explain how the data locality and management complexity dilemmas can be solved seamlessly while still delivering high performance and data reduction benefits.
When it comes to Performance, just get out of the way of Intel.
If you can let server hardware serve applications as fast as possible while remaining stateless, performance will be as good as it can get, and what’s left is making administration simple.
When the Datrium team set out to build the best converged system possible, we considered many of these issues. Beyond all of them, one overriding concern was simplicity: figuring out which features to enable, and when, is a complete waste of time. If a feature is toggled per VM or per some group of objects, the complexity becomes truly unmanageable. It is simply not possible to track thousands of VMs and work out what needs to be enabled and when. So: All features must be On all the time.
Incidentally, this is one area where modern arrays like Pure Storage nailed it, while most HCI vendors still have checkboxes galore. If you can also add capacity or bandwidth/IOPS at will, then you will have solved a real problem at scale.
via Andre Leibovici at myvirtualcloud.net