A Perfect Storm for Storage
Perhaps one of the greatest hidden benefits of last year's VMware "monster VM" initiatives is the framework they create for virtual storage appliances.
Think about it: how would a normal user actually utilize a VM with 100GB of vRAM and a terabyte of storage? Maybe some financial services providers or banks have a massive Oracle RAC that needs half a terabyte of vRAM and a fifth of a petabyte of storage, but the other 99.95% of us just don't need that... unless we use it for virtual storage.
In the journey to the Software-Defined Datacenter (SDDC), virtual storage is one of the last and most critical frontiers. VMs capped at a 2TB VMDK make it very hard to serve up a lot of storage, and considering how many 80GB VMs you can fit on 2TB (roughly twenty-five), you may wonder whether it is even a good idea. And how do you protect that storage? Can you use local disk, SAN, or both? The recipe for storage has always been a black-box secret guarded closely by storage vendors.
Now, consider the current state of the industry:
- SSD drives
- Flash I/O accelerators
- 10gig Ethernet
These technologies have created a perfect storm for storage appliances. The technology that was once a barrier to entering a highly available virtual infrastructure is now accessible to all at a relatively low cost! Or is it? One of the oddest side effects of the migration to virtual infrastructure is the amnesia it has created about the fundamentals of computer technology. Although virtualization has produced significant performance improvements, it does not repeal what I like to call "core computer technology physics." For example, if you load all of your programs and data on a workstation's single drive, the system slows to a crawl whenever the OS needs to fetch data from that disk. This is called I/O contention, and it is most commonly observed by monitoring disk queue length and disk busy time.
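If you want to watch those two counters yourself, here is a minimal sketch. It assumes a Linux box and reads the raw counters in /proc/diskstats that sit behind the same metrics Windows perfmon exposes as "% Disk Time" and "Avg. Disk Queue Length"; the device name and sample interval are just illustrative defaults.

```python
#!/usr/bin/env python3
"""Sample disk busy time and average queue length on Linux."""
import sys
import time

def read_counters(dev: str):
    """Return (ms spent doing I/O, weighted ms doing I/O) for a device."""
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] == dev:
                # field 12: time spent doing I/Os (ms)
                # field 13: weighted time spent doing I/Os (ms)
                return int(parts[12]), int(parts[13])
    raise SystemExit(f"device {dev!r} not found in /proc/diskstats")

dev = sys.argv[1] if len(sys.argv) > 1 else "sda"
interval = 5.0  # seconds between samples

io_ms_1, wio_ms_1 = read_counters(dev)
time.sleep(interval)
io_ms_2, wio_ms_2 = read_counters(dev)

busy_pct = (io_ms_2 - io_ms_1) / (interval * 1000) * 100
avg_queue = (wio_ms_2 - wio_ms_1) / (interval * 1000)
print(f"{dev}: {busy_pct:.1f}% busy, avg queue length {avg_queue:.2f}")
```

A queue length that stays well above the number of spindles behind the device is the classic signature of the contention described above.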
Granted, SSD technology can minimize this, but not everything can run on SSDs without spending a lot! IT professionals seem to have forgotten this, and routinely configure I/O bottlenecks right into the virtual environment. And we still need to protect these large VMs, especially if they are running on local storage!
The good news is that VMware quietly introduced some key features in vSphere 5.x:
- local storage replication
- cache drives
These features allow you to oversubscribe an ESXi host's memory and cache the overflow to an SSD or an I/O acceleration device. Now take the next logical step: what if we used this ESXi host to run a virtual storage VM? Better yet, let's put some hybrid drives and SSDs in it as well. And just for fun, how about a flash acceleration device? Well, maybe not the flash device just yet. There we are going from milliseconds to microseconds, and even 10gig Ethernet may not deliver that kind of response time end to end. So for now, we will save those I/O cards for our next blog...
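To see why the network, not the flash card, becomes the gatekeeper, here is a back-of-the-envelope comparison. The latency figures are typical published ballparks, not measurements from this build; swap in your own numbers.

```python
# Back-of-the-envelope latency comparison (typical ballpark figures only).
LATENCY_US = {
    "7.2K RPM disk seek":        8000.0,  # ~8 ms
    "SATA SSD read":              100.0,  # ~100 us
    "PCIe flash card read":        25.0,  # ~25 us
    "10GbE round trip (iSCSI)":   150.0,  # ~100-300 us with a software stack
}

flash = LATENCY_US["PCIe flash card read"]
network = LATENCY_US["10GbE round trip (iSCSI)"]

# Once the flash device sits behind the network, the wire dominates:
print(f"local flash read:     {flash:>7.1f} us")
print(f"same read over 10GbE: {flash + network:>7.1f} us "
      f"(network adds ~{network / flash:.0f}x the device latency)")
```

The flash card buys you microseconds, but clients reaching it over 10GbE still pay the wire's price, which is why it can wait for a later post.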
But it still needs something... something to intelligently cache that high-use data on the SSDs. But how much SSD cache do I need? This is where that old "core computer technology physics" issue pops up again. Are we back to expensive black-box storage vendors?
Maybe not. What if we gave that ESX host hot-swap drive bays, so we could simply add SSDs as needed? And what if we used an open-source storage technology like ZFS, which has a built-in, configurable read cache (ARC in RAM, L2ARC on SSD) and write log?
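ZFS also gives us a concrete handle on the "how much cache" question: every block cached on the L2ARC SSD costs a small header in RAM, so host memory bounds how much SSD cache you can usefully attach. A rough sketch, assuming a ~70-byte header per cached block (the exact overhead varies by ZFS version) and a hypothetical 8KiB average block size:

```python
# Rough L2ARC sizing sketch for a ZFS-backed storage VM.
# Assumptions (hypothetical -- tune for your ZFS version and workload):
#   - each L2ARC block costs ~70 bytes of RAM for its ARC header
#   - average cached block size of 8 KiB
HEADER_BYTES = 70
BLOCK_BYTES = 8 * 1024

def l2arc_ram_overhead_gib(l2arc_gib: float) -> float:
    """RAM (GiB) consumed by headers for a given L2ARC size (GiB)."""
    blocks = l2arc_gib * 1024**3 / BLOCK_BYTES
    return blocks * HEADER_BYTES / 1024**3

for cache_gib in (100, 250, 500):
    print(f"{cache_gib:>4} GiB L2ARC -> "
          f"~{l2arc_ram_overhead_gib(cache_gib):.1f} GiB RAM for headers")
```

This is exactly why the hot-swap bays matter: you grow the L2ARC incrementally as the working set (and the host's RAM) grows, instead of guessing up front like the black-box vendors make you do.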
So let's see how we did. The system described above easily has 12TB of storage and an initially expandable, redundant SSD cache of 500GB, all for under $4,000. Add some high-performance LSI controllers and it still comes in below $6,000. Add one more array just like it, and you can use vSphere's local storage replication to turn this into tier-one storage delivering 10K IOPS, all for under $13,000.
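For the curious, here is one hypothetical bill of materials that lands inside those totals. Every line item and price below is illustrative; the post only commits to the totals.

```python
# Illustrative cost tally (all line-item prices hypothetical; the only
# stated figures are the totals: <$4K base, <$6K with controllers,
# <$13K for a replicated pair).
base_array = {
    "2U hot-swap chassis, CPU, RAM (ESXi host)": 1500,
    "12 x 1TB hybrid drives (12TB raw)":         1500,
    "redundant 500GB SSD cache":                  900,
}
lsi_controllers = 1800  # high-performance LSI controller pair, hypothetical

one_array = sum(base_array.values())
print(f"base array:            ${one_array:,}")                          # < $4,000
print(f"with LSI controllers:  ${one_array + lsi_controllers:,}")        # < $6,000
print(f"two replicated arrays: ${2 * (one_array + lsi_controllers):,}")  # < $13,000
```

Swap in real quotes and the totals update themselves; the point is that the arithmetic, not the vendor, is now in charge of your storage budget.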