What is the best storage system for VMware?

Or, to put it more broadly: what is the best storage system for Hyper-V, Xen, and KVM as well?

Today we are going to look at what the best storage system for VMware is.

These days, server and desktop virtualization is one of the main forces shaping the storage market, and NetApp is no exception. This year, for instance, Larry Ellison made a notable claim that up to 60% of NetApp’s business is the storage of Oracle databases. Meanwhile, virtualization platforms such as VMware, MS Hyper-V, and Xen, together with the hardware built around them, form the most advanced, technological, and fastest-growing segment of the software and server market.

It’s not surprising that NetApp has been engaged with virtualization almost since the moment that industry was born. It so happens that the ideas and principles behind NetApp storage systems align surprisingly well with the direction virtualization vendors are taking in their own field.

Before we look in detail at NetApp’s techniques in this area, let’s simply list the most compelling and “catchy” features of NetApp storage systems, and consider why these potential strengths become real advantages for virtual environments.

  • The ability to connect datastores to ESX over the NFS protocol appeared back in VMware version 3.0, yet, alas, this option still looks unusual, especially to newcomers. Experienced users, however, have relied on it for a long time and at a large scale. Consider the experience of giants such as T-Mobile (which hosts one of the world’s largest SAP clouds, serving about 2 million customers), Oracle, SAP, Deutsche Telekom, Rackspace, and many other companies of that caliber that use NFS as the primary data transfer protocol between storage systems and their servers and hypervisors. According to analysts at the consulting company Forrester, NFS as a storage access protocol for VMware is steadily growing in prevalence and popularity, reaching 36% today (up from 18% two years ago), and has already surpassed iSCSI in popularity.

Among the many benefits of using NFS with VMware (a topic to which I plan to devote a separate, detailed article later) are features such as:

  • The ability to create larger datastores than VMFS allows (up to 16 TB in a single datastore).

  • The ability not only to grow a datastore (which VMFS also allows) but to shrink it (which VMFS does not allow, and which is often required in a dynamically allocated cloud environment), in steps of just 4 KB.

  • High granularity. Unlike VMFS, you can operate (for instance, back up and restore) not on the entire datastore, but on an individual virtual disk of a single virtual machine, or on its configuration file. This is convenient when you use a large datastore holding dozens or hundreds of VMs.

  • Thin by design. Virtual disks of virtual machines on an NFS datastore are ordinary files on a network share. They occupy only as much space as the data actually written to them, not as much as was reserved when they were created. A terabyte VMDK that so far holds only 3 GB of data will occupy only 3 GB on the storage system.

  • Deduplication (discussed in more detail below, and previously on this blog) frees up space that immediately becomes available to the ESX server, which can place its new data there directly. Deduplication of LUNs with VMFS also frees space on the storage system, but that space does not become readily available to the virtual machines.

  • Finally, NFS connections run over familiar, everyday Ethernet. You don’t need to build a separate, specialized, and expensive Fibre Channel infrastructure; with Gigabit or 10G Ethernet you can stay on common (and inexpensive) networking, and the performance difference versus FCP and iSCSI, as VMware’s own test results show, is insignificant.

  • No problems with SCSI queue-depth limits or SCSI reservation locking (especially important for large, dynamically configured, cloud-type systems that use datastores without VAAI support, which, let me remind you, is available only with the top-tier VMware Enterprise Plus license).
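The “thin by design” point above is easy to demonstrate: a file on an NFS share (or any POSIX file system that supports sparse files) can be logically large while consuming only the blocks actually written, much like a thin VMDK on an NFS datastore. A minimal sketch, assuming a file system with sparse-file support:

```python
# Create a file that is logically 1 MiB but has only 4 KiB of written data,
# then compare its logical size with the space it actually allocates.
import os
import tempfile

def make_sparse_file(path, logical_size, written_bytes):
    """Write `written_bytes` of data, then extend the file logically."""
    with open(path, "wb") as f:
        f.write(b"x" * written_bytes)   # actual data at the start of the file
        f.truncate(logical_size)        # grow logically without writing blocks

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "thin.vmdk")
    make_sparse_file(path, logical_size=1 << 20, written_bytes=4096)
    st = os.stat(path)
    print("logical size:", st.st_size)             # 1048576
    print("allocated:", st.st_blocks * 512)        # far less, on sparse-capable FS
```

The same idea, scaled up, is why a terabyte VMDK holding 3 GB of data occupies only about 3 GB on the storage system.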

However, the advantage of a unified, multi-protocol storage system is that the decision to use, for example, NFS is not forced on you. A unified storage system can work over any of the existing data access protocols; indeed, over any combination of protocols simultaneously. If you need “block” protocols for a LUN (for instance, because you need RDM), you can use FCP or iSCSI; if you want NFS, you can use it at the same time, on the same storage system.

An interesting option, for example, is to use FCP or iSCSI over 10G Ethernet for a few especially demanding and critical VMs, iSCSI over Gigabit Ethernet with VMFS for others, and NFS for the less critical VMs, where you can build one large datastore (up to 16 TB) for tens or even hundreds of VMs with relatively light I/O. In real life, this flexibility is invaluable.

Probably one of the most popular and spectacular capabilities of NetApp storage systems in virtual environments (not only under VMware ESX, but also under VMware View (VDI), MS Hyper-V, and Citrix Xen) is data deduplication, i.e. the removal of repeated fragments from the data.

Deduplication works especially well in a virtual environment, because most of the virtual disk files of the virtual machines evidently contain the same OS and the same or very similar data.
For this reason, deduplicating a datastore can save 50% or more of its space (in practice, results of up to 75-85% are seen). That is, after deduplication, the available capacity of your storage effectively grows two- or three-fold, or more.
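The arithmetic behind that capacity claim is simple: if deduplication removes a given fraction of the stored blocks, the same disks effectively hold 1/(1 - savings) times more logical data. The figures below are the illustrative savings rates from the paragraph above:

```python
# Effective capacity gain from a given deduplication savings fraction.
def effective_capacity_multiplier(savings_fraction):
    return 1.0 / (1.0 - savings_fraction)

print(effective_capacity_multiplier(0.50))  # 2.0 -> capacity "doubles"
print(effective_capacity_multiplier(0.75))  # 4.0
```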

What is especially pleasing is that you don’t pay for this with performance: in the vast majority of cases, users notice no discernible performance impact on their datastores after deduplication.
But the reduction in disk usage is only one benefit. Just as important and interesting is the fact that the data is deduplicated in the cache as well!
Imagine a storage system: a host server attached to it reads data blocks, some of which land in the cache, as many as fit, while the rest miss the cache and are slowly read from disk.

Now imagine that the same host, or several hosts, read deduplicated blocks, and the cache knows that the blocks the hosts are requesting are identical in content and occupy one and the same place on the physical disks.

In a conventional storage system, all blocks on the disks are indistinguishable from one another: even blocks with completely identical content each occupy their own space in the cache when read, because classic storage knows nothing about a block’s contents (to it, this is simply “block number three” or “block number five hundred forty”). A NetApp system, by contrast, knows that a block has been deduplicated: for three read requests from hosts, it can read a single block from disk into the cache and return that one block, identical in content to all three requested, to every host.

Thus, by halving (or better) the disk capacity consumed, through deduplication and block sharing, we also cut in half or more the volume those blocks occupy in the cache; in effect, we get a cache that is twice as capacious or more on the same system. After all, identical blocks are now stored only once, not only on the disks but also in the cache memory.
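To make the mechanism concrete, here is a toy model of such a dedupe-aware cache; it illustrates the block-sharing idea only, and is in no way NetApp’s actual implementation:

```python
# Blocks are stored once per unique content; the read cache is keyed by
# physical block, so identical blocks from different VMs share one cache slot.
import hashlib

class DedupStore:
    def __init__(self):
        self.physical = {}   # content hash -> block bytes (one copy on "disk")
        self.pointers = []   # logical block number -> content hash
        self.cache = {}      # content hash -> block bytes (one copy in cache)
        self.disk_reads = 0

    def write(self, data):
        key = hashlib.sha256(data).hexdigest()
        self.physical.setdefault(key, data)   # shared physical block
        self.pointers.append(key)
        return len(self.pointers) - 1         # logical block number

    def read(self, logical_block):
        key = self.pointers[logical_block]
        if key not in self.cache:             # cache miss -> one disk read
            self.disk_reads += 1
            self.cache[key] = self.physical[key]
        return self.cache[key]

store = DedupStore()
# Three VMs write an identical OS block; it lands on "disk" exactly once.
blocks = [store.write(b"identical OS image block") for _ in range(3)]
data = [store.read(b) for b in blocks]
print(len(store.physical), store.disk_reads, len(store.cache))  # 1 1 1
```

Three logical reads are satisfied by one physical block, one disk read, and one cache slot, which is exactly the “virtual cache capacity” gain described above.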

The space savings and the virtual increase in cache capacity become even more useful because NetApp actively applies them in its relatively recent Flash Cache. This cache neatly solves problems that are very painful for virtual infrastructures: the boot storm and the login storm. Imagine a large company running VMware View or a similar desktop product, where thousands of workstations are switched on at the same time, along with all the other “storms.” Because the working (and deduplicated) data set usually fits, with room to spare, in such a “mega-cache,” latency drops by roughly an order of magnitude (10x) and storage performance rises accordingly.

An important side benefit is the savings in power, cooling, and rack space. A full disk shelf occupies 4U of rack space, consumes around 340 W, and dissipates 1400 BTU/hr of heat, while a Flash Cache card, which can deliver the performance of several such shelves, consumes 18 W, occupies no rack space of its own, and dissipates a total of 90 BTU/hr. For larger systems, this can be a very substantial saving.

Thin provisioning, which I have described previously, is ideally suited to “cloud” storage tasks, especially when the space in use can grow arbitrarily and the number of clients using the storage runs into the tens or hundreds. Disk space for such clients is allocated dynamically, using over-provisioning: a model in which each client “sees” the full amount of free space it requested, while the storage system’s disks are consumed only to the extent of the data actually written to them.

Note that there is no performance difference between thin and thick disks in VMware, and in practice the “fragmentation” of a thin disk as it grows during writes has zero impact as well.
Note also that “hardware” thin provisioning on the storage system works not only with VMware’s thin disks but also with thick ones (excluding eager-zeroed thick). So if, for whatever reason, you don’t want to or cannot use VMware’s thin-disk mechanism, or you use a hypervisor that does not yet offer one, you can still get all the benefits of thin provisioning, because it is implemented independently, on the storage system itself.
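The over-provisioning model described above can be sketched in a few lines (class, names, and numbers are all invented for illustration): each client is promised more than the pool physically holds, and the pool only has to grow when actual writes approach the physical limit.

```python
# A thin-provisioned pool: clients "see" their promised size, while physical
# capacity is consumed only by data actually written.
class ThinPool:
    def __init__(self, physical_capacity):
        self.physical_capacity = physical_capacity
        self.provisioned = {}  # client -> promised size
        self.written = {}      # client -> units actually written

    def provision(self, client, size):
        self.provisioned[client] = size
        self.written[client] = 0

    def write(self, client, n):
        if self.used() + n > self.physical_capacity:
            raise RuntimeError("pool exhausted: time to add real disks")
        self.written[client] += n

    def used(self):
        return sum(self.written.values())

    def total_promised(self):
        return sum(self.provisioned.values())

pool = ThinPool(physical_capacity=10_000)  # say, a 10 TB pool in GB units
for i in range(50):
    pool.provision(f"vm{i}", 1_000)        # each client is promised 1 TB
    pool.write(f"vm{i}", 30)               # but has written only 30 GB
print(pool.total_promised(), pool.used())  # 50000 promised, 1500 used
```

Fifty clients are promised five times the pool’s physical capacity, yet only a small fraction is actually consumed, which is the everyday situation in cloud storage.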

I have already written about what snapshots are and how NetApp uses them. I’m sure you already know what they are and how convenient they make capturing and restoring the state of your data at the desired moment. As you may also know, VMware has its own mechanism for creating snapshots at the datastore level, but those who have tried it tend to back away from it. Indeed, it should be noted that many attempts to implement snapshots in storage systems or software have not been very successful, bringing plenty of unpleasant side effects such as reduced performance in use and, in VMware’s case, a pile of complications in managing them. In general, one must agree with the renowned Russian VMware specialist Michael Mikheyev that “snapshots are evil,” but with one amendment: VMware’s own snapshots are evil, whereas snapshots made with NetApp storage tools are another matter entirely. And here’s why.

Thanks to the mechanisms of WAFL, the resulting snapshots not only do not slow the datastore down, they also solve the problems that plague snapshots in VMware. This makes it possible to use them as widely as you like, not only to “pin” the state of virtual machines, but also as full-fledged backups.

For this purpose there is a dedicated software product, SnapManager for Virtual Infrastructure, which takes over all the tasks of creating copies of the contents of VMware datastores and of restoring such data, in whole or in part.

The storage snapshot mechanism is integrated with VMware snapshots, so that when the storage system creates a snapshot, the consistency of the file system and of the VM state is guaranteed. I/O must be briefly suspended at the moment the snapshot is taken, so the VMware snapshot mechanism is invoked first: VMware suspends the hypervisor’s work against this datastore for a fraction of a second, the storage system takes its snapshot, and the hypervisor is released, without a “real” VMware snapshot ever being kept; only the hassle-free “hardware” snapshot of the storage system remains.
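The hand-off just described boils down to three ordered steps; the sketch below paraphrases them (the step names are my own wording, not VMware or NetApp API calls):

```python
# The ordering that makes a "hardware" snapshot consistent: quiesce first,
# snapshot the array second, resume the hypervisor last.
def consistent_snapshot_steps():
    return [
        "vmware: create temporary VM snapshot (quiesce I/O)",
        "storage: take near-instant volume snapshot",
        "vmware: delete temporary VM snapshot (resume normal I/O)",
    ]

for step in consistent_snapshot_steps():
    print(step)
```

The temporary VMware snapshot exists only for a fraction of a second, so none of the long-lived drawbacks of VMware snapshots apply.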

FlexClone. You can deduplicate data that already contains duplicates, or you can simply avoid creating the duplicates in the first place; NetApp has even coined the term “non-duplication” for this. The same “shared blocks” technology used in deduplication, in which hundreds of “logical” file system blocks can reference a single physical block, underlies the feature called FlexClone.

The technology is partly similar to VMware’s Linked Clones, but it works for any workload, since it is implemented by the storage system itself.

When you create a clone of your data (a volume, a LUN, a file), its contents are not copied to a new instance; instead, a new copy of the metadata is created that points to the existing data blocks, and only the blocks that are later changed relative to the original take up new space. The result is a kind of “differencing disk.”
Now, if the need arises, you can create virtual machines from a master image in minutes, and keep hundreds of working VM images in quite a small amount of storage, because only the changes accumulate in the clones.
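The copy-on-write idea behind such clones can be sketched in a few lines (a deliberately simplified illustration of the metadata-copy principle, not NetApp’s on-disk format):

```python
# A clone copies only the block-pointer metadata; new blocks are allocated
# only for the blocks the clone overwrites.
class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)       # references to block contents

    def clone(self):
        new = Volume.__new__(Volume)
        new.blocks = list(self.blocks)   # copy pointers only, not data
        return new

    def write(self, index, data):
        self.blocks[index] = data        # this volume alone gets a new block

    def shared_blocks_with(self, other):
        return sum(1 for a, b in zip(self.blocks, other.blocks) if a is b)

master = Volume([f"os-block-{i}" for i in range(1000)])
vm_clone = master.clone()                # instant: no data copied
vm_clone.write(0, "vm-specific config")  # only changed blocks take new space
print(vm_clone.shared_blocks_with(master))  # 999 blocks still shared
```

After cloning a thousand-block master image and changing a single block, 999 blocks remain physically shared, which is why hundreds of clones fit in a small space.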

Pleasingly, almost all of these NetApp storage features are available from a management console integrated directly into vCenter. A VMware administrator no longer has to jump between two or more separate management tools, one for the storage system in its native interface and another for vCenter: everything is now managed from vCenter.

This panel integrates with the vCenter server and is available free of charge for all NetApp storage systems. It also ships with a handy tool that automatically applies the optimal configuration, for example multipathing, SCSI timeouts, and other parameters required by the vendor’s best practices.

Finally, the interesting secure multi-tenancy feature deserves at least a brief mention: the ability, when necessary, to divide one storage system into several “virtual,” fully independent storage subsystems. For example, if your organization’s internal security policy obliges you to keep the data of, say, HR or the finance department absolutely isolated (even from the administrators of other departments!), a single physical storage system can now operate in such a “logically partitioned” form.

NetApp storage systems were also among the first to support VAAI, which lets the hypervisor offload part of its work to the storage system, such as creating and zero-filling partitions, copying partitions, or the new, finer-grained SCSI locking, and thereby raises the performance of large infrastructures.

NetApp also develops an interesting tool for analyzing and optimizing the performance of virtual infrastructures, OnCommand Insight (formerly Akorri BalancePoint), which is available independently of NetApp storage systems; I will limit myself to mentioning it here, so as not to make this already indecently large text any longer.

So, in summary:
I believe that NetApp storage systems are a natural and excellent choice today for any virtualization environment, such as VMware vSphere, VMware View, MS Hyper-V, Citrix Xen, and others, because they offer several important and convenient capabilities at once:

  • Multiprotocol access: the ability to work over several different access protocols (FC, iSCSI, NFS) simultaneously, without having to partition the storage system or the data on it, and to address that data in a uniform manner.

  • Deduplication: saves space on the storage system, reclaiming half or more of it by removing duplicate fragments, for example within virtual disk files, without compromising performance, and perhaps even improving it thanks to the virtual capacity gain of a dedupe-aware cache.

  • Thin provisioning: simplifies administration, saves disk space, and makes it more convenient to distribute “space” among cloud workloads.

  • Flash Cache: raises performance by using flash memory as an efficient caching layer that keeps the “hottest” data blocks in flash, without resorting to capricious and expensive SSDs.

  • Snapshots: let you create near-instant “photographs” of the state of your data, build backups from them, and instantly restore virtual machines from them, without sacrificing performance and without wasting repository space on the snapshots themselves.

  • FlexClone: creates ideal clones of data, such as a virtual machine disk or user data, which consume disk space only to the extent of their changes relative to the “original,” letting you store hundreds of clones in a small space.

  • VMware Storage Console: lets you administer the storage system conveniently from an application page embedded in vCenter, automates some routine procedures, automatically tunes critical storage settings for best results, and lets the VMware administrator manage the storage settings relevant to VMware ESX himself, without distracting the storage administrator with such trivia.

  • Plus a number of other nice features, such as secure multi-tenancy (safely dividing one storage system into isolated virtual filers per user group), VAAI, and so on, about which I have said almost nothing here to keep this article from becoming endless.

To date, no other storage vendor offers such a comprehensive set of features for working in virtual environments. And we have not even talked about performance, reliability, and ease of administration, which deserve a separate article.
Thus, for a price comparable to that of similar storage systems from other manufacturers, you get a system with richer connectivity thanks to support for multiple protocols, more reliability and protection thanks to RAID-DP and snapshots, better performance thanks to Flash Cache, and greater effective capacity thanks to deduplication, FlexClone, and thin provisioning.

So it seems to me that if you are planning to deploy a virtual server infrastructure or a cloud system, even if you have already settled on another storage vendor, it makes sense to take a careful look at NetApp storage systems before making your choice. In most cases, you won’t have to buy a pig in a poke: through most NetApp partners, you can get a system on trial and assess its capabilities directly, on real hardware, against your own workload.
That’s why I call NetApp storage systems “the perfect choice for VMware” (as well as for Hyper-V, Xen, KVM, and so on).

And what do you think: what capabilities does your storage system lack for you to consider it “the ideal solution for virtualization”?