Friday, March 29, 2013

Didja Know? VMware Converter for Correcting Misalignment!

Ahoy,

Another installment in my "Didja Know?" series! This might be old news to you, but I just found out that version 5 of VMware vCenter Converter will fix virtual machine misalignment! So what is misalignment, and why should you care? That's a blog post all on its own, but basically it's when the guest file system doesn't line up with the block boundaries of the storage system. Okay, so what? The problem is that data gets split across storage blocks, and you end up with partial reads and writes hitting many more blocks than you would need if the storage and OS were aligned. This can put a lot more work on your storage device than is necessary. For those that don't know, this is much more common than you might realize, and until recently most OSes had alignment issues with many storage vendors. Here's a really good paper that explains misalignment in great detail!
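To make the "so what" concrete, here's a minimal sketch of the arithmetic (my own illustration, not from the paper): a single 4K guest write on a partition that starts at the classic 63-sector offset straddles two 4K storage blocks, while the same write on an aligned partition touches just one.

BLOCK = 4096  # storage block size in bytes

def blocks_touched(partition_offset, write_offset, write_size):
    # Count how many storage blocks a guest write covers, given where
    # the partition starts on the underlying storage.
    start = partition_offset + write_offset
    end = start + write_size - 1
    return end // BLOCK - start // BLOCK + 1

print(blocks_touched(63 * 512, 0, 4096))  # 2 blocks: misaligned 63-sector offset
print(blocks_touched(64 * 512, 0, 4096))  # 1 block: partition starts on a 4K boundary

Multiply that extra block by every read and write a busy VM does, and you can see why the storage controller ends up doing a lot of unnecessary work.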

There are a lot of tools to correct misalignment, and the best practice is to fix it before you ever put a machine into production. Today I'm going to show how VMware vCenter Converter can fix alignment. VMware vCenter Converter is normally a tool for converting physical machines into virtual machines, but I was delighted to hear that this feature was added!

Below I'm using the NetApp Virtual Storage Console (VSC) to clone a Windows XP machine. The VSC tells me the machine is misaligned and asks if I'd like to proceed with the clone. If I proceed, I will carry the misalignment over to the clone. VSC has tools built in to correct misalignment as well, but that's a demonstration for another day.

So let's fix the problem! I run the VMware Converter and tell it that I want to convert a machine.

Select where the misaligned machine lives, and then select the machine itself.


Enter the credentials of the vCenter you'd like the converted machine to go to.


Give the new machine a name and choose where it should be located.


Select the datastore the machine should live on and what virtual machine version you'd like.


Click on "Data to copy".


Here's where things change! Choose "Select volumes to copy".


Ensure the "Create optimized partition layout" is checked.  This will correct the misalignment on the clone.


Once the conversion is complete, use the VSC to check for misalignment. Here I do a "Create Rapid Clones". This time you can see the converted virtual machine passed the misalignment scan because it is now aligned!

I hope you enjoyed this "Didja Know" and that it will help keep your environment misalignment-free!

Until Next Time!

Thursday, March 28, 2013

Let's Talk about Linked Clones

Hi All,

Today I thought I'd address a topic I've had many questions about in the past: the mystery behind VMware View's Linked Clones. The technology is based on VMware's snapshot technology and is quite ingenious! It is very storage efficient, creates clones quickly, and is easy to manage. Unfortunately, we started to see problems in the field when a large number of Linked Clones were deployed. The storage controller would get slammed, and as a result the customer's desktops would suffer.

The first time I heard of the problem I was new to VDI and was told Linked Clones are misaligned, so I created a bunch, logged into a Windows 7 clone, and ran "msinfo32". Nope, the partition starting offset was divisible by 4096, not misaligned! Then I was told it's not the master image that's misaligned, but the "delta" disk. What's a "delta" disk?! The delta disk is where all of the changes a user makes to their desktop get stored. Since the master image is read-only, writes go to the delta disk. Now this is the part that didn't make sense to me: the delta disk is NOT misaligned; what happens is it can write data in chunks as small as 512 bytes. That wouldn't be a problem if every write Windows did was 512 bytes, but it's not. You get a mix of 512 bytes, 4K, 16K, 32K, etc.
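If you'd rather script that msinfo32 check than eyeball it, the test is simply whether the partition starting offset divides evenly by 4096. A quick sketch (the offsets below are just examples):

def is_aligned(partition_starting_offset, block=4096):
    # A partition is aligned if its starting offset sits on a 4K boundary.
    return partition_starting_offset % block == 0

print(is_aligned(1048576))  # True: the 1 MB offset newer Windows installs use
print(is_aligned(32256))    # False: the old 63-sector (32256-byte) offset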

Data ONTAP WAFL blocks have 4K boundaries, and if Windows wrote everything in 512-byte chunks, everything would be cool (eight 512-byte chunks make up one 4K block). Herein lies the problem. Say I'm writing to the disk: I write 512 bytes of data, that data is aligned, no problem! Okay, but the next chunk of data I write is 4K. Uh oh! It won't fit, because I only have 3584 bytes left in that block, so the last 512 bytes of the 4K write carry over into another 4K block, which requires additional I/O. Now when I go to read that 4K of data, instead of being able to read just one block, I have to read from two, because the data was split over two NetApp 4K blocks. You got it, more I/O! Take a look at the picture; it should help.

What I've drawn is 512 bytes of data being written. Next comes 4K of data, and you can see the last 512 bytes of it gets written to a brand new NetApp block. And last, I write 1K of data. If I access that 512 bytes of data or that 1K of data, no problem. The problem is the larger chunks of data that get split and placed onto multiple blocks, which increases I/O when the data is written AND when it's read. Is this behavior unique to NetApp? Nope, it will affect any storage vendor that doesn't use 512 bytes as their block size.
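Here's the drawing as a little sketch (my own illustration): lay the writes out back to back at the delta disk's 512-byte granularity and count how many NetApp 4K blocks each one touches.

WAFL_BLOCK = 4096

def blocks_per_write(write_sizes, granularity=512):
    # Place each write at the next granularity boundary and count the
    # 4K WAFL blocks it spans.
    offset, counts = 0, []
    for size in write_sizes:
        first = offset // WAFL_BLOCK
        last = (offset + size - 1) // WAFL_BLOCK
        counts.append(last - first + 1)
        offset += -(-size // granularity) * granularity  # round up to the next boundary
    return counts

print(blocks_per_write([512, 4096, 1024]))  # [1, 2, 1]: the 4K write straddles two blocks

That middle "2" is the extra I/O: one 4K write now touches two WAFL blocks, and so does every read of that data afterwards.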

So what can be done about this? In vSphere 5.1, VMware introduced a new virtual disk format called SESparse. VMware View 5.2 is the first VMware product that uses the new SESparse disk type. SESparse has a couple of functions: it can reclaim unused space in clones, but what really excites me is that the smallest chunk of data it writes to disk is 4K, enabling the delta disk to stay block aligned!

Here I have another 512-byte chunk of data being written to disk, and then it jumps to the next 4K boundary, ready to write the next chunk of data, which I've drawn as 4K. This does use a little more space, but it will help immensely if you're having performance issues due to partial reads and writes. If you're okay with traditional Linked Clones, you don't need to use SESparse; you can stick with VMFSSparse. There are also some caveats to look out for, so check the VMware site for best practices.
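The same layout exercise with a 4K grain (again, just my sketch of the behavior described above) shows why this works: every write gets bumped to the next 4K boundary, so nothing straddles a WAFL block.

GRAIN = 4096

def blocks_with_4k_grain(write_sizes):
    # Same layout exercise as before, but each write starts on the next 4K grain.
    offset, counts = 0, []
    for size in write_sizes:
        counts.append((offset + size - 1) // GRAIN - offset // GRAIN + 1)
        offset += -(-size // GRAIN) * GRAIN  # pad the write out to a full grain
    return counts

print(blocks_with_4k_grain([512, 4096, 1024]))  # [1, 1, 1]: every write stays block aligned

The padding between writes is the "little more space" I mentioned, traded for clean, aligned 4K I/O.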

I hope this helped clear up some of the confusion around Linked Clones and why I'm excited about the new version of View!

Until Next Time!

Thursday, March 21, 2013

NFS vs. Block (FC/iSCSI) Protocol

Hi All,

Protocol seems to be one of those topics you don't discuss over dinner, like politics! Storage folks tend to land on one side or the other, NFS or block, and I've heard some very heated debates on this topic. Yep, nerd fights! I used to be a die-hard block guy, especially for enterprise storage. NFS was cool for file sharing or home directories, but block was king when it came to databases, mission-critical applications, etc. When I went to work for NetApp my opinion quickly changed, especially when I began working with virtualization products like VMware.

When I'm talking with customers I'm frequently asked, "Which is better, block or NFS?" It's a bit of a loaded question, because the customer usually has an idea of what they like better, plus they've probably already spent a ton of money on a new infrastructure. The last thing I want to do is call their baby ugly! I usually tell them *I* prefer NFS and list out some of my reasons. But in case they're a block shop, I remind them that NetApp can do both NFS and block concurrently. :-)

I was reading our internal discussion groups this morning and saw a great post by Nick Triantos. For those of you that don't know Nick, he's a brilliant guy and an avid blogger. I'm always amazed at how quickly and precisely he knows the answers to things that would take me much more time to find out! He answered a question on NFS vs. iSCSI for VMware this morning, and I'd like to share it with you. Enjoy!


"Datastore resizing is another difference...With NFS you can resize up or down on the fly. with VMFS you can only increase the size of the datastore. That means if you ever need to rebalance a VMFS datastore by Storage VMotioning VMs to another datastore, you now have captive storage you can't reuse, unless you create a new datastore of the required size, move your VMs into it and destroy the old one. Is it a hard thing to do? No. Does it require additional steps on the server and the storage side? Yes.

Deduplication, post-process dedup that is, over NFS allows a VMware admin to immediately realize the space savings on the host without any additional work. The same is not true for block protocols, where additional work needs to be done on the storage array.

There is no limitation on NFS as to the datastore size; the limit is whatever the file server supports. The same is not true for block protocols, where the datastore size is capped at 64TB. Although I find it hard to believe anyone will create a 64TB datastore anyway.

Also, each virtual disk file on NFS has its own I/O queue, directly managed by the NFS server. This is not true for block protocols, which have per-LUN queues that can become a point of I/O contention. All of that translates to higher fan-in ratios in terms of the number of VMs per NFS datastore vs. a VMFS datastore. Partners have been telling me for a long time that they have customers with 250 and 300 VMs in a single NFS datastore. The only way you can pack that many VMs into a single VMFS datastore without issues is if they are powered off. :-)

The benefit of NFS is day-to-day operational efficiency and granularity. The architecture does require some thinking up front and is largely dependent on the switching infrastructure, but once you lay it down, everything else is a breeze.

In the interest of full disclosure... there are some caveats with NFS: no support by Microsoft for Exchange deployments, although at VMworld last year I met with customers that ignored the support statement and have been running with it with no issues. You also can't use Microsoft Failover Clusters (you can't use those with native vSphere iSCSI either). So as long as these are not required, NFS is the right choice, IMO.

Last but not least, considering that virtual machines are comprised of a bunch of files, why would you use a block protocol to manage them to begin with, if you had a choice?"

Monday, March 4, 2013

Where Are My Aggregates in VSC for Clustered ONTAP?


Hi Friends,


So you've set up your Vserver, done your discovery in vCenter with VSC, and added your clustered ONTAP system, but when you try to provision a datastore your aggregates are nowhere to be found?! Yep, this is another one I ran into myself, and I hope I can spare you some of the frustration. :-)
 
Here we can see that the clustered ONTAP management console for our cluster has been added to VSC.


But when we try to provision storage, no aggregates show up!

This is actually by design. Remember, everything now happens at the Vserver level. I can do all kinds of cool stuff, including migrating a volume from one aggregate to another, as long as that aggregate has been added to the Vserver. So here's what we missed: after we created the Vserver, we forgot to assign aggregates to it!

Take a look at the available aggregates. You'll see we have aggr0 and aggr1. Just like in 7-Mode, we don't want to mess with aggr0, since that's where the OS lives. So let's add the aggr1 aggregates from all of the nodes to our Vserver so we can start provisioning storage!

Remember, the cool thing about clustered ONTAP is that you can migrate volumes, and assigning aggregates to the Vserver is one of the building blocks for that.
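If you prefer the command line to System Manager, the clustered ONTAP CLI can make the same assignment. Something along these lines should do it (the Vserver and aggregate names here are just placeholders, so substitute your own):

vserver modify -vserver vdi_vs1 -aggr-list aggr1_node1,aggr1_node2
vserver show -vserver vdi_vs1 -fields aggr-list

The second command just confirms which aggregates the Vserver can now see.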


Let's go back to VSC and create that datastore!

This time we can see the aggregates, so we're in business, and if/when we need to migrate a volume to another aggregate, the Vserver is aware of them.


Something easy to overlook, but easy to fix!  I hope this was helpful.

Until Next Time!


Friday, March 1, 2013

Look Mom, I'm on YouTube!

Hi All,

Recently I filmed a couple of videos with Citrix and they are ready for viewing!

The first is a panel discussion at a Citrix SE event where we discussed common storage myths and how NetApp vanquishes them!


The second is the first Tech Talk in a series with a Citrix field expert and me.  We discuss what Citrix is doing in the VDI realm and how write optimizations on NetApp will help make a customer’s VDI implementation successful.
http://www.youtube.com/watch?v=wN-F_nFTrDY&list=UUwDXCIzgP3jg6Sm4ZTrmpxA&index=2

I hope you enjoy them!