Thursday, December 20, 2012
Clustered ONTAP Explained Through Brains
Hi All,
A loyal follower of mine asked if I would put more pictures into my blogs, so here goes. I have this thing for brains, so I figured I would explain Clustered ONTAP through the use of brains.
Warning - Do NOT Try This At Home
Say you were a mad scientist and you began splicing brains together. In my example you only have two brains so far. Even though each brain came from a single person, each half has different functions and thoughts, linked together through the corpus callosum, which allows both halves to communicate. In your crazed genius you were able to bridge the two brains together with another, special corpus callosum, so now all four halves can communicate with one another. Now we have two separate brains hooked up together in a cluster. Each half has its own functions, but those functions can pass to different halves, or to a different brain altogether!
There, you have Clustered ONTAP! Can't see it yet? Each half of a brain is a node in the cluster, designed to perform functions (or serve volumes, in the case of Clustered ONTAP). Each half or node can communicate with and take over the functions of its partner half, the two hooked together through an HA interconnect. The two brains are hooked together through a cluster network running 10 Gigabit Ethernet. In my example both brains are in a single Vserver, so they can share each other's functions, but they could be set to not pass workloads to each other if you put them in separate Vservers, or jars.
Take a look at my picture. The brain on the left represents me, and you can see I have two functions in mind. In Clustered ONTAP these would be volumes where data is being stored. So when my better half kicks me in the butt in the morning and tells me it's time to go to work, that function or volume passes from her brain to mine, and now I'm processing that volume too. When I tell her it's time to have fun, I pass over the "play video games" volume and she starts playing video games.
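If you'd like the analogy in something more concrete, here's a toy sketch in Python (the names and structure are my own invention, purely illustrative, not how ONTAP is actually built) of an HA pair where one half takes over its partner's functions:

    # Toy HA pair: each node serves its own volumes (functions), and it
    # can take over its partner's volumes through the HA interconnect.
    class Node:
        def __init__(self, name, volumes):
            self.name = name
            self.volumes = list(volumes)
            self.partner = None

        def takeover(self):
            # Serve the partner's volumes in addition to our own.
            self.volumes += self.partner.volumes
            self.partner.volumes = []
            print(f"{self.name} took over for {self.partner.name}")

    me = Node("my-half", ["go-to-work"])
    her = Node("her-half", ["kick-him-out-of-bed", "play-video-games"])
    me.partner, her.partner = her, me

    me.takeover()      # her half hands everything to mine
    print(me.volumes)  # ['go-to-work', 'kick-him-out-of-bed', 'play-video-games']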
I hope you enjoyed my explanation of Clustered ONTAP through the use of brains.
Until Next Time!
Wednesday, December 19, 2012
NVRAM - Our Catalyst Friend
Hi Folks,
This will be a quickie, but I felt it deserved its own post. I get a lot of questions regarding the purpose of NVRAM; it's a bit of an enigma, so I thought I'd clarify its functionality.
Here's what it is NOT:
1. Performance Acceleration Card
2. Read Cache
3. Write Cache
4. Used to calculate parity
5. Used by WAFL to map where data should be placed on disk
NVRAM is a short-term transaction log. Incoming writes are held in RAM, where they are coalesced to minimize head movement and trips to disk, and while those writes sit in RAM they are mirrored to NVRAM. All of the goodness and logic built into ONTAP and WAFL are made possible by the insurance NVRAM gives them. If the power goes out, all of those writes in RAM are gone, but NVRAM has a battery backup, so its logged contents survive and are replayed to disk when the controller comes back up. This way nothing is lost! So NVRAM doesn't make writes more efficient (that's somebody else's job); it's there to catch data if there's a power outage.
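To make that concrete, here's a toy model in Python (my own simplification, nothing NetApp-specific) of a write path where RAM buffers writes, a battery-backed journal mirrors them, and the journal is replayed after a power loss:

    # Toy model: RAM buffers writes for coalescing; a persistent journal
    # (standing in for NVRAM) mirrors each write so a crash loses nothing.
    class Controller:
        def __init__(self):
            self.ram = []        # volatile write buffer, coalesced later
            self.journal = []    # battery-backed log, survives power loss
            self.disk = {}

        def write(self, block, data):
            self.ram.append((block, data))      # acknowledge fast, from memory
            self.journal.append((block, data))  # mirror to the "NVRAM" log

        def consistency_point(self):
            # Normal path: coalesced flush of all buffered writes to disk.
            for block, data in self.ram:
                self.disk[block] = data
            self.ram.clear()
            self.journal.clear()  # log is stale once the data is on disk

        def power_loss_and_replay(self):
            self.ram.clear()                    # power loss wipes RAM...
            for block, data in self.journal:    # ...but the log survives
                self.disk[block] = data         # replayed at boot
            self.journal.clear()

    c = Controller()
    c.write(1, "a"); c.write(2, "b")
    c.power_loss_and_replay()
    assert c.disk == {1: "a", 2: "b"}  # nothing lost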
Until Next Time!
Monday, December 17, 2012
Why Clustered ONTAP for Virtual Desktops - Ponce de Leon Would Have Loved it!
Hi All,
For those of you that have been NetApp fans for a while, I'm sure you've heard of Clustered ONTAP. It's had a variety of names over the years (GX, Cluster-Mode, C-Mode), but essentially what you were seeing is the evolution of a new butt-kicking product!
What's one of the hardest things we have to face with technology? We've all been hit by it: upgrading! Technology moves so fast that by the time you buy anything, it's already outdated. Replacing the hardware sucks, but migrating your data sucks worse! How about patching? That's an administrator's nightmare! As painless as vendors try to make patching, it almost always requires downtime, and it's inevitable that something doesn't come back up.
With Clustered ONTAP, you no longer need to halt production to add new hardware, to patch, or to migrate data! How is this possible? Let's take a look at our friend Data ONTAP 7-Mode and vFilers. So what's a vFiler? Basically, a virtual filer inside of a filer. And like a hypervisor, you can have multiple vFilers within a single filer. Take that power and capability, increase it, and you've got Vservers in Clustered ONTAP. With Clustered ONTAP, everything is dealt with at the Vserver layer. So what? Remember when we didn't have FlexVols and how cool it was when you got them? Yeah, it's like that.
In general, Vservers can span multiple heads and aggregates, and what that gives you is the ability to move stuff on the fly. Move stuff on the fly, you say? So when I need to do maintenance, I can just migrate volumes to another node and keep production running? Why yes, yes you can! You can move volumes with no downtime, then upgrade, replace, or service a node, and your users will be none the wiser. In essence, your cluster is now immortal! Muahahaha! The cool thing is you can have high-end and low-end nodes in your cluster, for both dev/test and production, all in one.
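If memory serves, a non-disruptive volume move in clustered ONTAP is a one-liner along these lines (the Vserver, volume, and aggregate names here are made up, and syntax can vary by release, so check the command reference for your version):

    cluster1::> volume move start -vserver vs_desktops -volume finance_vol -destination-aggregate aggr_node2
    cluster1::> volume move show

Clients keep reading and writing the whole time; only the volume's physical home changes.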
So what's the big deal about virtual desktops and Clustered ONTAP? All of the coolness I've stated above, PLUS: say you have your desktops separated out by department. When crunch time hits and users need more power, you can move those users to a faster node! What if a node crashes? Migrate the users and their data to another node without them knowing. Or say you're a service provider and have multiple companies living on your cluster. You don't want them interacting, and Vservers give you exactly that separation: completely different volumes, networking, etc. Plus, you can give administrative rights for a Vserver to each group, while still being the master of the cluster.
Until Next Time
Friday, December 14, 2012
Citrix VDI with PvDisk and NetApp Best Practices - Part III Restore
Hi All,
Here it is, the long-awaited completion of the Backup and Recovery saga. As before, remember to try this in your development environment and not in production! Use at your own RISK!
One of your users calls you up and tells you that they had all their data on their PvDisk and they accidentally erased some of it. Now, before you go rushing off and restore the whole thing, we've got a few questions we need to ask!
1. Has the user added new data to his/her PvDisk since the last backup?
2. If yes, can they save it to another location before you do the restore?
3. If no, the restore will take more time, can they wait? (I'll explain)
With the NetApp VSC you can restore entire virtual machines, individual disks, or individual files. I'll walk you through these.
The easiest is to restore an entire machine.
1. Log into Virtual Center, find the machine, right-click it, and select NetApp > Backup and Recovery > Restore.
2. Next, select the snapshot/backup you'd like to restore.
3. Choose the entire virtual machine and Restart VM. This will overwrite the entire virtual machine and roll back all changes to when the snapshot/backup was taken. Be careful with this, because any changes made after the backup will be GONE!
4. Once you're happy with the choices you've made, review the summary and finish the restore.
The virtual machine will now be restored! If you only want to restore certain datastores, go back to VM Component Selection and select only the datastore you want to restore.
Now what if you want individual files? Here you would right-click the machine and select NetApp > Backup and Recovery > Mount.
This is where things get really cool! Notice in the screenshot that you select the snapshot/backup you want, and it also lists all the virtual machines in that backup. Since all of those machines are in the volume, and the magic happens at the volume level, you can restore files from any of those machines. So what happens next? After you mount this datastore there are a number of things you can do. It's given a unique identifier so ESXi isn't confused by duplicate datastores being mounted. You can now browse the datastore for VMware files, or edit the settings of the original desktop (or ANY desktop) and mount the backed-up VMDK as a new hard drive on the desktop! How cool is that?! I love this feature!! Grab the files your user needs, then remove the hard drive and unmount the temporary datastore.
There's more you can do, but this is a quick glimpse of the restore power of VSC. I hope the wait for this blog was worth it, and if not, well then, too bad. :-)
Until Next Time!
Tuesday, December 11, 2012
Get Neil in Your Email!
Hi All,
Not sure if you've noticed, but I added a cool subscribe widget on the right-hand side of the blog. So if you'd like to get my blogs emailed to you directly, enter your email and click subscribe! I don't think you'll get any spam. I signed up to see if it worked and don't think I've gotten any spam yet, or maybe it goes to my spam folder. :-)
Many thanks to the folks that have already subscribed; I hope to continue putting out quality blogs that you'll enjoy!!
Sharefile - Sharing is Caring
Hi All,
I'm going to change gears a bit here and talk about something new! Don't worry, I haven't abandoned virtual desktops, just expanding into new waters. Today I'm going to talk to you about Citrix Sharefile on NetApp storage.
Haven't heard of Sharefile yet? How about Dropbox? It's a very cool technology that lets users put their files in the cloud and share them with friends, co-workers, other companies, etc. Ever need to send a Word document or PowerPoint that's larger than 10 megs? Many times our email systems have been programmed not to let files of that size through. 10 megs might not sound like a lot of data, but multiply that by the thousands of other users that might be sending large files and you can quickly overload an infrastructure. Also, I can save my files in the cloud and access them on any device!
I stand by my original assessment that users are wacky and will do things they're not supposed to, like posting company data on a third-party application out in the cloud. The cloud lets us do remarkable things; the problem is, it lets us do remarkable things. Think about it... Where is your data? Is it secure? Who's downloading it, who's viewing it, who's benefiting from it? Maybe your competition? In a perfect world we could trust, and there wouldn't be bad people. Unfortunately, we don't live in that world, and there are people who will take advantage.
So what's a company to do? Users demand Dropbox capability, but as an administrator you need to secure that data....
<<Curtain Please>> Now introducing Sharefile! <<Screams from the Audience>> Sharefile gives your users the ability to put their files in the cloud, but lets security administrators sleep at night. Citrix allows you to store your data on-premises, on Citrix-managed storage, or a combination of the two. Being employed by a storage company, I'm going to suggest the on-premises option. :-) For more than obvious reasons, I like on-premises because my users' data stays in house. Since the data is on-premises, I can dedupe it, compress it, and keep an eye on it. Think of it as "your" cloud!
A lot more to talk about in later blogs!
Until Next Time
Friday, December 7, 2012
Let's Share a WAFL
Hi All,
Today I'd like to talk to you about WAFL.
No, not waffles: the NetApp Write Anywhere File Layout. I'm often asked about NetApp controllers' write performance and whether they can do RAID 1+0 or RAID 5, etc., so I felt it would be handy to discuss a bit about WAFL and how NetApp uses RAID for data resiliency. I've been focusing a lot on PVS, and for those that know PVS, you know how write-intensive it is. That's where WAFL comes in.
NetApp is one of those companies that did things differently: they built a new idea from the ground up, and it really shows when you start to investigate how data is written to disk. Random writes are some of the most expensive operations, because the heads have to seek and the platters have to rotate to the right spot for every scattered block. Instead of doing things the traditional way, a NetApp controller will hold that data in memory and wait until it has a bunch more blocks to write to disk. Without going into a lot of technobabble, at an optimal time all that randomness is coalesced and written to disk, avoiding multiple seeks. The coolness factor is just beginning. The blocks can be written anywhere on disk, because the OS has a map of where the free space is, speeding up writes even more. Even cooler still, blocks don't have to overwrite previous blocks at the same location; you guessed it, speeding up writes even more!
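Here's a tiny write-anywhere sketch in Python (my own simplification, not actual ONTAP internals) showing how coalesced writes land in free blocks instead of overwriting old data in place:

    # Simplified write-anywhere allocator: logical blocks map to physical
    # blocks, and updates go to free space rather than overwriting in place.
    class Wafl:
        def __init__(self, nblocks):
            self.free = list(range(nblocks))  # map of free physical blocks
            self.map = {}                     # logical block -> physical block
            self.disk = [None] * nblocks

        def consistency_point(self, buffered):
            # Coalesce many random logical writes into one pass over free
            # space: sequential physical writes instead of scattered seeks.
            for logical, data in buffered.items():
                if logical in self.map:
                    self.free.append(self.map[logical])  # old block freed, never rewritten
                phys = self.free.pop(0)
                self.disk[phys] = data
                self.map[logical] = phys

    w = Wafl(8)
    w.consistency_point({10: "a", 99: "b", 42: "c"})  # random logicals, sequential physicals
    w.consistency_point({10: "A"})  # the update lands in free space; old block is freed
    print(w.map)                    # logical 10 now points at a new physical block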
But Neil, there's all that data in memory; what happens if the power goes out? Ah, I'm glad you asked! We have a card built into the controllers called NVRAM, with memory and a battery. Its job is to mirror what's in volatile memory and make sure it gets to disk if the lights go out.
So, back to RAID. NetApp uses RAID 4 and RAID-DP (basically RAID 4 with a second parity drive for extra resiliency). But Neil, aren't there better technologies than that?! Ah, glad you asked that too! See, if NetApp didn't do things differently, then yes, I'd agree with you, but with the WAFL intelligence built into the box, RAID is just a way to protect the data once it's actually on physical disk. So you see, you get RAID 1+0-style resiliency but at a much lower cost!
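For the curious, the row parity RAID 4 relies on (and the first parity drive of RAID-DP) is plain XOR. A quick toy illustration in Python, ignoring RAID-DP's diagonal parity:

    from functools import reduce

    # RAID 4 row parity: the parity block is the XOR of the data blocks,
    # so any single lost block can be rebuilt from the survivors.
    data_blocks = [0b1010, 0b0110, 0b1100]            # blocks on the data drives
    parity = reduce(lambda a, b: a ^ b, data_blocks)  # block on the parity drive

    # The drive holding block 1 dies; XOR the survivors plus parity to rebuild it.
    survivors = [data_blocks[0], data_blocks[2], parity]
    rebuilt = reduce(lambda a, b: a ^ b, survivors)
    assert rebuilt == data_blocks[1]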
So what, you ask? Well, in your PVS environment that's 90% writes, you've got a storage platform that was created with writes in mind! This is a brief and watered-down explanation, and if there's interest I'll go into more detail, but I wanted to share some of the cool factor at the core of NetApp that often gets overlooked.
Yes yes, I know, I still owe you an article on restoring PvDisk. I just got my brain back from holiday, give me a break. :-)
Until Next Time!
I'm Back!
Hi Guys and Gals,
I'm back! It's been a tough battle, but I got my brain to come back from holiday. I'd like to give a shout-out to my mentor's blog:
http://rachelzhu.me/
For those that know Rachel Zhu, you know how super smart she is; she knows Citrix and VDI like the back of her hand. She's writing a great multi-part series on XenDesktop storage best practices. I encourage you to take a look! We've been heavily testing XenDesktop on clustered ONTAP, and she has some great insights.
All for now!