Last Friday I was brainstorming with Gabrie van Zanten about the optimal placement of VMDKs across our LUNs. We tried to come up with an algorithm that could give us insight into what our optimal storage layout would be.
First read the complete post here.
I kept thinking about this challenge all weekend and came up with some more requirements:
- Don’t focus on VMs, but on the individual VMDKs. This is because we have VMs that won’t fit completely on a single 500 GB LUN, and a VM’s VMDKs might have different IO behaviour.
- Take the VM memory size into account: since we don’t use memory reservations, every VM will have a swap file of that size.
- Some VMs, like SQL Servers, need their VMDKs spread over different LUNs for better performance, so we will need some kind of affinity rules.
- Check whether the average IO load exceeds the maximum recommended by VMware or the storage box vendor; if so, we may need more LUNs.
- Take the current storage layout as the starting point, because some VMDKs might be in the right place already; then we don’t have to reshuffle everything.
- Maximum LUN fill rate will be 90%.
- Allow a deviation of ±10% around the average IO load to work with (see the sketch after this list).
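To make this concrete, here is a minimal sketch of how such a placement pass could look. Everything in it is an assumption of mine: the Vmdk/Lun classes, the greedy first-fit-decreasing strategy, and modelling the swap file as an extra zero-IO pseudo-disk are illustrations, not an existing tool or VMware API.

```python
# A minimal sketch of the placement idea. All names (Vmdk, Lun, plan_layout)
# and the greedy strategy are illustrative assumptions, not an existing tool.
from dataclasses import dataclass, field
from typing import Optional

LUN_SIZE_GB = 500
MAX_FILL = 0.90          # requirement: never fill a LUN beyond 90%
IO_DEVIATION = 0.10      # requirement: stay within +/-10% of the average IO

@dataclass
class Vmdk:
    name: str
    size_gb: float
    avg_io: float                       # measured average IOPS for this VMDK
    vm: str                             # owning VM, used for affinity rules
    current_lun: Optional[str] = None   # where the VMDK lives today

def swap_as_vmdk(vm_name: str, memory_gb: float) -> Vmdk:
    # No memory reservations, so every VM gets a swap file the size of its
    # RAM; model it as an extra zero-IO disk so capacity planning sees it.
    return Vmdk(f"{vm_name}-swap", memory_gb, 0.0, vm_name)

@dataclass
class Lun:
    name: str
    used_gb: float = 0.0
    io: float = 0.0
    vms: set = field(default_factory=set)

    def fits(self, disk: Vmdk, io_ceiling: float, anti_affinity: bool) -> bool:
        if self.used_gb + disk.size_gb > LUN_SIZE_GB * MAX_FILL:
            return False                # would break the 90% fill rate rule
        if self.io + disk.avg_io > io_ceiling:
            return False                # would push the LUN past avg IO +10%
        if anti_affinity and disk.vm in self.vms:
            return False                # SQL-style VMs: spread disks out
        return True

def plan_layout(vmdks, luns, anti_affinity_vms=frozenset()):
    total_io = sum(d.avg_io for d in vmdks)
    io_ceiling = total_io / len(luns) * (1 + IO_DEVIATION)
    by_name = {l.name: l for l in luns}
    placement, to_move = {}, []

    # Pass 1: keep every VMDK that is already in an acceptable place,
    # so we don't reshuffle everything.
    for d in vmdks:
        lun = by_name.get(d.current_lun)
        aa = d.vm in anti_affinity_vms
        if lun is not None and lun.fits(d, io_ceiling, aa):
            lun.used_gb += d.size_gb; lun.io += d.avg_io; lun.vms.add(d.vm)
            placement[d.name] = lun.name
        else:
            to_move.append(d)

    # Pass 2: place the rest, busiest disks first (first-fit decreasing),
    # always onto the least-loaded LUN that still fits.
    for d in sorted(to_move, key=lambda d: d.avg_io, reverse=True):
        aa = d.vm in anti_affinity_vms
        target = min((l for l in luns if l.fits(d, io_ceiling, aa)),
                     key=lambda l: l.io, default=None)
        if target is None:
            raise RuntimeError(f"{d.name} does not fit anywhere: add LUNs")
        target.used_gb += d.size_gb; target.io += d.avg_io; target.vms.add(d.vm)
        placement[d.name] = target.name
    return placement
```

This is essentially bin packing, which is NP-hard, so a greedy pass like this is only a heuristic; but it already shows which VMDKs would have to move and when you simply need more LUNs.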
Then a completely different thought crossed my mind.
Why do we still have VMFS?
If we can assign LUNs directly to the VM, we can let the storage box balance the IO load.
First you might think of using RDMs, but you will quickly find yourself running out of SCSI IDs, because of the maximum of 256 LUNs per host.
You could add extra HBAs, but do you have sufficient PCI slots?
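A quick back-of-the-envelope calculation shows how fast those SCSI IDs run out. The 256-LUN ceiling is the real ESX limit, but the number of disks per VM and of reserved LUNs below are assumptions of mine:

```python
# Back-of-the-envelope check of the pure-RDM idea. The 256-LUN ceiling per
# host is the real ESX limit; the other two numbers are assumptions.
MAX_LUN_IDS = 256
reserved_luns = 6        # assumption: LUNs kept for VMFS, ISOs, etc.
rdms_per_vm = 2          # assumption: average number of disks per VM

usable = MAX_LUN_IDS - reserved_luns
print(f"Max VMs with RDMs only: {usable // rdms_per_vm}")   # -> 125
# And since every host in the cluster must see the same LUNs for VMotion,
# adding hosts does not raise this ceiling.
```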
In the next version of ESX, the new Cisco Nexus 1000V gives back the network management tasks we took from the network guys.
So where is the virtual Fibre Channel switch to give back the storage management tasks we took from the storage guys?