October 26th, 2017 by Kevin O'Brien
In the Lab: Migrating Workloads to VMware vSAN
For as much work as we've done around VMware vSAN in terms of site content and reviews, it may come as a surprise to some that we hadn't been using any vSAN in production. With the refresh of our lab servers complete (12x Dell EMC PowerEdge R740xd), we decided to solve this problem by repurposing a handful of PowerEdge R730 servers and extra SSDs we had on hand. The result is a modest vSAN 6.6 configuration that we will use to host VMs needed for testing, in addition to serving as a platform for trying out and reporting on new vSAN features. This deployment, though, raises an immediate question that vSAN buyers are often curious about: how do I migrate existing workloads to vSAN?
As with most enterprises, our primary lab storage is a mix of iSCSI shares and Fibre Channel storage, with most of our data residing on Dot Hill arrays and a Fusion-io ION. There are a number of ways to approach this migration, ranging from moving VMs between datastores with a basic Storage vMotion (when the storage is attached to a host that can see both datastores) to a two-step migration where both the host and the datastore change.
If you have a storage array with iSCSI LUNs, this process is pretty easy. Add the iSCSI storage device to one of your vSAN hosts if you haven't already, add the iSCSI target from your storage array, and you quickly have access to that VM inside the vSAN cluster without even having to migrate the datastore yet. If your storage array communicates over FC, you can either leverage an FC HBA if one already exists inside your server, or add one to the host. If the FC storage won't see much use in the future, the cost and downtime associated with installing the HBA might not be worth it; in that case, we move on to the two-step process.
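The iSCSI attach steps above boil down to a short sequence of `esxcli` commands on the host. As a sketch, the snippet below just builds those command strings; the adapter name (`vmhba64`) and target address are placeholders for your environment, and exact flag spellings can vary by ESXi version:

```python
# Illustrative sketch: the esxcli commands behind the iSCSI attach steps.
# Adapter name and target address are placeholders, not values from our lab.

def iscsi_attach_commands(adapter: str, target: str) -> list:
    """Return the esxcli steps to expose an iSCSI target to an ESXi host."""
    return [
        # Enable the software iSCSI initiator if it isn't already
        "esxcli iscsi software set --enabled=true",
        # Point the adapter at the array's send-target (discovery) address
        "esxcli iscsi adapter discovery sendtarget add "
        "--adapter={0} --address={1}".format(adapter, target),
        # Rescan so the new LUNs (and the datastore on them) show up
        "esxcli storage core adapter rescan --adapter={0}".format(adapter),
    ]

for cmd in iscsi_attach_commands("vmhba64", "10.0.0.50:3260"):
    print(cmd)
```

Once the datastore is visible on a vSAN host, the VM can be registered and run in place, with the Storage vMotion to the vSAN datastore deferred until convenient.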
When moving VMs between ESXi hosts where the compute and storage change in the same operation, you can move the VM anywhere in your environment as long as the devices can communicate with one another inside your vCenter. This option has the broadest compatibility, but it may not be the fastest transfer path if a native one already exists. For individual VMs or small groups this may not be a problem, but with very large VMs or large batches a faster transfer path may be warranted.
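The decision between the paths discussed above can be summarized as a small rule of thumb. This is a toy model of our reasoning, not a VMware API, and the input names are invented for illustration:

```python
def pick_migration_path(shared_datastore_visible: bool,
                        can_add_fc_hba: bool,
                        fc_reuse_planned: bool) -> str:
    """Toy decision aid for getting an existing VM onto vSAN.

    - If a vSAN host can already see the source datastore (e.g. via iSCSI),
      a single Storage vMotion is the simplest path.
    - If the source is FC-only and the FC storage has a future in the
      environment, installing an HBA may justify its cost and downtime.
    - Otherwise, fall back to the combined host + datastore (two-step)
      vMotion, which is the most compatible but not always the fastest.
    """
    if shared_datastore_visible:
        return "storage-vmotion"
    if can_add_fc_hba and fc_reuse_planned:
        return "add-hba-then-storage-vmotion"
    return "compute-and-storage-vmotion"

print(pick_migration_path(False, True, False))  # FC array being retired
```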
In our case, the VM transfer took only a few minutes, with the 10GbE network link between the two hosts pushing speeds up to around 400MB/s. Overall this is one of the easier steps associated with getting your vSAN platform up and running, and there are a few ways to approach it depending on your needs and hardware capabilities.
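For rough planning, the ~400MB/s we observed translates into easy back-of-the-envelope timing. A quick sketch (the 100GB VM size below is hypothetical, not one of our lab VMs):

```python
def migration_minutes(vm_size_gb: float, throughput_mb_s: float = 400.0) -> float:
    """Estimate wall-clock minutes to move a VM at a sustained transfer rate."""
    seconds = vm_size_gb * 1024 / throughput_mb_s  # GB -> MB, then divide by MB/s
    return seconds / 60

# A hypothetical 100 GB VM at ~400 MB/s:
print(round(migration_minutes(100), 1))  # prints 4.3 (about four minutes)
```

The same arithmetic makes it clear why a faster native path matters for large batches: a 10TB group of VMs at the same rate would take over seven hours.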
In this piece we've talked quite a bit about how easy it is to migrate existing VMware VMDKs and iSCSI shares to vSAN. In any organization that's already virtualized, this process is pretty straightforward. VMware also has a tool for bare-metal migrations; we have used VMware vCenter Converter in the past to get legacy workloads into a virtualized state. No matter how you get there, vSAN offers quite a bit in terms of operational efficiency and TCO savings over traditional IT deployments. We're eager to realize those benefits in our lab and look forward to publishing more content around real-world experiences.