In cloud computing, virtual machine (VM) migration plays a key role in maximizing server efficiency, saving energy, and facilitating system maintenance. However, while virtualization systems such as vSphere and Hyper-V reduce the overall migration time by moving more than one VM at the same time, the resulting rapid growth in the volume of data temporarily stored in the buffer can lead to significant packet losses and a catastrophic drop in system throughput. The present study proposes a method for mitigating this so-called TCP Incast effect by sorting the VMs in order of descending load and then interleaving the migrations of lightly loaded VMs with those of heavily loaded VMs. Doing so reduces the overlapping time of the VM migrations and hence decreases the risk of overflow-induced packet losses. A server consolidation ratio is derived to estimate the energy saving achieved via server shutdown given pre-defined constraints on the CPU and memory resources of the destination servers. The simulation results show that the proposed staggered VM migration scheme achieves server consolidation performance similar to that of existing sequential VM migration methods, while requiring both less buffer storage and fewer vMotions.
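The ordering step described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name and the use of a simple load score are assumptions, and the real scheme would also account for migration timing and the destination servers' CPU and memory constraints. The idea shown is only the interleaving itself: after sorting by descending load, heavy and light VMs are paired off so that concurrent migrations mix load levels rather than stacking heavily loaded VMs together.

```python
def staggered_order(vm_loads):
    """Return a migration order interleaving heavy and light VMs.

    vm_loads: dict mapping VM name -> load score (hypothetical metric,
    e.g. memory dirtying rate). Higher means more heavily loaded.
    """
    # Sort VM names by load, heaviest first.
    ranked = sorted(vm_loads, key=vm_loads.get, reverse=True)
    order = []
    lo, hi = 0, len(ranked) - 1
    # Alternate between the heaviest and lightest remaining VMs.
    while lo <= hi:
        order.append(ranked[lo])      # heavily loaded VM
        if lo != hi:
            order.append(ranked[hi])  # lightly loaded VM
        lo += 1
        hi -= 1
    return order


if __name__ == "__main__":
    loads = {"vm1": 0.9, "vm2": 0.2, "vm3": 0.7, "vm4": 0.4}
    print(staggered_order(loads))  # heaviest, lightest, next-heaviest, ...
```

Under this sketch, a heavy VM's lengthy migration overlaps with a light VM's short one instead of with another heavy migration, which is the mechanism the abstract credits for reducing buffer pressure.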