Abstraction and encapsulation are among the most important concepts in software programming. The same thinking applies to data center design and management for cloud computing, especially in the so-called software-defined data center (SDDC).
Currently, at least two distinct approaches to data center design compete in the market to support the broad hybrid-cloud computing suited to most enterprise environments. One is the hyper-converged, hardware-based cloud appliance approach; the other is the SDDC approach. Offerings in the first camp include Microsoft's Dell-based, Azure-on-board Cloud Platform System (CPS), VCE Vblock, HP CS700, IBM PureFlex, etc. This approach packages integrated compute, storage and virtual networking together in hardware containers with software management tools, and the containers can be preloaded with certain platforms or applications. The system can be switched on and linked to an enterprise's existing network to build a hybrid cloud on premises almost instantly. Performance and future scalability, however, are limited by the capacity and number of these containers. Companies need to invest in the new appliances, but may save on many of the design, deployment and operation tasks.
On the other side, Google is the pioneer of the SDDC camp. From the beginning of its online search business, Google bundled together cheap commodity machines and storage units and programmed software to control everything. From shared compute and storage to virtual networking, all of Google's global data centers can be managed remotely. Failing hardware is detected and taken out of rotation instantly, without any interruption to the application tier or its users. Amazon's AWS also adopts SDDC, with self-designed homogeneous servers.
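As a minimal sketch of that failure-handling idea (in Python, with hypothetical `Host` and `reconcile` names; real control planes such as Google's are vastly more elaborate), a software control loop can drain a failed machine and reschedule its workloads so that the application tier never notices:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    healthy: bool = True
    workloads: list = field(default_factory=list)

def reconcile(hosts):
    """One pass of a control loop: drain failed hosts and
    reschedule their workloads onto the least-loaded healthy ones."""
    healthy = [h for h in hosts if h.healthy]
    if not healthy:
        raise RuntimeError("no healthy capacity left")
    for host in hosts:
        if not host.healthy and host.workloads:
            for wl in host.workloads:
                # The workload keeps its virtual identity; only its
                # physical placement changes, so clients see no change.
                target = min(healthy, key=lambda h: len(h.workloads))
                target.workloads.append(wl)
            host.workloads.clear()  # failed machine is now out of rotation

# Example: one of three commodity machines dies mid-flight.
hosts = [Host("rack1-m1"), Host("rack1-m2"), Host("rack1-m3")]
hosts[0].workloads = ["search-frontend", "index-shard-7"]
hosts[0].healthy = False
reconcile(hosts)
print({h.name: h.workloads for h in hosts})
# {'rack1-m1': [], 'rack1-m2': ['search-frontend'], 'rack1-m3': ['index-shard-7']}
```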
The SDDC is gaining more steam in the cloud industry. The concept has been clarified as distributed virtualization of all elements of the infrastructure: compute, storage, networking and security. It targets the total abstraction of the application layer from the underlying hardware layers, which in turn allows service SLAs and automation of the management tasks for each element of cloud computing. SDDC promises virtually unlimited scalability and performance, as well as the all-important self-service for customers. Cheaper-hardware scenarios usually attract the most attention, but there are often hidden costs in software resources and testing, especially with many of the open-source solutions.
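To make that abstraction concrete, here is a minimal Python sketch under assumed names (`ServiceSpec`, `PhysicalHost`, `place` are all hypothetical, not any vendor's API): the application layer declares what it needs, and an automated placement function decides where the virtual resources actually land on hardware.

```python
from dataclasses import dataclass

@dataclass
class ServiceSpec:
    """What the application layer asks for; it never names hardware."""
    name: str
    vcpus: int
    storage_gb: int
    replicas: int

@dataclass
class PhysicalHost:
    name: str
    free_vcpus: int
    free_storage_gb: int

def place(spec, fleet):
    """Map each replica onto whatever hardware has room; callers
    only ever see replica names, never physical hosts."""
    placement = {}
    for i in range(spec.replicas):
        # First-fit placement; raises StopIteration if capacity runs out.
        host = next(h for h in fleet
                    if h.free_vcpus >= spec.vcpus
                    and h.free_storage_gb >= spec.storage_gb)
        host.free_vcpus -= spec.vcpus
        host.free_storage_gb -= spec.storage_gb
        placement[f"{spec.name}-{i}"] = host.name
    return placement

fleet = [PhysicalHost("blade-3", 16, 500),
         PhysicalHost("rack-server-1", 32, 2000)]
print(place(ServiceSpec("billing-api", vcpus=4, storage_gb=100, replicas=3), fleet))
```

Because the application only ever holds the abstract spec, the placement logic can be swapped, and management tasks such as scaling or rebalancing can be automated per element, without touching the application layer.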
Most of the SDDC solutions today are based on homogeneous commodity hardware, but the real needs and challenges of today's enterprises call for working with existing heterogeneous hardware and network environments. Several companies are trying to provide answers through distributed virtualization by abstraction and encapsulation. For example, VMware NSX extends the software-defined networking (SDN) concept with vSwitches built into VMware hypervisors to create virtual networks and encapsulate existing network topology and security configurations, but it has yet to fully support hybrid cloud scenarios.
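The encapsulation mechanism itself can be shown compactly. The Python sketch below builds a VXLAN-style overlay header following the 8-byte header layout of RFC 7348 (VXLAN is one encapsulation NSX has used; the function names here are illustrative): an inner Ethernet frame is wrapped with a 24-bit virtual network identifier (VNI), so workloads can share an isolated virtual L2 segment over any underlying physical topology.

```python
import struct

VXLAN_FLAGS = 0x08  # "I" bit set: a valid VNI is present (RFC 7348)

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # Header: flags (1 byte), 3 reserved bytes, VNI (3 bytes), 1 reserved byte
    header = struct.pack("!B3s3sB", VXLAN_FLAGS, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

def vxlan_decapsulate(packet: bytes) -> tuple:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    flags, _, vni_bytes, _ = struct.unpack("!B3s3sB", packet[:8])
    assert flags & 0x08, "VNI flag not set"
    return int.from_bytes(vni_bytes, "big"), packet[8:]

frame = b"\xff" * 6 + b"\xaa" * 6 + b"\x08\x00" + b"payload"  # toy L2 frame
wrapped = vxlan_encapsulate(frame, vni=5001)
print(vxlan_decapsulate(wrapped))  # -> (5001, original frame)
```

In a real deployment the encapsulated payload then rides inside UDP/IP between hypervisor vSwitches, so the physical network only ever sees ordinary IP traffic while the virtual topology lives entirely in software.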
These are simply different stages in the development of modern computing. Continued breakthroughs in the research and development of super-fast computer chips, along with the realization of nano and quantum technologies, may someday challenge all traditional hardware. The future points toward truly distributed computing, where compute power will not be confined to a few data centers or to any single enterprise environment. Well-designed software, and smart algorithms in particular, will remain the key to capturing all of those future possibilities.