Category Archives: Trend of Technology

A Glance Through the Cloud War and Modern Datacenter Debates

Cloud services are the most intense battleground in today’s IT world. Cloud providers compete fiercely on scale, performance, manageability, security and, of course, the cost and price of their services. The prevailing concepts of cloud computing and cloud services have forced much re-thinking of modern datacenter design.

Google seems to have its own unique edge on both infrastructure and performance: its business has run on the “cloud” from the very beginning, serving enormous volumes of queries on large-scale server farms. Many of its public cloud offerings are direct adaptations of internal technologies: for example, software-defined storage (persistent disks) and compute clusters, BigQuery for data performance, and the open-source-based App Engine on the Managed VM architecture for developer communities (see picture below). Google proudly claims that its cloud platform scales to 1 million QPS (queries per second), cold-starts 0–100 VMs in 38 seconds, and delivers consistent performance in multi-tenant datacenters. Google also runs greener datacenters, with a PUE (Power Usage Effectiveness) of 1.12 versus an industry average of 1.58, meaning roughly 89% of total energy is dedicated to core compute. (Microsoft’s PUE is reportedly around 1.13–1.2.)
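The PUE arithmetic behind those percentages is simple to verify. A quick back-of-the-envelope check (using the figures quoted in the talk, not independently measured data):

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
# The reciprocal is the fraction of facility energy that actually reaches compute.

def it_energy_fraction(pue: float) -> float:
    """Fraction of total facility energy delivered to IT equipment."""
    return 1.0 / pue

for label, pue in [("Google", 1.12), ("Industry average", 1.58)]:
    print(f"{label}: PUE {pue} -> {it_energy_fraction(pue):.0%} to core compute")
# Google: PUE 1.12 -> 89% to core compute
# Industry average: PUE 1.58 -> 63% to core compute
```

A PUE of 1.0 would mean every watt drawn by the facility goes to the servers themselves, with nothing lost to cooling, power distribution, or lighting.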

A recent free seminar on cloud computing by multiple companies, including Google presenting its next-generation Cloud Platform, piqued TriStrategist’s interest in taking a surface glance at the underlying themes of the cloud competition and the modern datacenter debates. The basic questions seem to fall into these categories:

1. Enterprise Compute vs. End-user Compute (Consumer Compute), or the combination
Far from reaching maturity at present, the different needs of cloud users may eventually drive completely different designs and features in cloud service offerings.

2. Software-driven datacenter or not
Many current datacenters use both hardware and software to drive the cloud. Google is a big proponent of the Software-Defined Data Center (SDDC) concept: it uses software to build advanced global networks for the cloud, and software code to load-balance and schedule compute and storage requests for consistently high throughput. Live runtime migrations to different physical media with no downtime, and real-time collaboration for developers, are all done through its software layers.
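Google’s internal schedulers are not public, but the core idea of doing load balancing in software rather than in dedicated hardware can be sketched in a few lines. Below is a purely illustrative, hypothetical least-loaded-server picker; the server names and policy are assumptions, not anything Google has published:

```python
import heapq

class LeastLoadedBalancer:
    """Toy software load balancer: each incoming request is routed to the
    server currently handling the fewest requests (illustrative only)."""

    def __init__(self, servers):
        # Min-heap of (active_request_count, server_name) pairs.
        self.heap = [(0, s) for s in servers]
        heapq.heapify(self.heap)

    def route(self) -> str:
        """Pick the least-loaded server and account for the new request."""
        load, server = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + 1, server))
        return server

lb = LeastLoadedBalancer(["vm-a", "vm-b", "vm-c"])
print([lb.route() for _ in range(6)])  # requests spread evenly across the VMs
```

The point of the sketch is that the routing policy lives entirely in code, so it can be changed, migrated, or scaled without touching physical network appliances, which is the essence of the software-defined approach described above.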

3. High-end hardware vs. commodity hardware
Google adopted the commodity-hardware approach to achieve massive scale at a lower cost. It also allows easier abstraction in its software architecture and avoids some of the high costs of hardware replacement and upgrade cycles.

4. Convergence of Compute + Storage + Network vs. distributed virtualization
The approach differs significantly in virtualization design and manageability. Companies are exploring diverse technologies in these areas.

5. Homogeneous building blocks vs. specialty & mixed technology stacks
Some companies have started delivering “containers” of pre-configured Compute + Storage + Network as building blocks, with easy management tools for faster datacenter deployment. Many existing datacenters, however, are far from being able to benefit from such an approach.

6. Modular distributed architecture vs. vertical stacks managed by specialty tools
In general, the more modular the design of the underlying hardware layers, the easier it is for software abstraction to achieve consistent performance and low-cost manageability. It may still depend on the needs of the cloud users, though.

There are apparently no easy answers to these questions. Common debating factors, such as time-to-market vs. scalability, flexibility vs. automation, and Big Data vs. real-time, can be both conflicting and co-existent. Performance, energy efficiency, security, and total cost will be the driving forces in evaluating solutions among all public/hybrid cloud providers and datacenter builders. We may also see the needs for cloud services diverge drastically in the future, resulting in even more interesting playgrounds.

Google Cloud Managed VM Architecture

The Age of Human-Interacting Software as the Driver of Hardware Has Come


For decades, the capabilities of silicon chips were the drivers of the software business: how we designed software, and what functions it could perform, were determined by those chips. The capital-intensive fabs of the silicon industry meant that only a limited few large players could dominate the market, so the products built on these chips behaved in more or less similar fashions, as PCs did for decades. The limitations of the chips were directly reflected in the limitations of the software. Everything behaved like computer or machine logic.

Tidal waves of consumer electronics are coming. Mobile devices have proliferated into every corner of society. Smaller, nimbler, faster chips; multi-functional units; versatile designs; lighter weight; smaller footprints; multiple global vendors; ubiquitous customization: all of these come from the boom in consumer needs, which seriously challenges the traditional ways of making hardware and software. Now things have completely changed: hardware chips can almost be produced just in time, based on the needs of device manufacturers and software companies. New technologies like 3D printing, and new manufacturing concepts such as leased manufacturing based on customized specs, allow design innovations to blossom like never before. Enhanced human-interaction needs also bring beauty, intuitiveness, versatility, flexibility, connectivity, and portability to the design of every modern product.

This is a new revolution: it is all about the human again in using technology.