Category Archives: Trend of Technology

Today’s Nanotechnology and New Chip Design Concepts*

It’s all about scale. Once we shift our reference frame to the molecular or atomic level, a whole new world of possibilities emerges in front of us.

First raised by the famed physicist Richard Feynman in 1959, the idea of making things at the level of fundamental particles has triggered a revolution in many fields, including physics, chemistry, biology, materials science, medical science, life science, electrical engineering, bio-engineering, chemical engineering, manufacturing, and military and space engineering. A nanometer (1 nm) is one billionth of a meter, and the nanoscale typically spans 0.1-100 nm. Most atoms are 0.1-0.2 nm wide; a DNA strand is about 2 nm; a blood cell is several thousand nm; and a strand of human hair is about 80,000 nm thick. At the nanoscale, quantum mechanics dominates, and matter can display many unique properties that are not available at ordinary scales.

Experimental nanotechnology did not come into tangible existence until 1981, when an IBM research lab in Switzerland built the first scanning tunneling microscope (STM), which made it possible to image individual atoms. In the following decade, moving single atoms became possible as well. Other techniques discovered around the same time also made manipulating atoms a true engineering reality, although it is no small feat even today. In 1991, the first carbon nanotube was created: a rolled-up single sheet of graphene (a one-atom-thick layer of carbon), which is about 6 times lighter and 100 times stronger than steel. Its mechanical properties and electrical conductivity make it a favorite candidate as a nanoscale building block in many applications, especially in the high-tech world.

Today, one application area of nanotechnology that has attracted intense focus is extremely small-scale electronic circuit design. With increasing difficulty in meeting Moore’s Law of doubling the density of transistors on a single IC at the shrinking sizes desired in modern electronics, the limitations of silicon chips, including heat and energy constraints, have become more obvious and costly to work around. Nanotechnology has rushed to this promising frontier as a source of replacements for future chips, and many creative ideas are being tested at present.

Graphene chips have already been created in various forms, but they are too easily damaged in the assembly process to be a practical production choice for most computers. IBM released an advanced version of the graphene chip in early 2014, using a new manufacturing technique to address the fragility problem. In the meantime, however, other nanomaterials that could compete with graphene have already come onto the stage, for example a new nanomaterial and assembly method demonstrated by Berkeley Lab this year. At the atomic level, once assembled properly, many particles or mixed structures could potentially display the electrical and optical properties needed for building nanochips. This area of chip manufacturing will likely see intense competition in the future.

Various techniques have long been investigated for overcoming the limitations of existing chip design by modifying the silicon structure, but many engineering challenges remain. Earlier this year, UC Davis researchers established a bottom-up approach to add nanowires on top of the silicon, which could create circuitry of smaller dimensions, withstand higher temperatures and allow light emission for photonic applications that traditional silicon chips are incapable of. They found a way to grow other nanomaterials on top of silicon crystals to form nanopillars that serve as stations for nanowires to connect and function like transistors, thus forming complex circuits. The most appealing aspect of this method is that it does not require significant changes to today’s manufacturing process for silicon-based ICs.

Another completely new chip design concept at the nanoscale comes from utilizing the quantum nature of particles, rather than silicon crystals, to define the binary “0” and “1”. For example, by switching the direction of a single photon hitting a single atom residing in one of two atomic states, the resulting direction of the photon could well represent the “0” and “1” logic. Quantum computing is therefore born. If manipulated successfully, quantum states may allow the simultaneous existence of more than just “0s” and “1s”, which could promise far more powerful future computers.
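To make the qubit idea a bit more concrete, here is a minimal sketch, written as a plain Python simulation rather than code for any real quantum hardware, of measuring a single qubit prepared in an equal superposition of “0” and “1”; the state representation and the sampling are illustrative assumptions only.

```python
import random

# A qubit's state can be written as two amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
# |0> is (1, 0), |1> is (0, 1); an equal superposition has alpha = beta = 1/sqrt(2).
alpha, beta = 2 ** -0.5, 2 ** -0.5

def measure(alpha, beta):
    """Simulate measurement: return 0 with probability |alpha|^2, otherwise 1."""
    return 0 if random.random() < abs(alpha) ** 2 else 1

# Until measured, the qubit is not a definite "0" or "1"; repeated measurements of
# identically prepared qubits split roughly 50/50 between the two outcomes.
samples = [measure(alpha, beta) for _ in range(10_000)]
print("fraction measured as 1:", sum(samples) / len(samples))  # ~0.5
```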

With the drastically different and innovative chip design landscape enabled by nanotechnology today, what are the needs and implications for future software design? One near-future area where software will definitely play a significant part alongside nanotechnology is the logical algorithms needed to control and manage the “self-replication” process of “active” nanocells, especially for AI development. For application software development, TriStrategist thinks that once nanoscale manufacturing becomes the norm in the computer and electronics industry, its flexibility and versatility can only mean that chips will be designed more adaptably around myriad human needs and the software application programs that run on top of them.

*[Note:] This blog was in fact drafted and published on August 27, 2014, to make up for the previous week’s vacation absence.

Containers and Cloud

The container concept for cloud infrastructure deployment is not new. These containers serve as pre-fab blocks that bundle the equipment, configuration and management tools needed for fast plug-and-play data center setup. The uniformity of the hardware is both the strength and the weakness of the concept, trading versatility and flexibility in infrastructure deployment for agility and speed. Google first designed and implemented its container data center in 2005. Microsoft built its first container-based, 700,000-sqft Chicago Data Center in 2009.

Now the “container” concept has been smartly extended from cloud infrastructure to data and cloud software. A cloud software container can work as a portable module that lets cloud applications move between hybrid cloud PaaS, or be offered as a flexible component of PaaS. Many such designs and implementations are still in the making. One example that has just started gaining popularity is Docker, a Linux-based open-source OS-level virtualization implementation. It essentially functions as a middle-tier abstraction layer or wrapper that shields application developers from cloud platform complexities. Within 15 months of its inception, total downloads of its trial version have exceeded one million, and the community is growing fast. There are a few major global supporters for such an implementation, and the San Francisco startup behind it recently raised a new round of $40 million in venture funding.

Compared with the prevailing concept of Virtual Machines for deploying cloud-based applications, the Docker “container” aims at making applications easily portable among hybrid cloud platforms, with agility, zero startup time and one-click management. Of course there are trade-offs for the benefits of portability and platform-neutrality. Since it relies on kernel sharing for platform reach, some security controls have to be sacrificed. Docker also only supports single-host applications, because the “container package” gets a lot more complex for multi-server applications across platforms, and that is a hard problem to solve. Some supplemental solution proposals are on the market to overcome the limitation. Still, the “software container” concept and its solutions are very new and have yet to be tested in any IT production situation.
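As a small illustration of the portability argument, here is a minimal sketch that drives the standard Docker CLI from Python: the image is built once and then run unchanged, and any host with a Docker daemon should start it the same way. The image tag and the assumption of a Dockerfile in the current directory are purely illustrative.

```python
import subprocess

# Build an image from the Dockerfile in the current directory, then run it.
# "myapp:demo" is a hypothetical tag; the point is that the run step looks
# identical on a laptop, a private datacenter host, or a public cloud VM.
subprocess.run(["docker", "build", "-t", "myapp:demo", "."], check=True)
subprocess.run(["docker", "run", "--rm", "myapp:demo"], check=True)
```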

The current Docker v1.0 does not support Windows applications, but it is still a good start toward fulfilling a key market need. It is in fact also one workable approach to today’s distributed computing.

Would You Like a Robot Maid?

Aided by the Hubble telescope, cosmologists have long verified the accelerated expansion of the universe since the Big Bang. A similar conclusion may be drawn for human society’s advancement and the complexity associated with it, although there is no such instrument or directed experiment, only the hypotheses of a few expansive and insightful minds. If society’s growth in complexity follows similar natural patterns, our imaginable future will always come sooner than we think. Technology developments may well demonstrate this hypothesis over time. Among them, human-interacting robots, one type of the machine beings of the future, may come into our daily life soon, not just in sci-fi movies.

Most robotic helpers today are still very much industrial-focused, machine-looking, ugly, clunky tools, but that is part of the early iterations of robotic development, limited by the related technologies and the needs. With rapid advancements in industrial design, artificial intelligence, materials science, etc., better-sized, nicer-looking domestic robots that can help with basic chores of cleaning and cooking, and also interact with humans in some autonomous ways, may come into existence earlier than we have anticipated.

Wall-E in 2008 Disney Movie

We can’t expect them to be fully human-looking or super smart in the coming decade, but they should be prettier than the rustic Wall-E, or at least as cute as Eve (both robotic characters in the 2008 Disney movie Wall-E).

Eve in the 2008 movie Wall-E

No doubt, they will become more and more intelligent and capable with each release.

More than ever, every piece of robotic design needs to consider both hardware and software. The current major challenges in robotics are the design of actuators and of robotic software. Robotic software functions as the nerves and circulatory system of these robot beings, and a huge gap exists in this area today. The open-source Robot Operating System (ROS) has been a very interesting concept in recent years, but a lot more industry support and focus are needed to truly propel the robotics industry to a new level, so that we, as consumers, can expect to order our favorite robot maid for our household in a few years.
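As a tiny taste of the kind of plumbing ROS provides, below is a minimal sketch of a ROS node using the rospy client library that publishes status messages on a topic; the node name, topic name and message text are illustrative assumptions, not part of any real robot.

```python
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node("chore_bot")                       # register this node with the ROS master
    pub = rospy.Publisher("status", String, queue_size=10)
    rate = rospy.Rate(1)                               # publish once per second
    while not rospy.is_shutdown():
        pub.publish(String(data="vacuuming the living room"))
        rate.sleep()

if __name__ == "__main__":
    main()
```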

The Current State of Distributed Computing

Distributed Computing has always been a desire in both industry and academia. With today’s cloud power, one would assume that distributed computing should have become a lot easier. Yet, aside from the fact that a distributed program can run faster if it runs correctly, nothing has become easier about coding a program so that it truly works in a distributed way as intended. Simultaneously leveraging compute power and different datasets across different locations, hardware, platforms, languages, data sources and formats is definitely a non-trivial task today, almost as much as it was yesterday.

The “why” of distributed computing is easy to understand, but even with the help of existing popular cloud platforms and industry money, academic researchers and industry developers still have zero consensus on “how” it should be done. Huge room for creativity remains in this area, and numerous tools have mushroomed, especially within the open source community, but each highly customized solution offers very little consistency or reusability. Sustainability and manageability are nightmares for everyone too.

For example, for decades, utilizing spare compute power on idle machines for large computational tasks has been a very attractive idea in academia. The University of Wisconsin-Madison has been developing an open-source distributed computing solution called HTCondor. Yet allocating compute tasks to heterogeneous hardware, retrieving data and files in various paths and formats, handling multiple platforms, and managing shared compute state are still huge challenges that involve custom coding. On the other hand, some startups have the right idea in focusing on new abstraction architectures that separate data from the mechanics of handling it, and on higher-level coding definitions, but all are in their infancy right now.

It appears that this is definitely a great time and space for some industry deep pockets to step up and come up with more user-friendly and productive software solutions, as TriStrategist has called out before [see our May 2014 blog on “the Internet of Things”]. It needs a straightforward hybrid-cloud-based software architecture and an easy-to-use high-level programming language, with sound abstractions for the tiers of data passing, protocols, pipelines, control mechanisms, etc., and a set of platform-neutral, configurable (preferably UI-based) data plumbing tools that are more approachable than the purely developer-driven open-source packages on the market.
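As a toy sketch of the kind of separation such an architecture would need, the snippet below keeps the task definition (pure logic) apart from the mechanics of running it, here backed simply by Python’s standard process pool; in a real hybrid-cloud system the executor would be replaced by far more elaborate scheduling, data movement and fault handling, and the word-count task and data are illustrative assumptions.

```python
from concurrent.futures import ProcessPoolExecutor

def word_count(chunk):
    """Task definition: pure logic, unaware of where or how it runs."""
    return sum(len(line.split()) for line in chunk)

def run_distributed(task, partitions, workers=4):
    """Execution mechanics: schedule the same task over data partitions.
    A local process pool stands in here for remote machines or cloud nodes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(task, partitions))

if __name__ == "__main__":
    partitions = [["the quick brown fox"], ["jumps over", "the lazy dog"]]
    print(sum(run_distributed(word_count, partitions)))  # 9 words in total
```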

Eventually, truly realizing distributed computing scenarios will require machine and code intelligence.

Energy Competition for Modern Data Centers

The future, extended power of cloud computing very much lies in the energy available to the modern data centers that support it.

Traditional data centers are usually huge energy hogs and wasters. An average-efficiency data center with 4 MW of IT capacity and a PUE (Power Usage Effectiveness) of 2.0 could use about 70 GWh of electricity per year, with an annual bill of nearly $5 million. That much electricity could power a US town of about 7,000 homes (and US households already have the highest consumption in the world). If the PUE were reduced to 1.2, the company could save about 40% of the total cost and enough electricity for about 3,000 additional homes.
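A back-of-the-envelope check of those figures, as a minimal sketch; the electricity price of about $0.07/kWh and the roughly 10,000 kWh per home per year are assumptions chosen only to reproduce the ballpark numbers above.

```python
IT_LOAD_MW = 4.0          # IT capacity of the data center
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.07      # assumed average electricity rate, $/kWh
KWH_PER_HOME = 10_000     # assumed annual consumption of a US home

def annual_energy_gwh(pue):
    # total facility power = IT load * PUE; MW * hours -> MWh -> GWh
    return IT_LOAD_MW * pue * HOURS_PER_YEAR / 1000

for pue in (2.0, 1.2):
    gwh = annual_energy_gwh(pue)
    cost_millions = gwh * 1_000_000 * PRICE_PER_KWH / 1_000_000
    homes = gwh * 1_000_000 / KWH_PER_HOME
    print(f"PUE {pue}: {gwh:.0f} GWh/yr, about ${cost_millions:.1f}M, ~{homes:.0f} homes")
# PUE 2.0 -> ~70 GWh and ~$4.9M; PUE 1.2 -> ~42 GWh, saving ~28 GWh (about 40%), or ~2,800 homes.
```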

Modern cloud data centers are designed to be more energy-efficient and greener, with higher capacity utilization. An intense competition is under way on the energy front, with companies scratching their heads to find smarter ways to cut energy cost and build more energy-efficient facilities. It is not just a cost imperative for large infrastructure players, but more and more a strategic one for the future.

By 2013, Apple announced that its worldwide data centers already use 100% renewable energy, including wind, solar, hydro, geothermal and biofuels. Its largest data center, in Maiden, NC, has a 100-acre solar farm combined with a biofuel cell installation, completed at the end of 2013.

Google says each of its self-designed data centers uses only half the energy of most other data centers. Google has the smallest PUE in the industry, about 1.12 as of 2012. At Data Center Europe 2014 in May, Google disclosed that it is running Machine Learning algorithms to further control the cooling system and may shave another 0.02 off its PUE. That would leave only about 10% of the IT equipment energy going to non-compute operations, currently the highest efficiency among modern large-scale data centers.

Microsoft is also trying to improve its green energy image. MSFT just signed a 20-year, 175 MW wind farm deal last week for a project outside Chicago to continue its renewable energy pursuit. In 2012, MSFT initiated a joint project with the University of Wyoming to experiment with its first zero-carbon, fuel-cell-powered data center using the greenhouse gas methane. In April 2014, MSFT announced increased investment in and expansion of its Cheyenne data centers, bringing the total investment to $500 million. Several other new energy initiatives are also being explored globally, on various scales, by MSFT.

Globally, governments are also paying attention to data center construction for the cloud computing age and the energy competition associated with it. China, which lags behind in cloud infrastructure due to the government’s tight control of land and power usage and foreign firms’ data privacy fears, recently sponsored huge initiatives for the construction of modern cloud-focused data centers at its July 2014 data center policy meeting in Beijing. It recommended less disaster-prone areas of the country with more abundant natural energy resources for strategic large data centers, and also mandated that all future data centers meet a PUE of less than 1.5.

Increased operating efficiency and greener energy sources mean lower carbon emissions, a smaller environmental footprint and longer sustainability. In the near future, when worldwide customers select their cloud providers, they may not simply choose the one that offers better performance and capacity, but the one that is energy-smarter, for longer-term sustainability and a better social reputation. Attention to energy innovation and competition is definitely non-negligible for any infrastructure player or large enterprise of the current age.

Machine Learning and AI, Where Science and Technology Merge

Could a super-intelligent machine being exist in our future? The answer is likely yes.

Science-fiction movies have long tried to lead our imaginations and predict how the future world and its technologies would look. We all have in our minds images of the intelligent supercomputers from the movies: Deep Thought in the 2005 movie The Hitchhiker’s Guide to the Galaxy, for example, or the intelligent master control system in the 2008 movie Eagle Eye, the government’s intelligence-gathering supercomputer known as ARIIA or ARIA (see pic below).

ARIIA in the 2008 movie Eagle Eye

In many of these movies, a common theme has been that when such an intelligent “machine being” becomes overly powerful or is misguided by evil, human heroes have the obligation to destroy it before it destroys mankind. Although there is a distinct possibility that such a super-intelligent machine being could exist in our real future, increasing evidence today suggests that its picture will be totally different. It will not be a centralized physical super machine or system as in the movies; more likely it will take an invisible form living in the future clouds, in the complex webs of networked systems that could exist everywhere: on earth, in orbit, or even on remote stars. The plot of the movie series The Matrix (1999 and 2003) seems closer to this scenario. It would also be a lot harder to destroy if evil thoughts did take control. Let’s hope the age of ultra-capable RoboCops or human surrogates (as in the 2009 movie Surrogates) that draw intelligence and power from such invisible, all-around machine forces won’t become reality before we find the answer to this age-old question: could machine-learnt intelligence one day indeed surpass human intelligence?

Machine Learning (ML) is a branch of Artificial Intelligence (AI). It is the study of using machines’ computing and large-scale data processing power, analyzing past and present data through programming and algorithms, to offer predictive capabilities without the input of human intuition. The next stage will lead to more advanced AI that allows machines to simulate the cognitive powers of the human brain. In fact these desires and concepts, as shown in generations of sci-fi movies, have existed for a very long time; nothing here is new. Many commercial companies, including Google, Microsoft, Amazon, IBM and others, have been playing with these concepts in their data-mining-related products and services such as search, cross-selling and online gaming. People and countries have also been building better and faster supercomputers for decades to shrink the computing time. However, only with the recent compounding growth of compute power from clouds and clusters have these ideas, and many more enhanced possibilities in advanced AI, come closer and closer to reality, and become exciting again.
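As a minimal sketch of the “learn from past data, then predict” idea described above (the data points, model and learning rate are all illustrative assumptions, not any company’s product), here is a tiny linear model fitted by gradient descent:

```python
# Fit y ≈ w*x + b to past observations by gradient descent, then predict a new point.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]    # made-up (x, y) observations

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - lr * grad_w, b - lr * grad_b          # step against the gradient

print(f"learned model: y = {w:.2f}x + {b:.2f}")
print("prediction for x = 5:", round(w * 5 + b, 2))  # extrapolate beyond the training data
```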

Machine Learning and AI are great examples of fields in which, when science and technology merge, unlimited potential emerges. Even with increasingly scalable and seemingly unlimited compute power, machines can only learn as intelligently as the algorithms that direct them. That is the field of Data Science, the multi-disciplinary science of extracting insights and knowledge from data. Math and statistics are only part of what Data Science needs. Versatile skills in many areas are needed to truly make intelligent sense of the data in our hands today, and of the gigantic amounts yet to come, in order to predict the future or build human capabilities into machines.

Although still at a basic stage, IBM’s $1 billion investment in Watson, a cognitive ML technology on the cloud announced early this year, and the coming July release of Microsoft Azure ML are seen as the start of the large-scale commercial propagation of Machine Learning, each offered as part of its company’s own cloud platform. Once these facilitating tools and services become available to the masses, the power of coupling science and technology will become even more evident.

At least in our current age, there is no doubt that humans are definitely still in control of the machines.

The Future of Enterprise IT

Today it is generally recognized throughout the IT industry that a well-connected, interoperable, flexible multi-cloud ecosystem will be the near-future and future picture of Enterprise IT. This may be an over-simplified way to put it.

Many IT departments nowadays are busy fitting their existing IT into the cloud world, or vice versa, occupying themselves with cost evaluation, infrastructure alignment, vendor identification, skill acquisition, automation design, etc. Beyond the infrastructure, most existing mission-critical enterprise IT applications are far from optimized for cloud computing either. Therefore many commercial opportunities exist today in both the cloud and Big Data spaces to help companies’ IT departments get into the game. The complexity and effort involved can hardly be overestimated, yet the direction can be even fuzzier.

Let’s first ask: what will the future picture of the Consumer IT space be? With the coming of the Internet of Things (IoT) and many futurist movies in mind, it is not hard to imagine that plug-and-play, simplicity, connectivity and speed, anywhere and anytime, will be the future of the Consumer space. Privacy and security concerns will more or less be delegated to the enterprise service providers and network providers in the ecosystem.

But for Enterprise IT, the future picture is likely to be more complex. One thing is certain: future enterprise IT departments will need to handle a lot more than the current demands of supporting enterprise operations. The increasing business and market trend of offering intelligence services, by collecting, consuming and processing more and more market and consumer data, their own or others’, and connecting and transferring it between more and more systems, will be in the future scope for enterprises. With cloud compute, storage and network technologies far from maturity today, existing enterprise applications mostly out of date in the cloud world, and standards and regulations still in the making, the fast-changing and foggy future demands from consumers and the IoT just add more fuel to the fire. It is too early to assume that incremental changes and upgrades here and there, which have been the norm for large enterprise IT departments for decades, can sufficiently and effectively transform current IT systems and applications safely into the future when the new setting is needed.

However, if an enterprise still has the confidence to move into the next decade or beyond, has it thought about starting right now to invest directly in a new and flexible IT picture of the future, designing entirely new multi-cloud, Big-Data-capable infrastructure and application architectures from scratch instead of focusing on tweaking the existing ones? This approach can be started today with much more agility and speed than incremental changes to make the existing fit. Although many technology details are not completely ready today, the ecosystem, connectivity and plumbing concepts are already here, and many innovations have already started. All the needed technologies and affordable choices will only become more readily available in the days (not years) to come. Justifying the different approaches will involve time and cost evaluation, but that depends on how an enterprise views the market, the future, and its existing and new business challenges and opportunities. This will likely become an interesting business-school case study on “marginal cost” vs. “total cost”. It could be quite counter-intuitive for decision-makers: what is viewed as the easier or more obvious choice today, selecting the smaller “marginal cost of investment” based on the existing, could end up becoming a much more expensive “total cost of opportunity” in the near future.

If Enterprise IT departments believe that the future picture of Enterprise IT will be an innovative one, quite different from that of today, then a different mindset may be needed.

Data and Sense

Big Data is one of the most frequently discussed topics in the technology world today, among enterprises and startups alike, yet it is also one of the most confusing. It is all about data and sense. So far, across the industry, there appears to be more data than sense generated from it.

– Why do we need to worry about Big Data and more data?
– What kind of Big Data are we talking about?
– What sense are we trying to make out of processing large amounts of the targeted data in business?
– Are we dealing with a human issue or a machine issue?

These are the initial questions we should ask ourselves before talking about the problems of and solutions to Big Data. Lately, each time TriStrategist has listened to a talk about Big Data, it was about a different problem space and, of course, a different technical approach by a different company. To understand the core issues, here is our simple process for making sense of the subject ourselves:

What would these “Big Data” contain?

– Structured vs. unstructured data (examples of unstructured data include data from social media, etc.)
– Real time vs. offline (or time-lagged) data
– Dynamic vs. static data

What general senses can we expect from studying Big Data?

Operational intelligence: Fast real-time analysis and real-time responses (from milliseconds to seconds), for example in high-speed trading, eCommerce, financial transactions, online auctions, online gaming, etc.
Business intelligence: Data mining and trend analysis with more data and less time (minutes to days or longer)
Machine learning for human intelligence: For the many big, bold ideas around data that could not be done or tested before within reasonable effort and time; this is about the predictive basics of using Big Data.
Advanced Artificial Intelligence: New capabilities to simulate various cognitive powers of the human brain with Big Data processing will enable unprecedented development in AI, which in turn will shed new light on robotic advancement.
New discoveries: With imagination and originality, looking into data to make new sense of past unknowns.

What are the top technical challenges with Big Data that people are trying to solve today?

1. Data Plumbing – Better system architecture and faster algorithms to handle and process the ever-increasing amount of data from all sources, especially unstructured data where relational DB methods are deemed unsuitable, into a machine-understandable format ready for fast analysis;
2. Processing time – Significantly reduced processing time or faster response time for business operations and intelligence, real-time or offline;
3. Real-time synchronization – Incorporating constant data updates into real-time processing, analysis and response;
4. Analytics – Better analysis design to yield more relevant and accurate business insights and to enable new business possibilities;
5. Data Communication – Accurate, fast, smooth, back-and-forth interaction and transfer of data among end-users, devices and systems.

Apparently the field of Big Data is of business, operational, social and academic importance. From a technical point of view, today’s solutions to Big Data are few and far between, and the best ones are yet to come. With hardware getting cheaper, many companies are using in-memory processing to compete on the time issues, but that might be a costly approach for gigantic datasets. On the software side, parallel computation, of which Hadoop MapReduce is one example, has resurfaced as a very useful idea (though it differs in concept and approach from the past due to the availability of cloud and cluster computing). Still, the currently usable algorithms are sparse, limited in capability, and often difficult to use as well. New approaches need to be seriously investigated. Who knows, it may quite possibly result in some Google-style successful start-ups or business ventures if some brilliance and teamwork can land a few true breakthroughs, at the algorithm level rather than the hardware level.
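To give a flavor of the MapReduce style of parallel computation mentioned above, here is a minimal single-machine word-count sketch; a real Hadoop job distributes the same map, shuffle and reduce phases across a cluster, and the two-document dataset here is an illustrative assumption.

```python
from collections import defaultdict
from itertools import chain

documents = ["big data big sense", "data and sense"]          # illustrative input

# Map: each document independently emits (word, 1) pairs; this is the parallelizable part.
mapped = chain.from_iterable(((w, 1) for w in doc.split()) for doc in documents)

# Shuffle: group the emitted values by key (word).
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: combine the values for each key, here by summing the counts.
word_counts = {word: sum(counts) for word, counts in groups.items()}
print(word_counts)   # {'big': 2, 'data': 2, 'sense': 2, 'and': 1}
```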

The Flood of “the Internet of Things”

Like watching a beautiful butterfly suddenly emerge from its cocoon, isn’t it fun to be in the moment when a new and different picture of the future unfolds right in front of our eyes? In fact, we are in that moment today for many life-changing innovations.

Whether we like it or not, within or beyond our current imagination, the massive flood of the Internet of Things (IoT, in a much broader and expanded sense) is surely coming. The waves have already started. They are going to explode rapidly and appear at gigantic scale in the business world and in people’s daily lives very soon. At the current speed of technology advancement, within a decade or less we will all be living in a new world of ubiquitous connectivity and taking it as the norm.

The IoT flood will affect many areas of enterprises and consumers. For an impacted traditional “terrestrial” business enterprise, how can it survive the new flood and the new world? The answer is likely this: not by building a Noah’s Ark, but by transforming the business into a creature that can swim, an amphibian. Keep in mind that it will be competitive in the water, and finding a niche will be tough.

TriStrategist thinks a business can “swim” in the IoT-flooded world in the following fashions:

Infrastructure players (Including those private, public and hybrid cloud providers, but they need increasingly data-friendly platforms and network bandwidth.)

IoT Technology stack providers (Foundation/common protocols, high-level adaptable programming languages and development tools, etc. Today’s existing platform-dependent technology stacks are not adaptable or friendly enough for quick IoT solution needs.)

Smart IoT product/device providers and solution providers (There are many startups or existing businesses today in this area targeting specific enterprise or consumer needs. More will come.)

BigData analytics and data intelligence providers (Certainly.)

Although all these fashions can be positioned for future revenue generation, the upfront investment costs, margins and competitive landscapes can all differ in the long run. How strong, smart and versatile a business can be, and how comfortably it can roam in the new world, may well depend on its vision, strength and focus during the current transformation process, and of course on its pocket depth. The “Do-it-all Strategy,” or the “Waterloo Strategy,”*** can lead either to quick failure for smaller companies or to a failure to achieve economies of scale in time for larger enterprises, which can be equally dangerous in a very competitive and fast-changing world.

***[See our earlier blog on “6 Common Business Strategy Errors”]

A Glance Through the Cloud War and Modern Datacenter Debates

Cloud services are the most intense battleground in today’s IT world. Cloud providers compete fiercely on scale, performance, manageability, security and, of course, the cost and price of their services. The prevailing concepts of cloud computing and cloud services have pushed much rethinking of modern datacenter design.

Google seems to have its own unique edge on both infrastructure and performance: its business has been running on “cloud” from the very beginning, serving huge volumes of queries on large-scale server farms. Many of its public cloud offerings came directly from adaptations of its internal technologies: for example, software-defined storage (persistent disks) and compute clusters, BigQuery for data performance, and the open-source-based App Engine on the Managed VM architecture for developer communities (see picture below). Google proudly claims that its cloud platform offers scale at 1 million QPS (queries per second), cold-starts 0 to 100 VMs in 38 seconds, and delivers consistent performance in multi-tenant datacenters. It also runs greener datacenters, with a PUE (Power Usage Effectiveness) of 1.12 vs. an industry average of 1.58, which means about 88% of the energy is dedicated to core compute. (MSFT’s PUE is about 1.13-1.2?)

A recent free seminar on Cloud Computing by multiple companies, including Google on its next-gen cloud platform, piqued TriStrategist’s interest in taking a surface glance at the underlying themes of the cloud competition and the modern datacenter debates. The basic questions seem to fall into these categories:

1. Enterprise Compute vs. End-user Compute (Consumer Compute), or a combination
The field is far from maturity at present; the different needs of cloud users may eventually drive completely different designs and features in cloud service offerings.

2. Software-driven datacenter or not
Many current datacenters use both hardware and software to drive the cloud. Google is a big proponent of the Software-Defined Data Center (SDDC) concept. It uses software to enable advanced global networks for its cloud, and software code to load-balance and schedule compute and storage requests for consistently high throughput. Live runtime migration to different physical media with no downtime, and real-time collaboration for developers, are all done through its software layers.
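As a toy illustration of scheduling in software, far simpler than anything a real SDDC layer does, the sketch below always dispatches the next request to the least-loaded node; the node names and request costs are made up.

```python
import heapq

# Keep (current_load, node_name) pairs in a min-heap so the least-loaded node is always on top.
nodes = [(0, "node-a"), (0, "node-b"), (0, "node-c")]   # hypothetical nodes
heapq.heapify(nodes)

def dispatch(request_cost):
    """Send a request to the least-loaded node and record the added load."""
    load, node = heapq.heappop(nodes)
    heapq.heappush(nodes, (load + request_cost, node))
    return node

for cost in [5, 3, 7, 2, 4]:
    print(f"request (cost {cost}) -> {dispatch(cost)}")
```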

3. High-end hardware vs. commodity hardware
Google adopted the commodity hardware approach to achieve massive scale at lower cost. It also allows easier abstraction in its software architecture and avoids some of the high costs of hardware replacement/upgrade cycles.

4. Convergence of Compute + Storage + Network vs. distributed virtualization
The approaches differ significantly in virtualization design and manageability. Companies are exploring diverse technologies in these areas.

5. Homogeneous building blocks vs. specialty & mixed technology stacks
Some companies have started delivering “containers” of pre-configured Compute + Storage + Network as building blocks, with easy management tools, for faster datacenter deployment. Many existing datacenters are far from being able to benefit from such an approach.

6. Modular distributed architecture vs. vertical stacks managed by specialty tools
In general, the more modular the design of the underlying hardware layers, the easier it is for software abstraction to achieve consistent performance and low-cost manageability. It may still depend on the needs of the cloud users, though.

There are apparently no easy answers to these questions. Common debating factors like time-to-market vs. scalability, flexibility vs. automation, and Big Data vs. real-time can be both conflicting and coexistent. Performance, energy efficiency, security and total cost will be the driving forces in evaluating solutions for all public/hybrid cloud providers and datacenter builders. We could also see the needs for cloud services diverge drastically in the future, which may result in even more interesting playgrounds.

Google Cloud Managed VM Architecture