The Current State of Distributed Computing

Distributed computing has long been a goal in both industry and academia. With the compute power now available in the cloud, one might assume that distributed computing would have become much easier. Yet, beyond the fact that a distributed program can run faster if it runs correctly, little has become easier about writing a program that actually makes sense and truly works in a distributed way as intended. Simultaneously leveraging compute power and different datasets across locations, hardware, platforms, languages, data sources and formats remains a decidedly non-trivial task today, almost as hard as it was yesterday.

The “why” of distributed computing is easy to understand, but even with the help of today’s popular cloud platforms and industry money, academic researchers and industry developers still have no consensus on “how” it should be done. Enormous room for creativity remains in this area, and numerous tools have mushroomed, especially in the open-source community, but each highly customized solution offers very little consistency or reusability. Sustainability and manageability remain nightmares for everyone, too.

For example, for decades, harnessing the idle compute power of many machines for large computational tasks has been a very attractive idea in academia. The University of Wisconsin–Madison has long developed an open-source distributed computing system called HTCondor. Yet allocating compute tasks to heterogeneous hardware, retrieving data and files across various paths and formats, handling multiple platforms, and managing shared compute state remain huge challenges that still involve custom coding, as the sketch below suggests. On the other hand, some startups have the right idea in focusing on new abstraction architectures that separate data from the mechanics of handling it, and in designing higher-level coding definitions, but all of these efforts are still in their infancy.
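As a rough illustration of the kind of custom coding still required, the sketch below uses HTCondor’s Python bindings to submit a batch of jobs whose hardware requirements and input files must be spelled out by hand. The script name, resource figures and file names are hypothetical, and the exact binding API varies across HTCondor versions.

```python
import htcondor  # HTCondor Python bindings (API details vary by version)

# Describe the jobs: the scheduler matches these requirements against
# the ClassAds advertised by each (possibly heterogeneous) machine.
job = htcondor.Submit({
    "executable": "analyze.sh",                       # hypothetical worker script
    "arguments": "$(Process)",
    "requirements": '(OpSys == "LINUX") && (Arch == "X86_64")',
    "request_memory": "2GB",
    "request_cpus": "1",
    "transfer_input_files": "chunk_$(Process).csv",   # data shipping is spelled out manually
    "output": "out.$(Process)",
    "error": "err.$(Process)",
    "log": "jobs.log",
})

schedd = htcondor.Schedd()
schedd.submit(job, count=10)   # queue 10 independent jobs
```

Even in this small example, hardware matching, file transfer and bookkeeping all sit on the shoulders of the person writing the job description.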

It appears that this is definitely a great time and space for some industry deep pockets to step up and deliver more user-friendly and productive software solutions, as TriStrategist has called out before [see our May 2014 blog on “the Internet of Things”]. What is needed is a straightforward hybrid-cloud software architecture and an easy-to-use high-level programming language, with sound abstractions for the tiers of data passing, protocols, pipelines and control mechanisms, plus a set of platform-neutral, configurable (preferably UI-based) data-plumbing tools that are more approachable than the purely developer-driven open-source packages on the market.
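As a purely illustrative toy, not any existing product, the Python sketch below shows the flavor of such a higher-level abstraction: the data flow is declared once, each step carries a placement hint, and a runtime (here just a local loop) is free to decide where the work actually runs.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Step:
    """One stage of a pipeline: a named transform plus a hint about where it may run."""
    name: str
    fn: Callable[[Any], Any]
    placement: str = "any"          # e.g. "on-prem", "cloud", "any"

@dataclass
class Pipeline:
    """A declarative pipeline: the user describes the steps; a runtime decides placement."""
    steps: list = field(default_factory=list)

    def then(self, name, fn, placement="any"):
        self.steps.append(Step(name, fn, placement))
        return self

    def run(self, data):
        # A real hybrid-cloud runtime would dispatch each step to a worker that
        # satisfies its placement constraint; here we simply run everything locally.
        for step in self.steps:
            data = step.fn(data)
        return data

result = (Pipeline()
          .then("parse",  lambda rows: [r.split(",") for r in rows], "on-prem")
          .then("filter", lambda rows: [r for r in rows if r[1] == "ok"], "any")
          .then("count",  len, "cloud")
          .run(["a,ok", "b,bad", "c,ok"]))
print(result)   # 2
```

The point is not the toy itself but the separation it suggests: what the computation is, stated in one place; where and how it runs, left to the platform.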

Eventually, truly realizing distributed computing scenarios will require machine and code intelligence.

Energy Competition for Modern Data Centers

The future power of cloud computing rests largely on the energy available to the modern data centers that support it.

Traditional data centers are usually huge energy hogs and wasters. An average-efficiency data center with 4 MW of IT capacity and a PUE (Power Usage Effectiveness) of 2.0 consumes about 70 GWh of electricity per year, with an annual bill of nearly $5 million. That much energy could power a US town of about 7,000 homes (and US households already have the highest average consumption in the world). If the PUE were reduced to 1.2, the company could cut roughly 40% of the total cost and save enough electricity for about 3,000 additional homes.
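Since PUE is defined as total facility energy divided by IT-equipment energy, the figures above can be checked with a few lines of arithmetic. The electricity price and per-home consumption used below are assumptions chosen to be consistent with the numbers in the text.

```python
# Rough check of the figures above (assumed electricity price ~$0.07/kWh,
# average US household use ~10,000 kWh/year; both are assumptions).
it_load_mw = 4.0        # IT capacity
hours      = 8760       # hours per year
price_kwh  = 0.07       # $/kWh, assumed
home_kwh   = 10_000     # kWh/year per home, assumed

def annual_energy_mwh(pue):
    return it_load_mw * pue * hours        # total facility energy, MWh/year

base  = annual_energy_mwh(2.0)             # ~70,000 MWh = 70 GWh
lean  = annual_energy_mwh(1.2)             # ~42,000 MWh
saved = base - lean

print(round(base), "MWh/yr, bill ~$%.1fM" % (base * 1000 * price_kwh / 1e6))
print("savings: %.0f%%, ~%d homes" % (100 * saved / base, saved * 1000 / home_kwh))
```

Running it gives roughly 70,000 MWh/year and a $4.9M bill at PUE 2.0, and a 40% saving worth about 2,800 homes at PUE 1.2, matching the estimates above.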

Modern cloud data centers are designed to be more energy-efficient and greener, and to run at higher capacity utilization. An intense competition is playing out on the energy front, with companies scrambling to find smarter ways to cut energy costs and to build more efficient facilities. It is not just a cost imperative for large infrastructure players, but more and more a strategic one for the future.

By 2013, Apple announced that its worldwide data centers already ran on 100% renewable energy, including wind, solar, hydro, geothermal and biofuels. Its largest data center, in Maiden, NC, pairs a 100-acre solar farm with a biofuel cell installation that was completed at the end of 2013.

Google says each of its self-designed data centers uses only about half the energy of a typical data center. Google reports the smallest PUE in the industry, about 1.12 as of 2012. At Data Center Europe 2014 in May, Google disclosed that it is running machine learning algorithms to further tune its cooling systems, which may shave another 0.02 off its PUE. At a PUE of roughly 1.10, only about 10% of the IT-equipment energy would go to non-compute operations, currently the highest efficiency among modern large-scale data centers.

Microsoft is also trying to improve its green-energy image. MSFT just signed a 20-year, 175 MW wind-farm deal last week for a project outside Chicago, continuing its renewable-energy pursuit. In 2012, MSFT initiated a joint project with the University of Wyoming to experiment with its first zero-carbon, fuel-cell-powered data center running on the greenhouse gas methane. In April 2014, MSFT announced increased investment in and expansion of its Cheyenne data centers, bringing the total investment to $500 million. Several other new energy initiatives are also being explored globally, on various scales, by MSFT.

Globally, governments are also paying attention to data center construction for the cloud computing age and the energy competition associated with it. China, which lags behind in cloud infrastructure due to tight government control of land and power usage and foreign firms’ fears over data privacy, recently sponsored huge initiatives for the construction of modern cloud-focused data centers at its July 2014 data center policy meeting in Beijing. It recommended less disaster-prone regions of the country with more abundant natural energy resources for strategic large data centers, and mandated that all future data centers meet a PUE of less than 1.5.

Increased operational efficiency and greener energy sources mean lower carbon emissions, a smaller environmental footprint and longer sustainability. In the near future, when worldwide customers select their cloud providers, they may not simply choose the one that offers better performance and capacity; they may also choose the one that is energy-smarter, for long-term sustainability and better social reputation. Attention to energy innovation and competition is definitely non-negligible for any infrastructure player or large enterprise of the current age.

Nature, Man and Machine

What would a man do if he thought he could possess unlimited power? Would a machine as intelligent as a man, without any of the biological weaknesses of the human brain, have steadier wisdom, better judgment, or more benevolent thoughts at all times?

Aided by today’s computing power, great developments and excitement in Artificial Intelligence (AI) have generated great debates on the role of technology in our future, essentially the balance of power among intelligent machines, man and nature. This is not just a scientific or technological debate; at its core it is a profound philosophical one.

At present, several hundred regions of the human brain have already been mapped. Given the exponential growth of scientific and technological advancement in modern times, enthusiasts have good reason to predict that machines could one day replicate the entire function of a biological human brain, or even surpass it in intelligence, including its predictive capabilities. Brain cells could be replaced by cell-sized nano-chips that run faster and process more information than biological ones. Future possibilities seem limitless. Some even predict that “human machine beings” could eventually control the fate of our entire universe.

The concept of a future human with numerous tiny machine chips embedded in the brain is as disturbing as the notion that the fate of the universe could be controlled by such strange future beings. It would be a pathetic image of the human fate even if immortality seemed within reach. Human brains have biological shortcomings, but machines will not be free of bugs or short-circuits either. Are we then to expect the future of our transformed species, or of all living species, to be a constant upgrade process, in which a privileged few always carry the newest chips coded with more power and fantasies? What kind of existence would that be, extremely boring or incessantly warring?

Humans are nature’s creation, and nature’s creations always come with myriad mysteries beyond pure logic. It is quite doubtful that machines can reproduce every bit of human experience, connectedness and responsiveness to the outer world. Although technological innovations always astound us with their speed and far-reaching effects, nature magnanimously surprises us more. Whenever we reach a breakthrough in science and technology, another profound unknown is often standing there, long waiting. In particle physics, for example, whenever we thought we had reached the end of the mysteries of creation, the picture often became more elusive.

There is little doubt that man-made machines could one day harness the power of anything that can be realized through logic and patterns, but great doubt remains as to whether any human or machine creation will ever present itself as harmoniously as nature does. Ancient philosophers of thousands of years ago, in Taoism, Buddhism and other traditions, long ago discovered the Way: the natural flows, the harmony and rhythms that nature displays and demands, the ultimate wisdom of the universe. They did it simply by looking at the vast space and distant stars, and by submitting themselves as tiny beings in an immense universe, humbly receiving wisdom and inspiration from nature.

Nature, majestic, magical and awesome, has given us yin and yang and the essential life elements of air, water, earth, fire, wood and metal, enhancing yet counter-balancing one another. Nature deserves our respect, our reverence, our modesty and our awe. Humans are just one of millions of species on earth, and very likely one of countless species in the universe; there could well be other, more intelligent biological species we have not yet met. It is harmony with nature that we should ultimately seek in every one of our endeavors, not superiority or dominance, so that we can hope to extend the life span of our beautiful planet for many more generations of humans to live on and enjoy in peace.

History has repeatedly reminded us of the consequences of human arrogance, overzealousness and conceit about our perceived power over nature, when men confused “exploration” with “domination”: the sudden decline of the once most powerful Egyptian empire around 2100 B.C., in the middle of its infatuation with pyramid building; the mysterious disappearance of the Mayan civilization after all its impressive stone structures in the jungle; the Great Famine of China that followed the Great Leap Forward campaign and cost more than 30 million lives over a few short years at the turn of the 1960s. If the legend and prophecy of the island of Atlantis were true, it would be another of nature’s sounding alarms to the human race.

If the boundary of our farthest intelligence and imagination is called a singularity, TriStrategist thinks it is not any technological limit that men cannot reach, nor any law of physics. It is the point at which, after exhausting all the knowledge and intelligence that men and machines have accumulated, we can no longer perceive the slightest hint from nature about the next balancing force it could impose on us. That is the true singularity, and it likely won’t be pretty.

“To help mankind in balance with nature through technologies” will always be one of TriStrategy’s core missions.

IBM’s Counter-Disruptive Strategies

IBM’s profitability and stability have relied on a combination of specialty servers, software and services that carry a high switching cost for enterprise customers. On the server side, IBM has done fairly well in the high-end server market, holding about 57% market share as of the beginning of 2014.

Although IBM’s recurring revenue streams and financial performance are still strong, the company faces cloud computing as a disruptive threat, especially the commodity-hardware-based, software-driven cloud services promoted by Google, Amazon, VMware and others. With compute power doubling ever faster and prices getting cheaper and cheaper, cloud services are quickly nudging into the large-enterprise IT and scientific-computation space that IBM used to dominate. Because most of IBM’s software and services business is also chained to its specialty high-end servers, the threat looms ever larger over IBM’s once-prized niche markets. It is hard for such a huge company to compete in the low-end cloud services market without fundamental changes.

To counter the disruptive threat, IBM adopted several strategies simultaneously.

First, IBM is offering its own cloud platform, but in a different fashion. In January 2014, IBM announced the sale of its low-end (mostly x86-based) server unit to Lenovo of China in order to focus on other strategic goals, including cognitive computing, Big Data and the cloud. IBM is also trying to acquire more cloud software companies to help migrate its existing software and services to its own “distributed” cloud platform, in order to keep the existing lucrative customers who still provide its largest profit margins. Timing and luck have yet to play out, and this strategy may or may not sustain long-term advantages for IBM.

Second, instead of competing head-on in low-end cloud services, IBM invested $1 billion starting in January 2014 to form a 2,000-person new business unit in a new location, separate from corporate headquarters, to move IBM’s super machine “Watson” onto the cloud. Aiming to provide cognitive machine-learning technologies and services to enterprises, this strategy competes at the higher end of the cloud-services value chain. It is common for a new initiative to be housed in an independent division to avoid being adversely affected by an inefficient existing corporate culture or playbook, but a new organization of 2,000 headcount seems overly extravagant, and is hard to interpret as a bold, innovative move with due efficiency. On another note, the original famous Watson, IBM’s super-intelligent machine built on IBM’s DeepQA technology (which somehow sounds very similar to Deep Thought in the movie The Hitchhiker’s Guide to the Galaxy 🙂; see our earlier blog on Machine Learning), was a room-sized giant “machine being” running on a costly cluster of 90 IBM high-end servers with a total of 2,880 processor cores and 16 terabytes of RAM. Now it is said that by switching to a cloud-based computing platform and software, Watson has become 24 times faster (a 2,300% performance improvement) and has shrunk to the size of a pizza box. Still, this seems to be another tweaking of the existing business, a middle-road strategy for dealing with disruptive innovations that past business lessons have generally not favored.

Third, few companies can flex their R&D muscle the way IBM does. It has a long tradition of investing heavily in R&D, which has paid great dividends over IBM’s history. IBM has invested in many obscure ideas, including building neurosynaptic machines (to mimic the brain) and quantum machines (to mimic subatomic particles), and replacing silicon with carbon nanotubes and carbon films. In July, IBM announced a $3 billion R&D budget for chip research to further shrink circuitry and to seek new materials for chip designs. If these ideas succeed, the landscape of the intelligence market could change dramatically. For example, in parallel to the prevailing trend of a future all-encompassing, super-intelligent machine being in the cloud, a small neurosynaptic machine could well be positioned in a different value network for the intelligence market of the future. Although many of these ideas are still some distance from commercialization, TriStrategist thinks they may truly be IBM’s saviors in the near future. They have real potential as the type of innovation that produces the next industry leaders.

Time will tell which strategy produces the best outcomes for IBM and how well the company can execute. It will provide great business lessons for many companies in managing and dealing with disruptive innovations.

Google’s Disruptive Innovations

Business owners and residents in Kansas City, Missouri, must have been very happy with the new internet connection choice provided by Google Fiber since Q4 2012. In the first city selected for the Google Fiber network, for about the same monthly fee as their past, often single-choice, cable or ISP service, customers can now get gigabit connection speeds roughly 100 times faster than before. Since then, Google has been aggressively expanding Google Fiber to other cities, and many more are eagerly waiting.

This is just one example of Google’s recent innovations. It comes in the very traditional telecommunications broadband sector, which has been tightly controlled by a few large, long-established businesses for years and where innovation has been slow.

Since the company’s start, Google has never lacked innovation in its products or services, but its most profound recent innovation, per TriStrategist, is not in the latest smart wearables, Android-related products or map services. It is the unique expansion, since 2013, of wide-area wireless access in the remote terrains of sub-Saharan Africa and Southeast Asia, where for many years the costs could never justify laying wires, whether by telecom companies or poor local governments. Through Google blimps, satellites (Google’s recent Skybox purchase may add to the toolbox) and other mechanisms suited to these remote areas (MSFT is said to be engaged in this endeavor in some way as well), billions of people may soon be connected to the internet and to the rest of the world.

Yes, future commercial gains for Google sit in the middle of this, but the implications of such moves are far more significant: bringing down barriers to information access, promoting health, education and business, propagating democracy and social progress, reducing poverty and improving equality. It finally provides a modern, and perhaps the most effective, way to unleash the human potential and productivity of billions of people in these disadvantaged areas of the world.

Combining the Google Fiber and blimp moves, Google has delivered disruptive innovation head-on to the long-established telecom and ISP service market. In an urgent defensive move, besides fighting Net Neutrality in court, more than 30 global telecom companies, refusing to be reduced to “dumb pipes” (in their own words) in the new cloud and internet reality, have come up with a comprehensive counter-strategy: channeling web and data access through a proposed global NFV (Network Function Virtualization) network based on ETSI standards. But their position and strategy could be misdirected this time.

Not all disruptive innovations eventually succeed, just as many good inventions remain in history only as interesting ideas. However, the true prevailing strength of a disruptive innovation, and its willing acceptance by society, often come not simply from being technologically superior or commercially appealing, but from containing a certain element of conscience: a conscience that follows the natural flow of the Way (as described by the “Tao” in the ancient Chinese philosophical classic Tao Te Ching), a conscience for the greater good of mankind and the progress of civilization at large.