All posts by TriStrategist

Would You Like a Robot Maid?

Aided by the Hubble telescope, cosmologists have long since verified the accelerated expansion of the universe since the Big Bang. A similar conclusion may be drawn about human society’s advancement and the complexity that comes with it, although there is no such instrument or directed experiment, only hypotheses from a few expansive and insightful minds. If society’s growth in complexity follows similar natural patterns, our imaginable future will always arrive sooner than we think. Technology development may well demonstrate this hypothesis over time. Among these technologies, human-interacting robots, one type of the machine beings of the future, may enter our daily lives soon, not just in sci-fi movies.

Most robotic helpers today are still industrial-focused, machine-looking, clunky tools, but that reflects the early iterations of robotic development, limited by the related technologies and by the needs of the time. With rapid advances in industrial design, artificial intelligence, material science, etc., better-sized, nicer-looking domestic robots that can help with basic chores such as cleaning and cooking, and also interact with humans in some autonomous ways, may come into existence earlier than we have anticipated.

Wall-E in the 2008 Disney movie Wall-E

We can’t expect them to be fully human-looking or super smart in the coming decade, but they should be prettier than the rustic Wall-E, or at least as cute as Eve (both are robotic characters in the 2008 Disney movie Wall-E).

Eve in the 2008 movie Wall-E

No doubt they will become more intelligent and capable with each release.

More than ever, every piece of robotic design needs to consider both hardware and software. The major challenges for robotic development currently lie in the design of actuators and in robotic software. Robotic software functions as the nerves and circulatory system of these machine beings, and a huge gap exists in this area today. The open-source Robot Operating System (ROS) is a very interesting concept of recent years, but much more industry support and focus are needed to truly propel the robotics industry to a new level, so that we, as consumers, can expect to order our favorite robot maid for our household in a few years.
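To make the ROS idea a bit more concrete, here is a minimal sketch of a ROS node written in Python with the rospy client library; the node name, topic name and message text are purely illustrative assumptions, not part of any real robot design.

```python
#!/usr/bin/env python
# Minimal ROS publisher node: periodically announces a chore command on a topic.
# The topic "chore_commands" and the message text are hypothetical examples.
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node("maid_robot_demo")                  # register this node with the ROS master
    pub = rospy.Publisher("chore_commands", String, queue_size=10)
    rate = rospy.Rate(1)                                # publish once per second
    while not rospy.is_shutdown():
        pub.publish(String(data="clean the kitchen"))   # any subscriber node can react to this
        rate.sleep()

if __name__ == "__main__":
    main()
```

In a full robot, separate nodes for perception, planning and actuator control would subscribe to and publish on such topics, which is exactly the kind of software plumbing the paragraph above argues still needs much more industry support.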

The Current State of Distributed Computing

Distributed computing has always been a desire in both industry and academia. With today’s cloud power, one would assume that distributed computing should have become much easier. Yet, beyond the fact that a distributed program can run faster if it runs correctly, nothing has become easier about coding a program so that it truly works in a distributed way as intended. Simultaneously leveraging compute power and different datasets across different locations, hardware, platforms, languages, data sources and formats is a non-trivial task today, almost the same as it was yesterday.

The “why” of distributed computing is easy to understand, but even with the help of popular cloud platforms and industry money, academic researchers and industry developers still have no consensus on “how” it should be done. Huge room for creativity remains in this area, and numerous tools have mushroomed, especially within the open-source community, but each highly customized solution results in very little consistency or reusability. Sustainability and manageability are nightmares for everyone too.

For example, for decades, utilizing the spare compute power of many idle machines for large computational tasks has been a very attractive idea in academia. The University of Wisconsin–Madison has long been developing an open-source distributed computing solution called HTCondor. Yet allocating compute tasks to heterogeneous hardware, retrieving data and files in various paths and formats, handling multiple platforms, and managing shared compute state are still huge challenges that involve custom coding. On the other hand, some startups have the right idea of focusing on new abstraction architectures that separate data from the mechanics of handling it, and on higher-level coding definitions, but all are in their infancy right now.
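To give a flavor of what HTCondor usage looks like in practice, below is a hedged sketch of a classic submit description file that fans one program out over many idle machines in a pool; the executable name, input files, resource requests and job count are illustrative assumptions.

```
# sweep.sub -- hypothetical HTCondor submit description file
executable            = analyze.py           # program to run on each idle machine
arguments             = --chunk $(Process)   # $(Process) runs 0..99, one data chunk per job
transfer_input_files  = data_$(Process).csv
should_transfer_files = YES
request_cpus          = 1
request_memory        = 2GB
output                = out.$(Process).txt
error                 = err.$(Process).txt
log                   = sweep.log
queue 100                                    # submit 100 independent jobs to the pool
```

Each job lands on whichever machine in the pool happens to be free, which is precisely where the heterogeneous-hardware, file-transfer and shared-state headaches described above begin.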

It appears that this is a great time and space for some industry deep pockets to step up with more user-friendly and productive software solutions, as TriStrategist has called out before [see our May 2014 blog on “The Internet of Things”]. What is needed is a straightforward hybrid-cloud software architecture and an easy-to-use high-level programming language, with sound abstractions for the tiers of data passing, protocols, pipelines, control mechanisms, etc., plus a set of platform-neutral, configurable (preferably UI-based) data plumbing tools that are more approachable than the purely developer-driven open-source packages on the market.

Eventually, truly realizing distributed computing scenarios will require machine and code intelligence.

Energy Competition for Modern Data Centers

The future extended power of cloud computing lies very much in the energy available to the modern data centers that support it.

Traditional data centers are usually huge energy hogs and wasters. An average-efficiency data center with 4 MW of IT capacity and a PUE (Power Usage Effectiveness) of 2.0 could use about 70 GWh of electricity per year, with an annual bill of nearly $5 million. That much electricity could power a US town of about 7,000 homes (US homes already have the highest household consumption in the world). If the PUE were reduced to 1.2, it could save 40% of the total cost for the company and free up enough electricity for roughly 3,000 additional homes.
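The arithmetic behind those figures can be reproduced directly; the electricity price (about $0.07/kWh) and the average US household consumption (about 11,000 kWh/year) used below are our own rough assumptions, since the post does not state them.

```python
# PUE = total facility energy / IT equipment energy
IT_LOAD_MW = 4.0
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.07         # assumed average electricity rate, $/kWh
HOME_KWH_PER_YEAR = 11_000   # assumed average US household consumption, kWh/year

def annual_stats(pue):
    mwh = IT_LOAD_MW * pue * HOURS_PER_YEAR      # total facility energy, MWh/year
    cost = mwh * 1000 * PRICE_PER_KWH            # annual electricity bill, $
    homes = mwh * 1000 / HOME_KWH_PER_YEAR       # equivalent number of homes powered
    return mwh, cost, homes

base = annual_stats(2.0)     # ~70,080 MWh (~70 GWh), ~$4.9M per year
better = annual_stats(1.2)   # ~42,048 MWh
saved_mwh = base[0] - better[0]
print(f"saved {saved_mwh:,.0f} MWh per year "
      f"({saved_mwh / base[0]:.0%} of the bill, "
      f"~{saved_mwh * 1000 / HOME_KWH_PER_YEAR:,.0f} homes)")
```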

Modern cloud data centers are designed to be more energy-efficient and greener, with higher utilization. Intense competition can be seen on the energy front, with companies scratching their heads to find smarter ways to cut energy costs and build more energy-efficient facilities. It is not just a cost imperative for large infrastructure players, but more and more a strategic one for the future.

Apple announced that by 2013 its worldwide data centers already ran on 100% renewable energy, including wind, solar, hydro, geothermal and biofuels. Its largest data center in Maiden, NC, has a 100-acre solar farm combined with a biofuel cell installation, completed at the end of 2013.

Google says each of its self-designed data centers uses only half the energy of most other data centers. Google has the lowest PUE in the industry, about 1.12 as of 2012. At Data Center Europe 2014 in May, Google disclosed that it is running machine learning algorithms to further tune its cooling systems, which may shave another 0.02 off its PUE. That would leave only about 10% of IT-equipment energy going to non-compute operations, currently the highest efficiency among modern large-scale data centers.
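To see where that roughly 10% figure comes from, recall the definition of PUE and plug in the projected value (a simple rearrangement, not a figure from Google):

```latex
\mathrm{PUE} = \frac{E_{\text{facility}}}{E_{\text{IT}}}
             = \frac{E_{\text{IT}} + E_{\text{overhead}}}{E_{\text{IT}}}
\quad\Longrightarrow\quad
\frac{E_{\text{overhead}}}{E_{\text{IT}}} = \mathrm{PUE} - 1 \approx 1.10 - 1 = 0.10 .
```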

Microsoft is also trying to improve its green energy image. MSFT just signed a 20-year, 175 MW wind farm deal last week for a project outside Chicago to continue its renewable energy pursuit. In 2012, MSFT initiated a joint project with the University of Wyoming to experiment with its first zero-carbon, fuel-cell-powered data center running on the greenhouse gas methane. In April 2014, MSFT announced increased investment in and expansion of its Cheyenne data centers, bringing the total investment to $500 million. Several other new energy initiatives are also being explored globally on various scales by MSFT.

Globally, governments are also paying attention to data center construction for the cloud computing age and the energy competition associated with it. China, which lags behind in cloud infrastructure due to tight government control of land and power usage and foreign firms’ fears about data privacy, recently sponsored huge initiatives for the construction of modern cloud-focused data centers at its July 2014 data center policy meeting in Beijing. It recommended that strategic large data centers be sited in regions of the country that are less disaster-prone and richer in natural energy resources, and also mandated that all future data centers meet a PUE of less than 1.5.

Increased operational efficiency and greener energy sources mean lower carbon emissions, a smaller environmental footprint and longer sustainability. In the near future, when worldwide customers select their cloud providers, they may not simply choose the one that offers better performance and capacity, but the one that is energy smarter, for long-term sustainability and a better social reputation. Attention to energy innovation and competition is definitely non-negligible for any infrastructure player or any large enterprise of the current age.

Nature, Man and Machine

What would a man do if he thought he could possess unlimited power? Would a machine being as intelligent as a man, without any of the biological weaknesses of the human brain, have steadier wisdom, better judgment, or more benevolent thoughts at all times?

Aided by today’s computing power, great development and excitement in Artificial Intelligence (AI) have generated great debates on the role of technology in our future, essentially the balance of power among intelligent machines, man and nature. This is not just a scientific or technological debate; at its core it is a profound philosophical one.

At present, several hundred sectors of the human brain have already been mapped. With the exponential growth rates of modern scientific and technological advancement, enthusiasts have good reasons to predict that machines could one day replicate the entire function of a biological human brain, or even surpass it in intelligence and predictive capability. Brain cells could be replaced by cell-sized nano-chips that run faster and process more information than biological ones. Future possibilities seem limitless. Some have even predicted that eventually “human machine beings” could control the fate of our entire universe.

The concept of a future human with numerous embedded tiny machine chips in the brain is as disturbing as the notion that the fate of the universe could be controlled by such future strange beings. It would be a pathetic image of the human fate even if immortality seemed within reach. Human brains have biological shortcomings, but machines will not be free of bugs or short circuits either. So are we expecting the future of our transformed species, or of all living species, to be a constant upgrade process, in which a privileged few would always carry the newest chips coded with more power and fantasies? What kind of existence would that be? Extremely boring, or incessantly warring?

Humans are nature’s creation, and nature’s creations always come with myriad forms of mystery beyond pure logic. It is quite doubtful whether machines can reproduce every bit of human experience, connectedness and response to the outer world. Although technological innovations always astound us with their speed and far-reaching effects, nature magnanimously surprises us more. Whenever we reach a breakthrough in science and technology, another profound unknown is often standing there, long waiting. In particle physics, for example, whenever we thought we had reached the end of the mysteries of creation, the picture often became more elusive.

There is little doubt that man-made machines could one day harness the power of anything that can be realized by logic and patterns, but great doubt remains whether any human or machine creation will present itself as harmoniously as nature has done. Ancient philosophers of thousands of years ago, in Taoism, Buddhism, etc., long since discovered the Way, the natural flows, the harmony and rhythms that nature displays and demands, the ultimate wisdom of the universe. They did it simply by looking at the vast space and distant stars, and by submitting themselves as tiny beings in the immense universe to humbly receive wisdom and inspiration from nature.

Nature, majestic, magical and awe-inspiring, has given us yin and yang, and the essential life elements of air, water, earth, fire, wood and metal, enhancing yet counter-balancing one another. Nature deserves our respect, our reverence, our modesty and awe. Humans are just one of the millions of species on earth and very likely one of countless species in the universe. There could well be other, more intelligent biological species in the universe that we have not yet met. It is harmony with nature that we should ultimately seek in every one of our endeavors, not superiority or dominance, so that we can hope to extend the life span of our beautiful planet for many more generations of humans to live on and enjoy in peace.

History has repeatedly reminded us of the consequences of human arrogance, overzealousness and conceit about our perceived power over nature, when men confuse “exploration” with “domination”: the sudden decline of the once most powerful Egyptian Empire around 2100 B.C. in the middle of its infatuation with pyramid building; the mysterious disappearance of the Mayan civilization after all its impressive stone structures in the jungle; the Great Famine in China that followed the Great Leap Forward campaign, which cost more than 30 million lives over a few short years around the turn of the 1960s. If the legend and prophecy of the Island of Atlantis were true, it would be another of nature’s sounding alarms to the human race.

If the boundary of our farthest intelligence and imagination is called a singularity, TriStrategist thinks it is not any technological limit that men cannot reach, nor any law of physics. It is the point at which, after exhausting all the knowledge and intelligence that men and machines have accumulated, we can no longer perceive the slightest hint from nature about the next balancing force it may impose on us. That is the true singularity, and it likely won’t be pretty.

“To help mankind in balance with nature through technologies” will always be one of TriStrategy’s core missions.

IBM’s Counter-Disruptive Strategies

IBM’s profitability and stability have relied on a combination of specialty servers, software and services with a high switching cost for enterprise customers. On the server side, IBM has been doing fairly well in the high-end server market, with about 57% market share as of the beginning of 2014.

Although IBM’s recurring revenue streams and financial performance are still strong, the company faces cloud computing as a disruptive threat, especially the commodity-hardware-based, software-driven cloud services promoted by Google, Amazon, VMware, etc. With compute power doubling faster and faster and prices getting cheaper and cheaper, cloud services are quickly nudging into the large-enterprise IT and scientific computation space where IBM used to dominate. Because most of IBM’s software and services businesses are also tied to its specialty high-end servers, the threat looms ever larger over IBM’s once prized niche markets. It is hard for such a huge company to compete in the low-end cloud service market without fundamental changes.

To counter the disruptive threat, IBM adopted several strategies simultaneously.

First, IBM is offering its own cloud platform, but in a different fashion. In January 2014, IBM announced the sale of its low-end (mostly x86-based) server unit to Lenovo of China in order to focus on other strategic goals, including cognitive computing, Big Data and cloud. IBM is also trying to acquire more cloud software companies to help migrate its existing software and services onto its own “distributed” cloud platform, in order to keep the existing lucrative customers who still deliver the largest profit margins. Timing and luck have yet to play out, and this strategy may or may not sustain long-term advantages for IBM.

Second, instead of competing head-on in low-end cloud services, IBM invested $1 billion starting in January 2014 to form a 2,000-person new business unit in a new location, separate from corporate headquarters, to move IBM’s super machine “Watson” onto the cloud. Aiming to provide cognitive machine learning technologies and services to enterprises, this strategy competes at the higher end of the cloud services value chain. It is common for a new initiative to be housed in an independent division to avoid being adversely impacted by an inefficient existing corporate culture or playbook, but a new organization of 2,000 heads seems overly extravagant and is hard to interpret as a bold, efficient, innovative move. On another note, the original famous Watson, IBM’s super intelligent machine built on IBM’s DeepQA technology (which somehow sounds very similar to the Deep Thought of the movie The Hitchhiker’s Guide to the Galaxy 🙂 — see our earlier blog on Machine Learning), was a room-sized giant “machine being” running on a costly cluster of 90 IBM high-end servers, with a total of 2,880 processor cores and 16 terabytes of RAM. Now it is said that by switching to a cloud-based computing platform and software, its speed has increased 24-fold (a 2,300% performance improvement) and it has shrunk to the size of a pizza box. Still, this seems to be another tweak of the existing business, a middle-road strategy for dealing with disruptive innovation that past business lessons have generally not favored.

Third, few companies can flex their R&D muscle like IBM does. It has a long tradition of investing heavily in R&D, which has paid great dividends over IBM’s history. IBM has invested in many obscure ideas, including building neurosynaptic (brain-mimicking) and quantum (subatomic-particle-mimicking) machines, and replacing silicon with carbon nanotubes and carbon films. IBM just announced in July a $3 billion R&D budget for chip research to further shrink circuitry and seek new materials for chip designs. If these ideas succeed, the landscape of the intelligence market could change dramatically. For example, in parallel to the prevailing trend of a future all-encompassing super intelligent machine being in the cloud, a small neurosynaptic machine could well be positioned in a different value network in the future intelligence market. Although many of these ideas are still far from commercialization at the moment, TriStrategist thinks they may truly be IBM’s saviors in the near future. They have the real potential to be the kind of innovations that produce the next industry leaders.

Time will tell which strategy produces the best outcome for IBM and how well it can execute. Either way, it will provide great business lessons for many companies in managing and dealing with disruptive innovations.

Google’s Disruptive Innovations

Business owners and residents in Kansas City, Missouri, must have been very happy with the new internet connection choice provided by Google Fiber since Q4 2012. As the first city selected for the Google Fiber network, for about the same monthly fee as their previous, often sole-choice cable or ISP service, they now get roughly 100-times-faster connections (gigabit speeds). Since then, Google has been aggressively expanding Google Fiber to other cities, and many more are eagerly waiting.

This is just one example of Google’s recent innovations. It happens in the very traditional telecommunication broadband service sector, which has been tightly controlled by a few large, long-established businesses for years and where innovation has been slow.

Since the start of the company, Google has never lacked innovation in its products or services, but its most profound one in recent years, per TriStrategist, is not the latest smart wearables, Android-related products or map services, but the unique expansion of wide-area wireless access since 2013 into the remote terrains of sub-Saharan Africa and Southeast Asia, where for many years the cost of laying wires could never be justified by telecom companies or poor local governments. Through Google blimps, satellites (Google’s recent Skybox purchase may add to the toolbox) and other locally suitable mechanisms for these remote areas (MSFT is said to be pursuing this in some way as well), billions of people may soon be connected to the internet and to the rest of the world.

Yes, future commercial gains for Google are in the mix, but the implications of such moves are far more significant: bringing down barriers to information access, promoting health, education and business, propagating democracy and social progress, reducing poverty and improving equality. It finally offers a modern and perhaps the most effective way to unleash the human potential and productivity of billions of people in these disadvantaged areas of the world.

Combining the Google Fiber and blimp moves, Google has delivered disruptive innovation head-on to the tradition-bound telecom and ISP service market. In an urgent defensive move, besides fighting Net Neutrality in court, more than 30 global telecom companies, refusing to be reduced to “dumb pipes” (in their own words) in the new cloud and internet reality, have come up with a comprehensive counter-strategy to channel web and data access through their proposed global NFV (Network Function Virtualization) network based on ETSI standards, but their position and strategy could be misdirected this time.

Not all disruptive innovations eventually succeed, just as many good inventions remain in history only as fun ideas. However, the true prevailing strength of a disruptive innovation, and its willing acceptance by society, often comes not simply from being technologically superior or commercially appealing, but from containing a certain element of conscience: a conscience of following the natural flow of the Way (as described in the ancient Chinese philosophy classic Tao Te Ching), a conscience for the greater good of mankind and the progress of civilization at large.

TriStrategy’s 3T Framework on Innovation in Service Business*

Almost all businesses today face the forces of disruptive technologies. These forces can prove deadly for some long-established traditional businesses. Based on our observations and experience, TriStrategist designed the “3T Innovation Framework” to help a service business innovate in the right way to stay current, stay ahead and preempt disruptive threats. The “3T” represents three underlying, concurrent themes on which a service business should target its innovation: Talent-centric People Innovation, Time-focused Process Innovation and Transcendent Value Innovation.

3T Innovation Framework for Services (figure 1)
3T Innovation Framework for Services (figure 2)

TriStrategist thinks that, compared with a product-focused company, it is harder for a service business to innovate or keep an innovative edge against competitors. Thus, a long-established service business can be even more vulnerable when facing disruptive forces. There are several reasons for this:

1. A service business, even with sound management and constant innovative moves, tends to operate in sustaining-innovation mode** at best. It tends to stay the course for the continuity of its services and the satisfaction of its customers, and is easily restrained by those customers from making quick or drastic changes. This leaves it without the flexibility and versatility to deal with new, distinctively different disruptive forces in the market;

2. An established service business tends to have heavy upfront investments or a fairly fixed profitability model. Dealing with an emerging disruptive service model in the market could mean uprooting or completely changing the existing model, which, by the traditional way of calculating opportunity cost, is often too costly to make a good financial case in decision making while there is still time to react;

3. In an existing service organization, the continuity of service to the most valuable customers often implies specialized skills and fixed procedures. Routine work and a routine mindset thus form. People in these organizations are typically not equipped for the quick mindset or skillset switches urgently needed to deal with emerging disruptive forces. Even if the organization is keenly aware of the new forces and their threats, its people, including the decision makers, tend to be reluctant to face them until it is too late;

4. The disruptive force in a service sector tends to first attack one or a few segments of the value chain, either through advanced technologies or innovative delivery mechanisms. It is still hard for an existing service business to quickly incorporate the new ideas into its existing service model, whether due to value-market differentiation, technological incompatibility with the rest of the chain, or a lack of expertise in the new way. Thus, the existing business can easily lose valuable time in combating the new wave, which more often than not costs it its leading edge or potentially the entire business.

One good example is the movie rental battle between Blockbuster and Netflix. Blockbuster, founded in 1985, had heavy upfront brick-and-mortar investments in its numerous neighborhood stores and held a dominant position in movie rental for years. Targeting the same customers and essentially the same service business, Netflix, founded in 1997, offered a new delivery mechanism with a lower monthly fee and the convenience of home delivery (also a time-saving value for some customers). Blockbuster saw Netflix’s new delivery model coming, but it was hard to abandon the existing store model and redirect resources to catch up immediately. In fact, Blockbuster’s store model had a higher profit margin (apart from the upfront investments) than Netflix’s initial mail-in model. When Blockbuster eventually tried to counter-offer a mail-in service to its customers, it was already too late: Netflix’s new model had been adopted by the mainstream. Moreover, Netflix quickly started investing in streaming videos and movies, which had a hugely enhanced time value in delivery and appealed even more to society thanks to the prevailing cable networks of the time. We all know how the story ended: in 2010, Blockbuster filed for bankruptcy protection and never came out of it.

Another contemporary example can be seen in book publishing services. Traditional publishing houses have used the same process for decades: they charge a percentage for editing, publishing and promoting books for authors, and the entire process is slow. When self-publishing services first became a reality, traditional publishing houses could not respond immediately, both because of the different value market (selective authors vs. every writer) and because of the incompatibility with their publishing process and fee structure. Still, self-publishing poses a serious disruptive force against traditional publishing. Today it has started moving up the value chain to attract even established authors, for both the time and the fee savings. The battle is still intensely on.

In today’s technology service sector, the battles are even fiercer and scarier. TriStrategist will continue to study some of these cases in subsequent blogs.

“All things are ready, if our mind be so.” – William Shakespeare, in Henry V.

Therefore, an effective innovation framework is essential for any service business that wants to stay innovative throughout the organization, decide smartly on resource and investment allocation, and keep market leadership by preemptively dealing with potential disruptive forces in the current age of explosive innovation, if not becoming a disruptor itself. TriStrategy’s 3T Innovation Framework is designed to help with these purposes.

* Note: This blog is a continuation of the subject of innovation and service businesses, first posted on May 17, 2014. TriStrategist will continue to refine the 3T Innovation Framework from our management consulting work and observations.

** Note: For the definitions of sustaining vs. disruptive innovations and the value network theory of disruptive innovation, please see Harvard Business School Professor Clayton M. Christensen’s publications and other related discussions.

Machine Learning and AI, Where Science and Technology Merge

Could some super intelligent machine being exist in our future? The answer is likely yes.

Science-fiction movies have, since their birth, tried to lead our imaginations and predict how the future world and its technologies will look. We all carry in our minds images of the intelligent supercomputers from the movies: for example, Deep Thought in the 2005 movie The Hitchhiker’s Guide to the Galaxy (see pic), or the intelligent master control system in the 2008 movie Eagle Eye, the government’s intelligence-gathering supercomputer ARIIA (see pic below).

ARIIA in the 2008 movie Eagle Eye

In many of these movies, a common theme has been that when such an intelligent “machine being” became overly powerful or misguided by evil, human heroes had the obligation to destroy it before it could destroy mankind. Although there is a distinct possibility that such a super intelligent machine being could exist in our real future, growing evidence today suggests that its form will be totally different. It will not be a centralized physical super machine or system as in the movies; more likely it will take an invisible form living in the future clouds, in the complex webs of networked systems that could exist everywhere, on earth, in orbit or even on remote stars. The plot of the movie series The Matrix (1999 and 2003) seems closer to this scenario. It would be much harder to destroy, though, if evil thoughts indeed took control. Let’s hope that the age of ultra-capable RoboCops or human surrogates (as in the 2009 movie Surrogates) that can draw intelligence and power from such invisible, omnipresent machine forces won’t arrive before we find the answer to this age-old question: could machine-learnt intelligence one day indeed surpass human intelligence?

Machine Learning (ML) is a branch of Artificial Intelligence (AI). It is the study of using machines’ computing and large-scale data processing power to analyze past and present data through programs and algorithms, in order to offer predictive capability without the input of human intuition. The next stage leads to more advanced AI that lets machines simulate the cognitive powers of the human brain. In fact these desires and concepts, as shown in generations of sci-fi movies, have existed for a very long time; nothing is new. Many commercial companies, including Google, Microsoft, Amazon, IBM, etc., have been playing with these concepts in their data-mining-related product and service offerings such as search, cross-selling and online gaming. People and countries have also been building better and faster supercomputers for decades to shrink computing time. Only with the recent compounding growth of compute power through clouds and clusters, however, have these ideas, and many more enhanced possibilities in advanced AI, come closer and closer to reality, and become exciting again.
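As a concrete, minimal illustration of that “predictive capability without human intuition” idea, here is a hedged sketch in Python using the scikit-learn library and its bundled iris dataset; the dataset and model choice are purely illustrative and not tied to any product mentioned above.

```python
# Minimal supervised machine-learning example: learn from past data, predict on new data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                    # "past" data with known labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                          # the algorithm learns patterns from the data

predictions = model.predict(X_test)                  # predictions on data the model has never seen
print("accuracy:", accuracy_score(y_test, predictions))
```

The algorithm, not a human rule-writer, decides how the input features map to the prediction, which is exactly what scaling such learning up to cloud-sized data makes newly exciting.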

Machine Learning and AI are great examples of fields in which, when science and technology merge, unlimited potential emerges. Even with increasingly scalable and seemingly unlimited compute power, machines can only learn as intelligently as the algorithms that direct them. That is the field of Data Science, the multi-disciplinary science of extracting insights and knowledge from data. Math and statistics are only part of what Data Science needs. Versatile skills in many areas are required to make truly intelligent sense of the data in our hands today, and of the gigantic volumes yet to come, in order to predict the future or build human-like capabilities into machines.

Although still at a basic stage, IBM’s $1 billion investment in Watson, a cognitive ML technology on the cloud announced early this year, and the coming July release of Microsoft Azure ML are seen as the start of large-scale commercial propagation of Machine Learning, each as part of its vendor’s cloud platform offerings. Once these facilitating tools and services become available to the masses, the power of the coupling of science and technology will become even more evident.

At least in our current age, there is no doubt that humans are definitely still in control of the machines.

The Future of Enterprise IT

Today it is generally recognized throughout the IT industry that a well-connected, interoperable, flexible multi-cloud ecosystem is the near-term and longer-term future of Enterprise IT. That may be an over-simplified way to put it.

Many IT departments nowadays are busy fitting their existing IT into the cloud world, or vice versa, occupying themselves with cost evaluation, infrastructure alignment, vendor identification, skill acquisition, automation design, etc. Beyond the infrastructure, most existing mission-critical enterprise IT applications are far from optimized for cloud computing either. Therefore, many commercial opportunities exist today in both the cloud and Big Data spaces to help companies’ IT departments join the buzz. The complexity and effort involved can hardly be overstated, yet the direction can be even fuzzier.

Let’s first ask what the future of the consumer IT space will look like. With the coming of the Internet of Things (IoT), and judging from many futurist movies, it is not hard to imagine that plug-and-play, simplicity, connectivity and speed, anywhere and anytime, will define the consumer space. Privacy and security concerns will more or less be delegated to the enterprise service providers and network providers in the ecosystem.

But for Enterprise IT, the future picture is likely to be more complex. One thing is certain: future enterprise IT departments will need to handle a lot more than today’s demands of supporting enterprise operations. The growing business and market trend of offering intelligence services by collecting, consuming and processing ever more market and consumer data, their own or others’, and connecting and transferring it between ever more systems, will fall within the enterprise scope. With cloud compute, storage and network technologies far from mature today, with most existing enterprise applications out of date in the cloud world, and with standards and regulations still in the making, the fast-changing and foggy future demands placed on Enterprise IT by consumers and the IoT only add more fuel to the fire. It is too early to assume that the incremental changes and upgrades here and there, which have been the norm for large enterprise IT departments for decades, can sufficiently and effectively carry current IT systems and applications safely into the future when the new setting is needed.

However, if an enterprise has the confidence to move into the next decade and beyond, has it considered starting right now to invest directly in a new, flexible IT picture of the future by designing entirely new multi-cloud, Big-Data-capable infrastructure and application architectures from scratch, instead of focusing on tweaking the existing ones? This approach can be started today with much more agility and speed than incremental changes to make the existing fit. Although many technology details are not completely ready today, the ecosystem, connectivity and plumbing concepts are already here and many innovations have already started. All the needed technologies and affordable choices will only become more readily available in the days (not years) to come. Justifying the different approaches will involve time and cost evaluation, but that depends on how an enterprise views the market, the future, and its existing and new business challenges and opportunities. This will likely become an interesting business school case study on “marginal cost” vs. “total cost”, and it could be quite counter-intuitive for decision-makers. What looks like the easier or more obvious choice today, selecting the smaller “marginal cost of investment” based on the existing systems, could end up becoming a much more expensive “total cost of opportunity” in the near future.

If enterprise IT departments believe that the future picture of Enterprise IT will be an innovative one quite different from today’s, then a different mindset may be needed.

On Corporate Culture

People tend to use many descriptive terms for corporate cultures: for example, “casual” vs. “serious”, “competitive” vs. “friendly”, “fast-moving” vs. “slow”, “fun” vs. “rigid”, “quick decisions” vs. “consensus”, etc. There are also many details describing the behaviors of the people inside: “casually dressed” vs. “suits and ties”, and so on. So what exactly is a corporate culture?

Former MIT Sloan Professor Edgar Schein, a noted scholar on corporate culture, pointed out that these surface terms may not truly describe a company’s culture. He thinks that culture is a way of working together toward common goals that has been followed so frequently and so successfully that people don’t even think about trying to do things another way. Once a culture has formed, people will autonomously do what they need to do to be successful.

By his explanation, a corporate culture is the common group approach, explicitly described or not, to doing things inside the company and reaching agreements. Other scholars summarize it as the combination of processes and priorities inside an organization. Prof. Schein did warn that it is too easy to over-simplify what a corporate culture truly is. It comes in multiple levels and is influenced by multiple dimensions and factors: the age of the organization and its leaders, missions, strategies and goals, organizational structure and process, time and space, external forces, etc.

The question lies in the interpretation of “success” in a corporation. TriStrategist thinks a corporate culture is more than the “how” and “what” elements of process and priorities; it has a lot to do with group-enforced human attitudes and behaviors rooted in human psychology. These attitudes and behaviors, which over time form a “group preference” for how things are done, come in large part directly from the company founders or top leaders. “Corporate values” are not a culture statement; like the corporate mission statement, they market the company’s ideal image to the outside, while the true culture inside speaks more about the personalities of the top leaders and of the people who subconsciously try to be “like” their leaders in order to be accepted or “successful” in the corporation. Because of the narrow definition of individual success in a corporate environment, the performance review and compensation system, reflecting how leaders and management value their people and contributions, also twists the culture. Underlying “directness” vs. “passive-aggressiveness”, “sharing” vs. “backstabbing” can all be gradual results of it. People may even subdue their true personalities in order to fit into a corporate culture at the workplace: a fun-loving person can easily keep a low profile and put on a straight, serious face for eight hours a day for the sake of making a living. Besides, there are plenty of stories of companies that failed in scandal because of the behaviors of their leaders and their questionable cultures, even though they had shining, righteous corporate values posted on the wall.

The formation of a corporate culture is similar to that of a family culture. Even if the head of the household or the parents never explicitly try to “define” the culture of their family, the children will inevitably inherit the day-to-day attitudes and behaviors of their parents. For example, a caring, loving family is one where parents respect and care for each other and their children. Parents’ honesty, hard work, kindness to friends and neighbors, attitudes toward money, and (especially) their standards of reward and punishment all help define a family culture and directly shape how their kids behave. The same goes for a corporation. A competitive leader will generally lead a competitive company. A courteous leader who leads by persuasion may produce a company that is more “friendly” toward its own people and its customers. An overly consensus-driven culture can seldom lead to great decision-making in a competitive market, but if that is the culture, people inside will accept it as the norm for a long time. A cunning, corner-cutting business culture may pursue short-term gains aggressively, and that is a culture too.

A successful company does not ensure a winning culture, nor does a past winning culture guarantee a company’s future success. The snapshot “success” of a business usually comes from both the vision of the leaders and luck, not necessarily the superior skills or aptitudes of the top leaders themselves. For example, a product or service may jump to instant success because of a lack of competition in a particular market. No business model lasts forever. Both the corporation and its culture evolve over time, but the culture, carrying the burden of a long history and the momentum of a large mass, tends to evolve much more slowly than market conditions or business decisions.

A culture can be both a catalyst for many great new things and a barrier to change. The initial success of a great business tends to foster a culture embedding the elements of its leaders and people that led it to that success, whether a unique decision-making process, heightened human psychology (innovative pioneering spirit, courage, passion, drive, etc.), or striking personalities (Steve Jobs, for example, who worshiped brilliance, simplicity and perfection). In the long run, though, as the market and the business evolve, the existing culture directly affects whether the business can adapt quickly and whether it will last. A culture usually changes somewhat with a leadership change, or grows blurry if neglected. Processes and priorities can be changed at any time by logic, but a lasting, successful business usually fosters a deeply rooted culture that includes those enduring human elements that can stand the test of time. It is hard, but a long-lasting business needs a true “signature”.