The Fast-Shifting Market of Mobile

Samsung, the world’s No. 1 smartphone maker by shipments, just reported a shocking decline in global net profit, profit margin and market share for its mobile division in Q3 2014, compared with just a quarter ago. Per the WSJ report, Samsung shipped about 79 million smartphones in Q3 and saw its market share fall to 24.7% from 35% a year earlier. The upstart Chinese device maker Xiaomi Inc. (its Chinese name, literally “little rice,” comes from a WWII-era idiom), founded only about 3.5 years ago, tripled its global handset shipments and grew its global market share from 2.1% to 5.6%, making it No. 3. Apple remained No. 2 with a 12.3% share, down from 13.4%. Samsung also lost its position as the No. 1 smartphone seller in China.

The speed of change in the worldwide mobile market, in both handsets and mobile apps, can surely cause dizziness. This is an industry that tolerates no mistakes. Competition is hard-charging, fought on talent, innovation, strategy, speed of execution and boldness.

Xiaomi hired Google Android executive Hugo Barra last year to help expand into global markets, and it is moving fast. Xiaomi understands Chinese phone buyers very well. It often uses fast ARM-based gaming processors from partners such as Nvidia and Qualcomm, and it adopted Google’s free Android platform for quick development. Its newer models have earned great reviews and sold out online (its low-cost sales channel) within minutes, often priced at less than half of comparable Samsung or Apple phones.

Samsung may have made a strategic error in China by failing to shift its positioning. Fewer people are willing to pay double for an expensive smartphone offering similar features and speed, especially when the average replacement cycle is only around two years. Moreover, Chinese consumers typically buy unlocked phones at full price rather than locking into a long contract with a designated carrier, unlike the practice in the US market. Xiaomi’s pricing strategy thus clearly gained the upper hand.

Apple, for the moment, still enjoys its first-mover advantage on many smartphone innovations and the premium brand recognition built up during Steve Jobs’ time. Apple owns its popular technology platform, a significant advantage that Samsung lacks. It also keeps up a constant stream of innovations and strategic moves. For example, its latest offering, Apple Pay, competes on another value chain: making the iPhone an indispensable daily tool instead of just a high-tech toy or communication device.

On the mobile app side, the number of new apps created every day is staggering. Rovio Entertainment Ltd., the Finnish company that once made the stellar mobile game Angry Birds, now faces the challenge of coming up with another killer app to maintain revenue growth. This year it had to trim its staff and change CEOs. No single company is likely to dominate the mobile app market, because the next killer-app developer or startup could pop up from almost any open garage in the world. In addition, the majority of mobile apps today are free (or have to be), despite their development costs. In fact, total app revenue accounts for only a small share of the total mobile market. Sustained growth is hard for a mobile app company, as it constantly requires striking new apps and ingenious money-making ideas.

This is definitely not a market for the faint of heart. The pace may be breathless, but it is a vast, open global field that favors amazing plays by amazing players: those who can gather brilliant minds, move fast, hold bold visions and execute right-on-target strategies.

Corporate Financial Reporting

Corporate finance data are the most critical indicators of the health of a business. Outside investors rely heavily on published earnings reports and corporate finance data. Believe it or not, financial reporting can be one of the most time- and resource-intensive activities inside a large multinational corporation with multiple lines of products and services. With the October earnings season in sight, the rewards and punishments emotionally and theatrically played out by Wall Street investors after each disclosure put corporate financial reports, and the reporting process itself, under nearly microscopic focus.

Although Generally Accepted Accounting Principles (GAAP), currently the most commonly used set of accounting rules in the US, have been in place for years, and numerous regulations govern the details of financial reporting, things are surprisingly not always clear-cut and are in fact very complicated. Even the most straightforward P&L statement (or income statement) can oftentimes be misleading. Investors want to see breakdowns of revenues and costs by business unit, country or region, and product/service line, which offer the most telling information on a company’s growth trends and on the effectiveness of management and its strategies, especially for new investments or specific competitive markets. However, this information is often vague, missing or poorly calculated. Companies can choose to highlight or hide any of it. Revenues and various costs can be mixed or distributed differently among business units or product/service lines, or simply dumped into some obscure bucket. For example, Amazon’s cloud service AWS, which holds the No. 1 market share among global public cloud providers and is considered Amazon’s fastest-growing business sector today (although less than 10% of total revenue), is listed only in the “North America, Other” category on quarterly reports, combined with several other smaller lines of investments or services. This may not be intentional, and Amazon is certainly not the only large company with this issue, for whatever undisclosed reasons. Some of it may result directly from the challenges of corporate finance governance and the reporting process. The calculation of various costs on P&L reports can be even more interesting, especially for services where new and traditional product-line activities are linked or unlinked based on internal decisions.
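
To make the point concrete, here is a toy sketch in Python (all figures, line-of-business names and segment labels are hypothetical, not Amazon’s actual numbers) showing how one and the same ledger can produce two different disclosed breakdowns, depending solely on how lines of business are mapped to reporting segments:

```python
# Toy illustration: one ledger, two segment mappings, two "truths".
# All figures and segment names are hypothetical.
from collections import defaultdict

ledger = [
    ("retail_na",  20000),
    ("retail_intl", 9000),
    ("aws",         1200),
    ("ads",          400),
]

# Mapping A: cloud broken out as its own reportable segment.
mapping_a = {"retail_na": "North America", "retail_intl": "International",
             "aws": "Cloud", "ads": "North America"}

# Mapping B: cloud folded into a catch-all bucket, as in "North America, Other".
mapping_b = {"retail_na": "North America", "retail_intl": "International",
             "aws": "North America, Other", "ads": "North America, Other"}

def roll_up(mapping):
    totals = defaultdict(int)
    for line, revenue in ledger:
        totals[mapping[line]] += revenue
    return dict(totals)

print(roll_up(mapping_a))
# {'North America': 20400, 'International': 9000, 'Cloud': 1200}
print(roll_up(mapping_b))
# {'North America': 20000, 'International': 9000, 'North America, Other': 1600}
```

Nothing in the ledger changed between the two printouts; only the mapping did, and the cloud line either stands out or disappears into a catch-all bucket.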

More confusion exists for corporations reporting on operations in multiple countries. Since International Financial Reporting Standards (IFRS) and US GAAP are inconsistent in many ways, companies have to perform the conversions themselves, beyond mere currency adjustments, to produce one roll-up report. IFRS is broadly adopted by 100+ countries today, especially in Europe, and China and Japan are also working to converge their accounting rules with it. Thankfully, in recent years increasing calls for action have raised the urgency of this issue, and internationally organized efforts toward convergence of accounting standards are under way.

For corporate finance departments themselves, the difficulty of getting a simple, clear and unbiased roll-up report on even the most basic revenue and cost figures for a certain area of the business can be frustrating. The challenges often come from complex corporate structures and the diverse systems used to store and track business data. With numerous data entry points, multiple legacy source systems owned by different groups, scattered and inconsistent data interpreted and controlled by different reporting personnel, and cumbersome, often manual reporting processes, producing either near-real-time market reports or the final earnings is time-consuming, costly, exhausting and error-prone.

TriStrategist believes that the progress of today’s Big Data technologies could trigger a near-future revolution in corporate accounting and financial reporting. Efficient Big Data analytics could render many old systems obsolete almost immediately: financial and business data of any granularity, from any market, could be collected generically and instantly in raw formats for all factors of the business, then saved in a simple, essentially unlimited repository in the cloud serving as the single source system. With some straightforward, intelligent rules built in (the true meaning of the data is what matters, regardless of differing standards), the data can be processed in parallel, analyzed in very little time, and passed down to various downstream channels to form reports or visual views, whether for aggregated periods or real-time volatility.
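
As a thought experiment, here is a minimal sketch of that idea in Python: heterogeneous raw records land in one repository, a few built-in rules normalize them to their true meaning (semantics and currency), and the normalized stream is reduced to a report. Every field name, rule and figure below is illustrative, not a real reporting system:

```python
# Minimal sketch of a rules-driven reporting pipeline over raw records.
# Field names, rules and figures are all hypothetical.
from concurrent.futures import ProcessPoolExecutor
from collections import Counter

# Raw records arrive in whatever shape the source systems emit.
raw_records = [
    {"src": "erp_eu", "type": "rev",  "ccy": "EUR", "amt": 500, "unit": "cloud"},
    {"src": "pos_us", "type": "sale", "ccy": "USD", "amt": 900, "unit": "retail"},
    {"src": "erp_eu", "type": "rev",  "ccy": "EUR", "amt": 300, "unit": "retail"},
]

FX_TO_USD = {"USD": 1.0, "EUR": 1.27}   # snapshot rates (illustrative)
REVENUE_TYPES = {"rev", "sale"}         # rule: both mean "revenue"

def normalize(record):
    """Apply built-in rules: unify semantics and currency."""
    if record["type"] in REVENUE_TYPES:
        return (record["unit"], record["amt"] * FX_TO_USD[record["ccy"]])
    return None

def aggregate(pairs):
    """Reduce normalized (unit, usd) pairs into per-unit totals."""
    totals = Counter()
    for pair in pairs:
        if pair:
            unit, usd = pair
            totals[unit] += usd
    return totals

if __name__ == "__main__":
    # Normalization is embarrassingly parallel; aggregation is a cheap reduce.
    with ProcessPoolExecutor() as pool:
        normalized = pool.map(normalize, raw_records)
    print(aggregate(normalized))
    # Counter({'retail': 1281.0, 'cloud': 635.0})
```

The point of the sketch is the shape of the pipeline, collect raw, normalize by rules, aggregate in parallel, rather than any particular technology choice.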

By that stage, total business transparency will be unstoppable, and it will benefit investors and corporations alike. Investors always crave transparency, and corporate leaders will be able to look into every corner of the business at any time with unbiased honesty. More market intelligence for smarter business decisions will become possible with very little reporting time and effort.

The Geographic Advantage

When Google started its first fiber network offering in the US, it also came up with a clever idea: free colocation in its nearby data centers for content providers such as YouTube, Netflix and Akamai. In this way it can minimize the content buffering and network congestion that come from hauling large amounts of content from providers’ remote hosting locations, and thus greatly improve the customer experience on the Google Fiber network.

Amazon has exploited a similar idea more broadly and globally. With its conglomerate of products and services, Amazon is interested both in delivering content itself and in offering public cloud services to customers who stream content on their own websites. To expand quickly into a dominant global IaaS provider, it has designed and implemented “AWS Edge Locations” as part of its global infrastructure strategy, alongside its global data centers; Amazon now has 50+ of them. These Edge Locations serve as “caching” points for local content and data, placed conveniently close to customers but outside the existing data centers. They give AWS an advantage in serving global customers with local content and faster data access, without the full investment cycle of building a data center at every location needed.
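
As a hedged illustration of the underlying idea, the sketch below routes each user to the nearest edge location by great-circle distance; the location names and coordinates are invented for the example, not AWS’s real topology:

```python
# Sketch: pick the nearest "edge location" for a user by great-circle distance.
# The edge list and coordinates are illustrative, not AWS's real topology.
from math import radians, sin, cos, asin, sqrt

EDGES = {  # name: (latitude, longitude)
    "Frankfurt": (50.11, 8.68),
    "Singapore": (1.35, 103.82),
    "Sao Paulo": (-23.55, -46.63),
    "Ashburn":   (39.04, -77.49),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_edge(user_latlon):
    """Serve the user's content from whichever edge is closest."""
    return min(EDGES, key=lambda name: haversine_km(user_latlon, EDGES[name]))

print(nearest_edge((48.86, 2.35)))     # Paris user  -> Frankfurt
print(nearest_edge((-33.87, 151.21)))  # Sydney user -> Singapore
```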

Google’s colocations and Amazon’s Edge implementations not only help the companies and their customers with content and data delivery; they will also allow both to proactively mitigate potential negative impacts from the pending Net Neutrality ruling in the US. More importantly, they can be extended into significant geographic advantage and flexibility in cloud service offerings in the near future, both in the US and internationally.

Global geographic advantage will be critical to the success of a global public cloud provider, especially in the IaaS space. The importance of this advantage was discussed in TriStrategist’s earlier blog post, The Positioning of Public Cloud Services, Part III, published on September 19, 2014.

In the news this week, SAP signed a pact with IBM on October 14 to leverage IBM’s full fleet of global cloud data centers, in addition to SAP’s own 20, to expand SAP services and store SAP data in local regions, the better to accommodate new regulatory requirements in different countries. IBM has invested $1.2 billion since the beginning of 2014 to expand its global footprint of cloud data centers to about 40, including 15 from the SoftLayer acquisition and 12 existing ones of its own. Although many are still on the drawing board, IBM’s strategy of pursuing global geographic advantage has apparently already gained it a needed edge in catching up in the global cloud war. Data security and sovereignty have long been concerns keeping many countries and governments, certainly including the US government, from adopting public cloud services from global providers, and the breakout of the NSA PRISM scandal only worsened the situation. Several European countries have now passed regulations or compliance requirements that business or government data reside locally, within the country or on the continent, and countries in other regions will surely follow suit. Very soon, a strategic global spread of data centers will be a prerequisite for a public cloud provider to survive in the global space; otherwise it will be reduced to a niche provider in a few markets.

In the next few years, geographic advantage will likely become the most significant deciding factor in the competition among global IaaS providers.

The Second Machine Age and AI

MIT’s Erik Brynjolfsson and Andy McAfee published a book this year called “The Second Machine Age”. Last month (September), MIT hosted a conference of the same name to “showcase research and technologies leading the transformation of industry and enterprise in the digital era”. In the two professors’ naming convention, the First Machine Age was the Industrial Revolution, which started in the 18th century and lasted a few decades. In that age, steam-powered machines extended humans beyond their physical limits and greatly expanded our spatial reach. Since then, we have had skyscrapers that could not be built by hand, trains to cross continents, airplanes to cross oceans and eventually spacecraft to reach into space. Now comes the Second Machine Age, in which the computer and digital revolutions enable automation, smart robotics and cognitively intelligent machines to work side by side with humans, greatly extending our mental capacities. The Second Machine Age will usher in profound changes to our social, economic and everyday life.

As the Second Machine Age brings in unparalleled productivity aided by smart machines, TriStrategist asks a question seldom raised: could intelligent machines one day transcend the time limitations that humans experience, whether through some distorted (or virtual) reality, or through the co-existence of the same human brain power at the same moment in different locations (via robotic surrogates or some sort of scaled-out brain mapping onto machines)? All of it seems possible.

Today, beyond many industrial applications, newer smart robots can perform cognitive capture and basic learning, take instructions, be deployed for dangerous rescues, do domestic chores and assist in robotic surgery. Humanoid robots can talk with people over Skype and ask logical, reasoned questions, becoming ever more human-like and human-capable. More creative and faster-thinking quantum robotics is also in the making. Very soon, smart robots will reach every corner of human life. More and more office job functions could be carried out by automation tools or by machines. Humans will be freed to pursue the more creative, higher-level jobs that machines cannot yet simulate, or will simply have more leisure time on their hands. Many could be displaced too, a potentially serious social and economic issue of the Second Machine Age.

Machines, though, are still machines. The true power behind them in the Second Machine Age is no longer a physical engine but Artificial Intelligence (AI), the “brain power” of the machines. We are currently in a new phase of AI: cognitive computing and deep learning. From traditional data mining, to voice recognition, and now to cognitive adaptability, logical reasoning, improvisation and real-time interaction, AI has advanced toward human intelligence in big strides in recent years, helped along by ever-increasing computational power. Some have even predicted that machine intelligence could surpass human intelligence within 15 years, by 2030. That sounds scary indeed.

In a recent interview with Walter Isaacson, SpaceX CEO Elon Musk raised his concern that many people do not realize how fast AI has been progressing. He worries that intelligent machines with evil intentions could destroy humanity. It seems we do indeed need to ponder whether, within our lifetimes, humans can remain the owners of the machines or must accept them democratically as peers.

Microservices and Further Thinking on Distributed Computing

The challenges of distributed computing perplex and intrigue many minds. The search continues for a more flexible approach to distributed computing against a backdrop of heterogeneous cloud computing, where information needs to be exchanged frequently, scaled out quickly and managed easily across diverse system and data environments. The concept of “Microservices” began gaining traction this year as an innovative software architecture approach, especially after Netflix published a case study supporting it.

Simply speaking, the microservice architecture follows the path of further software granularization: each application is built as a set of small services, each performing one simple function, running in its own process and communicating through lightweight mechanisms, often an HTTP resource API such as REST. These “micro”-sized services can be deployed independently and grouped together to jointly deliver complex capabilities that would traditionally be handled by one monolithic application with many embedded components and dependencies packaged inside. Errors can be isolated and fixed by redeploying a few microservices instead of bringing down the entire system. Some startups promoting microservices have used the analogy of the Apple App Store: they can offer a PaaS with a rich collection of such microservices for a particular platform.
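
As a concrete, if toy, illustration of the pattern, here is one small service written against nothing but Python’s standard library. The currency-conversion function, the /convert route, the rates and the port are all hypothetical choices for the example:

```python
# A minimal sketch of one "micro"-sized service: a single function
# (currency conversion, hypothetical) exposed over a plain HTTP API,
# running in its own process. Standard library only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical fixed rates; a real service would own its own data store.
RATES = {"USD": 1.0, "EUR": 0.79, "CNY": 6.12}

class ConvertHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expected call: GET /convert?amount=100&frm=USD&to=EUR
        url = urlparse(self.path)
        if url.path != "/convert":
            self.send_error(404)
            return
        q = parse_qs(url.query)
        try:
            amount = float(q["amount"][0])
            frm, to = q["frm"][0], q["to"][0]
            result = amount / RATES[frm] * RATES[to]
        except (KeyError, ValueError):
            self.send_error(400, "bad or missing parameters")
            return
        body = json.dumps({"amount": round(result, 2), "currency": to}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each microservice is deployed and scaled independently;
    # here it simply listens on its own port.
    HTTPServer(("", 8001), ConvertHandler).serve_forever()
```

A real deployment would put many such processes behind service discovery and load balancing, but the unit of design and deployment stays this small.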

Differing from the plumbing of many existing enterprise architectures, where complex transformation rules, messaging and interchange between systems are placed on an Enterprise Service Bus in the middle, microservices adopt the idea of “smart endpoints, dumb pipes”: messages flow from source to destination without extra processing in between, and the intelligence lives in the lightweight, readily deployable services at the endpoints.
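
In that spirit, calling the sketch service above requires nothing smarter than a plain HTTP GET; the pipe carries bytes and nothing else, while all the logic sits in the endpoint:

```python
# "Dumb pipe" in practice: the client just issues a plain HTTP GET.
# Assumes the sketch service above is running locally on port 8001.
import json
from urllib.request import urlopen

with urlopen("http://localhost:8001/convert?amount=100&frm=USD&to=EUR") as resp:
    print(json.load(resp))  # {'amount': 79.0, 'currency': 'EUR'}
```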

The benefits of fine-grained microservices come at a certain cost. For example, instead of a limited number of SLAs for one enterprise application, numerous SLAs now need to be created and maintained at the same time. Each microservice also needs its own load balancing and fault tolerance. Deploying, managing, versioning and testing a system composed of a huge number of communicating microservices are demanding tasks.

The most intriguing aspect of the microservice concept is that, owing to the decentralized nature of its design patterns, it calls for corresponding changes to traditional organizational structure by invoking Conway’s Law. Melvin Conway, a computer programmer, observed in 1968 that “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.” If one reflects carefully, this is surprisingly often true. The microservice concept therefore calls for organizations to be structured around business capabilities instead of functions. Because each microservice, although “micro” in scope, is an independently deployable service in its own right, it must contain all the cross-functional service basics it needs, and its developers must have the full stack of skills, from UI to database to deployment, to build and own it.

Although microservice granularization is one form of software modularization, TriStrategist thinks it may well fall short as tomorrow’s choice for distributed computing. Meaningful architectural abstraction, separating data from mechanisms and information from its means of delivery, will be needed, and modularization will be the key to providing the flexibility distributed computing requires; but to achieve the desired results, we very likely need to rethink both software application architecture and cloud platform architecture. Future inter-deployable services, micro or not, will need to be platform-neutral to truly address the essence of distributed computing. In addition, instead of debating smart endpoints versus smart pipes, TriStrategist thinks we need to move toward adding self-managed intelligence at each cross-point of the distributed picture.

As to organizational structure, TriStrategist believes that cross-functional, cross-boundary collaboration is needed more than ever for any endeavor to succeed. Capabilities can contain many redundant functions and may not be cost-effective as the division criteria of a large organization. Simplicity, modularity, flexibility and versatility may be the requirements for future structural change in any organization, just as they may be for future software systems.