All posts by TriStrategist

The Second Machine Age and AI

MIT's Erik Brynjolfsson and Andrew McAfee published a book this year called "The Second Machine Age". Last month (September), MIT hosted a conference with the same name to "showcase research and technologies leading the transformation of industry and enterprise in the digital era". In the two professors' naming convention, the First Machine Age is the Industrial Revolution, which started in the 18th century and unfolded over the following decades. In that age, steam-engine-powered machines carried humans beyond their physical limits and greatly expanded our spatial reach. Since then, we have skyscrapers that could not be built by hand; we have trains to cross the continents, airplanes to cross the oceans and eventually spacecraft to reach into space. Now comes the Second Machine Age, in which the computer and digital revolutions enable automation, smart robotics and cognitively intelligent machines to work side by side with humans, greatly extending our mental capacities. This Second Machine Age will usher in profound changes to our social, economic and everyday life.

As the Second Machine Age brings in unparalleled productivity aided by smart machines, TriStrategist asks a question seldom raised: could it mean that one day intelligent machines may transcend the time limitations that humans experience? Either through some distorted (or virtual) reality, or through the co-existence of the same human brain power at the same moment in different locations (via robotic surrogates or some sort of scaled-out brain mapping onto machines)? All seem plausible.

Today, besides many industrial applications, newer smart robots can perform cognitive capture and basic learning, take instructions, be deployed for dangerous rescues, do domestic chores, and assist in robotic surgery. Humanoid robots can talk with people through Skype, ask logical questions, and come ever closer to being human-like and human-capable. More creative and faster-thinking quantum robotics is also in the making. Very soon, smart robots will be present in every corner of human life. More and more office functions could be carried out either by automation tools or by machines. Humans will be freed to pursue more creative, higher-level jobs that machines cannot yet simulate, or will simply have more leisure time on their hands. Many could be displaced too, a potentially serious social and economic issue of the Second Machine Age.

Machines are still machines. The true power behind the machines in the Second Machine Age is no longer a physical engine but Artificial Intelligence (AI), the "brain power" of the machines. We are currently in a new phase of AI: cognitive computing and deep learning. From traditional data mining, to voice recognition, now to cognitive adaptability, logical reasoning, improvisation and real-time interaction, AI has advanced towards human intelligence in big strides in recent years, helped by ever-increasing computational power. Some have even predicted that machine intelligence could surpass human intelligence within 15 years, by 2030. That sounds very scary indeed.

In a recent interview with Walter Isaacson, SpaceX CEO Elon Musk raised his concern that many people don't realize how fast AI has been progressing. He worried that intelligent machines with evil intentions could destroy humanity. It seems we do indeed need to ponder whether, in our lifetime, humans can remain the owners of the machines, or must accept them democratically as peers.

Microservices and Further Thinking on Distributed Computing

The challenges of distributed computing perplex and intrigue many minds. The search continues for a more flexible approach to distributed computing against a backdrop of heterogeneous cloud environments, where information needs to be exchanged frequently, scaled out quickly and managed easily across diverse systems and data environments. The concept of "microservices" started gaining traction this year as an innovative software architecture approach, especially after Netflix published a case study supporting it.

Simply speaking, the microservice architecture follows the path of further software granularization: each application is constructed as a set of small services, each performing one simple function, running in its own process and communicating through lightweight mechanisms, often an HTTP resource API such as REST. These "micro"-sized services can be deployed independently and grouped together to jointly deliver complex capabilities which traditionally would be handled by one monolithic application with many embedded components and dependencies packaged inside. Errors can be isolated and fixed by re-deploying a few microservices instead of bringing down the entire system. Some startups promoting microservices have used the analogy of the Apple App Store: they offer PaaS with a rich collection of these microservices for a particular platform.
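As an illustration (not from the original post), here is a minimal sketch of one such service in Python using only the standard library: a hypothetical "pricing" service that does exactly one job and exposes it over a plain HTTP/JSON interface. The service name, data and port handling are all invented for the sketch.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy catalog for the hypothetical pricing service, not real data.
PRICES = {"widget": 9.99, "gadget": 24.50}

class PricingHandler(BaseHTTPRequestHandler):
    """One simple function: look up a price, answer in JSON."""

    def do_GET(self):
        item = self.path.strip("/")
        if item in PRICES:
            body = json.dumps({"item": item, "price": PRICES[item]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown item"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

def start_service(port=0):
    """Start the service on a background thread (a stand-in for its own
    process) and return the server so callers can discover the port."""
    server = HTTPServer(("127.0.0.1", port), PricingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    svc = start_service()
    port = svc.server_address[1]
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/widget") as r:
        print(json.loads(r.read()))  # {'item': 'widget', 'price': 9.99}
    svc.shutdown()
```

In a real deployment each such service would run in its own process (or container) and could be re-deployed independently, which is the property the paragraph above highlights.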

Differing from the plumbing used in many existing enterprise architectures, where complex transformation rules, messaging and interchange between systems are placed on an Enterprise Service Bus in the middle, microservices adopt the idea of "smart endpoints, dumb pipes": readily deployable, lightweight individual services carry their messages from start point to destination without extra handling in between.

The benefits offered by fine-grained microservices come at a cost. For example, instead of a limited number of SLAs for an enterprise application, numerous SLAs now need to be created and maintained at the same time. Each microservice also needs its own load balancing and fault tolerance. Deploying, managing, communicating, versioning and testing a system composed of a huge number of microservices are demanding tasks.

The most intriguing aspect of the microservice concept is that, owing to the decentralized nature of its design patterns, it calls for corresponding changes to the traditional organizational structure by invoking Conway's Law. Melvin Conway, a computer programmer, observed in 1968 that "organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations." If one reflects carefully, it can be surprisingly true. Thus the microservice concept calls for organizations to be structured around business capabilities instead of functions. Because each microservice, although "micro" in scope, is an independently deployable service of its own, it must contain all the cross-functional service basics it needs, and its developers must have the full stack of skills (UI, database, deployment, etc.) to build and own it.

Although microservice’s granularization is one form of software modularization, TriStrategist thinks that it may well be short of achieving the goal as tomorrow’s choice for distributed computing. Meaningful architecture abstraction to separate data from mechanisms, separate information from means of delivery will be needed and modularization will be the key for providing the flexibility required for distributed computing, but to meet the desired results, very likely we need both the re-thinking of software application architecture and cloud platform architecture. The future inter-deployable services, micro or not, need to be platform-neutral to truly address the essence of distributed computing. In addition, instead of thinking either smart endpoints or smart pipes, TriStrategist thinks that we need think more towards adding self-managed intelligence at each cross-point in the distributed picture.

As for organizational structure, TriStrategist believes that cross-functional, cross-boundary collaboration is needed more than ever for any endeavor to succeed. Capabilities can contain many redundant functions and may not necessarily be cost-effective as the division criteria of a large organization. Simplicity, modularity, flexibility and versatility may be the requirements for future structural changes in any organization, just as they may be for future software systems.

Laws and Innovations

Globally, debates are heating up, with ever more headlines about censorship, government regulation and privacy concerns over technologies, innovations, freedom of the internet, etc. In the meantime, it may be worthwhile to take a look at the situation of laws and technology innovation in general, in the US alone.

It seems a perpetual dilemma where the boundary should lie between having laws in place and letting innovation roam freely. On one side, technologists typically fear regulation, since innovation can only flourish without too many restrictions. On the other side, appropriate laws have historically leveled the playing field so that small players can join and compete as well. In some cases, new laws are plainly needed before certain innovations can become commercially viable, for example drone delivery. The US FAA does not allow commercial drones to fly in US airspace, and it could take years for that stance to change. As a result, Google had to test its first delivery drone flight in Australia this year, and Amazon did the same in Canada. In 2011, several years before the appearance of its first model, Google pushed the State of Nevada to pass a law allowing driverless cars to be tested on the streets, but with drones it hasn't been as lucky.

It appears that, whether laws restrict or promote innovation, the law-making process for many technology fields has been slow and inadequate, especially at today's speed of innovation, as with robotics, drones, driverless cars and many other examples. Commercial businesses have been the primary drivers in watching the laws, calling for new ones, initiating the debates or fighting the fights, either to promote innovation and business or to gain more freedom of access. The gap may exist not only in current legislative systems and processes, but also in legal education. When businesses need lawyers for new-technology-related fights, whether in the US or internationally, it is simply difficult to find good lawyers who understand the nuances of their technologies and at the same time know the related laws.

It may be promising to notice that today's law schools and legal science departments are busy paying attention to technology: applying it in their daily studies and practices, building applications, or simply learning the basic concepts. Some law students have started to fear that comprehensive applications able to walk through basic laws in logical steps may even replace some of their jobs in the future. That is quite possible, but on the other hand, the complex future laws and issues surrounding emerging technologies will constantly demand new types of tech-savvy lawyers.

One thing today’s lawyers definitely got it right: In the current real world, few problems can be solved by one discipline alone, including law, once a prestigious discipline in its closed aura. Cross-discipline studies have become increasingly important. Students and knowledge workers who often have diverse interest and pursue diverse experiences should take heart as different skills will be called upon one day or if one seeks. Equally, constant learning is always a required propensity.

The Positioning of Public Cloud Services – Part III

Successful positioning of new products or services helps a company stand out as a market leader or locate a market niche. It comes from the right insights into and anticipation of future markets, the right understanding of one's own strengths and limits, as well as the confidence to win. The right positioning is crucial for a player who does not have the luxury of experimenting with every possible scenario at the start of a market entry. Even for a large player with deep pockets, in a market growing and shifting as fast as cloud computing today, losing money can happen much faster than gaining dominance. The right positioning from the beginning can help establish a company's innovative leadership and reputation in customers' minds and quickly gain market share. Catch-up games in a competitive market, especially one with disruptive innovations, are often more costly and have a lesser chance of winning.

For the right positioning as a Public Cloud provider, TriStrategist thinks a company needs to decide first on these initial differentiations:

  • A “Department store” or a “specialty store”?
  • A global player or a local player?
  • An infrastructure player or a technology solution player?
  • A fit-all provider or a specific market-segment provider?

These choices are not necessarily mutually exclusive. As the cloud computing market evolves, offering choices and levels will evolve with it. Nonetheless, these considerations will help narrow down one's targets and save money and time at the beginning of the game in a new, vast but low-barrier market.

A winning position always comes with the right set of strategies to win. TriStrategist believes that Public Cloud providers need to focus on establishing one or more of the following strategic advantages with their positioning, fast and clearly, in order to ensure long-term sustained growth or gain market leadership.

1. Geographic advantage

At the current early adoption stage of cloud computing, IaaS may still be the initial entry choice and the logical offering for many, especially for "department store" players, although things could change quickly. IaaS is by nature low-barrier to enter and low-barrier to switch. It has already started moving towards a commodity service proposition similar to utilities. Surviving utility companies usually own a certain geographical dominance, thanks to heavy geo-centric infrastructure investments that fence off competition beyond the price war alone. Not surprisingly, Public Cloud providers in IaaS may well need a similar defense strategy today.

Globally, country and jurisdiction barriers to business remain. Data privacy and sovereignty will be ongoing concerns. The strategic placement of data centers will continue to be critical for cloud providers: for redundancy, for data security and privacy, for low latency, and for the availability of local offerings and support. For the next few years, the most significant battlefield for global geographic advantage will likely be China. China still has very low cloud infrastructure and service coverage due to tight government controls and protections, but things can change quickly, as the Chinese government and its booming industries cannot tolerate the lag any longer. The importance of this market lies not only in its huge commercial potential, but also in its geographic size and central location for the entire APAC region and the whole globe. Today, Microsoft, IBM and Amazon have all been very busy gaining a foothold in this market.

2. Big Data Advantage

Data is to cloud computing what water is to the natural clouds in the sky, flowing in and out in various forms. Eventually all data will live in clouds, public or private. If we believe that 90% of the data in the world today was created in the past two years, we have only seen the tip of the iceberg: more and more data will be generated and come flooding in, especially unstructured data. The data reality of the cloud computing age demands compelling Big Data stories and an outstanding ecosystem of Big Data solutions and tools from a successful Public Cloud provider. These offerings also need to be on par with, or able to connect with, the great innovations in data technologies of today and tomorrow.

3. Business Transformation advantage

Cloud computing has also displayed its disruptive nature to businesses on several fronts, especially through SaaS and its possible future variants. Today the worldwide adoption rate of public cloud services is still low, and the technology levels of business and IT operations vary extremely widely. Even in the US, some industries at the frontier are moving forward with mind-blowing speed and explosive innovation, such as computing & IT, biotech and materials, while many traditional businesses, governments included, are still running on very old technologies: slow, fearful, lacking the vision, resources or confidence to change. The survival of many businesses depends on their speed to transform. How to demonstrate the future possibilities and provide quick, easy solutions to transform their IT and business better than the competition should definitely be among the core strategies of Public Cloud providers. This will likely be the area with the highest growth potential and profit margins in the near future of cloud computing.

4. Flexibility in platform and offerings

Flexibility ensures future extensibility. Market demands will vary, and the growth scenarios enabled by cloud computing will vary. Even today, at this earlier stage, flexible offerings tailored to customers' specific needs are often preferred, in both IaaS and SaaS. Tuning into this advantage will allow more room for a player to win in the long run.

More than ever we live in a ubiquitously connected and technology-driven world. Cloud computing opens up new dimensions of possibilities where many innovations, disruptive or continuous, will be born. If the future competition is going to resemble a marathon race no longer on gravity-controlled earth, but in zero-gravity space, then every business must have an open mindset to envision, explore and experiment from today.

For further in-depth discussions on the right positioning and winning strategies specific to your business, or how to implement them in details, please contact TriStrategy.

The Positioning of Public Cloud Services – Part II

In today’s Public Cloud market, each large player displays their own unique strengths in the offerings. The competitions are fierce on price (with IaaS in particular) and on features of higher growth potentials to achieve better economies of scale. New services and features are being added rapidly in order to compete and differentiate more effectively.

Amazon Web Services (AWS) clearly has the early-entry advantage in IaaS offerings. It pioneered some of the industry's concepts, such as "pay by usage hours". Recent surveys indicate that AWS currently leads in market share, at more than 50% among businesses of all sizes using the Public Cloud. The key strengths of AWS are at least three-fold: a total-cost advantage in IaaS built on commodity hardware, with promises of unlimited capacity and compute power; a full range of Linux-based open-source solutions for Big Data; and global geographical zoning coverage. The Amazon S3 storage system, very practical for storing massive unstructured data, currently hosts trillions of objects and processes over a million requests per second. It offers an industry-leading low cost of $0.03 per GB per month (about $30 per TB per month). AWS' packages of Linux-based solutions appeal to a broad market base that has been experimenting with mixed IT solutions combining newer technologies and open source, especially for Big Data. Various Hadoop-ecosystem tools for Big Data processing and analytics are wrapped inside AWS managed services such as Kinesis and Elastic MapReduce (EMR). Customers only need to focus on tuning the number of cluster nodes required for their data load (and the associated usage cost) instead of spending time tweaking Hadoop code, often a huge resource challenge for open-source adopters. Another leading factor for customers choosing AWS is its worldwide geographical coverage. Amazon claims AWS coverage in 190 countries. Its zoning strategy not only offers great redundancy, but also lets customers control geographic instances of their choice, leading many to believe that their data can reside in the region they choose, with the needed Amazon support in place. However, with 30+ major AWS offerings and all kinds of unfamiliar terminology, things can quickly become confusing. Amazon may need to better organize its offerings and better educate the market.
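The per-GB arithmetic above can be checked with a toy estimator (a hypothetical helper for illustration, not an AWS tool; real S3 pricing is tiered by volume and varies by region):

```python
def s3_monthly_cost(gb: float, price_per_gb: float = 0.03) -> float:
    """Flat-rate monthly storage cost at the quoted $0.03/GB-month.
    A toy estimate only: actual S3 pricing is tiered and regional."""
    return gb * price_per_gb

# 1 TB = 1024 GB, so the quoted rate works out to roughly $30/month:
print(round(s3_monthly_cost(1024), 2))  # 30.72
```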

Google has clearly positioned its cloud platform as the platform for developers. Google has significant advantages in developer areas, as its software-driven cloud architecture is a direct extension of the internal developer platform and solutions that support its gigantic search service. Google's App Engine is a strong PaaS contender, adopted by a large share of small and medium companies in the Public Cloud market. Google runs a data-driven business. Its developer solutions, including MapReduce, BigQuery, etc., often become the most widely adopted in the industry. While other companies, Amazon AWS included, are still learning and implementing Google's earlier solutions such as MapReduce as the top analytic choice for Big Data, Google has already come up with more innovative ones. For example, Dataflow has already replaced MapReduce and BigQuery internally as a faster solution, though it has not yet been fully disclosed externally. Google is also marketing the other advantages of its software-driven cloud, such as high uptime, virtual storage, zero startup time and consistent throughput. The possibility of real-time collaboration in the cloud appeals to developers and many business users. Google's Android developers can leverage the cloud platform directly for mobile applications, which will be a huge growth area for clouds. Although Google faces the dilemma of "open-source openness" in competition, its speed of innovation and the flexibility of its cloud architecture make it a formidable competitor in future cloud offerings, especially in application areas. In fact, Google is aggressively pursuing new cloud-based workplace software to compete more effectively in the enterprise space, as enterprise adoption of Google's cloud services has lagged behind that of Amazon and Microsoft.
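For readers unfamiliar with the MapReduce model mentioned above, here is a toy, single-process word-count sketch (an illustration, not Google's or Hadoop's implementation): a map phase emits (key, value) pairs, a shuffle groups them by key, and a reduce phase aggregates each group. Real MapReduce distributes these same phases across a cluster.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    """Map: emit a (word, 1) pair for every word in a document."""
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values (here, by summing counts)."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big clouds", "clouds of data"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'clouds': 2, 'of': 1}
```

The appeal of the model is that the map and reduce functions are independent per record and per key, which is what lets a framework spread them over many machines.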

Microsoft certainly has the capabilities to offer the broadest variety of cloud services. Currently Microsoft is trying to convert its numerous software advantages from the desktop world to the cloud, but this needs new thinking, as the cloud reality demands new dimensions of software beyond the isolated desktop world. For cloud adoption, it can benefit from its existing broad customer base, especially among large enterprises; in fact, Windows Azure adoption among large enterprises has been growing steadily. With both IaaS and PaaS from Windows Azure and SaaS offerings from Office 365, Exchange, Dynamics CRM, etc., Microsoft definitely has the capacity to set up a "one-stop shop" for all enterprise and mobile customers' cloud needs, although it has yet to reach that stage or clearly define its positioning. Its IaaS and PaaS pricing moves in almost lock-step with Amazon's; the most appealing factor of Microsoft's cloud offerings, however, resides in its broad software spectrum. Many enterprise tools, including Active Directory, SQL Server, SharePoint, etc., and developer tools such as Visual Studio, remain popular among customers, but not all features of the desktop versions can be easily transported to the cloud versions. The conversion itself can be a challenge for Microsoft and can easily confuse customers. Even for an enterprise customer on a pure Microsoft technology stack, migrating existing critical business applications to be truly cloud-ready is not an easy task today unless it uses only hosted options. Beyond the familiar, one brand-new feature, Azure ML, released in July, is a great addition to Azure PaaS; this drag-and-drop cloud analytics tool was instantly welcomed by developers and data scientists alike. In addition, Microsoft is also trying to open the Windows Azure platform to all customer choices, including Linux-based VMs, Hadoop services (HDInsight), and Android/iOS/Symbian/other tools for mobile developers. The results of these offerings are yet to be measured. Compared with competitors' cloud offerings, Microsoft has the advantage of familiarity to customers, but caution is needed on the long-term roadmap, as familiarity can often become a deterrent in the face of disruptive innovations.

There are also other providers competing for market share in Public Cloud services, for example Rackspace in IaaS, Salesforce in SaaS, Verizon (Terremark), AT&T, VMware, IBM, HP, etc. It is definitely a space that is only getting more crowded.

In Part III, we will discuss some possible positioning scenarios and winning strategies for Public Cloud services, per TriStrategist's views.

The Positioning of Public Cloud Services – Part I

For the IT industry and for targeted customers' organizations, is cloud computing a disruptive force or a continuous evolution? Which types of cloud offerings could potentially become commodity services, and when would that happen?

The answers may vary among organizations and across types of cloud services, but addressing these questions may well help determine the positioning strategies for cloud service providers in an increasingly competitive market, most importantly for Public Cloud service providers.

Today there is a paradigm shift among businesses in their perception of cloud computing. From the initial concept of infrastructure renting to connected business operations, from lowering CAPEX to enabling new opportunities, many companies have moved from "cloud watchers" to "cloud chasers" and will eventually want to become "cloud riders". Worldwide adoption of cloud services keeps increasing as a result, and many global companies are moving from the earlier "Transition" stage to the "Transformation" stage, as cloud technologies and service establishments mature and the future world with clouds becomes clearer and highly enticing. A surprising jump in SaaS adoption in the 2014 (4th annual) Cloud Computing Survey offers one proof: from 13% adoption in 2011 to 72% in 2014. In this fast-shifting market, with numerous global vendors jumping in and huge investments piling up over the past few years, the positioning of services and the differentiation of value are hugely important for Public Cloud providers, for today and the near future.

Because Public Cloud offerings demand a huge amount of upfront and continued investment in both infrastructure and critical capabilities, some have predicted that the industry may eventually consolidate into the hands of a few large players. However, since cloud computing impacts the future IT and business models of every business, and offerings can become ever more innovative, the market is immensely large if not unlimited. With an increasing variety of value-added services (especially on the software and application sides), hardware costs getting cheaper and cheaper, and open-source tools for cloud-based data handling and applications readily available on the market, the result could be a very fragmented market with both multiple "global department stores" and many "specialty stores".

Take IaaS today, for example: value differentiation becomes increasingly difficult as the market fills with many large players, or smaller players with solid investment backing. For many enterprises, IaaS can be considered a continuous evolution and a low-barrier business. Most data center or colocation operators of the past can naturally become IaaS providers. At the same time, many large companies that already have their own data centers are continuing to build more, to offer cloud services to themselves or to a related community: for future growth, for global expansion, for the coming age of data and connectedness including IoT, etc. Operating secure clouds of their own makes a lot of strategic sense for them, and some have also decided to join the fray as Public Cloud providers, such as AT&T, Verizon, etc. Even for many medium and small companies, Hybrid Cloud solutions are more on their minds for the future, with mixed on-premises solutions alongside various value-added public offerings from multiple vendors. Under these market conditions, the coming challenges and competition in Public Cloud services could be even more intense.

Today, cloud computing is still at the early adoption stage across the global IT industry, although momentum is building fast. There are still only a handful of large, early-entry Public Cloud providers on the market who can offer a wide range of cloud services, namely Amazon (IaaS, PaaS), Microsoft (IaaS, PaaS, SaaS) and Google (IaaS, PaaS, SaaS). If cloud computing is viewed as a disruptive innovation in the IT world, these early entrants will typically enjoy a certain degree of advantage.

In Part II, we'll take a look at the current strengths and positioning of these large players in the Public Cloud arena. In Part III, we'll discuss further the possible winning strategies in positioning Public Cloud services for future competition.

Global Program and Project Management as Core Skills for Organizations

TriStrategyIn today’s complex and interconnected world, the success of every business around the world depends on how it can effectively operate in collaborative mode: collaborative on global strategies and advantages, collaborative with governments and industries on joint global initiatives, collaborative on the intersections of sciences, technologies and engineering for cutting-edge innovations, collaborative to perform and solve tough problems with people of diverse skills across multiple locations, organizations, teams, etc.. The most practical operational unit for these collaborations is a global program or project. A business’ growth very much depends on the successful execution of these programs (typically include multiple inter-related projects or iterative projects with more breadth) and projects.

Not only are vertical chains of command and rank-and-file structures no longer sufficient to capture new opportunities or deal with organizations' challenges, they are impractical or ineffective for global collaboration most of the time, and for many reasons. Differences in culture, jurisdiction and standards can still be barriers for global businesses. Global projects, often more complex and challenging than localized ones, are strategically important to most enterprises today. In the new business reality of an ever-changing landscape of global innovation and competition, and of market demands for quick learning, high adaptability, global perspectives and versatile skill sets, effective global program and project management skills are essential both for organizations and for each aspiring leader's career. They are a combination of mindsets, leadership skills and rigorous disciplines that can often only be trained and enhanced through years of real-world practice.

Meg Whitman, the former CEO of eBay and current CEO of HP, mentioned in her autobiography 'The Power of Many' that "project management skills are surprisingly rare in business, even though they are possibly the most important skills needed to be a good operating executive." Alan Mulally, in his successful transformation of Ford, essentially adopted many of the sound principles of global project management to collaborate with Ford's executive teams and lead the aging company's transformation. For example, much like the open-communication principle and the collaborative techniques used to lead a matrix environment in a project, he used recurring weekly Business Plan Review (BPR) meetings to gradually foster a positive culture change in a flattened organization and to keep his global leadership team informed at all times, all together, on all aspects of the business. Running the company much like a giant global program, he encouraged honest, data-driven status reporting, joint planning and full leverage of global strategies, platforms and resources across the company, emphasizing sharing and collaboration among all global top leaders and teams. These techniques proved substantially more efficacious and popular for transforming an old business today than the command-and-control, abrupt changes or costly "new-boss restructuring" carried out in many business transformations of the past.

On today's immensely flattened globe, "going global" is no longer a slogan but an ever-present reality. Skills for leading and managing projects across countries, industries and groups are increasingly in high demand. Global projects are no longer limited to contracting out straightforward lines of work such as offshore software development and testing, or call-center support offices. Increasingly, global programs and projects are set up to run large joint marketing and sales initiatives, interdependent product/service R&D, supply-chain operations, strategic investments, joint ventures, and choice placements of business segments and resources across multiple global markets, all to optimize growth potential, cost structure, performance and go-to-market efficacy.

It's hard to summarize all the skills an effective Global Program or Project Manager needs, but TriStrategist would list a few core requirements for such a role:

  • Essential program and project management skills and experience in diverse businesses or environments;
  • A broad-minded, big-picture focus and a global strategic view of the business, with the ability to balance the competing needs of the program's many factors within that big picture;
  • Open-mindedness, with a true appreciation and respect for cultural diversity;
  • Strong negotiation and communication skills;
  • Willing leadership: the ability not only to lead people and projects but also to take calculated risks, initiate tough decisions and win over global audiences to support proposals for the long-term benefit of the business;
  • A quick learner and "constant gardener", willing to learn and adapt constantly.

Eventually all businesses will need to proactively develop these skills and mindsets in their leaders and managers, or acquire them from outside, but good, ready-made candidates can be hard to find.

Artificial Neural Networks

Neural networks, or Artificial Neural Networks (ANNs), are computational models that simulate the connectivity of the neuronal structure of the cerebral cortex and the brain's learning patterns for certain computational tasks, such as machine learning, cognition and pattern recognition, areas where conventional computational models usually fare poorly.

Unlike Computational Neuroscience, which studies in depth the true, complex biological functions of neurons in the brain's information processing, a neural network is more a simplified modeling technique, a set of algorithms that simulates the brain's patterns of stimulation and repetitive learning by using interconnected parallel computational nodes as artificial neurons, often organized into input, output and processing layers. Adaptive weights simulate the connection strength between any two neuron nodes. These weights are adjusted repeatedly in each "learning" cycle instead of being determined beforehand.

Many college courses are dedicated to the study of neural networks. In a simple sense, neural networks offer the possibility of continued learning and correction: by comparing outcomes against some known reality, the model gradually fits closer to a particular function of the brain. This is a huge departure from conventional computational models, which are deterministic, with data and pre-defined instruction sets stored in memory for a centralized processor to retrieve, compute and store sequentially to generate outcomes. The processing nodes of a neural network, by contrast, take information from input nodes or external signals, carry out simple weighted computations in parallel, and present the results together as the outcome. The knowledge of a neural network lies in the entire network itself rather than in any single node. Each computational cycle is almost a self-learning, reality-adjusting cycle, similar to the way humans and animals generally learn.
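The weighted-computation-plus-adjustment idea described above can be sketched in a few lines of code. The following is a minimal, illustrative single-layer perceptron (a deliberately simplified ANN, not any particular production system) that repeatedly adjusts its adaptive weights until its outputs match a known "reality", here the logical AND function:

```python
# Minimal single-layer perceptron: weighted inputs, a step activation,
# and repeated weight adjustments against a known target function (AND).
def train_perceptron(samples, targets, lr=0.1, epochs=20):
    w = [0.0, 0.0]   # adaptive weights, one per input node
    b = 0.0          # bias term
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            out = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = t - out               # compare the outcome to reality
            w[0] += lr * err * x[0]     # strengthen or weaken connections
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                  # logical AND
w, b = train_perceptron(samples, targets)
preds = [1 if (w[0] * x0 + w[1] * x1 + b) > 0 else 0 for x0, x1 in samples]
print(preds)  # after training: [0, 0, 0, 1]
```

Note that no one "programmed" the AND rule into the network; the knowledge ends up distributed across the weights, exactly the property the paragraph above describes.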

A human brain contains billions of neurons, more than those of almost any other species on Earth. A typical large ANN today may use a few thousand processor units as nodes, a much smaller number by comparison. With the greatly enhanced computing power of a cloud-ready world, the number of artificial neurons could be affordably extended if needed. However, there is no proven law yet that an ANN's power and reality-rendering accuracy grow in direct proportion to the number of nodes it runs on. There is still a long way to go before AI can use ANN-enabled systems as the intelligent brains for everything in the plan, but we seem to be at least on the right track.

Today’s Nanotechnology and New Chip Design Concepts*

It's all about scale. Once we shift our reference frame to the molecular or atomic level, a whole new world of possibilities emerges in front of us.

First raised by the famed physicist Richard Feynman in 1959, the idea of making things at the level of fundamental particles has triggered a revolution in many fields, including physics, chemistry, biology, materials science, medical and life sciences, electrical, bio- and chemical engineering, manufacturing, and military and space engineering. A nanometer (1 nm) is one billionth of a meter, and the nanoscale typically spans 0.1-100 nm. Most atoms are 0.1-0.2 nm wide; a DNA strand is about 2 nm; a blood cell is several thousand nm; and a strand of human hair is about 80,000 nm thick. At nanoscale, quantum mechanics dominates, and matter can display many unique properties that are unavailable at ordinary scales.

Experimental nanotechnology did not come into tangible existence until 1981, when an IBM research lab in Switzerland built the first scanning tunneling microscope (STM), which made it possible to image individual atoms. In the following decade, moving single atoms became possible as well. Other techniques discovered around the same time also made manipulating atoms a true engineering reality, although it is no small feat even today. In 1991 the first carbon nanotube was created: a rolled-up single sheet of graphene (a one-atom-thick layer of carbon), about six times lighter and 100 times stronger than steel. Its mechanical properties and electrical conductivity make it a favorite candidate as a nanoscale building block in many applications, especially in the high-tech world.

Today, one application area of nanotechnology attracting intense focus is extremely small-scale electronic circuitry. With increasing difficulty in meeting Moore's Law, doubling the density of transistors on a single IC at the ever-shrinking sizes modern electronics demand, the limitations of silicon chips, including heat and energy constraints, have become more obvious and more costly to work around. Nanotechnology has rushed to this promising frontier as a source of replacements for future chips, and many creative ideas are being tested at present.

Graphene chips have already been created in various forms, but graphene can be damaged too easily during assembly to be a practical production choice for most computers. IBM released an advanced version of a graphene chip in early 2014, using a new manufacturing technique to address the fragility problem. Meanwhile, other nanomaterials have come onto the stage to compete with graphene, for example a new nanomaterial and assembly method demonstrated by Berkeley Lab this year. At atomic levels, once assembled properly, many particles or mixed structures could potentially display the electrical and optical properties needed for building nanochips. This area of chip manufacturing will likely see intense competition in the future.

Various techniques have long been investigated for overcoming the limitations of existing chip design by modifying the silicon structure, but many engineering challenges remain. Earlier this year, UC Davis researchers established a bottom-up approach that adds nanowires on top of silicon, which could create circuitry of smaller dimensions, tolerate higher temperatures and allow the light emission needed for photonic applications that traditional silicon chips cannot support. They found a way to grow other nanomaterials on top of silicon crystals to form nanopillars that serve as stations for nanowires to connect and function like transistors, thus forming complex circuits. The most appealing aspect of this method is that it does not require significant changes to today's manufacturing process for silicon-based ICs.

Another completely new chip-design concept at nanoscale comes from using the quantum nature of particles, rather than silicon crystals, to define the binary "0" and "1". For example, by switching the direction of a single photon hitting a single atom that resides in one of two atomic states, the resulting direction of the photon can represent the "0" and "1" logic. Quantum computing is thus born. If manipulated successfully, quantum states may allow the simultaneous existence of more than just "0s" and "1s", which could promise far more powerful future computers.
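The contrast with classical bits can be illustrated numerically. The toy sketch below (purely illustrative; the function name and normalization are this author's assumptions, not any real quantum-hardware API) treats a qubit as a pair of amplitudes for the "0" and "1" states, with measurement probabilities given by their squared magnitudes:

```python
import math

# Toy state-vector view of a qubit: amplitudes (a, b) for the |0> and |1>
# states. Unlike a classical bit, both can be nonzero at once (superposition).
def measure_probabilities(a, b):
    p0, p1 = abs(a) ** 2, abs(b) ** 2
    total = p0 + p1              # normalize so the probabilities sum to 1
    return p0 / total, p1 / total

# A classical "1": all amplitude on |1>.
print(measure_probabilities(0.0, 1.0))   # (0.0, 1.0)

# An equal superposition: a 50/50 chance of reading "0" or "1".
h = 1 / math.sqrt(2)
p0, p1 = measure_probabilities(h, h)
print(round(p0, 3), round(p1, 3))        # 0.5 0.5
```

The second case is what has no classical analogue: before measurement, the qubit genuinely carries both values at once, which is the source of the extra computational potential mentioned above.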

With the drastically different and innovative landscape of chip design enabled by nanotechnology today, what are the needs and implications for future software design? One near-term field where software will definitely play a significant part alongside nanotechnology is the logical algorithms needed to control and manage the self-replication of "active" nanocells, especially for AI development. For application software, TriStrategist thinks that once nanoscale manufacturing becomes the norm in the computer and electronics industry, its flexibility and versatility imply that chips can be designed far more adaptably around myriad human needs and the application programs that run on top of them.

*[Note:] This blog was in fact drafted and published on August 27, 2014, to make up for previous week’s vacation absence.

Containers and Cloud

The container concept in cloud infrastructure deployment is not new. Such containers serve as pre-fab blocks holding the equipment, configuration and management tools that allow fast plug-and-play data center setup. The uniformity of the hardware is both the strength and the weakness of this concept, trading versatility and flexibility in infrastructure deployment for agility and speed. Google first designed and implemented a container data center in 2005. Microsoft built its first container-based, 700,000-sq-ft Chicago Data Center in 2009.

Now the "container" concept has been smartly extended from cloud infrastructure to data and cloud software. A cloud software container can work as a portable module that lets cloud applications move between hybrid cloud PaaS offerings, or be offered as a flexible component of a PaaS. Many such designs and implementations are still in the making. One example that has just started gaining popularity is Docker, a Linux-based open-source OS-virtualization implementation. It essentially functions as a middle-tier abstraction layer, or wrapper, that shields application developers from cloud platform complexities. Within 15 months of its inception, total downloads of its trial version exceeded one million, and the community is growing fast. The implementation has a few major global supporters, and the startup company in San Francisco recently raised a new $40 million round of venture funding.

Compared with the prevailing approach of deploying cloud applications on virtual machines, the Docker "container" aims to make applications easily portable among hybrid cloud platforms, with agility, zero startup time and one-click management. Of course, the benefits of portability and platform neutrality come with trade-offs. Because Docker shares the host's OS kernel for platform reach, some security controls have to be sacrificed. Docker also supports only single-host applications, since the "container package" becomes much more complex for multi-server applications spanning platforms, a hard problem to solve; some supplemental solutions on the market propose to overcome this limitation. Still, the "software container" concept and its solutions are very new and have yet to be proven in IT production situations.

The current Docker v1.0 does not support Windows applications, but it is still a good start toward fulfilling a key market need, and in fact also one workable approach to today's distributed computing.
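To make the "portable module" idea concrete, here is a minimal, illustrative Dockerfile; the file names and base image are hypothetical examples, not taken from any particular project. It declares an application and its dependencies as one self-contained unit that runs the same way on any Docker-capable Linux host:

```dockerfile
# Illustrative only: package a small Python app and its dependencies
# as one portable image that runs identically on any Docker host.
FROM python:2.7

# Declare and install dependencies explicitly inside the image.
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt

# Add the application itself and define its startup command.
COPY app.py /app/
CMD ["python", "/app/app.py"]
```

Built once with `docker build` and launched with `docker run`, the resulting image carries its whole runtime environment with it, which is precisely the portability between platforms discussed above.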