Category Archives: Trend of Technology

Columnar Database and In-memory Processing

A columnar database, precisely as its name suggests, is a database that primarily stores and processes data by columns rather than by rows. Most traditional relational databases and OLTP (Online Transaction Processing) systems store and access data by rows. Due to the increasing challenges brought by today’s Big Data problems, which require ever faster processing of huge amounts of non-relational data, columnar databases come up more and more frequently in present-day database technology discussions.

The major advantages of a columnar database are typically twofold. First, it offers more efficient compression of the data in storage blocks, which results in less overall storage space and a smaller amount of data to read from disk or hold in memory. Second, due to both the compression and the self-indexing nature of columnar data, it can drastically reduce disk I/O requirements and offer much better query performance, especially for column-oriented operations such as SUM or COUNT. When queries touch only a limited set of columns in a large data set with millions or trillions of records, a columnar database usually offers a notable performance improvement. Well-known commercial columnar databases today include HP Vertica and Amazon Redshift (part of the AWS offerings), among others.
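As a rough illustration (the table, column names and values below are made up purely for this sketch), the following Python snippet contrasts a row layout with a column layout and runs a SUM over a single column; in the columnar layout only that one column needs to be scanned, and a sorted, low-cardinality column compresses nicely with a simple run-length encoding.

```python
# Minimal sketch: row-oriented vs. column-oriented storage of the same table.
# The table, its columns and values are invented purely for illustration.

rows = [
    {"order_id": 1, "region": "US", "amount": 120.0},
    {"order_id": 2, "region": "US", "amount": 75.5},
    {"order_id": 3, "region": "EU", "amount": 210.0},
]

# Column store: one contiguous array per column.
columns = {
    "order_id": [1, 2, 3],
    "region":   ["US", "US", "EU"],
    "amount":   [120.0, 75.5, 210.0],
}

# SUM(amount): the row store must touch every field of every row,
# while the column store scans just the "amount" array.
total_row_store = sum(r["amount"] for r in rows)
total_col_store = sum(columns["amount"])
assert total_row_store == total_col_store

# Run-length encoding works well on sorted, low-cardinality columns
# such as "region" -- one reason columnar data compresses so well.
def run_length_encode(values):
    encoded, prev, count = [], None, 0
    for v in values:
        if v == prev:
            count += 1
        else:
            if prev is not None:
                encoded.append((prev, count))
            prev, count = v, 1
    if prev is not None:
        encoded.append((prev, count))
    return encoded

print(run_length_encode(sorted(columns["region"])))  # [('EU', 1), ('US', 2)]
```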

With memory chip technologies continuing to improve, RAM prices dropping and non-volatile memory technologies readily available, in-memory processing or the in-memory database (IMDB) has naturally become affordable and popular. When the entire database or large blocks of data are stored and processed directly in memory, without disk I/O overhead, query performance can be significantly faster and more predictable. Under such scenarios, the performance difference between row-based and column-based processing for a standard-size database may be less of a concern. Many enterprise database providers are exploring in-memory processing as their fast-DB or Big Data solution today; examples include the in-memory options in Microsoft SQL Server 2014, IBM Informix and Oracle Database.
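As a tiny, generic illustration of the in-memory idea (not of any of the vendor options named above, and with made-up table and column names), Python’s built-in sqlite3 module can create a database that lives entirely in RAM, so queries never touch disk:

```python
import sqlite3

# ":memory:" asks SQLite to keep the whole database in RAM; nothing is
# written to disk and the data vanishes when the connection closes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO metrics VALUES (?, ?)",
    [("temp", 21.5), ("temp", 22.1), ("humidity", 40.0)],
)

# The aggregation below runs against in-memory pages only, with no disk I/O.
for sensor, avg in conn.execute(
    "SELECT sensor, AVG(value) FROM metrics GROUP BY sensor"
):
    print(sensor, avg)

conn.close()
```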

In the coming cloud world, distributed data stores will become more common. When huge amounts and a large variety of data need to be accessed across various storage units and processed at the same time by hybrid processing nodes, smart optimizations of storage, retrieval mechanisms and query performance will still demand careful consideration. We will likely see more data technology innovations and solutions with more intelligent designs and optimization algorithms built in.

Virtual Reality or Parallel Reality?

If ever possible, we all crave expanded realities beyond the plain physical world that we experience only through our limited basic senses.

From motion detection and eyeball tracking to an instant fantasy world entered by putting on a 3D goggle or headset, human-interacting media reality, or virtual reality (VR), is definitely joining the new technology fanfare nowadays. Whether it is Google’s expensive Google Glass, the Oculus Rift VR headset (Oculus was purchased by Facebook in 2014), Razer’s OSVR headset, or the coming VR gadgets from Sony, Samsung and many other vendors, the current-day implementations of VR work by tricking our brain into accepting the existence of a virtual world and of virtual connections with the visual content presented – as if we were living and reacting in real time in another place outside our immediate physical surroundings, where we seem to be present and to see, hear and touch the objects and surroundings presented in the media or game. However, in such scenarios our brain always knows in advance that these are purely “virtual” and not real.

Star Wars Hologram Jedi Meetings
TriStrategist thinks that the frontier of VR technologies will soon move well beyond content visualization and gaming. In fact, some of the best VR ideas we hope to see in the future, and have imagined so far, are already in the sci-fi movies. The Star Wars Jedi meetings had great communication channels where everyone could be called upon in real time through holograms, no matter where their physical bodies were traveling in the universe. The realization of such VR technologies may not be far off at all, just as we have to believe that humans could colonize other planets with the advancement of technology in the not-so-distant future.

To expand our actual reality, one way is to create another virtual or fake “reality”, then trick our brain into believing it and gathering our basic senses around it. However, through further advances in modern physics, cosmology, biology, neuroscience and psychology, and of course aided by future technologies, we may well discover the existence of real ultra worlds that are yet to be detected or proved today. The parallel universe and wormhole theories could be the start, but other parallel realities could also exist even though they remain beyond our scientific understanding, or even our imagination, today. Nature has vast unknowns waiting to be explored, unknowns that could fundamentally change our concepts of space, time, energy, the power of our brain and senses yet undefined. If one day a parallel universe or parallel reality is proved true, we will be thrilled to no end. As we open our minds, seek and believe, the possibilities will be truly endless.

2015, A Year of Continued Transformation

The world is changing rapidly and we are living in an age of major transformations. For individuals and for businesses, embracing the changes, looking forward to the future, and being adaptive and flexible will become more important than ever. It’s certainly easier said than done.

Almost all business leaders today agree that the coming years will see tremendous technology-based business transformations. The forces and momentum for change have already been established in the broad market and society. Many of these business transformations are taking place at this moment, and 2015 will surely be a significant year along the path.

Decades-old business models, mindsets and business processes will continue to be challenged and put under scrutiny as new technology innovations and new business concepts on a global scale shake up society in every way. New breakthroughs will open people’s minds and imaginations to far greater possibilities. Today’s technologies also help new business ideas penetrate the worldwide mainstream almost instantly.

When we look into the future, many seeds have already been sown today. In 2015, TriStrategist thinks we are likely to see fast changes in the following business areas, to name just a few:

– Cloud business: As IaaS moves more towards commodity services, SaaS may become the differentiator in public cloud offerings. Customers are seeking new features, flexibility and easy-to-understand pricing models in SaaS offerings.

– Device business: Worldwide competition is only getting fiercer. It demands innovative ideas in manufacturing, selling and distribution, marketing, pricing and much more. Joint design and investment models will become the norm in the device business as any new device comes and goes so quickly. Order-on-demand will likely be the preferred mode of operation for OEMs/ODMs and retailers. Speed and superior design innovation will be essential in all device businesses.

– Enterprise IT: Carried by the cloud computing wave, internal IT departments will likely move more towards SLA-based offerings – measurable on-demand or shared-services models for more efficient and cost-effective internal infrastructure, platform and application support. Pain will be felt as many long-established IT processes and roles are shuffled through such changes.

– Ubiquitous Connectivity (UC): New gadgets, new sensors will continue to mushroom. UC will start taking clearer shape.

It will be an exciting time for many new entrants, but for large traditional businesses, trials and tribulations await because the majority of technology innovations today are distinctly disruptive in nature. Yes, an elephant can dance, but for how long and how well is a serious question in today’s environment, where new rivals and threats come from every corner of the world, possibly in the most unexpected manners.

For both individuals and businesses, successful transformation will ultimately come from ready minds with vision, courage, dedication and agility. Peter Drucker once warned that yesterday’s breadwinner “soon becomes a bar to the introduction and success of tomorrow’s breadwinner. One should, therefore, abandon yesterday’s breadwinner before one really wants to, let alone before one has to.” He also reminded us, “Do not kill tomorrow’s breadwinner on today’s altar.” If we have followed his wisdom and practiced it routinely, we should hold onto the belief that successful transformation will be with us when we need it in the forward-looking new reality.

Nearables, Farables, and Escapables?

Following the smart wearables on the market, now comes the term “nearables technologies” to describe technologies that allow smart objects to communicate with receiving devices within a few meters’ distance via the Bluetooth Smart protocol. The 2013 release of a low-energy example of such technologies, Apple’s iBeacon (an indoor positioning system), has further promoted the conceptualization and realization of the Internet of Things (IoT), where almost everything, including humans, can be positioned as a smart object carrying wireless beacons in the near future.
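To give a sense of how small a beacon’s payload actually is, here is a hedged Python sketch decoding the commonly documented iBeacon manufacturer-data layout (16-byte proximity UUID, 2-byte major, 2-byte minor, and a signed calibrated TX power byte). The sample bytes are made up, and real scanning would of course require a Bluetooth stack rather than this offline parse.

```python
import struct
import uuid

def parse_ibeacon(manufacturer_data: bytes):
    """Decode a 25-byte iBeacon manufacturer-specific payload.

    Layout (as commonly documented): Apple company ID 0x004C (little-endian),
    beacon type 0x02, length 0x15, a 16-byte proximity UUID, 2-byte major and
    2-byte minor (big-endian), and a signed TX power measured at 1 m.
    """
    company_id, beacon_type, length = struct.unpack_from("<HBB", manufacturer_data, 0)
    if company_id != 0x004C or beacon_type != 0x02 or length != 0x15:
        raise ValueError("not an iBeacon advertisement")
    proximity_uuid = uuid.UUID(bytes=manufacturer_data[4:20])
    major, minor, tx_power = struct.unpack_from(">HHb", manufacturer_data, 20)
    return proximity_uuid, major, minor, tx_power

# Hypothetical sample payload, for illustration only.
sample = bytes.fromhex(
    "4c000215" + "f7826da64fa24e988024bc5b71e0893e" + "0001" + "0002" + "c5"
)
print(parse_ibeacon(sample))  # (UUID('f7826da6-...'), 1, 2, -59)
```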

With the near comes the far. If “near” means a few meters’ range, in the sense of a beacon, “far” will be a distance of more than 10 meters. TriStrategist believes that very soon, new developments will enable “farables technologies” to fill in the picture of ubiquitous computing and the IoT of the future.

Imagine a future world in which, with or without one’s knowledge, everyone becomes a smart object constantly emitting signals about their location and other personal profile information. Even with modern encryption technologies or proprietary communication protocols, no matter how advanced, if any of these signals is captured by some ill-intentioned party, all information about this person can potentially be exposed and misused. It will be a grave privacy and security concern. Many of today’s hacking stories have already demonstrated that no advanced technology setup can be truly hack-proof.

Could anyone escape this scenario? Not easily. In the near future, the clothes you wear, your shoes and socks, the jewelry you pick, any piece of personal item you carry, and of course your cell phone or any gadget with you could all have sensors built in. The legal opt-out checkbox only fools the unsuspecting. TriStrategist thinks that our society will soon be in real and imminent need of “escapable technologies” – complex technologies that allow signal shielding or effective signal interference from the individual carrier, so that a private person can choose to become “invisible” to any electronic signal receiver or monitoring screen.

A stealth plane carries special coating and is designed with carefully measured optical geometry to become nearly invisible to radar. An “invisible man” on stage, performed by a magician, usually takes advantage of the lighting of the surroundings and wears special reflective clothing. Similarly, TriStrategist believes that future “escapables” from electronic or optical signal receivers may need a combination of different advanced technologies. Complete sensor signal shielding would be ideal, but it may be hard to design and may not apply to all situations. Interference may be another approach: the signals or data captured, mixed with the interference from the “escapables”, would become unreadable or undecipherable by the majority of receivers. These “escapables technologies”, once on the market and mature, could become far more valuable than any of the sensor technologies in the near future.

What else? Welcome to the new world where we will be experiencing numerous new technology innovations, new cultures and new vocabularies along with the explosive changes around us.

Consumer or Enterprise?

In the traditional PC-dominated world, the distinction between the enterprise and consumer businesses was fairly clear. Mobile devices, however, are to today’s market what desktop PCs were to the PC Age. The line between the enterprise and consumer markets has become increasingly blurry as mobile devices proliferate at the workplace in place of desktops and become productivity tools rather than just communication and social tools.

Competition in today’s consumer device market is extremely fierce. New devices, new features and new global players come out every day. Domination of any global consumer market is becoming increasingly short-lived and difficult. Companies are forced to constantly seek new markets and new ideas to hold onto market share, which may promise new service revenues in the near future. One such strategy is the push to cross the once-protected boundary between the enterprise and consumer markets.

Google, with its free Android mobile platform and a rich assortment of models at different price points from many manufacturers, has been gaining steady share among cost-sensitive consumers and school buyers. Today Google is pushing hard into cloud-based enterprise productivity software, online and on devices, with its “Google for Work” solutions including Gmail, Google Docs, etc. Microsoft, with the year-over-year decline in global PC shipments endangering its once-stronghold enterprise software business, is on the other hand trying hard to gain more share of the consumer market with its Windows Phone and Surface tablets.

Thanks to Steve Jobs’ visionary leadership, Apple has been very successful in the high-end consumer market with its popular iPhone and iPad. However, Apple has also started to see slower growth of the iPad and more competition for the iPhone in the global smartphone market. Naturally, it turned its attention to the enterprise space. Apple devices have been flowing nicely into the corporate world in recent years; the iPhone has long since replaced the BlackBerry as the mobile choice for enterprises. With the new reality that enterprise solution providers and internal IT departments must now offer applications both online and on mobile, Apple devices are riding the lucky wave of corporate change, and iOS is usually the No. 1 targeted mobile platform for these developments. For example, Salesforce.com and SAP are observing increased traffic from Apple devices for the mobile versions of their CRM/ERP applications. Apple is also consciously deepening its pursuit of corporate customers by letting its devices connect easily to enterprise email and by shipping new security and usability features in its newer models, such as the Touch ID fingerprint reader and the anti-reflective screen coating of the iPad Air 2 and Mini 3. This year Apple formed a partnership with IBM to leverage Big Blue’s enterprise service reach and push more mobile apps running on its devices in enterprise environments. The effort has already borne fruit after a few months, with a first set of mobile apps targeting key industries such as airlines, banking, financial services and insurance. With more apps to come to ease enterprise pain points, the two companies also hope that Apple’s strength in product design and user experience will appeal to more enterprise users.

It looks like the merging of the enterprise and consumer markets may be inevitable. A new reality appears to be fast approaching in which all major enterprise applications run in the cloud and all users can connect, do the needed work and run their daily life activities from mobile devices. Ease-of-use concepts and design innovations from the consumer experience will surely be carried into the business world as well. Wouldn’t it be cool if, in the future, work and play could merge?

The Stellar Rise of the Open Source Technologies

For more than 20 years, Open Source was like a crying baby – loud but not loved. Many open source companies, including the pioneer Red Hat, made only marginal revenues compared with the big software companies of their time. Players around the world who genuinely loved the open source concept were competing against dominant Windows-based software in guerrilla-warfare fashion. The randomness that came with volunteer contributions, the shortage of funding, and the lack of overall organization and development roadmaps made open source software painful to use, especially in large enterprise environments where robustness and ease of maintenance are often core requirements.

But things have changed dramatically with the new waves of cloud services and Big Data. Open source has become everyone’s favorite baby almost overnight. For example, Docker, a small San Francisco startup that released v1.0 of its software container solution for the cloud barely 1.5 years ago [See our August blog on Containers and Cloud], is being pursued by all major cloud providers. Google is using container templates directly for its cloud deployment features, including autoscaling and load balancing. Microsoft rushed to sign a new deal with Docker in October to allow generic Docker containers to be supported on Windows Server and the Azure platform in the near future. Today, Amazon’s and Google’s popular cloud platforms have brought a major chunk of Big Data solutions from the open source communities to market, by either hiding the complexity of the management tasks or delivering automation packages. In the meantime, big old telecommunication companies are spending billions to develop open-source-based, software-defined network infrastructure. Late to the party but with its own flair, Microsoft started to open its developer source code a few years ago to join the community. This week Microsoft disclosed its intention to open source its re-architected .NET Core as a foundation both for open-source application development and for cross-platform cloud services and application deployment.

In the global mobile market, many OEMs and ODMs in emerging markets have been developing and packaging customized mobile applications on their devices using open source technologies, mostly on top of the free Android mobile OS. This strategy has proved both fast and effective for many of them. The meteoric rise of Chinese smartphone maker Xiaomi, now #2 in smartphone shipment market share in China, is a great example. [See October blog on The Fast Shifting Market of Mobile]

What are the reasons for the rebirth and stellar rise of this old baby? First, over time, when a disruptive innovation has accumulated enough momentum, reached a critical mass of adoption and refined its solutions enough to form a sustainable ecosystem, it transforms itself from a smaller player in marginal markets into a powerful player that can compete head-to-head with incumbent dominant technologies in mainstream markets. Second, other present-day disruptive forces, triggered by the Second Machine Age innovations, demand new thinking, new ideas and new solutions to brand-new problems that incumbents are not prepared to deal with. The disruptive technology thus becomes a more attractive, creative and cost-effective alternative for the experiments of seeking new solutions. Third, once great minds worldwide start gathering around the subject, everything is possible. To sum up, conditions are ripe for a broader penetration of a once insignificant disruptive technology into the mainstream market.

Some recent comments in The Wall Street Journal explained the current appeal of open source software to big businesses. Companies find it less expensive and easier to customize than proprietary software, and they believe open source options can help them develop new services faster. Obviously, speed and cost are the top decision factors and the signatures of today’s technology market.

From Software-defined Virtualization to Future Distributed Computing

Abstraction and encapsulation are among the most important concepts in software programming. A similar thought process is applied to data center design and management for cloud computing, especially in the so-called software-defined data center (SDDC).

Currently, to support the broad hybrid-cloud computing suited to most enterprise environments, at least two distinct approaches to data center design are competing in the market: the hyper-converged, hardware-based cloud appliance approach and the SDDC approach. Offerings in the first camp include Microsoft’s Dell-based, Azure-on-board Cloud Platform System (CPS), VCE Vblock, HP CS700, IBM PureFlex, etc. This approach packages integrated compute, storage and virtual networking together in hardware containers with software management tools, and the containers can also be preloaded with certain platforms or applications. The system can be switched on and linked to an enterprise’s existing network to build a hybrid cloud on premises almost instantly. Performance and future scalability are limited by the capacity and number of these containers. Companies need to invest in the new appliances, but may save on many of the design, deployment and operation tasks.

On the other side, Google is the pioneer of the SDDC camp. From the beginning of its online search business, Google took cheap commodity machines and storage units, bundled them together and programmed software to control everything. From shared compute and storage to virtual networking, all of Google’s global data centers can be managed remotely. Failures in any of the hardware are detected and the failed units switched off instantly, without any interruption to the application tier or its users. Amazon’s AWS also adopts the SDDC approach with self-designed, homogeneous servers.

The SDDC is gaining steam in the cloud industry. The concept has been clarified as distributed virtualization of all elements of the infrastructure – compute, storage, networking and security. It targets the total abstraction of the application layer from the underlying hardware layers, and thus allows service SLAs and automation of the management tasks for each element of cloud computing. SDDC can promise unlimited scalability, performance and the all-important self-service to customers. The cheaper-hardware scenarios usually attract more attention, but there are often hidden costs associated with software resources and testing, especially with many of the open-source solutions.
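As a purely conceptual sketch of this abstraction (all class names, node names and the scheduling policy below are invented, not any vendor’s design), the idea can be reduced to a software control layer that places workloads on whatever healthy hardware is available and reschedules them when a node fails, so the application tier never deals with specific machines:

```python
# Conceptual sketch of a software-defined control layer, for illustration only.

class Node:
    def __init__(self, name: str, cpus: int):
        self.name, self.cpus, self.healthy = name, cpus, True
        self.workloads = []

class ControlPlane:
    """Places workloads on any healthy node; applications never see hardware."""

    def __init__(self, nodes):
        self.nodes = nodes

    def place(self, workload: str, cpus: int):
        for node in self.nodes:
            if node.healthy and node.cpus >= cpus:
                node.cpus -= cpus
                node.workloads.append((workload, cpus))
                return node.name
        raise RuntimeError("no capacity available")

    def mark_failed(self, name: str):
        # Failed hardware is switched off and its workloads rescheduled,
        # without the application tier being involved.
        failed = next(n for n in self.nodes if n.name == name)
        failed.healthy = False
        for workload, cpus in failed.workloads:
            self.place(workload, cpus)
        failed.workloads = []

cp = ControlPlane([Node("rack1-a", 8), Node("rack1-b", 8)])
print(cp.place("web-frontend", 4))   # placed on rack1-a
cp.mark_failed("rack1-a")            # web-frontend silently moves to rack1-b
print(cp.nodes[1].workloads)         # [('web-frontend', 4)]
```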

Most SDDC solutions today are based on homogeneous commodity hardware, but the real needs and challenges of today’s enterprises call for working with existing heterogeneous hardware and network situations. Several companies are trying to come up with more answers through distributed virtualization by abstraction and encapsulation. For example, VMware NSX extends the software-defined networking (SDN) concept with vSwitches built into VMware hypervisors to create virtual networks and encapsulate existing network topologies and security configurations, but it has yet to fully support hybrid cloud scenarios.

These are simply different stages in the development of modern computing. Today’s continued breakthroughs in the research and development of super-fast computer chips, along with the realization of nano and quantum technologies, may start challenging all traditional hardware someday. The future definitely points to true distributed computing, where compute power will not be limited to a few data centers or any single enterprise environment. Better-designed software, especially smart algorithms, will still be the key to capturing all future possibilities.

The Second Machine Age and AI

MIT’s Erik Brynjolfsson and Andrew McAfee published a book this year called “The Second Machine Age”. Last month (September), MIT hosted a conference with the same name to “showcase research and technologies leading the transformation of industry and enterprise in the digital era”. Per the two professors’ naming convention, the First Machine Age was the Industrial Revolution, which started in the 18th century and lasted a few decades. In that age, steam-engine-powered machines took us beyond our physical limits and greatly expanded humans’ spatial reach. Since then, we have had skyscrapers that could not be built by hand, trains to cross the continents, airplanes to cross the oceans and eventually spacecraft to reach into space. Now comes the Second Machine Age, in which the computer and digital revolutions enable automation, smart robotics and cognitively intelligent machines to work side-by-side with humans and greatly extend our mental capacities. This Second Machine Age will usher in profound changes to our social, economic and everyday life.

As the Second Machine Age brings in unparalleled productivity aided by smart machines, TriStrategist asks a question not raised there: could it mean that one day intelligent machines may transcend the time limitation that humans experience – either through some distorted (or virtual) reality, or through the co-existence of the same human brain power at the same moment in different locations (via robotic surrogates or some sort of scaled-out brain mapping to machines)? All seems likely.

Today, besides many industrial applications, newer smart robots can perform cognitive capture and basic learning, take instructions, be deployed for dangerous rescues, do domestic chores and assist in robotic surgery. Humanoid robots can talk with people through Skype and ask logical questions, becoming ever more human-like and human-able. More creative and faster-thinking quantum robotics is also in the making. Very soon, smart robots will be present in every corner of human life. More and more office job functions could be carried out either by automation tools or by machines. Humans will be freed to pursue more creative, higher-level jobs that machines cannot yet simulate, or will simply have more leisure time on their hands. Many could be displaced too – a potentially serious social and economic issue of the Second Machine Age.

Machines are still machines. The true power behind the machines of the Second Machine Age is no longer a physical engine, but Artificial Intelligence (AI), the “brain power” of the machines. We are currently in a new phase of AI: cognitive computing and deep learning. From traditional data mining to voice recognition, and now to cognitive adaptability, logical reasoning, improvisation and real-time interaction, AI has advanced towards human intelligence in big strides in recent years with the help of ever-increasing computational power. Some have even predicted that machine intelligence could surpass human intelligence within 15 years, by 2030. That sounds very scary indeed.

In a recent interview with Walter Isaacson, SpaceX CEO Elon Musk raised his concern that many people do not realize how fast AI has been progressing. He worried that intelligent machines with evil intentions could destroy humanity. It seems we indeed need to ponder whether humans can remain the owners of the machines, or will have to accept them democratically as peers, in our lifetime.

Microservices and Further Thinking on Distributed Computing

The challenges of distributed computing perplex and intrigue many minds. The search continues for a more flexible solution for distributed computing against a background of heterogeneous cloud computing, where information needs to be exchanged frequently, scaled out quickly and managed easily across diverse system and data environments. The concept of “microservices” started gaining more traction this year as an innovative software architecture approach, especially after Netflix published a case study to support it.

Simply speaking, the microservice architecture follows the path of further software granularization: each software application is constructed as a set of small services, each performing one simple function, running in its own process and communicating through lightweight mechanisms, often an HTTP resource API such as a REST API. These “micro”-sized services can be deployed independently and grouped together to jointly perform complex capabilities that traditionally would be handled by one monolithic application with many embedded components and dependencies packaged inside. Errors can be isolated and fixed by re-deploying a few microservices instead of bringing down the entire system. Some startups promoting microservices have used the analogy of the Apple App Store – offering PaaS with a rich collection of these microservices for a particular platform.
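As a minimal sketch of the idea (the service’s purpose, endpoint name and port are hypothetical), a microservice is essentially one small function exposed over a lightweight HTTP resource API and run in its own process. The example below uses only the Python standard library:

```python
# A tiny single-purpose "microservice": it does one thing (rounding a money
# amount, say) and exposes it over a plain HTTP resource API. The endpoint
# name and port are made up for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class RoundingService(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/round":
            self.send_error(404, "unknown resource")
            return
        params = parse_qs(url.query)
        try:
            amount = float(params["amount"][0])
        except (KeyError, ValueError):
            self.send_error(400, "missing or invalid 'amount'")
            return
        body = json.dumps({"amount": round(amount, 2)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each microservice runs in its own process and can be deployed,
    # scaled and replaced independently of any other service.
    HTTPServer(("0.0.0.0", 8080), RoundingService).serve_forever()
```

A client, or another service, would simply call GET /round?amount=3.14159 and receive JSON back, with no shared libraries or deployment coupling between the two sides.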

Differing from the plumbing used in many existing enterprise architectures, where complex transformation rules, messaging and interchange between systems are placed on an Enterprise Service Bus in the middle, microservices adopt the idea of “smart endpoints, dumb pipes”. The approach emphasizes the ready-deployable nature of lightweight individual services from the start point to the destination, without extra handling in between.

The benefits offered by fine-grained microservices come at a certain cost. For example, instead of a limited number of SLAs for an enterprise application, numerous SLAs now need to be created and maintained at the same time. Each microservice also needs its own load balancing and fault tolerance. Deploying, managing, versioning, testing and coordinating communication in a system with a huge number of microservices are demanding tasks.

The most intriguing aspect of the microservice concept is that, from the decentralized nature of its design patterns, it calls for corresponding changes to traditional organizational structure by invoking Conway’s Law. Melvin Conway, a computer programmer, observed in 1968 that “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.” If one reflects carefully, it can be surprisingly true. The microservice concept therefore calls for organizations to be organized around business capabilities instead of functions. Because each microservice, although “micro” in scope, is an independently deployable service of its own, it must contain all the cross-functional service basics it needs, and developers must have a full stack of skills – UI, database, deployment, etc. – to build and own their microservices.

Although microservice granularization is one form of software modularization, TriStrategist thinks it may well fall short of being tomorrow’s choice for distributed computing. Meaningful architectural abstraction – separating data from mechanisms and information from the means of delivery – will be needed, and modularization will be the key to providing the flexibility required for distributed computing; but to achieve the desired results, we very likely need to rethink both software application architecture and cloud platform architecture. Future inter-deployable services, micro or not, need to be platform-neutral to truly address the essence of distributed computing. In addition, instead of thinking in terms of either smart endpoints or smart pipes, TriStrategist thinks we need to think more towards adding self-managed intelligence at each cross-point in the distributed picture.

As for organizational structure, TriStrategist believes that cross-functional, cross-boundary collaboration is needed more than ever for any endeavor to succeed. Capabilities can contain many redundant functions and may not necessarily be cost-effective as the division criterion of a large organization. Simplicity, modularity, flexibility and versatility may be the requirements for future structural change in any organization, just as they may be for future software systems.

Artificial Neural Networks

Neural networks, or artificial neural networks (ANNs), are computational models that simulate the connectivity of the neuronal structure of the cerebral cortex and the brain’s learning patterns for certain computational tasks, such as machine learning, cognition and pattern recognition. Conventional computational models usually fare poorly in these areas.

Differing from computational neuroscience, which offers in-depth study of the true, complex biological neuronal functions of information processing in the brain, a neural network is more a simplified modeling technique, or a set of algorithms, for simulating the brain’s patterns of stimulation and repetitive learning. It uses interconnected, parallel computational nodes as artificial neurons, often organized into input, output and processing layers. Adaptive weights are used to simulate the connection strength between any two neuron nodes. These weights are adjusted repeatedly in each “learning” cycle instead of being determined beforehand.

There are many college courses dedicated to the study of neural networks. In a simple sense, neural networks offer the possibility of continued learning and correction, eventually fitting the model closer to a particular function of the brain by comparing outcomes against a certain reality. This is a huge deviation from conventional computational models, which are deterministic: data and pre-defined instruction sets are stored in memory for a centralized processor to retrieve, compute and store sequentially to generate outcomes. The processing nodes of a neural network, by contrast, take information from input nodes or external signals, carry out simple weighted computations in parallel, and present the results collectively as the outcome. The knowledge of a neural network lies in the entire network itself rather than in any single node. Each computational cycle is almost a self-learning, reality-adjusting cycle, similar to the way humans or animals generally learn.
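To make the idea of adaptive weights adjusted over repeated learning cycles concrete, here is a minimal sketch of a single artificial neuron trained by error correction (a perceptron learning the logical AND function, chosen just for illustration). It is far simpler than any real ANN, but it shows the loop of compute, compare to reality, and adjust weights.

```python
# Minimal single-neuron sketch: weights are adjusted over repeated
# "learning" cycles by comparing the output against a known target.
training_data = [  # inputs -> target (logical AND, for illustration only)
    ((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1),
]

weights = [0.0, 0.0]   # connection strengths, adapted rather than pre-set
bias = 0.0
learning_rate = 0.1

def activate(x1, x2):
    # Simple weighted computation at one node, followed by a threshold.
    return 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0

for cycle in range(20):                       # repeated learning cycles
    for (x1, x2), target in training_data:
        error = target - activate(x1, x2)     # compare outcome to "reality"
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print(weights, bias)
print([activate(x1, x2) for (x1, x2), _ in training_data])  # expect [0, 0, 0, 1]
```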

A human brain contains billions of neurons, more than any other species on Earth. Today, a typical large ANN may use a few thousand processing units as nodes, a much smaller number in comparison. With the greatly enhanced computing power of a cloud-ready world, the number of artificial neurons could be affordably extended if needed; however, there is no proven law yet that an ANN’s power and reality-rendering accuracy grow in direct proportion to the number of nodes it runs on. There is still a long way to go in AI before ANN-enabled systems can serve as the intelligent brains for everything we envision, but we seem to be at least on the right track.