Artificial Neural Networks

Neural networks, or artificial neural networks (ANNs), are computational models that simulate the connectivity of the neuronal structure of the cerebral cortex and the brain's learning patterns for certain computational tasks, such as machine learning, cognition and pattern recognition. Conventional computational models usually fare poorly in these areas.

Differing from computational neuroscience, which offers in-depth study of the truly complex functions of biological neurons in processing information in the brain, a neural network is more a simplified modeling technique, or a set of algorithms, that simulates the brain's patterns of stimulation and repetitive learning by using interconnected parallel computational nodes as artificial neurons, often organized into input, output and processing layers. Adaptive weights are used to simulate the connection strength between any two neuron nodes. These weights are adjusted repeatedly in each “learning” cycle instead of being determined beforehand.

There are many college courses dedicated to the study of neural networks. In a simple sense, neural networks offer the possibility of continued learning and correction: by comparing outcomes against a certain reality, the model can eventually be fitted closer to a particular function of the brain. This is a huge deviation from conventional computational models, which are deterministic, with data and pre-defined instruction sets stored in memory for a centralized processor node to retrieve, compute and store sequentially to generate outcomes. In a neural network, by contrast, the processing nodes take information from input nodes or external signals, carry out simple weighted computations in parallel, and present the results together as the outcome. The knowledge of a neural network resides in the entire network itself rather than in any single node. Each computational cycle is almost a self-learning, reality-adjusting cycle, similar to the way humans or animals generally learn.
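The weighted computations and repeated weight-adjustment cycles described above can be sketched with a single artificial neuron (a perceptron). This is a minimal illustration, not any particular ANN: the learning rate, the number of cycles and the toy AND-gate training data are all assumptions made for the example.

```python
# A minimal single artificial neuron: weighted inputs, a threshold
# activation, and adaptive weights adjusted over repeated "learning"
# cycles rather than fixed beforehand. The AND-gate data and the
# learning rate are illustrative assumptions.

def step(x):
    return 1 if x >= 0 else 0

# Toy training data: learn the logical AND function.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]   # adaptive connection strengths
bias = 0.0
rate = 0.1             # learning rate (assumed)

for cycle in range(20):            # repeated "learning" cycles
    for inputs, target in samples:
        # weighted sum of the inputs, then threshold activation
        out = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
        error = target - out
        # adjust each weight toward the observed reality
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([step(sum(w * x for w, x in zip(weights, s)) + bias)
       for s, _ in samples])       # -> [0, 0, 0, 1]
```

After a handful of cycles the weights settle so that the neuron reproduces the target outputs; the "knowledge" lives entirely in the learned weights and bias, not in any stored instruction set.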

A human brain contains billions of neurons, more than that of any other species on earth. Today, a typical large ANN may use a few thousand processor units as nodes, a much smaller number in comparison. With the greatly enhanced computing power of today's cloud-ready world, the number of artificial neurons could be affordably extended if needed. However, there is no proven law yet on whether an ANN's power and reality-rendering accuracy are in direct proportion to the number of nodes it runs on. There is still a long way to go in AI before ANN-enabled systems serve as the intelligent brains for everything we plan, but we seem to be at least on the right track.

Today’s Nanotechnology and New Chip Design Concepts*

It’s all about scale. Once we shift our reference frame to the molecular or atomic level, a whole new world of possibilities emerges in front of us.

First raised by the famed physicist Richard Feynman in 1959, the idea of making things at the level of foundational particles has triggered a revolution in many fields, including physics, chemistry, biology, materials science, medical science, life science, electrical engineering, bio-engineering, chemical engineering, manufacturing, and military and space engineering. A nanometer (1 nm) is equal to one billionth of a meter, and the nanoscale is typically between 0.1-100 nm in size. Most atoms are 0.1-0.2 nm wide; a DNA strand is about 2 nm; a blood cell is several thousand nm; and a strand of human hair is about 80,000 nm in thickness. At nanoscale, quantum mechanics dominates, and matter can display many unique properties that are not available at normal scales.

Experimental nanotechnology did not come into tangible existence until 1981, when an IBM research lab in Switzerland built the first scanning tunneling microscope (STM), which made it possible to scan individual atoms. In the following decade, moving single atoms became possible as well. Other techniques discovered around the same time also made the manipulation of atoms a true engineering reality, although it is no small feat even today. In 1991, the first carbon nanotube was created, formed from a single rolled-up sheet of graphene (made of carbon atoms); it is about 6 times lighter and 100 times stronger than steel. Its properties and electrical conductivity make it a favorite candidate as a nanoscale building block in many applications, especially in the high-tech world.

Today, one application area of nanotechnology that has attracted intense focus is extremely small-scale electronic circuitry design. With increasing difficulty in meeting Moore's Law of doubling the density of transistors on a single IC at the shrinking sizes desired in modern electronics, the limitations of silicon chips, including heat and energy constraints, have become more obvious and more costly to maneuver around. Nanotechnology has already rushed to this promising frontier as a source of replacements for future chips, and many creative ideas are being tested at present.

Graphene chips have already been created in various forms, but they can be too easily damaged in the assembly process to be a practical production choice for most computers. IBM released an advanced version of the graphene chip in early 2014, using a new manufacturing technique to address the fragility problem. In the meantime, however, other nanomaterials that could compete with graphene have already come onto the stage, for example, a new nanomaterial and assembly method demonstrated by Berkeley Lab this year. At atomic levels, once assembled properly, many particles or mixed structures could potentially display the electrical and optical properties needed for building nanochips. This area of chip manufacturing development will likely see intense competition in the future.

Various techniques have long been investigated with the idea of overcoming the limitations of existing chip design by modifying the silicon structure, but many engineering challenges remain. Earlier this year, UC Davis researchers established a bottom-up approach to adding nanowires on top of silicon, which could create circuitry of smaller dimensions, withstand higher temperatures, and allow light emission for photonic applications that traditional silicon chips are incapable of. They found a way to grow other nanomaterials on top of silicon crystals to form nanopillars that serve as stations for nanowires to connect and function like transistors, thus forming complex circuits. The most appealing aspect of this method is that it does not require significant changes to today's manufacturing process for silicon-based ICs.

Another completely new chip design concept at nanoscale comes from utilizing the quantum nature of natural particles, instead of silicon crystals, to define the binary “0” and “1”. For example, by switching the direction of a single photon hitting a single atom residing in one of two atomic states, the resulting direction of the photon could well represent the “0” and “1” logic. Quantum computing is therefore born. If manipulated successfully, quantum states may allow the simultaneous existence of more than just “0s” and “1s”, which could promise more powerful potential for future computers.
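The "more than just 0s and 1s" idea can be made concrete with a toy state-vector sketch: a single qubit holds two complex amplitudes over the basis states |0> and |1>, and both can be non-zero at once. The Python below is a plain mathematical simulation for illustration, an assumption of this post, not a description of any actual nanoscale hardware.

```python
import math

# A single qubit as a pair of complex amplitudes over |0> and |1>.
# Unlike a classical bit, both amplitudes can be non-zero at once
# (superposition); measurement probabilities are squared magnitudes.

def hadamard(state):
    """Apply the Hadamard gate, which maps |0> to an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Probability of reading out 0 or 1 on measurement."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

qubit = (1 + 0j, 0 + 0j)           # definite |0>, like a classical "0"
superposed = hadamard(qubit)       # now "0" and "1" simultaneously
p0, p1 = probabilities(superposed)
print(round(p0, 3), round(p1, 3))  # -> 0.5 0.5
```

One qubit in superposition behaves like a "0" and a "1" at the same time until measured, which is exactly the extra headroom over classical bits that the paragraph above alludes to.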

With the drastically different and innovative landscape in chip design enabled by nanotechnology today, what are the needs and implications for future software design? One field in the near future where software will definitely play a significant part alongside nanotechnology development is the need for logical algorithms to control and manage the “self-replication” process of “active” nanocells, especially for AI developments. For application software development, TriStrategist thinks that once nanoscale manufacturing becomes the norm in the computer and electronics industry, its flexibility and versatility could only imply that chips can be designed more adaptably, based on myriad human needs and the software application programs that run on top of them.

*[Note:] This blog was in fact drafted and published on August 27, 2014, to make up for the previous week's vacation absence.

Containers and Cloud

The container concept for cloud infrastructure deployment is not new. These containers serve as pre-fab blocks that hold the equipment, configuration and management tools to allow fast plug-and-play data center setup. The uniformity of the hardware is both the strength and the weakness of such a concept, trading versatility and flexibility in infrastructure deployment for agility and speed. Google first designed and implemented its container data center in 2005. Microsoft built its first container-based, 700,000-sq-ft Chicago data center in 2009.

Now the “container” concept has been smartly extended from cloud infrastructure to data and cloud software. A cloud software container can work as a portable module for cloud applications to move between hybrid cloud PaaS offerings, or be offered as a flexible component of PaaS. Many such designs and implementations are still in the making. An example that has just started gaining popularity is Docker, a Linux-based open-source OS-virtualization implementation. It essentially functions as a middle-tier abstraction layer, or wrapper, shielding application developers from cloud platform complexities. Within 15 months of its inception, total downloads of its trial version have exceeded one million, and the community is growing fast. There are a few major global supporters of such an implementation, and the startup company in San Francisco recently got a new round of $40 million in venture funding.

Compared with the prevailing concept of virtual machines for deploying cloud-based applications, the Docker “container” aims at making applications easily portable among hybrid cloud platforms, with agility, zero startup time and one-click management. Of course, there are certain trade-offs for the benefits of portability and platform neutrality. Since it uses kernel sharing for platform reach, some security controls have to be sacrificed. Docker also supports only single-host applications, because the “container package” can get a lot more complex with multi-server applications across platforms, and that is a hard problem to resolve. Some supplemental solutions have been proposed on the market to overcome this limitation. Still, the “software container” concept and its solutions are very new and have yet to be tested in any IT production situation.

The current Docker v1.0 does not support Windows applications, but it is still a good start toward fulfilling a key market need. It is in fact also one workable approach to today's distributed computing.

Would You Like a Robot Maid?

Aided by the Hubble telescope, cosmologists have long verified the accelerated expansion of the universe since the Big Bang. A similar conclusion may be drawn for human society's advancement and the complexity associated with it, although there is no such instrument or directed experiment, only the hypotheses of a few expansive and insightful minds. If human society's complex expansion follows similar natural patterns, it could mean that our imaginable future will always come sooner than we think. Technology developments may well demonstrate this hypothesis over time. Among them, human-interacting robots, one type of the machine beings of the future, may come into our daily life soon, not just in sci-fi movies.

Most of the robotic helpers today are still very much industrial-focused, machine-looking, ugly, clunky tools, but that is part of the early iterations of robotic development, limited by all the related technologies and the needs of the time. With rapid advancements in industrial design, artificial intelligence, materials science, etc., better-sized, nicer-looking domestic robots, which can help with basic chores of cleaning and cooking and also interact with humans in some autonomous ways, may come into existence earlier than we have anticipated.

[Image: Wall-E in the 2008 Disney movie Wall-E]

We can't quite expect them to be human-looking or super smart in the coming decade, but they should be prettier than the rustic Wall-E, or at least as cute as Eve (both are robotic characters in the 2008 Disney movie Wall-E).

[Image: Eve in the 2008 movie Wall-E]

No doubt, they will become more and more intelligent and capable with each release iteration.

More than ever, every piece of robotic design needs consideration of both hardware and software. The current major challenges for robotic development lie in the design of actuators and of robotic software. Robotic software functions as the nerves and blood circulation of the robot beings, and a huge gap exists in this area today. The open-source Robot Operating System (ROS) has been a very interesting concept in recent years, but a lot more industry support and focus are needed to truly propel the robotic industry to a new level, so that we, as consumers, can expect to order our favorite robot maid for our household in a few years.