Tuesday, April 27, 2010

Theories of Everything- Rebirthing Hal

The arrival of super-smart evolutionary computers, capable of autonomous reasoning, learning and emulating the human-like behaviour of the mythical HAL in Arthur C. Clarke’s 2001: A Space Odyssey, is imminent.

The Darwinian evolutionary paradigm has finally come of age in the era of super-computing. The AI evolutionary algorithm, which now guides many problem-solving and optimisation processes, is also being applied to the design of increasingly sophisticated computing systems. In a real sense, the evolutionary paradigm is guiding the design of evolutionary computing, which in turn will lead to the development of more powerful evolutionary algorithms. This process will inevitably lead to the generation of hyper-smart computing systems and therefore advanced knowledge, with each evolutionary computing advance catalysing the next in a fractal process.

Evolutionary design principles have been applied in all branches of science and technology for over a decade, including the development of advanced electronic hardware and software, now incorporated in personal computing devices and robotic controllers.
One of the first applications to use a standard genetic algorithm was the design of an electronic circuit that could discriminate between two tone signals, or between two voices in a crowded room. This was achieved using a Field Programmable Gate Array, or FPGA chip, on which a matrix of transistors or logic cells was reprogrammed on the fly in real time. Each new design configuration was varied or mutated and could then be immediately tested for its ability to achieve the desired output- discriminating between the two signal frequencies.
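By way of illustration only, the following is a minimal Python sketch of the generate-mutate-test loop described above, run in software rather than on FPGA hardware; the fitness measure, tone frequencies, population size and mutation rate are all assumptions chosen for demonstration, not details of the original experiment.

```python
import random
import math

# Two input tones the evolved "circuit" must tell apart (frequencies are illustrative).
SAMPLE_RATE = 44100
TONE_A, TONE_B = 1000.0, 10000.0

def tone(freq, n=256):
    return [math.sin(2 * math.pi * freq * t / SAMPLE_RATE) for t in range(n)]

def response(weights, signal):
    # Crude stand-in for a configured logic-cell matrix: a weighted sum of samples.
    return abs(sum(w * s for w, s in zip(weights, signal)))

def fitness(weights):
    # Reward configurations whose outputs for the two tones differ as much as possible.
    return abs(response(weights, tone(TONE_A)) - response(weights, tone(TONE_B)))

def mutate(weights, rate=0.1):
    # Vary the configuration slightly, as each new hardware design was varied.
    return [w + random.gauss(0, 0.5) if random.random() < rate else w for w in weights]

# Standard generational loop: keep the fittest configurations, mutate them, retest.
population = [[random.uniform(-1, 1) for _ in range(256)] for _ in range(20)]
for generation in range(100):
    population = sorted(population, key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

print("best discrimination score:", fitness(max(population, key=fitness)))
```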

Such evolutionary-based technologies provide the potential not only to optimise the design of computers, but also to facilitate the evolution of self-organising, learning and replicating systems that design themselves. Eventually it will be possible to evolve truly intelligent machines that can learn on their own, without relying on pre-coded human expertise or knowledge.

In the late forties, John von Neumann conceptualised a self-replicating computer using a cellular automaton architecture: identical computing devices arranged in a chequerboard pattern, each changing its state based on the states of its nearest neighbours. One of the earliest working examples was the Firefly machine, with 54 cells controlled by circuits that evolved to flash on and off in unison.
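Von Neumann's chequerboard arrangement can be illustrated with a toy two-state cellular automaton. The update rule below, a simple majority vote among the four nearest neighbours, is an arbitrary assumption chosen for brevity and is not his actual self-replicating rule set.

```python
import random

SIZE = 8  # an 8 x 8 chequerboard of identical cells, each in state 0 or 1

def step(grid):
    """Each cell updates its state from its four nearest neighbours (wrapping at the edges)."""
    new = [[0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            neighbours = [grid[(y - 1) % SIZE][x], grid[(y + 1) % SIZE][x],
                          grid[y][(x - 1) % SIZE], grid[y][(x + 1) % SIZE]]
            # Toy rule: adopt the majority state of the neighbourhood.
            new[y][x] = 1 if sum(neighbours) >= 2 else 0
    return new

grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(5):
    grid = step(grid)
    print("\n".join("".join(str(c) for c in row) for row in grid), "\n")
```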

The evolvable hardware that researchers created in the late 1990s and early this century was proof of principle of the potential ahead. For example, a group of Swiss researchers extended von Neumann's dream by creating a self-repairing, self-duplicating version of a specialised computer. In this model, each processor cell or biomodule was programmed with an artificial chromosome encapsulating all the information needed to function as part of one computer, and was capable of exchanging information with other cells. As with a biological cell, only certain simulated genes were switched on to differentiate its function within the body.

A stunning example of the application of Darwinian principles to the mimicking of life was the development of the CAM (Cellular Automata Machine) Brain in 2000. It contained 40 million neurons, running on 72 linked FPGAs of 450 million autonomous cells. In the same year the first hyper-computer, the HAL-4rw1 from Star Bridge Systems, reached commercial production. Based on FPGA technology, it operated at four times the speed of the world's fastest supercomputer.
At the same time, NASA began to create a new generation of small intelligent robots called ‘biomorphic’ explorers, designed to react to their environment in similar ways to living creatures on Earth.

Another biological approach applied to achieve intelligent computing is the neural network model. Such networks simulate the firing patterns of neural cells in the brain, which accumulate incoming signals until a discharge threshold is reached, allowing information to be transmitted to the next layer of connected cells. However, such digital models cannot accurately capture the subtle firing patterns of real-life cells, which contain elements of both periodic and chaotic timing. The latest simulations therefore use analogue neuron circuits to capture the information encoded in these time-sensitive patterns and mimic real-life behaviour more accurately.
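A minimal sketch of the accumulate-and-fire behaviour described above is given here; the threshold, leak factor and input values are illustrative assumptions, and real analogue neuron circuits are far richer than this digital caricature.

```python
# Leaky integrate-and-fire neuron: incoming signals accumulate until a
# discharge threshold is reached, then the cell fires and resets.
THRESHOLD = 1.0
LEAK = 0.9  # fraction of accumulated charge retained each time step

def run_neuron(inputs):
    potential, spikes = 0.0, []
    for t, signal in enumerate(inputs):
        potential = potential * LEAK + signal
        if potential >= THRESHOLD:
            spikes.append(t)   # fire: information passes to the next layer of cells
            potential = 0.0    # reset after discharge
    return spikes

print(run_neuron([0.3, 0.4, 0.5, 0.1, 0.2, 0.9, 0.05]))  # -> [2, 5]
```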
Neural networks and other forms of biologically inspired artificial intelligence are now being combined with evolutionary models, taking a major step towards the goal of artificial cognitive processing and allowing intelligent computing systems to learn on their own and become experts in any chosen field.

Eventually it will be possible to use evolutionary algorithms to design artificial brains, augmenting or supplanting biological human cognition. This is a win-win for humans. While the biological brain, with its tens of billions of neurons each connected to thousands of others, has assisted science to develop useful computational models, a deeper understanding of computation and artificial intelligence is also providing neuroscientists and philosophers with greater insights into the nature of the brain and its cognitive processes.

The future implications of the evolutionary design paradigm are therefore enormous. Universal computer prototypes capable of continuous learning are now reaching commercial production. Descendants of these systems will continue to evolve, simulating biological evolution through genetic mutation and optimisation, powered by quantum computing. They will soon create capabilities similar to those of HAL in Arthur C. Clarke's 2001: A Space Odyssey- and only a few decades later than predicted.

However, the reincarnation of the legendary HAL may in fact be realised by a much more powerful phenomenon incorporating all current computing and AI advances- the Intelligent World Wide Web. As previously discussed, this multidimensional network of networks, empowered by human and artificial intelligence and utilising unlimited computing and communication power, is well on the way to becoming a self-aware entity and the ultimate decision partner in our world.

Perhaps HAL is already alive and well.

Monday, April 26, 2010

Theories of Everything-The Bayesian Brain

The Director of the Future of Life Research Centre in Australia, David Hunter Tow, proposes that the latest Theory of the Brain, based on Bayesian statistical methods, has connections to a wider Theory of Information that underpins the deep nature of the evolutionary process itself.

Theories of Mind provide a framework for investigating the capacity of humans to attribute thoughts, desires and intentions to others, and to explain and predict their actions. Current theories of the mind and brain, developed over the last few decades, focus primarily on explaining the mental behaviour of others through mirror neurons. These are a set of specialised brain cells that fire when an animal observes an action performed by another. The neurons therefore ‘mirror’ or reflect the behaviour of the other, as though the observer were itself acting. Such neurons have been directly observed in primates, and possibly now in humans, and are believed to occur in other species including birds.

However, despite an increasing understanding of the role of such mechanisms in shaping the evolution of the brain, previous theories have failed to provide an overarching or unified framework linking all mental and physical aspects- until recently.
In a breakthrough, a group of researchers from University College London headed by neuroscientist Karl Friston has derived a mathematical law that may provide the basis for such a holistic theory.

This is based on Bayesian probability theory, which allows predictions to be made about the validity of a proposition or phenomenon based on the evidence available. Friston’s hypothesis builds on an existing theory known as the “Bayesian Brain”, which postulates that the brain is a probability machine, constantly updating its predictions about the world based on its sensory perception and memory.
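The underlying rule is Bayes’ theorem, which in the brain’s case updates the probability of a hypothesis H about the world in the light of new sensory evidence E:

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
```

Here P(H) is the prior belief built from cumulative experience, P(E | H) is the likelihood of the sensory data under that hypothesis, and P(H | E) is the updated, posterior belief.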

The crucial element is that these encoded probabilities are based on cumulative experience, which is updated whenever additional relevant data becomes available, such as visual information about an object’s location. Friston’s theory therefore treats the brain as an inferential agent, continuously refining and optimising its model of the past, present and future. This can be seen as a generic process applied throughout the brain, which continually adapts the internal state of its neural connections as it learns from experience. In the process it attempts to minimise the gap between its predictions and the actual state of the external world.
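As a toy illustration of this kind of updating over, say, an object’s location, the sketch below maintains a belief over a handful of candidate positions and refines it with each noisy observation. The sensor model and its accuracy figure are assumptions made purely for demonstration.

```python
# Toy Bayesian update over five candidate locations of an observed object.
locations = ["far left", "left", "centre", "right", "far right"]
prior = [0.2] * 5  # cumulative experience: initially no position is favoured

SENSOR_ACCURACY = 0.7  # assumed probability the senses report the true location

def likelihood(observed_idx, true_idx):
    return SENSOR_ACCURACY if observed_idx == true_idx else (1 - SENSOR_ACCURACY) / 4

def update(belief, observed_idx):
    # Bayes' rule: multiply prior by likelihood, then renormalise.
    unnormalised = [likelihood(observed_idx, i) * p for i, p in enumerate(belief)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

belief = prior
for observation in [2, 2, 3, 2]:  # noisy sightings, mostly "centre"
    belief = update(belief, observation)

for name, p in zip(locations, belief):
    print(f"{name}: {p:.3f}")
```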

This gap, or prediction error, can be defined mathematically in terms of the concept of ‘free energy’ used in thermodynamics and statistical mechanics. Free energy is the amount of useful work that can be extracted from a system such as an engine, roughly the difference between the total energy of the system and its unusable or waste energy, associated with entropy. In this case the prediction error is equated to the free energy of the system, which must be minimised as far as practical. The functions of the brain have therefore evolved to reduce free energy and, with it, prediction error.
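For reference, the thermodynamic quantity is the Helmholtz free energy F = U - TS. The analogous quantity in the free-energy literature on the brain is usually written in the variational form below; this is a summary of that standard formulation rather than the exact notation of Friston’s paper:

```latex
F = \mathbb{E}_{q(\vartheta)}\!\big[\ln q(\vartheta) - \ln p(s,\vartheta)\big]
  = D_{\mathrm{KL}}\!\big[q(\vartheta)\,\|\,p(\vartheta \mid s)\big] - \ln p(s)
```

Here s is the sensory data, ϑ its hidden causes and q the brain’s internal model. Because the KL divergence is never negative, F is an upper bound on the ‘surprise’ -ln p(s), so minimising free energy also minimises prediction error.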

As proof of concept, Friston created a computer simulation of the brain’s cortex or primary cognitive area, with layers of neurons passing signals back and forth. Signals going from higher to lower levels represented the brain’s internal predictions, while signals going the other way represented sensory input. As new information arrived, the higher neurons adjusted their predictions according to Bayesian theory.
When the predictions are right, the brain is rewarded by being able to respond more efficiently. When they are wrong, additional energy is required to work out why and to come up with better predictions.
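A heavily simplified, single-pair sketch of that arrangement follows: a higher unit sends a prediction down, a lower unit sends sensory input up, and the prediction is nudged to shrink the mismatch. All numerical values are assumptions for illustration, and Friston’s actual simulation involved many layers and units.

```python
import random

# One "higher" unit predicting the output of one "lower" sensory unit.
prediction = 0.0      # the brain's internal model of the incoming signal
learning_rate = 0.1

def sensory_input():
    # The external world: a roughly constant signal corrupted by noise.
    return 0.8 + random.gauss(0, 0.05)

for step in range(200):
    signal = sensory_input()              # ascending pathway: sensory evidence
    error = signal - prediction           # prediction error carried upward
    prediction += learning_rate * error   # descending pathway: revised prediction

print(f"learned prediction: {prediction:.2f} (true mean 0.8)")
```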

The principle guiding this Bayesian model can be extrapolated to better understand the evolutionary process itself. As further developed in the author's forthcoming book, The Future of Life: A Unified Theory of Evolution, the process of minimising prediction errors- or in this case useable energy- bears a striking similarity to the process of minimising the information gap between a system’s environment and its own internal state.

The system’s ability to minimise this gap determines its capacity to survive, in accordance with an Information Law first defined in the nineties by physicist Roy Frieden. This is based on Fisher Information, which provides a measure of a system’s accessible information and can be used to derive the dynamical equations of any process, including the physics of Quantum Mechanics and Relativity.
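For reference, the Fisher information of a parameter θ under a data distribution p(x | θ) is defined as follows; the second expression is a rough summary of Frieden’s extreme physical information approach, in which the dynamical equations of a system are obtained by extremising the difference between the accessible information I and the bound information J of the source phenomenon:

```latex
I(\theta) = \int p(x \mid \theta)\left(\frac{\partial \ln p(x \mid \theta)}{\partial \theta}\right)^{2} dx,
\qquad
K = I - J \;\rightarrow\; \text{extremum}
```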

According to the author, it can also be applied to derive the dynamical equations of Evolution.

Bayesian mechanics can therefore be seen as an agent of the evolutionary process, providing a measure of the capacity of a system, whether a brain or a species, to adapt in a changing physical or social landscape by reducing the gap between its current and required knowledge states.

Saturday, April 24, 2010

Theories of Everything- Foundation Blog

The Theories of Everything blog will track and analyse major new theories as they emerge in the fields of science, philosophy, sociology and mathematics.

This blog will also complement the Theories of Everything video series, which has been broadcast globally over the last six years- first on Google Video and across Europe, and now on YouTube.