Building a Brain: Can Machines Mimic the Mind?

While computers can crunch numbers, remember facts, and play chess better than any human who has ever lived, humans still have countless abilities that no machine can mimic. However, this may be about to change. Using vast racks of interconnected computer processors, scientists are now building virtual neurons based on decades of biological data and connecting them together to form a simulated brain (1,2).

Over the past forty years, the humble transistor has faithfully followed Moore’s Law, with the number that can fit on a single chip doubling every two years. Simple blocks of silicon, carefully impregnated with atomic impurities to act as gates for the flow of electricity, the transistors in the laptop I am using right now number over 100,000,000 and cost less than 10 pico-cents each, about the same as a single character printed in a newspaper (3). Cameras can fit in the tip of a pen, a laptop can render realistic three-dimensional images, and banks can keep track of billions of monetary transactions per day without error.

Despite this inexorable progress, computers remain far inferior to humans at recognizing faces, composing music, using language, and possessing emotions or self-awareness. What key difference gives humans these advantages over machines? I believe it is noise, or random variation in signals. Transistors have been engineered to represent absolute 1s and 0s with extremely high accuracy; in any decent computer circuit, a transistor is never presented with a “maybe.” The neurons in your brain, however, operate in a constant state of uncertainty: a single neuron in your cortex, for example, may fail to respond to 70 to 90 percent of the inputs it receives (1).
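To make the contrast concrete, here is a minimal Python sketch (my own illustration, not drawn from either research group's code) comparing a deterministic logic gate with a toy neuron that "hears" each input only some of the time. The 20 percent reliability used below is simply an illustrative stand-in for the 70 to 90 percent failure rate quoted above.

```python
import random

def transistor_and(a: int, b: int) -> int:
    """A logic gate is deterministic: identical inputs always give identical outputs."""
    return 1 if (a and b) else 0

def noisy_neuron(input_spikes, reliability=0.2, threshold=3):
    """Toy neuron: each incoming spike gets through only ~20% of the time
    (i.e. most inputs go unanswered), and the cell fires only if enough do."""
    heard = sum(1 for s in input_spikes if s and random.random() < reliability)
    return 1 if heard >= threshold else 0

if __name__ == "__main__":
    print(transistor_and(1, 1))                        # always 1
    spikes = [1] * 20                                  # twenty identical input spikes
    print([noisy_neuron(spikes) for _ in range(10)])   # sometimes 1, sometimes 0
```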

Despite the fundamental differences in how computers and minds work, with enough computing power all the ordered chaos and minute detail of the human brain could theoretically be approximated well enough for a simulation to function like one. This goal of reverse engineering the brain was set by the National Academy of Engineering in a grand challenge issued in February 2008. If realized, the ability to mimic the brain with computers would yield invaluable benefits, such as electronic implants that can replace damaged brain tissue or the ability to instantly screen new drug candidates for various neurological disorders (4).

Dharmendra Modha intends to build a complex computer model of the human brain to simulate billions of neurons in an interconnected network.

On opposite sides of the earth, two rival groups are racing to model the brain: one led by Henry Markram in Lausanne, Switzerland, and the other by Dharmendra Modha in San Jose, California. Both groups are using massively powerful IBM supercomputers to build large-scale models of the brain. However, that’s about where the similarity ends. An intense rivalry has developed between the two groups recently, peaking in response to a prize awarded to Modha’s group last November for completion of a simulated neural network with the same number of neurons as a cat’s brain (2).

“I would have expected an ethics committee to string [Modha] up by the toes. … This is light years away from a cat brain, not even close to an ant’s brain in complexity,” said Markram in response to the award (5). On the other hand, Markram himself has made many grandiose and unsupported claims, such as announcing during a Technology, Entertainment, and Design (TED) talk that his model could lead directly to an understanding of consciousness (6).

The main difference that spawned such vituperative remarks is the level at which the models begin. Theoretically, the ideal model of the brain would include neurons, all the lipids and proteins that make them up, the ions that flow across their membranes, and even the quantum mechanics of how those ions behave. However, this is currently computationally impossible and not necessarily important for the proper functioning of the model. Instead, Markram begins by dividing each neuron into about a hundred “compartments,” each representing a small piece of the neuron, complete with ion channels and realistic 3D morphology (1). Modha, on the other hand, treats each neuron as a single point that interacts with other neurons according to various computational models (2). This allows his team to make much larger-scale, but less biologically detailed, simulations of the brain.
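The difference between the two levels of description can be caricatured in a few lines of Python. The sketch below is my own illustration with made-up parameter values, not either group's actual model; it contrasts a single-point leaky integrate-and-fire cell with a cell built from a chain of coupled compartments.

```python
import numpy as np

def point_neuron(current, dt=0.1, tau=20.0, v_rest=-65.0, v_thresh=-50.0):
    """Point-neuron abstraction: the whole cell is one leaky unit that
    integrates its input and emits a spike when it crosses a threshold."""
    v, spike_times = v_rest, []
    for step, i_t in enumerate(current):
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_rest                      # reset after a spike
    return spike_times

def compartmental_neuron(current, n_comp=100, dt=0.1, tau=20.0,
                         g_axial=0.5, v_rest=-65.0):
    """Compartmental abstraction: the cell is a chain of ~100 coupled pieces,
    so voltage spreads along the morphology instead of living at one point."""
    v = np.full(n_comp, v_rest)
    for i_t in current:
        leak = -(v - v_rest) / tau
        coupling = np.zeros_like(v)
        coupling[1:] += g_axial * (v[:-1] - v[1:])    # current from the left neighbour
        coupling[:-1] += g_axial * (v[1:] - v[:-1])   # current from the right neighbour
        v += dt * (leak + coupling)
        v[0] += dt * i_t / tau                        # inject input into compartment 0
    return v

if __name__ == "__main__":
    stim = np.full(1000, 20.0)                        # constant input current
    print(point_neuron(stim))                         # a handful of spike times
    print(compartmental_neuron(stim)[:5])             # voltages near the injection site
```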

After years of experiments recording from neurons in the rat, neuroscientist Henry Markram decided that enough was known about the brain to begin reconstructing it from the ground up; in other words, to reverse engineer it. He chose to start with the rat neocortex, the structure that in humans has ballooned in size to support our complex cognitive abilities. Furthermore, an enormous amount of experimental data is available on this area in young rats thanks to decades of detailed bench work. We know that the cortex is stratified into six horizontal layers and organized into functional units: narrow columns of highly interconnected cells (1). Over 30 different types of neurons are present in each column, with most of the synaptic connections between cells at least partially characterized (1). Since these columns are considered the basic functional unit of the cortex, the first major goal was to build a rat neocortical column comprising about 10,000 neurons.
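As a rough illustration of the bookkeeping such a reconstruction requires, the Python sketch below assembles a 10,000-cell "column" by assigning each neuron a layer and a cell type. The layer proportions and the handful of cell-type names are placeholders of my own; the real model draws on the anatomical literature and distinguishes more than 30 types.

```python
import random
from collections import Counter

# Hypothetical layer proportions for a ~10,000-neuron column; the real values
# come from decades of anatomical measurements and vary by layer and cell type.
LAYER_FRACTIONS = {"L1": 0.01, "L2/3": 0.30, "L4": 0.25, "L5": 0.25, "L6": 0.19}
CELL_TYPES = ["pyramidal", "basket", "Martinotti", "chandelier"]  # 4 of the >30 types

def build_column(n_neurons=10_000, seed=0):
    """Assign each neuron in the column a layer and a (placeholder) cell type."""
    rng = random.Random(seed)
    layers, weights = zip(*LAYER_FRACTIONS.items())
    return [{"id": i,
             "layer": rng.choices(layers, weights=weights)[0],
             "cell_type": rng.choice(CELL_TYPES)}
            for i in range(n_neurons)]

if __name__ == "__main__":
    column = build_column()
    print(Counter(cell["layer"] for cell in column))   # how many cells per layer
```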

As the timeline in Box 1 shows, Markram was not the first to try to model a neural circuit. For example, a programming tool called NEURON has been used for years to build biologically accurate model neurons. The main obstacles to extending this tool to more than a few neurons are speed and memory: simulating a single detailed neuron can monopolize an entire average computer, yet a circuit requires many neurons. Markram decided to scale up his model by collaborating with computing giant IBM (International Business Machines) (7).
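A back-of-envelope calculation hints at why a single detailed neuron is so expensive. The figures below (compartment count, state variables, timestep, and operations per update) are assumptions of mine rather than numbers from NEURON or the Blue Brain Project, but they convey the scale involved.

```python
# Rough cost of one detailed neuron (all figures are illustrative assumptions).
compartments     = 100        # pieces per neuron, as in the compartmental approach
state_vars       = 10         # membrane voltage plus ion-channel gating variables
dt               = 0.025e-3   # 25-microsecond timestep, a common choice
flops_per_update = 100        # arithmetic per state variable per timestep

steps_per_second = 1 / dt
flops_per_neuron = compartments * state_vars * steps_per_second * flops_per_update

print(f"~{flops_per_neuron:.1e} floating-point operations per simulated second, per neuron")
print(f"~{10_000 * flops_per_neuron:.1e} for a 10,000-neuron column")
```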

The result was the Blue Brain Project, a partnership between IBM and the Brain Mind Institute, which Markram had founded in 2002 at the Ecole Polytechnique Fédérale de Lausanne in Switzerland. The group uses a Blue Gene supercomputer consisting of 64 refrigerator-sized racks of processors with 32 TB of total memory, capable of 360 TFLOPS, or trillion floating-point operations per second (300,000 times faster than an Intel Core 2 Duo processor) (1). Two years after the program's launch, on November 26th, 2007, the group announced that it had reached its preliminary goal: a model of a single neocortical column composed of 10,000 neurons (7).

How exactly does Markram’s team construct a piece of mind? First, experimental data describing the morphology of each of the 30 classes of neurons are used to build template cells, which are cloned to produce any number of virtual neurons, each with 100 “compartments” representing the physical shape of the soma, axons, and dendrites. Into this framework, over 20 types of ion channels are inserted according to experimentally derived rules using the NEURON software. To make each neuron unique, some random variation is introduced. The Blue Gene computer takes about a day to create models of all 10,000 neurons needed to make up one neocortical column, whereas a normal processor would take a day for each individual neuron. (1)
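The clone-and-vary step might look something like the following sketch. The template, the channel names, and the 10 percent jitter are hypothetical placeholders of my own; the point is only that copies of a template cell are perturbed so that no two virtual neurons are identical.

```python
import copy
import random

# A hypothetical template for one of the ~30 morphological classes.
TEMPLATE_PYRAMIDAL = {
    "class": "layer-5 pyramidal",
    "n_compartments": 100,
    # Maximal conductances for a few of the >20 channel types (arbitrary units).
    "channels": {"Na_t": 0.12, "K_v": 0.036, "Ca_lva": 0.002, "Ih": 0.0001},
}

def clone_with_variation(template, jitter=0.1, rng=None):
    """Copy a template neuron and perturb each channel density by up to ~10%,
    so that every virtual cell is slightly different from its siblings."""
    rng = rng or random.Random()
    cell = copy.deepcopy(template)
    cell["channels"] = {name: g * (1.0 + rng.uniform(-jitter, jitter))
                        for name, g in template["channels"].items()}
    return cell

if __name__ == "__main__":
    rng = random.Random(42)
    clones = [clone_with_variation(TEMPLATE_PYRAMIDAL, rng=rng) for _ in range(5)]
    print([round(c["channels"]["Na_t"], 4) for c in clones])   # five distinct values
```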

After all the neurons are built, they are placed in three-dimensional space to match experimental distributions describing where each cell type is found and how each pair is connected. The cells are jostled to avoid intersections, and synapses are placed where two branches meet. Each of the 10-50 million synapses is governed by a functional model of its activity based on cell type, and learning mechanisms are also incorporated. The electrical conduction of each signal is modeled as a function of its distance from the cell body. Finally, output from the simulation is fed, close to real time, to a graphics processor to allow visualization of the column’s activity. (1)
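The "place synapses where two branches meet" step is essentially a proximity search. Here is a minimal sketch of that idea, with a made-up touch distance; it illustrates the geometry rather than the project's actual algorithm.

```python
import numpy as np

def find_touches(axon_points, dendrite_points, touch_distance=2.0):
    """Return index pairs (axon point, dendrite point) that lie closer than
    touch_distance (in micrometres); each such 'touch' is a candidate synapse."""
    touches = []
    for i, a in enumerate(axon_points):
        dist_sq = np.sum((dendrite_points - a) ** 2, axis=1)
        for j in np.where(dist_sq < touch_distance ** 2)[0]:
            touches.append((i, int(j)))
    return touches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    axon = rng.uniform(0, 50, size=(200, 3))      # sampled points along one cell's axon
    dendrite = rng.uniform(0, 50, size=(200, 3))  # points along another cell's dendrite
    print(len(find_touches(axon, dendrite)), "candidate synapse locations")
```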

Each neuron is simulated on a separate processor in the Blue Gene computer, and the processors exchange signals over the machine's interconnect using a message-passing interface, playing the role of axons. The entire system is run using two software programs: one is a version of NEURON, while the other, the NeoCortical Simulator (NCS), is more specific to the connections in the neocortex and is essential for scaling up the model of the brain. (1)
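To give a flavour of this neuron-per-process arrangement, the sketch below uses the mpi4py library (my choice for illustration; the project's own communication code is not described in the sources) to let each process stand in for one neuron and broadcast its spikes to the others.

```python
# A minimal neuron-per-process sketch, assuming mpi4py is installed.
# Run with, e.g.:  mpiexec -n 4 python spike_exchange.py
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()            # this process stands in for one neuron
size = comm.Get_size()            # total number of simulated neurons

random.seed(rank)
potential = random.uniform(-70.0, -50.0)

for step in range(5):
    # Does "my" neuron spike this step? (purely random here, for illustration)
    my_spike = 1 if random.random() < 0.3 else 0
    # Every process learns which neurons spiked, like axons delivering events.
    all_spikes = comm.allgather(my_spike)
    # Incoming spikes from the other neurons nudge the membrane potential.
    potential += 0.5 * (sum(all_spikes) - my_spike)

if rank == 0:
    print(f"neuron 0 of {size}: final potential {potential:.2f}")
```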

By recreating fundamental microcircuitry, the Blue Brain Project hopes to achieve a physiological simulation of the brain.

The Blue Brain Project completed its model of a rat neocortical column in 2007, and Markram estimates that a full human brain model could be complete within 10 years (7). However, it is still not clear what benefits will ultimately be reaped from this project. Markram claims that the model, once its predictions are verified by comparison to experimental data, will eventually be able to predict drugs to treat specific neurological disorders (6). He even claims that it will lead to an elucidation of consciousness (6). So far, however, the only clear results have been striking 3D renderings of the neocortical column and its electrical activity, represented by flashes of light. While this provides a valuable conceptualization of the general structure of communication within a neocortical column, the simulation has not yet made any predictions that were later confirmed by experiment. Predicting a phenomenon that was not already used to tune the model would be an important reality check, and would lend confidence to the idea that the brain can be effectively modeled even without a complete understanding of all the underlying molecular processes.

Compared to Markram, however, Modha puts more faith in the ability of simplified models to summarize many simultaneous underlying processes. Based at the IBM Almaden Research Center in San Jose, CA, Modha’s algorithm represents each neuron as a single compartment. He includes both excitatory and inhibitory neurons, with four types of synapses based on the major neurotransmitter receptors in the brain: AMPA, NMDA, GABA-A, and GABA-B (2). There are 14 different categories of cell types represented, with synapses formed between neurons according to experimental probabilities of communication between groups (2). Clearly Modha also bases his model on experimental data, but with much less detail at the cellular level than Markram. Modha even includes spike-timing-dependent plasticity in the simulation, an important feature of synaptic learning, which, ironically, Markram himself discovered in 1997 (8).
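Spike-timing-dependent plasticity itself is simple to state: a synapse is strengthened when the presynaptic spike arrives just before the postsynaptic one, and weakened when the order is reversed. The sketch below implements that rule with illustrative constants of my own choosing, not the values used in Modha's simulation.

```python
import math

def stdp_weight_change(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Spike-timing-dependent plasticity: if the presynaptic spike precedes the
    postsynaptic spike (dt > 0) the synapse strengthens; if it follows (dt < 0)
    the synapse weakens. Times are in milliseconds; constants are illustrative."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)      # potentiation
    if dt < 0:
        return -a_minus * math.exp(dt / tau)     # depression
    return 0.0

if __name__ == "__main__":
    print(stdp_weight_change(t_pre=10.0, t_post=15.0))   # pre before post: > 0
    print(stdp_weight_change(t_pre=15.0, t_post=10.0))   # post before pre: < 0
```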

The main power of Modha’s approach, according to many in the scientific community, is its breakthroughs in computing (5). Modha’s team is using a newer computer with more memory and has developed new methods for calculating and keeping track of the activity of over a billion neurons at a speed only a hundred times slower than real time. Markram, on the other hand, is building his brain with much smaller Legos, resulting in the completion of only a single neocortical column.

One might predict that Markram’s method is more accurate and ultimately more useful in filling in the gaps in knowledge that connect form with function in the brain. However, a recent contest was held between several different single neuron models in which the neuron’s response to electrical stimulation was compared to experimental data (9). Surprisingly, a simple thresholding model outperformed more detailed biophysical models (9). Therefore, the addition of complexity to a model does not always make it more correct.
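For context, the kind of simple thresholding model that fared well in such contests is typically a leaky integrator whose threshold jumps after each spike and then relaxes. The sketch below is my own generic version with arbitrary constants, not the contest entrants' code.

```python
def adaptive_threshold_neuron(current, dt=0.1, tau_m=10.0, tau_th=50.0,
                              v_rest=0.0, th_base=1.0, th_jump=0.5):
    """A leaky integrator with an adaptive threshold: the threshold jumps after
    every spike and decays back toward its baseline. Despite its simplicity,
    models of this family can predict real spike times remarkably well."""
    v, th, spike_times = v_rest, th_base, []
    for step, i_t in enumerate(current):
        v += dt * (-(v - v_rest) + i_t) / tau_m        # integrate the input
        th += dt * (th_base - th) / tau_th             # threshold relaxes to baseline
        if v >= th:
            spike_times.append(step * dt)
            v = v_rest                                 # reset the voltage
            th += th_jump                              # raise the bar for the next spike
    return spike_times

if __name__ == "__main__":
    stim = [2.0] * 500 + [0.0] * 200                   # a current step, then silence
    print(adaptive_threshold_neuron(stim))
```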

Box 1: Timeline of brain models: from single neurons to entire human brains

The best way to model the brain depends entirely on the goal. If the goal is to build a computer that acts like a human brain, Modha may be the first to achieve it. By continuously modifying his model so that it responds to any input with the correct output, adding detail only where necessary, he treats the brain as a “black box” whose inner workings need not be completely understood. A black-box model of the brain would be useful for many applications, such as better artificial intelligence for detecting faces and speech, or even replacing damaged parts of the brain with computer implants that communicate seamlessly with the surrounding intact tissue.

However, to completely understand the inside of this black box, it is necessary to model its inner workings at the most detailed level possible. The failure of biologically inspired models to outperform their simplified counterparts is likely a result of our incomplete understanding of all the sub-cellular dynamics and biophysical phenomena that occur in each cell; there is still an unbelievable amount of lab work to be done before such a model can be considered complete. Once it is complete, however, the possibilities are truly fantastic. Instead of sacrificing hordes of laboratory animals to test each potential new drug, thousands of candidate chemicals could be screened by the computer while the researcher enjoys a cup of coffee. If the researcher comes up with 100 different molecules that could possibly be malfunctioning to cause a certain neurological disease, she could simply omit each one from the model to find out instantly whether a mutation might be causing the problem. And although the idea of a completely functional, artificial human brain is somewhat frightening and highly unlikely to exist in 10 years, let alone in our lifetime, a deeper understanding of human consciousness should eventually emerge, and may help society to understand that much of its conflict, pain, and divisiveness arise only from a chaotic yet sublime mental symphony of electrical activity.

References

1. H. Markram, Nature Reviews Neuroscience, 7, 153 (2006).
2. The Cat is Out of the Bag: Cortical Simulations with 10^9 Neurons, 10^13 Synapses (ACM, New York, NY, 2009).
3. “The price per transistor has dropped dramatically since 1968” (2005; accessed 2010).
4. “Reverse-engineer the brain” (2007; accessed 2010).
5. S. Adee, “Cat Fight Brews Over Cat Brain” (2009; accessed 2010).
6. H. Markram, “Henry Markram builds a brain in a supercomputer” (2009; accessed 2010).
7. “The Blue Brain Project” (accessed 2010).
8. H. Markram, J. Lübke, M. Frotscher, B. Sakmann, Science, 275, 213 (1997).
9. W. Gerstner, R. Naud, Science, 326, 379 (2009).
10. M. Glickstein, Current Biology, 16, R147 (2006).
11. A. L. Hodgkin, A. F. Huxley, J. Physiol., 117, 500 (1952).
12. W. Rall, Exp. Neurol., 1, 491 (1959).
13. W. Rall, G. M. Shepherd, J. Neurophysiol., 31, 884 (1968).
14. R. D. Traub, R. K. S. Wong, Science, 216, 745 (1982).
15. C. A. Mead, M. A. Mahowald, Neural Networks, 1, 91 (1988).
