$100 Million for an "Apollo Project of the Brain"

Three decades ago, the U.S. government launched the Human Genome Project, a 13-year endeavor to sequence and map all the genes of the human species. Although initially met with skepticism and even opposition, the project has since transformed the field of genetics and is today considered one of the most successful scientific enterprises in history.

Now the Intelligence Advanced Research Projects Activity (IARPA), a research organization for the intelligence community modeled after the defense department’s famed DARPA, has dedicated $100 million to a similarly ambitious project. The Machine Intelligence from Cortical Networks program, or MICrONS, aims to reverse-engineer one cubic millimeter of the brain, study the way it makes computations, and use those findings to better inform algorithms in machine learning and artificial intelligence. IARPA has recruited three teams, led by David Cox, a biologist and computer scientist at Harvard University, Tai Sing Lee, a computer scientist at Carnegie Mellon University, and Andreas Tolias, a neuroscientist at the Baylor College of Medicine. Each team has proposed its own five-year approach to the problem. “It’s a substantial investment because we think it’s a critical challenge, and [it’ll have a] transformative impact for the intelligence community as well as the world more broadly,” says Jacob Vogelstein at IARPA, who manages the MICrONS program.

MICrONS, part of President Obama's BRAIN Initiative, is an attempt to push past the status quo in brain-inspired computing. A great deal of technology today already relies on a class of algorithms called artificial neural networks, which, as their name suggests, are inspired by the architecture (or at least what we know about the architecture) of the brain. Thanks to significant increases in computing power and the availability of vast amounts of data on the Internet, Facebook can identify faces, Siri can recognize voices, cars can self-navigate, and computers can beat humans at games like chess. These algorithms, however, are still primitive, relying on a highly simplified process of analyzing information for patterns. Based on models dating back to the 1980s, neural networks tend to perform poorly in cluttered environments, where the object the computer is trying to identify is hidden among many overlapping or ambiguous objects. Nor do these algorithms generalize well: seeing one or two examples of a dog, for instance, does not teach a computer how to identify all dogs.
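
To make that 1980s lineage concrete, here is a minimal sketch of the kind of network the article refers to: a small multilayer perceptron trained with backpropagation, written in Python with NumPy. The layer sizes, learning rate, and XOR task are illustrative choices for this sketch, not anything drawn from MICrONS.

```python
# A minimal 1980s-style feedforward network (multilayer perceptron)
# trained by backpropagation on XOR. All sizes and rates are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2))   # typically converges toward [[0], [1], [1], [0]]
```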

Humans, on the other hand, seem to overcome these challenges effortlessly. We can make out a friend in a crowd, focus on a familiar voice in a noisy setting, and deduce patterns in sounds or an image based on just one or a handful of examples. We are constantly learning to generalize without the need for any instructions. And so the MICrONS researchers have turned to the brain to find what these models are missing. “That’s the smoking gun,” Cox says. While neural networks retain elements of the architecture found in the brain, the computations they use are not copied directly from any algorithms that neurons use to process information. In other words, the ways in which current algorithms represent, transform, and learn from data are engineering solutions, determined largely by trial and error. They work, but scientists do not really know why—certainly not well enough to define a way to design a neural network. Whether this neural processing is similar to or different from corresponding operations in the brain remains unknown. “So if we go one level deeper and take information from the brain at the computational level and not just the architectural level, we can enhance those algorithms and get them closer to brain-like performance,” Vogelstein says.

The various teams will attempt to map the complete circuitry between all the neurons of a cubic millimeter of a rodent's cortex. This volume, less than one-millionth the size of the human brain, may seem tiny. But to date, scientists have only been able to measure the activity of either a few neurons at a time or of millions of neurons in the composite pictures obtained through functional magnetic resonance imaging. Now the members of MICrONS plan to record the activity and connectivity of 100,000 neurons while the rodent is engaged in visual perception and learning tasks—an enormous feat, since it requires imaging, with nanometer resolution, the twists and turns of wires whose full length is a few millimeters. “That’s like creating a road map of the U.S. by measuring every inch,” Vogelstein says.

Still, Vogelstein is optimistic because of recent support given for large-scale neuroscience research. “With the advent of the BRAIN Initiative, an enormous number of new tools have come online for interrogating the brain both at the resolution and scale that’s required for recovering a detailed circuit diagram,” he says. “So it’s a unique point in history, where we have the right tools, techniques, and technologies for the first time ever to reveal the wiring diagram of the brain at the level of every single neuron and every single synapse.”

Each team plans to record the brain’s road map differently. Cox’s team will use a technique called two-photon microscopy to measure brain activity in rats as they are trained to recognize objects on a computer screen. The researchers will introduce a modified fluorescent protein, which is sensitive to calcium, into the rodents. When a neuron fires, calcium ions rush into the cell, causing the protein to glow brighter—so using a laser scanning microscope, the researchers will be able to watch the neurons as they’re firing. “That’s a little bit like wiretapping the brain,” Cox says. “The way you might listen in on a phone call to see what’s going on, we can listen in on important internal aspects of the brain while the animal is alive and doing something.”
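
A toy simulation can show why this counts as a wiretap. In a common simplified model of calcium indicators, each spike adds a fluorescence transient that decays exponentially, so the optical signal traces the underlying firing. The spike rate, decay constant, and noise level below are illustrative assumptions, not measured values.

```python
# Toy model of a calcium-indicator signal: each spike produces a
# fluorescence transient that decays exponentially. The decay time
# constant, spike rate, and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
dt, T, tau = 0.01, 10.0, 0.5           # seconds; tau = indicator decay constant
t = np.arange(0, T, dt)
spikes = rng.random(t.size) < 0.02     # sparse random spike train

kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)
fluor = np.convolve(spikes.astype(float), kernel)[: t.size]
fluor += rng.normal(0, 0.05, t.size)   # imaging noise

print(f"{spikes.sum()} spikes -> peak fluorescence ~ {fluor.max():.2f}")
```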

Then one cubic millimeter of the rat’s brain will be sent to Jeffrey Lichtman, a biologist and neuroscientist also at Harvard University. In Lichtman’s lab, it will be cut into incredibly thin slices and imaged under a state-of-the-art electron microscope at a resolution high enough to reveal all the wire-like extensions through which brain cells connect to one another. Tolias’s team is taking a similar approach, using three-photon microscopy to look into the deeper layers of a mouse’s brain, not just the top layers examined by Cox and his colleagues.

Meanwhile, Lee’s team plans to take a far more radical path toward mapping the connectome. In partnership with George Church, a geneticist at Harvard Medical School, they plan to use DNA barcoding: they will label every neuron with a unique sequence of nucleotides (a barcode) and chemically link barcodes across synapses to reconstruct the circuits. While this method would not provide the same level of spatial information as microscopy, Lee hopes it will be faster and more accurate—provided it works at all, that is. The technology has never been used successfully before. “But if this barcoding technology works, it will revolutionize neuroscience and connectomics,” Lee says.
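
Since the barcoding approach is unproven, any code is necessarily speculative, but the downstream reconstruction step is easy to sketch: each sequenced barcode pair recovered from a synapse is a vote for a directed connection between two neurons. The barcodes, read counts, and noise threshold below are invented purely for illustration.

```python
# Sketch of turning synapse-spanning barcode pairs into a wiring
# diagram: each recovered pair votes for a directed edge between two
# neurons. Barcodes, reads, and the threshold are made-up examples.
from collections import Counter

# (presynaptic barcode, postsynaptic barcode) reads from sequencing
reads = [
    ("ACGT", "TTAG"), ("ACGT", "TTAG"), ("ACGT", "GGCA"),
    ("TTAG", "GGCA"), ("ACGT", "TTAG"),
]

edge_counts = Counter(reads)
MIN_READS = 2  # assumed cutoff to filter sequencing noise

circuit = {pair: n for pair, n in edge_counts.items() if n >= MIN_READS}
for (pre, post), n in circuit.items():
    print(f"{pre} -> {post}  ({n} reads)")
```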

And all that constitutes only the first half of the MICrONS project. The scientists next have to find a way to make all this information useful for algorithms in machine learning, and they have some ideas about how to do so. For one, many researchers believe the brain is Bayesian—that neurons represent sensory information in the form of probability distributions, computing the most likely interpretation of an event based on previous experience. This hypothesis rests primarily on the brain’s feedback loops: information does not flow only forward, and there are even more connections carrying it back. In other words, researchers hypothesize that perception is not simply a mapping from some input to some output. Rather, there is a constructive process, “analysis by synthesis,” in which the brain maintains an internal representation of the world, generating expectations and predictions that allow it to explain incoming data and plan how to use it. “This is a guiding principle we’re looking at very closely—the hallmarks of this synthetic process,” Cox says, “where we confabulate what might be in the world and test that against what we actually see, and use that to drive our perception.”
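
The Bayesian idea can be stated in a few lines of code: perception as multiplying a prior (an expectation built from experience) by a likelihood (how well each hypothesis explains the incoming data), then normalizing. The hypotheses and numbers here are invented solely to illustrate the update rule.

```python
# Minimal illustration of the Bayesian picture described above:
# combine a prior (expectation) with a likelihood (incoming data)
# and normalize. Hypotheses and numbers are invented examples.
priors = {"dog": 0.7, "cat": 0.3}        # expectation from experience
likelihood = {"dog": 0.2, "cat": 0.6}    # P(observed image | hypothesis)

unnormalized = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # {'dog': 0.4375, 'cat': 0.5625}: the data outweighs the prior
```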

For instance, the retina, which reacts to light by generating electrical impulses that are relayed through the optic nerve to the brain, is actually a two-dimensional structure. So when a person sees an object, the brain may use such a probabilistic model to infer a three-dimensional world from the light hitting the retina’s two-dimensional surface. If that’s the case, though, then the brain has found a much better way of approximating and inferring variables than we have with our current mathematical models. After all, in a scene containing 100 objects, consider just two of the many orientations each object might take, say facing forward or backward. That alone yields 2^100 possible patterns. Computing all those probabilities outright is not feasible, and yet the brain does it effortlessly across a seemingly infinite range of possibilities: different distances, different rotations, different lighting conditions. “What the brain does is unfold this manifold [of data points] and make it easily separable,” Tolias explains.
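
That combinatorial arithmetic is easy to check directly; the counts below come straight from the example above.

```python
# The combinatorial explosion made concrete: two orientations per
# object across 100 objects already yields 2**100 distinct scenes.
n_objects, orientations = 100, 2
patterns = orientations ** n_objects
print(patterns)  # 1267650600228229401496703205376 (~1.27e30)
```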

Each of the three teams has recruited computer scientists to distill these theories into models, which they will then test against the reverse-engineered brain data. “For any given description of an algorithm, such as a probabilistic algorithm, there are millions of implementation choices you have to make to translate that theory into code that executes,” Vogelstein says. “Of those million or so options, some combinations of those parameters and features will result in good algorithms, and some combinations will result in inefficient or bad algorithms. By extracting those parameter settings from the brain, as opposed to guessing at them in software [as we have been doing], we will hopefully narrow the space down to a small set of implementations consistent with the brain.”
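
Vogelstein’s point about implementation choices can be illustrated with a toy parameter grid: even a handful of design decisions multiplies into a large space of concrete implementations. The parameter names and values below are invented placeholders, not actual MICrONS settings.

```python
# Toy illustration of how one abstract algorithm fans out into many
# concrete implementations. All names and values are invented.
from itertools import product
from math import prod

choices = {
    "learning_rule": ["hebbian", "backprop", "stdp"],
    "connectivity": ["dense", "sparse", "local"],
    "activation": ["sigmoid", "relu", "spiking"],
    "precision_bits": [4, 8, 16, 32],
}

print(prod(len(v) for v in choices.values()), "combinations")  # 108
for combo in list(product(*choices.values()))[:2]:
    print(dict(zip(choices, combo)))  # two sample implementations
```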

With such internal models, the MICrONS teams hope to make machine learning more autonomous, particularly when it comes to training machines to identify objects without first running through thousands of examples in which the items are labeled by name. Vogelstein wants to apply such unsupervised learning techniques to aid U.S. intelligence. “We may have just a single picture, or a single example of a cyber attack we want to prevent, or a single record of a financial crash or weather event that causes a problem,” he says, “and we need to generalize to a broader range of circumstances in which the same pattern might arise. So that’s what we hope to achieve: better generalization, better capacity for abstraction, better use of sparse data.”
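
One common way to frame such one-example generalization, shown here only as a sketch, is nearest-neighbor matching in a learned feature space: a single stored example per class is enough to label new inputs. The two-dimensional “features” and class names below are invented stand-ins for whatever representation a real system would learn.

```python
# Sketch of one-shot generalization via nearest-neighbor matching:
# one labeled example per class classifies new inputs. The 2-D
# feature vectors and labels are invented for illustration.
import numpy as np

examples = {"cyberattack": np.array([0.9, 0.1]),
            "benign":      np.array([0.1, 0.8])}

def classify(x):
    # the label of the closest stored example wins
    return min(examples, key=lambda k: np.linalg.norm(x - examples[k]))

print(classify(np.array([0.8, 0.2])))  # cyberattack
print(classify(np.array([0.2, 0.9])))  # benign
```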

While the researchers agree that deriving such algorithms from the brain will be the most difficult part of MICrONS—they will have to determine a way to code how the brain processes information and forms new connections—several challenges persist even in the earlier stages of the project. For one, their measurements of the brain will generate approximately two petabytes of data—the equivalent of the storage on 250,000 laptops, or 2.5 million CDs. Storing such a large dataset will be difficult, and IARPA has partnered with Amazon to find solutions. Moreover, that data is all images. Mining it for information will require a process called segmentation, in which the structural elements of neurons and their connections are each colored differently so that computers can make better sense of shared characteristics and patterns. “Even if the whole world was coloring in for you,” Lichtman says, “it would take a lifetime to get the whole cubic millimeter colored in.” Instead, the researchers will work on creating more sophisticated computer-vision techniques to segment the data.
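
The storage figures quoted above check out on the back of an envelope, assuming roughly 8 GB of storage per laptop and 800 MB per CD (both per-device sizes are assumptions; the article does not specify them).

```python
# Back-of-the-envelope check of the storage comparison above,
# assuming 8 GB per laptop and 800 MB per CD.
dataset = 2 * 10**15              # two petabytes, in bytes
print(dataset / (8 * 10**9))      # ~250,000 laptops
print(dataset / (800 * 10**6))    # ~2,500,000 CDs
```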

Lichtman has already seen success with a 100-terabyte dataset (one-twentieth the size of the one MICrONS plans to collect) generated from a piece of thalamus, a relay point for sensory information. His team’s results will be published this month in Cell. “We’ve learned that sometimes the same axons jump from one cell to another to contact the same place on different nerve cells, suggesting that the thalamus was organized differently than people expected,” Lichtman says. Perhaps these results will extend to the cubic millimeter of cortex they have just started to assess. “We know we can do large volumes, but now we’re going to do what we’d call gigantic volumes,” he says. “That’s a huge step up. We think we’re ready to go this next step.”

David Mumford, a mathematician, Fields Medalist, and Lee’s PhD advisor, who is not affiliated with MICrONS, lauds the project. “This is real progress,” he says. “Once datasets of this kind are available, it’s going to be a tremendous challenge to see what you can do in the way of getting a deeper insight in the way the neurons are interacting with each other. It’s been my dream that this massive recording would become possible at some point, and I think this group may very well be the group that does it.”

“But I’m a little more skeptical of the possibilities of transferring this information to artificial neural networks,” he adds. “That’s a little more far out.” Even so, all three teams are confident that their work will yield results. “What comes out of it—no matter what—it’s not a failure,” Lichtman says. “It may not fit what you expected, but that’s an opportunity. I’m not losing sleep over whether our idea is wrong. There’s no idea. It’s that the brain really exists, it’s really complicated, and no one’s ever really seen it before, so let’s take a look. What’s the risk in that?”

They also hope to succeed where the Human Brain Project, a $2-billion investment, ran into difficulties. Cox explains that their approach is radically different from that of the Human Brain Project, both technically and logistically. In fact, by looking at nature first, before attempting to simulate the brain, they are essentially working in the opposite direction. And MICrONS’ team-based approach will, they hope, produce the cooperation and competition needed to make significant progress. IARPA intends to publish the data it collects so that other scientists can contribute ideas and research. “Even though it’s like looking at a grain of sand,” Lee says, “as my professor in college told me, you can see God in a grain of sand.”

Jordana Cepelewicz
http://www.scientificamerican.com/article/the-u-s-government-launches-a-...