Researchers Create Brain On A Chip


1. Although computers have been called “thinking machines,” their internal operations have very little to do with how the original thinking machine — the human brain — actually works. That’s changing, however, as some researchers at MIT and the University of Texas Medical School have demonstrated in a new computer chip that mimics how the brain learns as it receives new information.

2. The chip can simulate the activity that takes place in the brain’s synapses. Synapses connect the neurons in which the brain stores information.

3. One of the obstacles in trying to simulate brain activity on silicon is scale: brain activity takes place in parallel, with many things happening simultaneously, while computer activity takes place in series, one thing after another.

4. “That means, when you have to go up to scale to the sizes of our brains, or even the brains of very simple animals, it becomes nearly impossible,” one of the researchers, University of Texas Medical School Associate Professor Harel Shouval, explained to TechNewsWorld.

5. Other members of the research team were Chi-Sang Poon, a principal research scientist in the Harvard-MIT Division of Health Sciences and Technology; Guy Rachmuth, a former postdoc in Poon’s lab; and Mark Bear, the Picower Professor of Neuroscience at MIT.

6. What kind of scale are researchers confronted with? It’s estimated that there are 100 billion neurons in the brain talking to each other through 100 trillion synapses.

7. “The number of connections between neurons in our brain grows approximately by the square of the number of neurons in the brain,” Shouval explained. “So the problem gets very quickly out of hand.”

8. “If all those synapses need to be simulated in series, there is no way we can do anything in a finite time,” he said.

9. “Each synapse is incredibly slow compared to anything digital, but how the brain does what it does is by having an immense number of these ‘machines’ working in parallel,” he added.

10. The new chip, however, can have millions of simulated synapses on it. The researchers are able to mimic a synapse with 400 transistors. Using very large scale integration (VLSI), billions of transistors can be placed on a chip.
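The scale figures quoted above can be sanity-checked with a little arithmetic. A minimal sketch follows; the total transistor count per chip is an assumption (modern VLSI chips carry a few billion transistors), while the other figures come from the article.

```python
# Back-of-envelope check of the scale figures in the article.
NEURONS = 100e9                 # ~100 billion neurons in the human brain
SYNAPSES = 100e12               # ~100 trillion synapses
TRANSISTORS_PER_SYNAPSE = 400   # transistors used to mimic one synapse
CHIP_TRANSISTORS = 2e9          # assumed: a few billion transistors per VLSI chip

print(f"Average synapses per neuron: {SYNAPSES / NEURONS:,.0f}")
print(f"Simulated synapses per chip: {CHIP_TRANSISTORS / TRANSISTORS_PER_SYNAPSE:,.0f}")
```

A two-billion-transistor chip would hold about five million simulated synapses, consistent with the “millions” figure above.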

11. Another problem tackled by the new chip is simulating how neurons communicate with each other through the synapses. How something is learned is determined by changes in the strength of those connections, changes that are mediated by structures called “ion channels.”

12. To simulate on a chip what happens in those channels, the researchers had to make the current flow on the silicon mimic the analog behavior in the channels. Rather than spiking the current — turning it off and on — a gradient approach is used which emulates how ions flow in the channels between neurons.

13. “If you really want to mimic brain function realistically, you have to do more than just spiking,” Poon told MIT tech writer Anne Trafton. “You have to capture the intracellular processes that are ion channel-based.”
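The difference between spiking and graded behavior can be illustrated with a toy model: an all-or-none channel versus one whose conductance follows a smooth sigmoid gradient, as ion flow does. All parameter values and function names below are invented for illustration; this is not the researchers’ circuit.

```python
import math

def graded_channel_current(v_pre, g_max=1.0, v_half=-40.0, slope=5.0,
                           e_rev=0.0, v_post=-65.0):
    """Analog (graded) synaptic current: conductance varies smoothly
    with presynaptic voltage instead of switching on and off."""
    activation = 1.0 / (1.0 + math.exp(-(v_pre - v_half) / slope))  # sigmoid gate
    return g_max * activation * (e_rev - v_post)

def spiking_channel_current(v_pre, threshold=-40.0, g_max=1.0,
                            e_rev=0.0, v_post=-65.0):
    """All-or-none version: the channel is either fully open or closed."""
    activation = 1.0 if v_pre >= threshold else 0.0
    return g_max * activation * (e_rev - v_post)

# The graded current rises smoothly as the presynaptic neuron depolarizes;
# the spiking version jumps from zero to maximum at the threshold.
for v in (-60.0, -45.0, -40.0, -30.0):
    print(v, round(graded_channel_current(v), 2),
          round(spiking_channel_current(v), 2))
```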

14. The researchers see a number of applications for the new chip. For example, it could be used in neural prosthetic devices, such as artificial retinas, or to model neural functions, which can now take hours or even days to simulate.

15. Earlier this year, IBM researchers announced they had made a breakthrough in their efforts to make chips that work like the human brain. The chips, which are still in the test phase, are expected to be the backbone of a new breed of thinking machine called a “cognitive computer.”

16. Cognitive computers will add to their knowledge base as humans do — through experience, by finding correlations, creating hypotheses, remembering and emulating the brain’s structural and synaptic plasticity.

17. The brain structure is based on learning, by establishing pathways, reinforcing pathways and eliminating pathways when they’re not being used, explained Roger L. Kay, president of Endpoint Technologies Associates.

18. “Computers have not generally been learning structures, even though artificial intelligence has spent some time trying to make that happen,” he told TechNewsWorld.

19. Artificial intelligence researchers pioneered the idea of a computer architecture wherein nodes shared information with nearby nodes, just as neurons share information with neighboring neurons through synaptic connections, he explained.

20. “But it really didn’t work until recently,” he said. “So the idea that you could create a learning structure in a machine is something that’s just beginning. AI showed the theory of it, but in reality, it’s much more complicated,” he added.

[By John P. Mello Jr. – TechNewsWorld ]

Braking Circuit


1. Many high-end cars today come equipped with brake assist systems, which help a driver use the brakes correctly depending on particular conditions in an emergency. But what if the car could apply the brakes before the driver even moved?

2. This is what German researchers have successfully simulated, as reported in the Journal of Neural Engineering. With electrodes attached to the scalps and right legs of drivers in a driving simulator, they used electroencephalography (EEG) and electromyography (EMG), respectively, to detect the intent to brake. These electrical signals were seen 130 milliseconds before drivers actually hit the brakes—enough time to reduce the braking distance by nearly four meters.
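The distance saving follows directly from the quoted numbers: at 100 kilometers per hour, 130 milliseconds of earlier braking covers a bit over 3.6 meters, the “nearly four meters” cited above.

```python
speed_kmh = 100.0
speed_ms = speed_kmh / 3.6   # 100 km/h is about 27.8 m/s
lead_time_s = 0.130          # the EEG/EMG signal precedes the brake pedal by 130 ms
distance_saved = speed_ms * lead_time_s
print(f"Distance saved: {distance_saved:.2f} m")  # prints "Distance saved: 3.61 m"
```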

3. Seated facing three monitors in a driving simulator, each subject was told to drive about 18 meters behind a computer-driven virtual car traveling at about 100 kilometers per hour (about 60 mph). The simulation also included oncoming traffic and winding roads. When the car ahead suddenly flashed brake lights, the human drivers also braked. With the resulting EEG and EMG data, the researchers were able to identify signals that occurred consistently during emergency brake response situations.

4. “None of these [signals] are specific to braking,” says Stefan Haufe, a researcher in the Machine Learning Group at the Technical University of Berlin and lead author of the study. “However, we show that the co-occurrence of these brain potentials is specific to sudden emergency situations, such as pre-crash situations.” So while false positives from the signal are possible, the combination of EEG and EMG data makes a false positive much less likely.

5. While this kind of brain and muscle measurement works in lab conditions, the next step—real-world application—will likely be much more technically difficult. The first thing Haufe and his team will investigate is whether it’s possible to accurately gather EEG and EMG data under real-world conditions. In the lab, participants were asked not to move while attached to the wires, but real-world drivers move around however they please.

6. “The current challenge is to determine how to make use of the important, but still small and unreliable, information that we can gather from the brain on the intent to brake,” says Gerwin Schalk, a brain-computer interface researcher at the New York State Department of Health’s Wadsworth Center.

7. Although research into mind-reading-assisted braking systems will continue, tests involving real vehicles are likely many years away. The research may never lead to a fully automated braking system, but it could ultimately result in a system that takes brain data into account when implementing other assisted-braking measures.

8. Whether drivers would feel comfortable handing over any braking responsibility to a computer hooked up to their head is another question. “In a potential commercial application, it, of course, would have to be assessed whether customers really want that,” adds Haufe.

BY: KRISTINA BJORAN (Courtesy: Journal of Neural Engineering, Technology Review, MIT)

Updates on Internet


1. Google is pressing forward with its efforts to speed up the Internet. Early this morning, the company launched Page Speed Service, which is designed to automatically speed up Web pages when they load. The service intervenes between Web servers and users, rewriting a Web page’s code to improve its performance and applying other related tricks.

2. The service improves on previous offerings from Google. Page Speed began as a diagnostic tool and later became software that developers could install and configure for free. With every step, Google has increased the ease and automation of the service.

3. The Mozilla Foundation, a nonprofit corporation that makes the Firefox browser, released an experimental tool last week that could dramatically change the way people identify themselves online.

4. Instead of handing your log-in credentials over to countless different websites, or to a site like Facebook or Google that then confirms your identity with other sites, Mozilla’s BrowserID tool stores your identity information inside your browser. This keeps that data out of the hands of companies that could be hacked, or that may track your log-in behavior for commercial purposes.

5. Remembering many different passwords is hard enough, and recent attacks on Sony, Citibank, and others have shown that users’ identity credentials are often poorly protected. Mozilla argues that BrowserID would be a safer and more secure way to verify identity, and would give users more privacy.

6. Mozilla’s system lets users tie one password to an e-mail account of their choice. Mozilla confirms that the address is valid by sending the user an e-mail containing a link that verifies ownership. Then, when a user visits a website that supports BrowserID, the site asks which e-mail address he or she wants to use. Once the user enters that address, BrowserID checks whether the user owns it and either verifies the identity or does not.
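The verification flow described above can be sketched in a few lines. This is a toy model with invented class and method names, not Mozilla’s actual BrowserID code; it only mirrors the register, confirm-by-link, and assert-identity steps.

```python
import secrets

class BrowserIDSketch:
    """Toy model of the e-mail verification flow (hypothetical names)."""

    def __init__(self):
        self.pending = {}       # token -> e-mail awaiting link-click confirmation
        self.verified = set()   # e-mail addresses whose ownership is confirmed

    def register(self, email):
        """Issue a one-time token that would be e-mailed to the user as a link."""
        token = secrets.token_urlsafe(16)
        self.pending[token] = email
        return token

    def click_link(self, token):
        """The user clicks the e-mailed link, proving ownership of the address."""
        email = self.pending.pop(token, None)
        if email is not None:
            self.verified.add(email)
        return email is not None

    def assert_identity(self, email):
        """What a supporting website asks: does this browser own the address?"""
        return email in self.verified

# Usage: register, confirm via the link, then log in to a supporting site.
browserid = BrowserIDSketch()
token = browserid.register("user@example.org")
browserid.click_link(token)
print(browserid.assert_identity("user@example.org"))  # prints "True"
```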

Brainbow as Contrast to Rainbow


Four years ago, Harvard scientists devised a way to make mouse neurons glow in a breathtaking array of colors, a technique dubbed “Brainbow.” This allowed scientists to trace neurons’ long arms, known as dendrites and axons, through the brain with incredible ease, revealing a map of neuron connections.

Using a clever trick of genetic engineering, in which genes for three or more different fluorescent proteins were combined like paints to generate different hues, researchers created a system to make each neuron glow one of 100 different colors. The result was that the dendrites and axons of individual neurons, previously almost impossible to pick apart from their neighbors, could be traced through the mouse brain according to their color.
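The arithmetic behind the hundred-odd colors is combinatorial. As a rough illustration (the number of distinguishable expression levels per protein is an assumed figure, not from the article):

```python
proteins = 3   # distinct fluorescent proteins combined "like paints"
levels = 5     # assumed distinguishable expression levels per protein
combinations = levels ** proteins
print(combinations)  # prints "125" — on the order of the ~100 reported hues
```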

Now, fruit fly researchers have a similar bonanza on their hands. Last week, two Brainbow-based methods for making fly neurons glow customized colors—called dBrainbow and Flybow—were published in Nature Methods. This is the first time that scientists have converted the technique to work in fruit flies, and because these organisms have a very sophisticated set of existing genetic tools, researchers can exert even greater control over when and where the fluorescent proteins are expressed.

Because axons and dendrites are so long and fine, it’s hard to tell which neurons they are from. Researchers have traditionally had to stain just one or two neurons in each sample, painstakingly compiling data from many brains to build a map. In contrast, many neurons are easily discernible in this cross-section of a fly’s brain made using dBrainbow. Using dBrainbow images, Julie H. Simpson and colleagues at the Howard Hughes Medical Institute’s Janelia Farm could tell which motor neurons controlled parts of a fly’s proboscis, which it uses to take in food.

Credit: Phuong Chung, Stefanie Hampel, and Julie H. Simpson/HHMI
[Courtesy: http://www.technologyreview.com/biomedicine/32423/?p1=A2]

Robot As Doctor


The next generation of physicians may be robots. As physician-guided robots routinely operate on patients at most major hospitals, the next-generation robot could eliminate a surprising element from the scenario—the doctor.

Feasibility studies have demonstrated that a robot—without any human assistance—can locate a man-made lesion in simulated human organs, guide a device to the lesion and take multiple samples during a single session.

The researchers believe that as the technology is further developed, autonomous robots could some day perform other simple surgical tasks.

“Earlier this year we demonstrated that a robot directed by artificial intelligence can on its own locate simulated calcifications and cysts in simulated breast tissue with high repeatability and accuracy,” says Kaicheng Liang, a former student of Stephen Smith, director of the Duke University Ultrasound Transducer Group.

The Duke team combined a souped-up version of an existing robot arm with an ultrasound system. The ultrasound serves as the robot’s “eyes” by collecting data from its scan and locating its target. The robot is “controlled” not by a physician, but by an artificial intelligence program that takes the real-time 3-D information, processes it and gives the robot specific commands to perform.
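The scan, locate, and command loop described above might be sketched as follows. Every class and method name here is invented for illustration; this is not the Duke team’s software, just the shape of the pipeline: the ultrasound supplies the “eyes,” the AI picks the target, and the arm receives specific commands.

```python
class UltrasoundScanner:
    """Stand-in for the real-time 3-D ultrasound that serves as the robot's eyes."""
    def acquire_3d(self):
        return {"lesion_at": (12.0, 4.5, 8.0)}  # dummy volume with a known target

class LesionDetector:
    """Stand-in for the AI program that processes the scan and finds the target."""
    def locate(self, volume):
        return volume["lesion_at"]

class RobotArm:
    """Stand-in for the robot arm that executes the AI's commands."""
    def __init__(self):
        self.samples = []
    def move_to(self, position):
        self.position = position
    def take_sample(self):
        self.samples.append(self.position)

def biopsy_session(scanner, detector, arm, n_samples=3):
    """Scan once, locate the lesion, then take multiple samples in one session."""
    target = detector.locate(scanner.acquire_3d())
    for _ in range(n_samples):
        arm.move_to(target)
        arm.take_sample()
    return arm.samples
```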