Engineers design artificial synapse for “brain-on-a-chip” hardware
Posted by Mark Field from MIT in Neural Engineering
When it comes to processing power, the human brain just can’t be beat. Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses — the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds. Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse....
Mark shared this article 4y
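The graded, analog weights described above are the key contrast with on/off digital logic. As a rough illustration in Python (the AnalogSynapse class, its linear pulse-update rule, and the step size are assumptions made for this sketch, not the MIT device model), the snippet below compares a binary bit with a synapse-like element whose stored weight strengthens or weakens gradually and scales whatever signal passes through it.

# Illustrative sketch only: contrasts a binary on/off state with an analog
# synaptic weight that shifts gradually under programming pulses, in the
# spirit of neuromorphic "brain-on-a-chip" designs. Not the MIT device model.

class DigitalBit:
    """Stores only 0 or 1, like a conventional on/off switch."""
    def __init__(self):
        self.state = 0

    def write(self, value: int) -> None:
        self.state = 1 if value else 0


class AnalogSynapse:
    """Stores a continuous weight between 0 and 1, nudged by pulses."""
    def __init__(self, weight: float = 0.5, step: float = 0.02):
        self.weight = weight    # conductance-like stored state (the "weight")
        self.step = step        # change per programming pulse (assumed linear)

    def potentiate(self, pulses: int = 1) -> None:
        """Strengthen the connection, as repeated activation would."""
        self.weight = min(1.0, self.weight + pulses * self.step)

    def depress(self, pulses: int = 1) -> None:
        """Weaken (prune) the connection."""
        self.weight = max(0.0, self.weight - pulses * self.step)

    def transmit(self, presynaptic_signal: float) -> float:
        """Output is graded by the stored weight rather than all-or-nothing."""
        return presynaptic_signal * self.weight


if __name__ == "__main__":
    syn = AnalogSynapse()
    syn.potentiate(pulses=10)   # repeated pulses strengthen the connection
    print(syn.transmit(1.0))    # ~0.7, a graded (analog) response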
With these neurons, extinguishing fear is its own reward
Posted by Mark Field from MIT in Neural Engineering
When you expect a really bad experience to happen and then it doesn’t, it’s a distinctly positive feeling. A new study of fear extinction training in mice may suggest why: The findings not only identify the exact population of brain cells that are key for learning not to feel afraid anymore, but also show that these neurons are the same ones that help encode feelings of reward. The study, published Jan. 14 in Neuron by scientists at MIT’s Picower Institute for Learning and Memory, specifically shows that fear extinction memories and feelings of reward alike are stored by neurons that express the gene Ppp1r1b in the posterior of the basolateral amygdala (pBLA), a region known to assign associations of aversive or rewarding feelings, or “valence,” with memories. The study was conducted by Xiangyu Zhang, an MIT graduate student, Joshua Kim, a former graduate student, and Susumu Tonegawa, professor of biology and neuroscience at RIKEN-MIT Laboratory of Neural Circuit Genetics at the Picower Institute for Learning and Memory at MIT and Howard Hughes Medical Institute....
Mark shared this article 4y
Research highlights immune molecule’s complex role in Huntington’s disease
Posted by Mark Field from MIT in Neural Engineering
More than a decade before people with Huntington’s disease (HD) show symptoms, they can exhibit abnormally high levels of an immune-system molecule called interleukin-6 (IL-6), which has led many researchers to suspect IL-6 of promoting the eventual neurological devastation associated with the genetic condition. A new investigation by MIT neuroscientists shows that the story likely isn’t so simple. In a recent study they found that Huntington’s model mice bred to lack IL-6 showed exacerbated symptoms compared to HD mice that still had it. “If one looks back in the literature of the Huntington’s disease field, many people have postulated that reductions to IL-6 would be therapeutic in HD,” says Myriam Heiman, associate professor in MIT’s Department of Brain and Cognitive Sciences and a member of The Picower Institute for Learning and Memory and the Broad Institute of MIT and Harvard. She is senior author of the paper in Molecular Neurodegeneration. Former postdoc Mary Wertz is the lead author....
Mark shared this article 4y
Smarter training of neural networks
These days, nearly all the artificial intelligence-based products in our lives rely on “deep neural networks” that automatically learn to process labeled data. For most organizations and individuals, though, deep learning is tough to break into. To learn well, neural networks normally have to be quite large and need massive datasets. Training them usually takes multiple days and requires expensive graphics processing units (GPUs) — and sometimes even custom-designed hardware. But what if networks don’t actually have to be all that big after all? In a new paper, researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) have shown that neural networks contain subnetworks up to one-tenth the size that can be trained to make equally accurate predictions — and sometimes learn to do so even faster than the originals. The team’s approach isn’t particularly efficient yet — they must train and “prune” the full network several times before finding the successful subnetwork. However, MIT Assistant Professor Michael Carbin says his team’s findings suggest that, if we can determine precisely which part of the original network is relevant to the final prediction, scientists might one day be able to skip this expensive process altogether. Such a finding could save hours of work and make it easier for individual programmers, and not just huge tech companies, to create meaningful models....
Mark shared this article 4y
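The “train and prune” loop mentioned above is the heart of the result. A minimal Python sketch of that idea follows, assuming a single weight matrix, a stand-in train() step, and a 20 percent per-round pruning rate (all illustrative choices, not the procedure from the CSAIL paper): train, zero out the smallest surviving weights, rewind the survivors to their original initialization, and repeat until only a small subnetwork remains.

import numpy as np

# Illustrative sketch of iterative magnitude pruning under the assumptions
# stated above; train() is a placeholder, not real gradient descent.

rng = np.random.default_rng(0)

def train(weights, mask):
    """Stand-in for real training: only surviving (unmasked) weights change."""
    return weights + mask * rng.normal(scale=0.01, size=weights.shape)

def prune_smallest(weights, mask, fraction=0.2):
    """Zero out the smallest-magnitude weights that are still alive."""
    alive = np.abs(weights[mask == 1])
    threshold = np.quantile(alive, fraction)
    return mask * (np.abs(weights) >= threshold)

init_weights = rng.normal(size=(256, 256))  # the network's original initialization
mask = np.ones_like(init_weights)
weights = init_weights.copy()

for _ in range(10):                      # 0.8 ** 10 is roughly 11% of weights left
    weights = train(weights, mask)       # train the current subnetwork
    mask = prune_smallest(weights, mask) # drop the weakest 20% of survivors
    weights = init_weights * mask        # rewind survivors to their initial values

print(f"weights remaining: {mask.mean():.1%}")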