When it comes to processing power, the human brain just can't be beat.
Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses, the spaces between neurons across which neurotransmitters are exchanged. More than 100 trillion synapses mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks at lightning speed.
Researchers in the emerging field of "neuromorphic computing" have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a "brain on a chip" would work in an analog fashion, exchanging a gradient of signals, or "weights," much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse....
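To make the contrast concrete, here is a purely illustrative sketch (not any actual chip design or the researchers' model): a binary unit emits an on/off signal, while an analog-style unit passes along a graded value shaped by continuous synaptic weights.

```python
# Illustrative only: binary on/off signaling vs. graded, analog-style signaling.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.random(5)             # activity arriving from five upstream "neurons"
weights = rng.uniform(-1, 1, 5)    # graded synaptic "weights"

drive = inputs @ weights           # weighted sum of incoming signals

binary_output = 1 if drive > 0 else 0   # digital chip: the unit is simply on or off
analog_output = np.tanh(drive)          # analog unit: output varies continuously with the drive

print(binary_output, analog_output)
```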
When you expect a really bad experience to happen and then it doesn't, it's a distinctly positive feeling. A new study of fear extinction training in mice may suggest why: The findings not only identify the exact population of brain cells that are key for learning not to feel afraid anymore, but also show that these neurons are the same ones that help encode feelings of reward.
The study, published Jan. 14 in Neuron by scientists at MIT's Picower Institute for Learning and Memory, specifically shows that fear extinction memories and feelings of reward alike are stored by neurons that express the gene Ppp1r1b in the posterior of the basolateral amygdala (pBLA), a region known to assign associations of aversive or rewarding feelings, or "valence," with memories. The study was conducted by Xiangyu Zhang, an MIT graduate student, Joshua Kim, a former graduate student, and Susumu Tonegawa, professor of biology and neuroscience at the RIKEN-MIT Laboratory of Neural Circuit Genetics at the Picower Institute for Learning and Memory at MIT and at the Howard Hughes Medical Institute....
More than a decade before people with Huntington's disease (HD) show symptoms, they can exhibit abnormally high levels of an immune-system molecule called interleukin-6 (IL-6), which has led many researchers to suspect IL-6 of promoting the eventual neurological devastation associated with the genetic condition. A new investigation by MIT neuroscientists shows that the story likely isn't so simple. In a recent study they found that Huntington's model mice bred to lack IL-6 showed exacerbated symptoms compared to HD mice that still had it.
"If one looks back in the literature of the Huntington's disease field, many people have postulated that reductions to IL-6 would be therapeutic in HD," says Myriam Heiman, associate professor in MIT's Department of Brain and Cognitive Sciences and a member of The Picower Institute for Learning and Memory and the Broad Institute of MIT and Harvard. She is senior author of the paper in Molecular Neurodegeneration. Former postdoc Mary Wertz is the lead author....
These days, nearly all the artificial intelligence-based products in our lives rely on "deep neural networks" that automatically learn to process labeled data.
For most organizations and individuals, though, deep learning is tough to break into. To learn well, neural networks normally have to be quite large and need massive datasets. Training them usually takes multiple days and requires expensive graphics processing units (GPUs), and sometimes even custom-designed hardware.
But what if they don't actually have to be all that big, after all?
In a new paper, researchers from MIT's Computer Science and Artificial Intelligence Lab (CSAIL) have shown that neural networks contain subnetworks that are up to one-tenth the size yet capable of being trained to make equally accurate predictions, and sometimes can learn to do so even faster than the originals.
The team's approach isn't particularly efficient yet: they must train and "prune" the full network several times before finding the successful subnetwork. However, MIT Assistant Professor Michael Carbin says that his team's findings suggest that, if we can determine precisely which part of the original network is relevant to the final prediction, scientists might one day be able to skip this expensive process altogether. Such a revelation has the potential to save hours of work and make it easier for individual programmers, not just huge tech companies, to create meaningful models....
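The train-and-prune loop can be sketched roughly as follows. This is a hypothetical PyTorch illustration of iterative magnitude pruning with weight rewinding, not the CSAIL team's actual code; the tiny network, random data, pruning fraction, and number of rounds are all stand-ins.

```python
# Sketch of the iterative train-and-prune loop: train, prune the smallest
# weights, rewind the survivors to their initial values, and repeat.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model and random data stand in for a real network and dataset.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(512, 20), torch.randint(0, 2, (512,))

initial_state = {k: v.clone() for k, v in model.state_dict().items()}
masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}

def train(model, steps=200):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        # Keep pruned weights at zero by masking their gradients.
        for n, p in model.named_parameters():
            if n in masks and p.grad is not None:
                p.grad *= masks[n]
        opt.step()

for _ in range(3):                           # several prune rounds (illustrative)
    train(model)
    for n, p in model.named_parameters():
        if n not in masks:
            continue
        # Prune the smallest 20% of the still-surviving weights in this layer.
        alive = p.data[masks[n].bool()].abs()
        threshold = alive.quantile(0.2)
        masks[n] *= (p.data.abs() > threshold).float()
    # "Rewind": reset the surviving weights to their original initialization.
    model.load_state_dict(initial_state)
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p *= masks[n]

train(model)  # final training of the sparse subnetwork found by pruning
```

The expensive part is exactly what the paragraph above describes: every pruning round requires retraining the full-size network, which is why finding the relevant subnetwork up front would be such a saving.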