Ask a Neuroscience Professor

Thanks for the replies, Mark1023, nice catch! I feared the 2-3 year old thread would sink back into oblivion :)

So, research into spatial mapping is still at an early stage, if I understand you correctly, which leaves a lot of room for interpretation, I guess. Learning more about how we store and access memories should be at the core of neuroscience, I would imagine, even though at present it's more about observing what's happening than understanding the finer details.

Edit: Thinking about it, can you compare advances in neuroscience, in part, to astronomy? Astronomy received a significant boost from improved tools of observation in recent decades, like the Hubble Observatory. Are we still improving our methods of detecting neural activity? I think I remember a news story recently about how we're slowly getting to the point where we can translate neural patterns into actual images of what a person is thinking.

No, we cannot translate activity into what a person is thinking, except in the broadest sense: think of a song and there may be increased auditory cortex activity. This is, however, the critical question in neuroscience IMO, and it is what I work on. Essentially, how is neural activity put together into a coherent representation?

There is an explosion of new technology in neuroscience. Some of what you may be thinking of is the neural-silicon interface. It is apparently possible to take EEG signals and train animals (and people?) to use this activity to control certain computer actions. For example, a monkey can be trained to move a robotic arm: thinking produces a certain change in the EEG, and that change triggers the arm. It is very cool stuff, but we don't know the precise thoughts, or even the precise activity patterns, from EEG.
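To make the arm-control idea concrete, here is a minimal sketch of that kind of control loop, with invented numbers and a stand-in "band power" feature; real BCI pipelines are far more involved, and none of the names here come from an actual system.

```python
# Toy sketch of the EEG -> robotic arm idea (not any lab's real setup).
# Assumption: the subject learns to raise power in one EEG band, and a simple
# threshold on that band power triggers a "move arm" command.

import random

def band_power(eeg_window):
    """Stand-in feature: mean squared amplitude of an EEG window."""
    return sum(x * x for x in eeg_window) / len(eeg_window)

def calibrate_threshold(rest_windows, effort_windows):
    """Pick a threshold halfway between resting and 'trying' band power."""
    rest = sum(map(band_power, rest_windows)) / len(rest_windows)
    effort = sum(map(band_power, effort_windows)) / len(effort_windows)
    return (rest + effort) / 2

def control_loop(live_windows, threshold):
    for window in live_windows:
        if band_power(window) > threshold:
            print("move robotic arm")   # in the real experiment this would drive the actuator
        else:
            print("hold still")

# Fake data: low-amplitude "rest" windows and higher-amplitude "effort" windows.
rest = [[random.gauss(0, 1) for _ in range(256)] for _ in range(20)]
effort = [[random.gauss(0, 3) for _ in range(256)] for _ in range(20)]
threshold = calibrate_threshold(rest, effort)
control_loop(rest[:2] + effort[:2], threshold)
```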

I work in animals with genetic tools. Our niche is that we have a way of taking the pattern of neural activity at a certain time and translating it into a genetic change in just those active neurons. We can thus have an animal learn something and put genes into the neurons that were active during learning, genes that allow us to reactivate or silence those neurons artificially with a chemical or light.

So in one experiment we submitted recently, we asked: if we show an animal one room (room A) and then shock the animal in a different room (room B) while artificially firing the room A neurons, what will the animal learn? Shocking the animal (very lightly) causes it to be afraid of the room it was shocked in, so that is how we know it recognizes a room (by measuring fear responses). So what did the animal learn? It was not afraid of either room A or room B alone, but it was afraid when we fired the room A neurons artificially while it was in room B. That is, it formed a hybrid representation incorporating elements of both.

This was quite surprising for a number of complex reasons, but the take-home message is that you can incorporate ongoing and unrelated patterns of neural activity into a memory. Your brain is not silent until you want to learn something; it has lots of ongoing internal activity. This shows that this internal activity is actually incorporated into representations.
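If it helps to picture the logic, here is a toy version with made-up numbers: neurons are just integer IDs, the "tag" is the set of cells active in room A, and fear is expressed only when enough of the stored hybrid pattern is active. This only illustrates the set logic, not our actual methods or analysis.

```python
# Toy illustration of the tagging experiment's logic, with invented numbers.
import random

random.seed(0)
all_neurons = set(range(1000))

room_a_cells = set(random.sample(sorted(all_neurons), 100))  # active (and tagged) in room A
room_b_cells = set(random.sample(sorted(all_neurons), 100))  # active in room B

# Shock delivered in room B while the tagged room-A cells are driven artificially:
fear_memory = room_a_cells | room_b_cells   # the hybrid representation

def fear_response(active_now, memory, threshold=0.6):
    """Fear is expressed only if enough of the stored pattern is active."""
    overlap = len(active_now & memory) / len(memory)
    return overlap >= threshold

print(fear_response(room_a_cells, fear_memory))                 # False: room A alone
print(fear_response(room_b_cells, fear_memory))                 # False: room B alone
print(fear_response(room_a_cells | room_b_cells, fear_memory))  # True: room B plus artificial room-A firing
```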
 
How does one get to be a professor, generally?

Get a PhD in some field. Do 3-6 years of postdoctoral research. Be very, very successful in that research. Get a professor job. There are at least 200 applications for desirable jobs (tenure track) at major research institutions, and at least 50 of those applications will be highly competitive. It is not an easy path.
 
Do you know anything about what causes Alzheimer's or how to prevent it? I'm in a sort of at-risk group, you see, as my grandfather had Alzheimer's.

Sorry, I have not kept up with Alzheimer's research. I vaguely remember something about diet, exercise, and mental activity helping, but you'd do better with the Google. Or maybe someone else knows more.
 
Are you familiar with "Incomplete Nature" by Terrence Deacon? Is he worth reading?
 
Do you know anything about what causes Alzheimer's or how to prevent it? I'm in a sort of at-risk group, you see, as my grandfather had Alzheimer's.

Yes, we know what causes Alzheimer's, and at the same time, no, we don't. What we have is a really good correlation. In Alzheimer's, two types of protein accumulate too much and choke out individual neurons, causing them to die or lose function. Except for a couple of specific mutations, we don't really know what causes these accumulations. That said, the correspondence between the accumulations and the disease is pretty strong.

There are really good mouse models of Alzheimer's, notably three different mutants that get the symptoms of the disease. Despite these mutants probably not having the same 'cause' of protein accumulation as human patients (they are simply mutants), treatments that prevent the accumulation prevent the symptoms. So, to go back to the question, we know that preventing the accumulations will prevent the disease, even though we don't quite know what causes the accumulations.

The primary defense against Alzheimer's (other than clean living, which is always the first line of defense for nearly every disease) is learning. There's a Sudoku craze these days, but those people are missing the point. Lots and lots of integrated learning. And remember that learning can be physical as well as mental. What learning does is create a 'cognitive reserve' that allows the brain to run at proper capacity despite neuronal death. So, the neurons are slowly dying, but there's no noticeable effect ... until there is. But with people who build a cognitive reserve, Alzheimer's takes more time to affect them. It doesn't extend the lifespan of the brain, but it certainly extends the healthspan. The learned person and the unlearned person can still die at 90, but the unlearned person suffers dementia for a dozen years first while the learned person only suffers a couple.

In other news, and a more direct answer than "learn and be healthy!" (which is still the best advice), you should know that we currently spend 1000x more on the social cost of Alzheimer's than on research (which strikes me as band-aid thinking). As well, the paper I like best regarding preventative treatment for Alzheimer's suggests (in addition to learning and healthy living!) taking regular doses of the B vitamins. As time goes on, this prophylactic should become more and more clarified.
 
Are you familiar with "Incomplete Nature" by Terrence Deacon? Is he worth reading?

No, but I looked up what he studies and it sounds very interesting, and I am pretty selective in what I think is interesting. He is looking at neural and genetic differences between humans and other primates, with the main question being how we evolved language. Cool stuff. I might have to put this book on my Christmas list and invite him for a talk.
 
That would count as sufficient recommendation. Thanks.
 
This is a bit open ended, and probably too broad to answer, but, in the most important ways, how is the brain NOT like a computer?
 
This is a bit open ended, and probably too broad to answer, but, in the most important ways, how is the brain NOT like a computer?

Well, I don't know computers that well, but the standard answer is that the brain is highly parallel and computers are more linear (serial). That is, in the brain each neuron receives on the order of 10,000 inputs and sends output to a correspondingly large number of cells. While computers may perform many parallel computations, it is my understanding that each unit (transistor?) has one input and one output.
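To put a number on the fan-in point, here is the usual abstract-neuron caricature (a weighted sum over ~10,000 inputs settled in one step) next to a two-input logic gate; the weights and threshold are invented, and this is a crude caricature of real biophysics, not a claim about how neurons actually compute.

```python
# One model neuron combines ~10,000 inputs at once; a gate-like element combines two.
import random

def model_neuron(inputs, weights, threshold=0.0):
    """Weighted sum over all inputs, then a threshold: every input counts in one step."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

def and_gate(a, b):
    """A gate-like element: exactly two inputs, one output."""
    return a & b

n_inputs = 10_000
inputs = [random.choice([0, 1]) for _ in range(n_inputs)]
weights = [random.uniform(-1, 1) for _ in range(n_inputs)]

print(model_neuron(inputs, weights))  # one 'decision' from 10,000 inputs
print(and_gate(1, 0))                 # one 'decision' from 2 inputs
```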
 
This is a bit open ended, and probably too broad to answer, but, in the most important ways, how is the brain NOT like a computer?
The brain is alive?
 
Not a particularly meaningful distinction, considering this is no longer the age of vitalism and everything is made from the same fundamental building blocks.

A better answer is that the goal of brains is to help an organism or a group of organisms to propagate their genes and persist in their environment. That's a bit trickier to argue for a computer.
 
A better answer is that the goal of brains is to help an organism or a group of organisms to propagate their genes and persist in their environment. That's a bit trickier to argue for a computer.
You could presumably create one for that purpose, however.
 
How does Impact Factor affect a professor's ability to run a lab? If someone is getting out one paper per year, would it be better to have a primary (or last) authorship in a low-impact journal, or a middle authorship in a really prestigious journal?

(For our audience, here's an example. I've brought up the publications for Susan Lindquist, a dynamo researcher. You'll notice that on some articles, her name comes last. This tends to mean that her laboratory is the one that performed the majority of the work and that she was responsible for getting everyone together to do the work. First author tends to be the person who did the majority of the actual bench work and writing. Middle authors are people who helped.)

http://www.ncbi.nlm.nih.gov/pubmed?term=lindquist s

You'll see that in 2012, she got a 'middle' authorship in the Journal of Neuroscience (a really prestigious journal), but in 2011 she got 'first' authorship in PLoS (the Public Library of Science), which is an awesome journal but not regarded as highly as J. Neurosci.
 
When neuroscientists talk about free will (and many don't), they seem to believe that their field disproves its existence. Often, their arguments seem entirely uninformed by several hundred years of philosophical consideration of the issue (I'm thinking about people like this guy). Do you think there are any results in neuroscience that substantively disprove the proposition that we possess free will? If not, why do you think so many neuroscientists believe that there are?
 
This is a bit open ended, and probably too broad to answer, but, in the most important ways, how is the brain NOT like a computer?
A brain can process stuff simultaneously, while computers can only create the illusion of such. Though of course it would be possible to create a system of computers which communicate with each other but process data independently and hence simultaneously.

edit: Whoops, I somehow missed that this had been already answered. Nothing to see here...
 
Are you interested in, or following, developments with the enzyme PKM-zeta that relate it to memory function?

Increased levels of one natural brain enzyme supercharge rat memories, a study suggests. And it’s not just new, short-term memories. The enzyme — called PKM-zeta — gives rats better recall of old remembrances, too, a U.S.-Israeli team reports in the March 5 Science.
http://www.sciencenews.org/view/generic/id/70519/title/Enzyme_revives_long-term_memories


"Our study is the first to demonstrate that, in the context of a functioning brain in a behaving animal, a single molecule, PKMzeta, is both necessary and sufficient for maintaining long-term memory,"
In their earlier studies, Sacktor's team showed that even weeks after rats learned to associate a nauseating sensation with saccharin and shunned the sweet taste, their sweet tooth returned within a couple of hours after rats received a chemical that blocked the enzyme PKMzeta in the brain's outer mantle, or neocortex, where long-term memories are stored.
http://www.sciencedaily.com/releases/2011/03/110304092111.htm


I saw an interesting webinar on this a few months ago (can't remember when, lol) in which a New York Dept of Health researcher, credited with participating in the discovery of PKM-zeta, had a rat on a revolving table with a shock punishment if the rat was in a specific region of the table. So naturally the rat moves away from that region once it has learned about the shock. But interrupting the learning enzymatically resulted in the trained rat forgetting about the shock, and getting shocked. I'll post a video of it if I find one.

EDIT: Found it! Watch from 24:40 if you just want to see the experiment, though the whole talk is very interesting.


Link to video.
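A cartoon of the "necessary and sufficient" claim, with invented numbers (this is just my way of picturing it, not anything from the papers): the trace persists only while the enzyme is active, blocking it erases the learned avoidance, and boosting it strengthens old memories.

```python
# Toy model: the memory trace is rescaled each day by the enzyme level (1.0 = normal).
def memory_after(days, pkmzeta_level):
    strength = 1.0                           # trace strength right after learning
    for _ in range(days):
        strength *= min(pkmzeta_level, 1.2)  # cap the boost; all numbers are made up
    return strength

def avoids_shock_zone(strength, threshold=0.5):
    """The rat keeps avoiding the shock zone only if the trace is strong enough."""
    return strength > threshold

print(avoids_shock_zone(memory_after(14, 1.0)))        # normal enzyme: avoidance persists
print(avoids_shock_zone(memory_after(14, 0.0)))        # enzyme blocked chemically: memory gone
print(memory_after(14, 1.2) > memory_after(14, 1.0))   # extra enzyme: stronger old trace
```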
 
No, we cannot translate activity into what a person is thinking, except in the broadest sense: think of a song and there may be increased auditory cortex activity.

That's what I thought until I saw this. Some still images follow, taken from here.

[Images: a frame from the movie clip shown to the subject, and the reconstruction decoded from the brain scan]


The first image is a scene from a movie shown to subjects. The second is the reconstruction based on the fMRI data.

Another pair:
[Images: a second movie frame and its fMRI-based reconstruction]
 
It's on the way, but it's not perfect. It seems to be simply pattern matching: viewing X causes this pattern of neurons to fire, and hence if a similar pattern of neurons fires, you've been watching an approximation of X. I'm more impressed by our ability to resolve which bits of the brain are active.

Sci Fi has a tendency to exaggerate how advanced science is in certain ways.
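Roughly, that pattern-matching reading amounts to nearest-neighbour lookup over stored activity patterns. Here's a toy version with made-up "voxel" vectors; the actual studies are far more sophisticated than this.

```python
# Store the activity pattern evoked by each known clip, then decode a new
# pattern by finding the closest stored one.

def distance(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def decode(new_pattern, library):
    """Return the stimulus whose stored pattern best matches the new activity."""
    return min(library, key=lambda stim: distance(new_pattern, library[stim]))

# Made-up 'voxel' patterns evoked by three known movie clips.
library = {
    "face":   [0.9, 0.1, 0.2, 0.8],
    "street": [0.2, 0.8, 0.7, 0.1],
    "text":   [0.5, 0.5, 0.1, 0.9],
}

observed = [0.85, 0.15, 0.25, 0.7]   # new activity while watching something
print(decode(observed, library))      # -> "face": an approximation of what was seen
```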
 
Ayatollah So: those videos are pretty cool, but every time I read their methods for generating those videos, I'm left with a funky taste in my mouth. I think the videos are a little bit cooked for effect.

An alternate experiment that shows the same idea is this one, where we can figure out what words people hear based on their brain activity.

http://newscenter.berkeley.edu/2012...ode-brain-waves-to-eavesdrop-on-what-we-hear/

These scientists have succeeded in decoding electrical activity in the brain’s temporal lobe – the seat of the auditory system – as a person listens to normal conversation. Based on this correlation between sound and brain activity, they then were able to predict the words the person had heard solely from the temporal lobe activity.

That's implanted electrodes, not MRI, but still pretty funky. I'd rather have implanted electrodes than walk around with an MRI wrapped around my head.
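For what it's worth, the general recipe behind decoding studies like that one is to fit a mapping from neural activity to sound features on training data, then reconstruct held-out sounds from activity alone. Here's a toy linear-regression version with simulated data; the numbers and the "true map" are invented, and this is not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_electrodes, n_audio_features = 200, 16, 8

# Pretend there is some true (and unknown to the decoder) linear relation plus noise.
true_map = rng.normal(size=(n_electrodes, n_audio_features))
neural_train = rng.normal(size=(n_train, n_electrodes))
audio_train = neural_train @ true_map + 0.1 * rng.normal(size=(n_train, n_audio_features))

# Fit the decoder by least squares: audio ~= neural @ learned_map.
learned_map, *_ = np.linalg.lstsq(neural_train, audio_train, rcond=None)

# Held-out trial: reconstruct its audio features from neural activity alone.
neural_new = rng.normal(size=(1, n_electrodes))
audio_true = neural_new @ true_map
audio_hat = neural_new @ learned_map

print("reconstruction error:", float(np.linalg.norm(audio_hat - audio_true)))
```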
 