This semester, I’m TAing a graduate course called Neural Computation taught by Bruno Olshausen, which is probably my favorite class I’ve taken at Berkeley. Inspired by Ben Recht’s lecture posts, I will try to write a short post each week with my annotations of that week’s lectures. It’s going to be a busy semester, so no guarantees, but I’ll do my best 🫡. All course content is on the website.
Throughout history, we have used our newest technologies as metaphors for the brain: hydraulic pumps, steam engines, and computers. I think this urge to project a known system onto the brain tells us how hard it is to reason about its complexity without a concrete foothold. In the context of this class, we are not saying that the brain literally operates like a computer. But we are using ideas of computation, e.g. logic gates, circuits, and encoding principles, to help us ask concrete questions about the brain.
In Bruno’s words, this course is about how brains work, and how to build a brain. These two ideas are intertwined and complementary. The Wright brothers were inspired by observations about bird wings, which also helped us learn more about how birds fly. If we agree with Richard Feynman’s quote “What I cannot create, I do not understand”, we won’t be able to build a brain without understanding it. But trying to build it may help us start to understand it.
With recent technological advances, it’s now possible to record from tens of thousands of neurons in the brain simultaneously [1]. How do we go from these complex, noisy data to understanding the principles that generate this activity? It isn’t sufficient to only analyze data in a “bottom-up” fashion. We need top-down concepts to inform how we interpret the data. As Horace Barlow said, “A wing would be a most mystifying structure if one did not know that birds flew.” [2]
As a side note, neuroscientist Konrad Körding made this point in a podcast earlier this year:
“For every neural dataset... there is an infinite set of biologically meaningful, potentially realistic models that will predict exactly that dataset. The neuroscience that we do doesn’t actually inform the mechanisms that we want to talk about, which puts large branches of neuroscience into an epistemologically really difficult spot.” [3]
Coming up with a model that predicts a dataset doesn’t mean you definitively understand some aspect of the brain. It’s a nice story to tell, but results are never so clean, and there are so many other possible explanations. Predicting brain activity is also only one type of modeling, which we call descriptive. Can a model that regresses to neural activity tell us something about the actual implementation in the brain, or the normative principles driving these patterns? These are all hairy epistemological questions that I won’t get into here.

When it comes to trying to make sense of the massive amount of data and theories put forward about the brain, I like to think of the parable of the blind men and the elephant, where each blind man draws a different conclusion about what an elephant is based on which part they touch. Werner Heisenberg puts it another way: “We have to remember that what we observe is not nature in itself, but nature exposed to our method of questioning.” [4] This is why neuroscience is so hard, and why we still don’t understand the brain at all!
Okay, back to the lecture. From the 1940s to the 1960s, there were several early efforts to understand the brain. The cybernetics approach tried to understand the mechanisms of the brain from the inside, for example by proposing that the brain implements logic gates, an idea that also contributed to the development of modern computers [5]. The organizers of the famous Dartmouth AI meeting were motivated to design a machine to simulate the brain [6][7]. Neurobiologists David Hubel and Torsten Wiesel discovered orientation tuning in cat visual cortex [8], which still largely dictates how we study visual cortex, and eventually led to the development of convolutional neural networks [9]. It’s clear that trying to understand the brain and trying to build one have been connected since the beginning of computation.
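The McCulloch–Pitts picture of neurons as logic gates is simple enough to sketch in a few lines. This is just a toy illustration (not code from the lecture): a unit “fires” if and only if the weighted sum of its binary inputs reaches a threshold, and with the right weights and thresholds such units compute AND, OR, and NOT.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: outputs 1 iff the weighted input sum meets the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# AND: both inputs must be active to reach the threshold of 2
AND = lambda x, y: mp_neuron([x, y], [1, 1], threshold=2)
# OR: a single active input suffices
OR = lambda x, y: mp_neuron([x, y], [1, 1], threshold=1)
# NOT: one inhibitory (negative-weight) input
NOT = lambda x: mp_neuron([x], [-1], threshold=0)

print(AND(1, 1), OR(0, 1), NOT(1))  # → 1 1 0
```

Since NAND alone is universal, networks of such units can in principle compute any Boolean function, which is what made the analogy to computing machinery so compelling at the time.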
As Bruno says, nature hides its secrets well. But there is hope for discovering theoretical principles by looking to biology. Biology has already solved complex problems. Can we learn from it? This is one of the main themes of the course, and the next lecture will start by highlighting the rich world of animal behavior.
1. E.g. High-precision coding in visual cortex, Stringer et al. 2021.
2. Possible Principles Underlying the Transformations of Sensory Messages, Barlow 1961.
4. Physics and Philosophy: The Revolution in Modern Science, Heisenberg 1958.
5. A Logical Calculus of the Ideas Immanent in Nervous Activity, McCulloch & Pitts 1943. This formed one of the bases for von Neumann’s design of the EDVAC, one of the first binary stored-program computers.
6. The preface of Automata Studies (1956) begins with “Among the most challenging scientific questions of our time are the corresponding analytic and synthetic problems: How does the brain function? Can we design a machine which will simulate a brain?”
7. Around that time, Claude Shannon, Herbert Simon, and Marvin Minsky, all arguably well-informed people who spent a lot of time thinking about these problems, predicted that AI was right around the corner. Sound familiar?
8. Receptive fields of single neurones in the cat's striate cortex, Hubel & Wiesel 1959.
9. Fukushima’s Neocognitron (1979) was the first CNN.