[nengo-user] 9 mm^2 cortex regions
Terry Stewart
terry.stewart at gmail.com
Wed Aug 28 20:47:47 EDT 2013
Hi Alex,
Fair question. :) Yes, we do tend to think that there are lots of
such regions throughout cortex. There may, of course, be areas of
cortex that don't do this, but we're arguing that it's at least
possible to fit this sort of computation within the sort of dense
local connectivity that is found in cortex. That said, as you point
out, there's no reason that all these areas share the same vocabulary.
Instead, we're just noting that for high-level cognitive areas, 500D
is enough to represent complex lists. (Note that there isn't a hard
limit of 8 pairs of concepts there -- it's just that you start
getting less accuracy as you put more things in.)
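To make that concrete, here's a toy NumPy sketch (idealized random
vectors, not spiking neurons, and not our actual model code -- the
dimensionality and vocabulary size below are just illustrative) of
storing role/filler pairs in a single 500D vector with circular
convolution, then unbinding each filler and "cleaning it up" to the
nearest vocabulary item:

    import numpy as np

    D, V = 500, 10000                # dimensions, vocabulary size
    rng = np.random.default_rng(0)
    vocab = rng.standard_normal((V, D)) / np.sqrt(D)  # unit-ish pointers

    def cconv(a, b):                 # circular convolution = binding
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def inv(a):                      # approximate inverse, for unbinding
        return np.concatenate(([a[0]], a[:0:-1]))

    for k in (2, 8, 32, 128):        # number of bound pairs in the trace
        roles = vocab[rng.choice(V, k, replace=False)]
        fillers = rng.choice(V, k, replace=False)
        trace = sum(cconv(r, vocab[f]) for r, f in zip(roles, fillers))
        # unbind each role, then clean up to the nearest vocabulary item
        guesses = [np.argmax(vocab @ cconv(inv(r), trace)) for r in roles]
        print(k, "pairs:", np.mean(np.array(guesses) == fillers))

Nothing breaks at any particular number of pairs; the cleanup just
gets gradually less reliable as the trace fills up, and exactly where
it fails depends on D, the vocabulary size, and (in the real model)
neural noise.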
Indeed, we don't use 500D everywhere. For visual cortex in
particular, we have V1, V2, V4, and IT using very different
dimensionalities (1000D, 500D, 300D, and 50D, respectively: see figure
3.4). IT can get away with only 50D since it never has to do any
particularly complex representation in our model: it only ever
represents one thing, namely whatever symbol is currently being
looked at. When we connect that to the rest of cortex, however, we
can map
from the 50D space to the 500D space pretty easily, and that's what
lets the rest of the system build up representations based on the
visual input.
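As a minimal sketch of that mapping (again idealized -- the symbol
count and the outer-product construction here are my own choices for
illustration, not the model's code), a fixed linear transform that
pairs each 50D visual vector with its 500D semantic pointer is
enough:

    import numpy as np

    n_sym, D_it, D_ctx = 10, 50, 500   # illustrative sizes
    rng = np.random.default_rng(1)
    vis = rng.standard_normal((n_sym, D_it)) / np.sqrt(D_it)    # "IT"
    sem = rng.standard_normal((n_sym, D_ctx)) / np.sqrt(D_ctx)  # pointers

    # Heteroassociative outer-product map: M @ vis[i] ~= sem[i]
    M = sum(np.outer(s, v) for s, v in zip(sem, vis))

    up = vis @ M.T                       # every visual vector, now 500D
    print(np.argmax(up @ sem.T, axis=1)) # ~ [0 1 ... 9]

Since it's just a linear transform, it can be implemented directly in
the connection weights between the two populations, which is why the
mapping is cheap.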
The point about this binding system fitting in 9 mm^2 is mainly that
it puts an upper limit on the dimensionality of our models. For
example, if I wanted to build a model with 50000D vectors, then I'd
have to figure out some other way to do binding, because the neural
connectivity needed to compute circular convolution on two vectors of
that size just isn't found in the human brain. I could certainly
build models with vectors of that size (indeed, more complex models of
visual cortex might require that), but the neuroscience evidence would
indicate that circular convolution (binding) isn't something that can
be done on those vectors. To me, this is one of the very exciting
things about this approach to cognitive research: it tells me that I
can't just use any algorithm I feel like; I have to use algorithms
that could feasibly be implemented in the human brain. If it had
turned out that 500D wasn't enough to fit simple combinations of
concepts with the human vocabulary size, then that would be a very
strong argument against these models.
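To see where the connectivity cost comes from: convolving with a
fixed vector a is multiplication by a d x d circulant matrix, so a
direct implementation needs on the order of d^2 weights. (This is a
toy check of that identity, not how Nengo builds the binding network
-- the real network uses a Fourier-based decomposition -- but the
scaling is similarly quadratic.)

    import numpy as np

    def cconv(a, b):
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    d = 500
    rng = np.random.default_rng(2)
    a, b = rng.standard_normal((2, d)) / np.sqrt(d)

    # Row k of the circulant of a is a[(k - j) mod d], over columns j
    C = np.array([np.roll(a[::-1], k + 1) for k in range(d)])
    assert np.allclose(C @ b, cconv(a, b))

    print(f"d = {d}: ~{d * d:,} weights;"
          f" d = 50000 would need ~{50000 ** 2:,}")

That quadratic growth is exactly why a 50000D binding operation is so
much harder to fit into a patch of cortex than a 500D one.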
(It might also say something about where certain human cognitive
limitations come from -- why we have a limited vocabulary size and
why we can only handle a certain level of complexity in our concepts
-- but that's a much more complex argument to make, since we'd also
have to look into why brains didn't just evolve larger densely
connected networks.... I find the idea interesting, though... :)
Does that help? Please feel free to send more questions our way as
they come up....
:)
Terry Stewart
Centre for Theoretical Neuroscience
University of Waterloo
On Wed, Aug 28, 2013 at 8:11 PM, Alex Miller <alex.etc.miller at gmail.com> wrote:
> Hello, everyone:
>
> I'm reading How to Build a Brain, and I guess I'm missing the big picture.
>
> Section 4.3 seems to argue that a 9 mm^2 cortex region must be able to
> bind and unbind 500D semantic pointers, which refer to lists of up to
> 8 pairs of concepts from about 60,000 that humans have in their
> vocabulary.
>
> Judging by its size, the cortex must contain 150,000 such regions. Is
> the idea that each of them operates on semantic pointers of the same
> kind? Why does, say, V2 need to be able to work with representations
> created by the auditory cortex, or with higher-level concepts like
> "science" (one of those 60,000)? And if it doesn't, why does it need
> to be 500D?
> _______________________________________________
> nengo-user mailing list
> nengo-user at ctnsrv.uwaterloo.ca
> http://ctnsrv.uwaterloo.ca/mailman/listinfo/nengo-user