[nengo-user] neuron weights

Terry Stewart terry.stewart at gmail.com
Mon Jun 2 13:42:46 EDT 2014


Hi Brian,

The link Chris sent helps, and there's also this Tech Report that
summarizes it:
http://compneuro.uwaterloo.ca/publications/stewart2012d.html

The main idea is that it turns out that if you use a biologically
realistic representation, you don't need backprop.  In the real brain,
there's massive redundancy: for a 2-dimensional value there might be
100 (or more) neurons.  And they're (usually) not organized such that
50 neurons handle one dimension and 50 handle the other --
instead they do the "population coding" thing where each neuron gets a
preferred direction in the 2-dimensional space (we call these
"encoders").  Anyway, it turns out that if you start with that form of
representation, you can decode a pretty wide range of functions,
including things like XOR, which would need something like backprop
if you didn't have this distributed representation.
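
To make the "encoders" idea concrete, here's a rough NumPy sketch of a
population with random preferred directions (this isn't Nengo's actual
code -- the gains, biases, and the rectified-linear rate function are
just simple stand-ins for the LIF tuning curves Nengo really uses):

    import numpy as np

    rng = np.random.RandomState(0)
    n_neurons, dims = 100, 2

    # each neuron gets a random unit-length preferred direction (its
    # "encoder"), plus a random gain and bias
    encoders = rng.randn(n_neurons, dims)
    encoders /= np.linalg.norm(encoders, axis=1, keepdims=True)
    gains = rng.uniform(0.5, 2.0, n_neurons)
    biases = rng.uniform(-1.0, 1.0, n_neurons)

    def rates(x):
        # current into each neuron: how well x lines up with its encoder
        j = gains * (encoders @ x) + biases
        # rectified-linear rate as a stand-in for the LIF tuning curve
        return np.maximum(j, 0.0)

    x = np.array([0.3, -0.7])
    print(rates(x))  # 100 firing rates: the distributed code for one 2-D value

Each neuron fires most when the input points along its own preferred
direction, so the 100 rates together form a massively redundant,
distributed code for the single 2-dimensional value.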

That said, we do still show the system patterns.  Sort of.  We do it
implicitly when you specify what function you want to compute between
two groups of neurons.  So if I say that I want to compute x^2 between
group A and group B, what actually happens is that Nengo automatically
generates a bunch of random x values, computes x^2 on all of them,
figures out what the distributed neural firing would be for each of
those x values, and then uses that sort of like the "training" data in
a traditional neural network.  Except since we've got a realistic
representation, we don't need backprop, so we can just directly solve
for the optimal connection weights.  We could also use any sort of
gradient descent mechanism, including the old perceptron learning
rule, but we generally don't bother since computing the optimum
directly is faster.  (There's also a bunch of other optimizations,
including the nice feature that you can do most of the computation
without knowing anything about population B, which vastly reduces the
size of the computation but still gives exactly the same answer as if
we'd directly optimized knowing both A and B.  That's the fun part
about computing the "decoders" first, and then combining the decoders
with the encoders to get the connection weights.)
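
If it helps to see the whole pipeline in one place, here's a rough,
self-contained NumPy sketch of that process (again, not Nengo's actual
code -- the rate model, the 500 samples, the 0.1 regularization
factor, and the made-up encoders for population B are all just
illustrative stand-ins, not Nengo's real defaults):

    import numpy as np

    rng = np.random.RandomState(0)
    n_neurons, dims = 100, 2

    # population A: random encoders, gains, and biases, as in the sketch above
    encoders_A = rng.randn(n_neurons, dims)
    encoders_A /= np.linalg.norm(encoders_A, axis=1, keepdims=True)
    gains = rng.uniform(0.5, 2.0, n_neurons)
    biases = rng.uniform(-1.0, 1.0, n_neurons)

    def rates(x):
        return np.maximum(gains * (encoders_A @ x) + biases, 0.0)

    # 1. randomly generate a bunch of x values and evaluate the target function
    X = rng.uniform(-1, 1, (500, dims))
    targets = X[:, 0] ** 2                      # e.g. "compute x0 squared"

    # 2. figure out what the distributed firing would be for each x
    A = np.array([rates(x) for x in X])         # (samples x neurons)

    # 3. directly solve a regularized least-squares problem for decoders d,
    #    so that A @ d is as close as possible to the targets -- no backprop
    reg = 0.1 * A.max()
    decoders = np.linalg.solve(A.T @ A + reg**2 * np.eye(n_neurons),
                               A.T @ targets)

    # 4. full connection weights only appear at the end, by combining the
    #    decoders with population B's encoders (1-D here, since x0^2 is scalar)
    encoders_B = rng.choice([-1.0, 1.0], size=(80, 1))
    weights = encoders_B @ decoders[None, :]    # (80 x 100) connection weights

In Nengo itself you never do any of this by hand: in the Python
version it's roughly nengo.Connection(a, b, function=lambda x: x[0]**2),
and the solver runs behind the scenes when the model is built.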

Let me know if that helps...
:)
Terry

On Mon, Jun 2, 2014 at 10:20 AM, Brian Krainer <bkrainer731 at gmail.com> wrote:
> I'm a little confused on how weights between neurons are calculated. In
> previous neural network software I've used, the network had to be trained on
> specific patterns in order to calculate the correct weights (using
> algorithms like back propagation.) It seems that with nengo the weights are
> computed without seeing any patterns. How is this being done?
>
> Thanks,
>
> Brian
>
> _______________________________________________
> nengo-user mailing list
> nengo-user at ctnsrv.uwaterloo.ca
> http://ctnsrv.uwaterloo.ca/mailman/listinfo/nengo-user
>


