[nengo-user] Answer from and to the Nengo group

Terry Stewart terry.stewart at gmail.com
Thu Feb 11 17:47:09 EST 2016


Hello Peer,

Thank you for the context!  That helps a lot.  And, it turns out that with
a couple of slightly counter-intuitive tweaks, this sort of model is quite
nicely suited for Nengo.

The main thing we first need is to make tuning curves that look like Figure
6a of (Fischer and Pena, 2011).  These neurons have preferred directions
that range between -100 and +100 degrees.  Preferred directions are called
"encoders" in Nengo/NEF.  However, because Nengo allows you to generalize
the idea of preferred directions up to hundreds of dimensions, we need to
be explicit about exactly how we want to represent an angle.

The easiest way is to think of the group of neurons (the optic tectum) as
representing a two-dimensional space, and each neuron has some preferred
direction in that space.  By default, Nengo will randomly distribute the
neurons' encoders around that 2-dimensional space, although we can control
this if we want to (e.g. we may want to distribute encoders more densely in
the middle of the space).
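As a sketch of what "more densely in the middle" could look like, here is a
plain-NumPy way of generating such encoders (the helper name
make_centered_encoders and the normal-distribution choice are my own, not
part of the model below):

```python
import numpy as np

def make_centered_encoders(n, spread=0.5, seed=0):
    # Draw angles from a normal distribution so that more encoders end
    # up near 0 (the middle of the space), then map each angle to a
    # unit vector in the 2-D represented space.
    rng = np.random.RandomState(seed)
    thetas = rng.normal(loc=0.0, scale=spread, size=n)
    return np.column_stack([np.sin(thetas), np.cos(thetas)])

encoders = make_centered_encoders(500)
# each row is a unit-length preferred direction vector
```

An array like this can then be passed as the encoders argument when
creating the Ensemble.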

So, the code for making the optic_tectum is:

    optic_tectum = nengo.Ensemble(n_neurons=500, dimensions=2)

Now, however, we want to provide input to that set of neurons.  We want
that input to be an angle, and we want that angle to stimulate the neurons
in a consistent way (i.e. when the input is an angle of 0, the neurons with
preferred direction vectors near 0 should be stimulated).  We can do this
with one Node for the input angle, and one Node to do the conversion into
the represented space:

    stim_angle = nengo.Node([0])

    import numpy as np
    def convert_angle(t, x):
        return np.sin(x[0]), np.cos(x[0])
    angle_to_pt = nengo.Node(convert_angle, size_in=1)
    nengo.Connection(stim_angle, angle_to_pt, synapse=None)

Now we connect that input into the optic_tectum:

    nengo.Connection(angle_to_pt, optic_tectum)

You now have neurons with tuning curves in different directions.  If you
plot the spikes from the optic_tectum as you change the input stim_angle,
you should see this behaviour.  Also, it's important to note that these
neurons in Nengo have much more variety in their tuning curves (heights and
widths) than the perfectly regular tuning curves shown in Figure 6a.  We
tend to keep a large degree of variability in our neurons.  You can control
the width with the intercepts parameter, if you're interested in doing that.
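To get an intuition for why the intercept controls the width, here is a
small NumPy-only sketch (a rectified cosine response standing in for the
real LIF tuning curve; this is an illustration of the idea, not Nengo's
actual neuron model):

```python
import numpy as np

def tuning_curve(angles, preferred, intercept):
    # cosine similarity between the input direction and the preferred
    # direction, rectified at the intercept: the neuron only responds
    # when the similarity exceeds the intercept, so a higher intercept
    # gives a narrower tuning curve
    sim = np.cos(angles - preferred)
    return np.maximum(0.0, sim - intercept)

angles = np.linspace(-np.pi, np.pi, 1001)
wide = tuning_curve(angles, 0.0, 0.0)     # responds over half the circle
narrow = tuning_curve(angles, 0.0, 0.8)   # responds only near 0
```

In Nengo, passing e.g. intercepts=nengo.dists.Uniform(0.3, 0.8) to the
Ensemble constrains the curves to this narrower regime.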

So all of that above was just to get the stimulus and neuron parameters set
up right to give the sorts of tuning curves we want in the optic tectum.
Now we come to the part where we want to decode out from these tuning
curves what the currently represented angle is.  In (Fischer and Pena,
2011), they do this: "The population vector is obtained by averaging the
preferred direction vectors of neurons in the population, weighted by the
firing rates of the neurons".  This is one way of decoding neural activity,
but it's not the only way.

In Nengo, when you ask it to compute a function, it finds the optimal way
of weighting the outputs of neural activity to approximate that function.
Furthermore, it does this with a fixed set of linear weights -- i.e. it
does not require that division step in the weighted average.  So let's try
it the standard Nengo way, and then compare that to the weighted average
approach.

To do it the standard Nengo way, we define a function that goes from the
represented 2-D space and decodes out the angle:

    decoded_angle = nengo.Node(None, size_in=1)   # a place to store the result

    # a function to map from 2-d space to an angle
    def decode_angle(x):
        return np.arctan2(x[0], x[1])

    # make the connection
    nengo.Connection(optic_tectum, decoded_angle, function=decode_angle)

This tells Nengo to find the optimal set of connection weights from the
neurons to decode out the angle.  That is, it finds the vector d such that
sum(a_i * d_i) best approximates the input angle.

If you try running this, it works okay, but we can make it do better.
In particular, right now Nengo is trying to approximate the function across
the whole 2-D space, but we only need it to be good at a particular range
of points.  If we tell Nengo to just focus on those points, it gets much
better:

    decoded_angle = nengo.Node(None, size_in=1)

    def decode_angle(x):
        return np.arctan2(x[0], x[1])

    # define the set of points
    def make_pt():
        theta = np.random.uniform(-2, 2)
        return [np.sin(theta), np.cos(theta)]
    pts = [make_pt() for i in range(1000)]

    nengo.Connection(optic_tectum, decoded_angle, function=decode_angle,
                     eval_points=pts)

If you run this and plot the decoded_angle you'll see it very closely
follows the stim_angle.

However, it'd be good to also do the weighted average approach, so we can
compare the two.  Doing that requires us to know the angles for all the
neurons, and do a weighted average of those angles (weighted by activity).
To do that, we explicitly define the angles for the neurons, and then do
the math in a Node:

    N = 500

    def make_pt():
        theta = np.random.uniform(-2, 2)
        return [np.sin(theta), np.cos(theta)]

    # generate random preferred direction vectors
    encoders = np.array([make_pt() for i in range(N)])

    optic_tectum = nengo.Ensemble(n_neurons=N, dimensions=2,
                                  encoders=encoders)

    # compute the angle for each preferred direction vector
    angles = np.arctan2(encoders[:,0], encoders[:,1])

    # compute the weighted average
    def weighted_average(t, a):
        total = np.sum(a)
        if total == 0:
            return 0
        return np.sum(a*angles) / total

    computed = nengo.Node(weighted_average, size_in=N)
    nengo.Connection(optic_tectum.neurons, computed, synapse=None)

Now we can plot "computed" (the approach used in (Fischer and Pena, 2011))
and compare it to "decoded_angle" (the default approach used in Nengo).  In
this case, the Nengo approach is more accurate, and it doesn't require any
division!

Here's a script that should let you directly compare the two approaches:

-------------------
import nengo
import numpy as np

N = 500    # the number of neurons

model = nengo.Network()
with model:
    stim_angle = nengo.Node([0])   # the input angle

    # convert the angle into a 2-D space
    def convert_angle(t, x):
        return np.sin(x[0]), np.cos(x[0])
    angle_to_pt = nengo.Node(convert_angle, size_in=1)
    nengo.Connection(stim_angle, angle_to_pt, synapse=None)

    # make a point in 2-D space that is at random angle
    def make_pt():
        theta = np.random.uniform(-2, 2)
        return [np.sin(theta), np.cos(theta)]

    # the preferred direction vectors for the neurons
    encoders = np.array([make_pt() for i in range(N)])

    # create the group of neurons
    optic_tectum = nengo.Ensemble(n_neurons=N, dimensions=2,
                                  encoders=encoders)

    nengo.Connection(angle_to_pt, optic_tectum)


    ### Standard Nengo/NEF Approach

    # decode out the angle in the optimal Nengo/NEF approach
    decoded_angle = nengo.Node(None, size_in=1)

    # function that the neural connections should approximate
    def decode_angle(x):
        return np.arctan2(x[0], x[1])

    # define the set of values over which the approximate should be good
    pts = [make_pt() for i in range(1000)]

    nengo.Connection(optic_tectum, decoded_angle, function=decode_angle,
                     eval_points=pts)


    ### Weighted average approach

    # determine the angles for each neuron
    angles = np.arctan2(encoders[:,0], encoders[:,1])
    # compute the weighted sum
    def weighted_average(t, a):
        total = np.sum(a)
        if total == 0:
            return 0
        return np.sum(a*angles) / total
    computed = nengo.Node(weighted_average, size_in=N)

    nengo.Connection(optic_tectum.neurons, computed, synapse=None)

-------------------

So, the main differences between the Nengo/NEF approach and the
weighted average approach are:
 - The Nengo/NEF approach gives a more accurate result
 - The Nengo/NEF approach handles variability in the tuning curves
 - The Nengo/NEF approach does not require division (which is hard to
justify biologically)

In any case, while it's certainly possible to do the weighted average
approach (using that "computed" Node defined above) in Nengo, we tend not
to.  But it'd be interesting to do a more rigorous direct comparison in
this case.

Notice also that, right now, the neurons are being optimized just to figure
out what the input angle is.  If you also want them to take into account some
sort of Bayesian prior, then all you have to do is put that into the
decode_angle function.  This lets you implement a wide variety of possible
priors.
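As a toy illustration (this particular prior is my own invented example,
not the one used in the paper), a zero-mean prior over angles simply shrinks
the decoded estimate toward the centre:

```python
import numpy as np

def decode_angle_with_prior(x, prior_weight=0.3):
    # maximum-likelihood estimate from the represented 2-D point
    theta_ml = np.arctan2(x[0], x[1])
    # a zero-mean prior over angles shrinks the estimate toward the
    # centre; prior_weight=0 recovers the plain decoder
    return (1.0 - prior_weight) * theta_ml
```

Passing this as the function= argument on the Connection makes the neural
decoders approximate the prior-weighted estimate directly, with no extra
computation at run time.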


Does that help?  What I've presented here is definitely a very different
way of thinking about things than is taken in (Fischer and Pena, 2011).
But hopefully I've shown a) how to implement their approach in Nengo, and
b) that there are other ways of decoding information that are a little bit
more flexible.

Let me know if that helps, and thank you again for the question!

Terry




On Wed, Feb 10, 2016 at 6:33 PM, Peer Ueberholz <peer at ueberholz.de> wrote:

>
> Hi Terry,
>
> Thank you very much for the fast response. You are right, I should have
> explained the problem more carefully! However, I'm a new Nengo user and
> don't have much experience.
>
> What I want to do is similar to the simulation Brian Fischer did about 10
> years ago, that is, to simulate the auditory system of a barn owl (B.J.
> Fischer and J.L. Pena (2011), Owl's behaviour and neural representation
> predicted by Bayesian inference, Nature Neuroscience 14 (8): 1061-1067). To
> find the direction a sound comes from, the auditory system has a
> field of neurons, each associated with a different angle. A first
> approximation would be to test which neuron has the highest activity: for
> example, if it is the neuron which is associated with 40 degrees, this
> gives the direction.
>
> However, a better approximation would be to multiply the activities of all
> the neurons with the associated angles and divide it by the total activity
> of all neurons.
>
> But the problem already appears in a simpler example. Here I have
> implemented two ways to divide two numbers, the first is direct division
> and the second is using the logarithm. Using the logarithm is better, but
> in both cases I can get crazy results, quite often also negative, depending
> on the radius of the ensembles. (At one point I used the LIFRate neurons to
> get rid of excessive fluctuations, but the results don't depend on the
> kind of neurons).
>
> The code for this example is:
>
> --------------------------------------
> import numpy as np
> import matplotlib.pyplot as plt
> import nengo
>
> model = nengo.Network(label='Log sums')
> with model:
>
>     input_1 = nengo.Node(output=10)
>     input_2 = nengo.Node(output=2)
>     v = nengo.Ensemble(50, dimensions=2, radius=12.0)
>     nengo.Connection(input_1, v[0])
>     nengo.Connection(input_2, v[1])
>
>     # Create a 2-D population representing the log of the numbers
>     def logf(x):
>         if x == 0:
>             return 0
>         else:
>             return  np.log(x)
>     vlog = nengo.Ensemble(50, dimensions=2, radius=8.0)
>     nengo.Connection(input_1, vlog[0], function=logf)
>     nengo.Connection(input_2, vlog[1], function=logf)
>
>     # compute the division directly
>     def quot(x):
>         if x[1] == 0:
>             return 0
>         else:
>             return  float(x[0])/float(x[1])
>     out = nengo.Ensemble(800, dimensions=1, radius=10)
>     nengo.Connection(v, out, function=quot, synapse=0.1)
>
>     #compute the division using the logarithm
>     def diff(x):
>         return  x[0]-x[1]
>
>     outlog = nengo.Ensemble(800, dimensions=1, radius=6,
>         neuron_type=nengo.neurons.LIFRate(tau_rc=0.01, tau_ref=0.002))
>     nengo.Connection(vlog, outlog, function=diff, synapse=0.1)
>
>     def exp(x):
>         return  np.exp(x)
>
>     out_exp = nengo.Ensemble(800, dimensions=1, radius=12)
>     nengo.Connection(outlog, out_exp, function=exp, synapse=0.1)
> -----------------------------------------
>
> Unfortunately I am departing on leave for 5 weeks from the end of this
> week. As I’m travelling, I won’t be able to work on the problem over the
> next couple of weeks, so please excuse the delay in my responses until my
> return.
>
> Thank you very much for your help, which I greatly appreciate.
>
> Best regards,
>
> Peer
>
>
>
>
>
> -------- Forwarded Message --------
>
> *Subject:* Re: [nengo-user] normalized average of a variable x over n
> ensembles of neurons
>
> *Date:* Tue, 9 Feb 2016 10:31:04 -0500
>
> *From:* Terry Stewart <terry.stewart at gmail.com>
>
> *To:* Peer Ueberholz <peer at ueberholz.de>
>
> *CC:* nengo-user at ctnsrv.uwaterloo.ca
>
> Hello Peer,
>
> Thank you for the question!  I'm not quite sure exactly what you're
> trying to compute here, though -- if the neurons are representing some
> vector x, then that means that their activity a_i should itself be a
> function of x.  And if that's the case, then I'm not quite sure what
> it would mean to compute a weighted average when the weighting factor
> a_i is itself a function of x.  Do you want to control a_i
> independently of x_i?  Or are they supposed to be so tightly coupled?
>
> Could you give a little more context of what you're trying to do here?
> What are the inputs and outputs that you want?  One possible thing
> that's coming to mind is that you've got two inputs: the vector x and
> the vector a, and you want to compute \sum_i^n a_i x_i / \sum_i^n a_i.
> This isn't quite what you asked, as I now have a_i and x_i as just two
> different things being represented by the neurons, rather than a_i
> being a direct measure of the activity of the neurons.  But if that's
> what you wanted, then this is how it could be done in nengo:
>
> ----------
> import nengo
> import numpy as np
>
> model = nengo.Network()
> with model:
>     D = 4
>     stim_x = nengo.Node([0]*D)     # the values to average
>     stim_a = nengo.Node([0.3]*D)   # the weights to apply when doing the average
>
>     ens = nengo.Ensemble(n_neurons=2000, dimensions=D*2)
>     nengo.Connection(stim_x, ens[:D])
>     nengo.Connection(stim_a, ens[D:])
>
>     result = nengo.Ensemble(n_neurons=50, dimensions=1)
>
>     def weighted_average(x):
>         a = x[D:]
>         x = x[:D]
>         return sum(a*x)/sum(a)
>
>     # this should give sample data that you want the neurons
>     # to be good at
>     def make_sample():
>         x = np.random.uniform(-1, 1, size=D)
>         a = np.random.uniform(0, 1, size=D)
>         return np.hstack([x, a])
>
>     nengo.Connection(ens, result, function=weighted_average,
>                      eval_points=[make_sample() for i in range(4000)])
> --------------
>
> One slightly uncommon thing that's being done in that code is the
> eval_points, which I'm using to make sure that nengo doesn't try to
> optimize the neural connections across the whole space, which would
> include *negative* values for a_i.  Trying to optimize over the whole
> space is often too difficult and unneeded, so we tend to just
> optimize over the part of the space that's needed (as given by the
> make_sample() function).
>
> Still, I'm not quite sure this solves your problem, but maybe it's a
> step in the right direction.  Let me know how I'm misinterpreting
> things and I can make another attempt at solving the problem!
>
> :)
> Terry
>
>
> On Mon, Feb 8, 2016 at 6:49 PM, Peer Ueberholz <peer at ueberholz.de> wrote:
>
> > Hi
> >
> > I want to compute a normalized average of a variable x over n ensembles of
> > neurons
> >
> > \bar{x} = \sum_i^n a_i x_i / \sum_i^n a_i
> >
> > where a_i is the activity of each ensemble, \sum_i^n a_i the sum over the
> > activities of the n ensembles, and x_i the value of a variable x assigned to
> > ensemble i.
> >
> > To implement the sum is possible, although not entirely straightforward.
> > However, the division seems to represent a more serious problem. A possible
> > solution that I've tried to avoid division is to use logarithms, but this
> > doesn't appear to help much either. Does anyone know a simple method to
> > compute this quantity in Nengo?
> >
> > Thanks,
> >
> > Peer
> >
> > _______________________________________________
> > nengo-user mailing list
> > nengo-user at ctnsrv.uwaterloo.ca
> > http://ctnsrv.uwaterloo.ca/mailman/listinfo/nengo-user
>