<div dir="ltr">Hello Peer,<div><br></div><div>Thank you for the context! That helps a lot. And, it turns out that with a couple of slightly counter-intuitive tweaks, this sort of model is quite nicely suited for Nengo.</div><div><br></div><div>The first thing we need is to make tuning curves that look like Figure 6a of (Fischer and Pena, 2011). These neurons have preferred directions that range between -100 and +100 degrees. Preferred directions are called "encoders" in Nengo/NEF. However, because Nengo allows you to generalize the idea of preferred directions up to hundreds of dimensions, we need to be explicit about exactly how we want to represent an angle.</div><div><br></div><div>The easiest way is to think of the group of neurons (the optic tectum) as representing a two-dimensional space, where each neuron has some preferred direction in that space. By default, Nengo will randomly distribute the neurons' encoders around that 2-dimensional space, although we can control that if we want to (e.g. we may want to distribute neurons more densely in the middle of the space).</div><div><br></div><div>So, the code for making the optic_tectum is:</div><div><br></div><div>    optic_tectum = nengo.Ensemble(n_neurons=500, dimensions=2)<br></div><div><br></div><div>Now, however, we want to provide input to that set of neurons. We want that input to be an angle, and we want that angle to stimulate the neurons in a consistent way (i.e. when the input is an angle of 0, the neurons with preferred direction vectors near 0 should be stimulated).
We can do this with one Node for the input angle, and one Node to do the conversion into the represented space:</div><div><br></div><div>    stim_angle = nengo.Node([0])<br></div><div><div> </div><div>    import numpy as np</div><div>    def convert_angle(t, x):</div><div>        return np.sin(x[0]), np.cos(x[0])</div><div>    angle_to_pt = nengo.Node(convert_angle, size_in=1)</div><div>    nengo.Connection(stim_angle, angle_to_pt, synapse=None)</div></div><div><br></div><div>Now we connect that input into the optic_tectum:</div><div><br></div><div>    nengo.Connection(angle_to_pt, optic_tectum)<br></div><div><br></div><div>You now have neurons with tuning curves in different directions. If you plot the spikes from the optic_tectum as you change the input stim_angle, you should see this behaviour. Also, it's important to note that these neurons in Nengo have much more variety in their tuning curves (heights and widths) than the perfectly regular tuning curves shown in Figure 6a. We tend to keep a large degree of variability in our neurons. You can control the width with the intercepts parameter, if you're interested in doing that.</div><div><br></div><div>So all of that above was just to get the stimulus and neuron parameters set up right to give the sorts of tuning curves we want in the optic tectum. Now we come to the part where we want to decode out from these tuning curves what the currently represented angle is. In (Fischer and Pena, 2011), they do this: "The population vector is obtained
by averaging the preferred direction vectors of neurons in the population,
weighted by the firing rates of the neurons". This is one way of decoding neural activity, but it's not the only way.</div><div><br></div><div>In Nengo, when you ask it to compute a function, it finds the optimal way of weighting the outputs of neural activity to approximate that function. Furthermore, it does this with a fixed set of linear weights -- i.e. it does not require that division step in the weighted average. So let's try it the standard Nengo way, and then compare that to the weighted average approach.</div><div><br></div><div>To do it the standard Nengo way, we define a function that maps from the represented 2-D space to the angle:</div><div><br></div><div><div>    decoded_angle = nengo.Node(None, size_in=1)  # a place to store the result</div><div> </div><div>    # a function to map from 2-D space to an angle</div><div>    def decode_angle(x):</div><div>        return np.arctan2(x[0], x[1])</div><div> </div><div>    # make the connection</div><div>    nengo.Connection(optic_tectum, decoded_angle, function=decode_angle)<br></div></div><div><br></div><div>This tells Nengo to find the optimal set of connection weights from the neurons to decode out the angle. That is, it finds the vector d such that sum(a_i * d_i) best approximates the input angle.</div><div><br></div><div>If you try running this, it works okay, but we can make it do better. In particular, right now Nengo is trying to approximate the function across the whole 2-D space, but we only need it to be good at a particular range of points.
If we tell Nengo to just focus on those points, it gets much better:</div><div><br></div><div><div>    decoded_angle = nengo.Node(None, size_in=1)</div><div> </div><div>    def decode_angle(x):</div><div>        return np.arctan2(x[0], x[1])</div><div><br></div><div>    # define the set of points</div><div><div>    def make_pt():</div><div>        theta = np.random.uniform(-2, 2)</div><div>        return [np.sin(theta), np.cos(theta)]</div></div><div>    pts = [make_pt() for i in range(1000)]<br></div><div> </div><div>    nengo.Connection(optic_tectum, decoded_angle, function=decode_angle, eval_points=pts)</div></div><div><br></div><div>If you run this and plot the decoded_angle you'll see it very closely follows the stim_angle.</div><div><br></div><div>However, it'd be good to also do the weighted average approach, so we can compare the two. Doing that requires us to know the angles for all the neurons, and do a weighted average of those angles (weighted by activity). To do that, we explicitly define the angles for the neurons, and then do the math in a Node:</div><div><br></div><div>    N = 500</div><div><br></div><div><div>    def make_pt():</div><div>        theta = np.random.uniform(-2, 2)</div><div>        return [np.sin(theta), np.cos(theta)]</div><div> </div><div>    # generate random preferred direction vectors</div><div>    encoders = np.array([make_pt() for i in range(N)])</div><div> </div></div><div><div>    optic_tectum = nengo.Ensemble(n_neurons=N, dimensions=2, encoders=encoders)</div></div><div><br></div><div>    # compute the angle for each preferred direction vector</div><div><div>    angles = np.arctan2(encoders[:,0], encoders[:,1])</div><div><br></div><div>    # compute the weighted average</div><div>    def weighted_average(t, a):</div><div>        total = np.sum(a)</div><div>        if total == 0:</div><div>            return 0</div><div>        return np.sum(a*angles) / total</div><div> </div><div>    computed = nengo.Node(weighted_average, size_in=N)</div><div>    nengo.Connection(optic_tectum.neurons, computed, 
synapse=None)</div></div><div><br></div><div>Now we can plot "computed" (the approach used in (Fischer and Pena, 2011)) and compare it to "decoded_angle" (the default approach used in Nengo). In this case, the Nengo approach is more accurate, and it doesn't require any division! </div><div><br></div><div>Here's a script that should let you directly compare the two approaches:</div><div><br></div><div>-------------------</div><div><div>import nengo</div><div>import numpy as np</div><div><br></div><div>N = 500  # the number of neurons</div><div><br></div><div>model = nengo.Network()</div><div>with model:</div><div>    stim_angle = nengo.Node([0])  # the input angle</div><div> </div><div>    # convert the angle into a 2-D space</div><div>    def convert_angle(t, x):</div><div>        return np.sin(x[0]), np.cos(x[0])</div><div>    angle_to_pt = nengo.Node(convert_angle, size_in=1)</div><div>    nengo.Connection(stim_angle, angle_to_pt, synapse=None)</div><div> </div><div>    # make a point in 2-D space that is at a random angle</div><div>    def make_pt():</div><div>        theta = np.random.uniform(-2, 2)</div><div>        return [np.sin(theta), np.cos(theta)]</div><div> </div><div>    # the preferred direction vectors for the neurons</div><div>    encoders = np.array([make_pt() for i in range(N)])</div><div> </div><div>    # create the group of neurons</div><div>    optic_tectum = nengo.Ensemble(n_neurons=N, dimensions=2, encoders=encoders)</div><div> </div><div>    nengo.Connection(angle_to_pt, optic_tectum)</div><div> </div><div> </div><div>    ### Standard Nengo/NEF Approach</div><div> </div><div>    # decode out the angle in the optimal Nengo/NEF approach</div><div>    decoded_angle = nengo.Node(None, size_in=1)</div><div> </div><div>    # function that the neural connections should approximate</div><div>    def decode_angle(x):</div><div>        return np.arctan2(x[0], x[1])</div><div> </div><div>    # define the set of values over which the approximation should be good</div><div>    pts = [make_pt() for i in range(1000)]</div><div> </div><div>    
nengo.Connection(optic_tectum, decoded_angle, function=decode_angle, eval_points=pts)</div><div> </div><div><br></div><div>    ### Weighted average approach</div><div><br></div><div>    # determine the angles for each neuron</div><div>    angles = np.arctan2(encoders[:,0], encoders[:,1])</div><div>    # compute the weighted average</div><div>    def weighted_average(t, a):</div><div>        total = np.sum(a)</div><div>        if total == 0:</div><div>            return 0</div><div>        return np.sum(a*angles) / total</div><div>    computed = nengo.Node(weighted_average, size_in=N)</div><div> </div><div>    nengo.Connection(optic_tectum.neurons, computed, synapse=None)</div><div> </div><div>-------------------</div></div><div><br></div><div>So, the main differences between the Nengo/NEF approach and the weighted average approach are:</div><div> - The Nengo/NEF approach gives a more accurate result</div><div> - The Nengo/NEF approach handles variability in the tuning curves</div><div> - The Nengo/NEF approach does not require division (which is difficult to justify biologically)</div><div><br></div><div>In any case, while it's certainly possible to do the weighted average approach (using that "computed" Node defined above) in Nengo, we tend not to. But it'd be interesting to do a more rigorous direct comparison in this case.</div><div><br></div><div>Notice also that, right now, the neurons are being optimized to just figure out what the input angle is. If you also want the decoding to take into account some sort of Bayesian prior, all you have to do is put that into the decode_angle function. This lets you implement a wide variety of possible priors.</div><div><br></div><div><br></div><div>Does that help? What I've presented here is definitely a very different way of thinking about things than is taken in (Fischer and Pena, 2011). But hopefully I've shown a) how to implement their approach in Nengo, and b) that there are other ways of decoding information that are a little bit more flexible.
</div><div><br></div><div>Let me know if that helps, and thank you again for the question!</div><div><br></div><div>Terry</div><div><br></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Feb 10, 2016 at 6:33 PM, Peer Ueberholz <span dir="ltr"><<a href="mailto:peer@ueberholz.de" target="_blank">peer@ueberholz.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div lang="EN-AU" text="#000000" link="blue" vlink="purple" bgcolor="white">
<div><tt><br>
</tt>
<tt><span style="font-size:10.0pt">Hi Terry,</span></tt><tt><span style="font-size:10pt"><br>
<br>
Thank you very much for the fast response. You are right, I
should have explained the problem more carefully! However, I'm
a new Nengo user and don't have much experience.
<br>
<br>
What I want to do is similar to the simulation Brian Fischer
did about 10 years ago, that is to simulate the auditory
system of a barn owl (B.J. Fischer, and J.L. Pena, (2011)
Owl’s behaviour and neural representation predicted by
Bayesian inference, Nature Neuroscience 14 (8): 1061-1067). To
find the direction from which a sound comes from, the auditory
system has a field of neurons, each associated with a
different angle. A first approximation would be to test which
neuron has the highest activity: for example, if it is the
neuron which is associated with 40 degrees, this gives the
direction.
<br>
<br>
However, a better approximation would be to multiply the
activities of all the neurons with the associated angles and
divide it by the total activity of all neurons.
<br>
<br>
But the problem already appears in a simpler example. Here I
have implemented two ways to divide two numbers, the first is
direct division and the second is using the logarithm. Using
the logarithm is better, but in both cases I can get crazy
results, quite often also negative, depending on the radius of
the ensembles. (At one point I used the LIFRate neurons to get
rid of too many fluctuations, but the results don't depend
on the kind of neurons).
<br>
<br>
The code for this example is:<br>
<br>
--------------------------------------<br>
import numpy as np<br>
import matplotlib.pyplot as plt<br>
import nengo<br>
<br>
model = nengo.Network(label='Log sums')<br>
with model:<br>
<br>
input_1 = nengo.Node(output=10)<br>
input_2 = nengo.Node(output=2)<br>
v = nengo.Ensemble(50, dimensions=2, radius=12.0)<br>
nengo.Connection(input_1, v[0])<br>
nengo.Connection(input_2, v[1])<br>
<br>
# Create a 2-population with representing the log of the
numbers<br>
def logf(x):<br>
if x == 0:<br>
return 0<br>
else:<br>
return np.log(x)<br>
vlog = nengo.Ensemble(50, dimensions=2, radius=8.0)<br>
nengo.Connection(input_1, vlog[0],function=logf)<br>
nengo.Connection(input_2, vlog[1],function=logf)<br>
<br>
# compute the division directly<br>
def quot(x):<br>
if x[1] == 0:<br>
return 0<br>
else:<br>
return float(x[0])/float(x[1])<br>
out = nengo.Ensemble(800,dimensions=1,radius=10)<br>
nengo.Connection(v,out,function=quot,synapse=0.1)<br>
<br>
#compute the division using the logarithm<br>
def diff(x):<br>
return x[0]-x[1]<br>
<br>
outlog =
nengo.Ensemble(800,dimensions=1,neuron_type=nengo.neurons.LIFRate(tau_rc=0.01,
tau_ref=0.002),radius=6)<br>
nengo.Connection(vlog,outlog,function=diff,synapse=0.1)<br>
<br>
def exp(x):<br>
return np.exp(x)<br>
<br>
out_exp = nengo.Ensemble(800,dimensions=1,radius=12)<br>
nengo.Connection(outlog,out_exp,function=exp,synapse=0.1)<br>
-----------------------------------------<br>
<br>
Unfortunately I am departing on leave for 5 weeks from the end
of this week. As I’m travelling, I won’t be able to work on
the problem over the next couple of weeks, so please excuse
the delay in my responses until my return.<br>
<br>
Thank you very much for your help, which I greatly appreciate.<br>
<br>
Best regards,<br>
<br>
Peer</span></tt><tt><u></u><u></u></tt><br>
<tt>
</tt>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
<br>
<p class="MsoNormal"><span style="font-size:10.0pt;font-family:"Courier New"">
</span><br>
<br>
-------- Forwarded Message -------- <u></u><u></u></p>
<div>
<table border="0" cellpadding="0" cellspacing="0">
<tbody>
<tr>
<td style="padding:0cm 0cm 0cm 0cm" nowrap valign="top">
<p class="MsoNormal" style="text-align:right" align="right"><b>Subject: <u></u><u></u></b></p>
</td>
<td style="padding:0cm 0cm 0cm 0cm">
<p class="MsoNormal">Re: [nengo-user] normalized average
of a variable x over n ensembles of neurons<u></u><u></u></p>
</td>
</tr>
<tr>
<td style="padding:0cm 0cm 0cm 0cm" nowrap valign="top">
<p class="MsoNormal" style="text-align:right" align="right"><b>Date: <u></u><u></u></b></p>
</td>
<td style="padding:0cm 0cm 0cm 0cm">
<p class="MsoNormal">Tue, 9 Feb 2016 10:31:04 -0500<u></u><u></u></p>
</td>
</tr>
<tr>
<td style="padding:0cm 0cm 0cm 0cm" nowrap valign="top">
<p class="MsoNormal" style="text-align:right" align="right"><b>From: <u></u><u></u></b></p>
</td>
<td style="padding:0cm 0cm 0cm 0cm">
<p class="MsoNormal">Terry Stewart <a href="mailto:terry.stewart@gmail.com" target="_blank"></a><a href="mailto:terry.stewart@gmail.com" target="_blank"><terry.stewart@gmail.com></a><u></u><u></u></p>
</td>
</tr>
<tr>
<td style="padding:0cm 0cm 0cm 0cm" nowrap valign="top">
<p class="MsoNormal" style="text-align:right" align="right"><b>To: <u></u><u></u></b></p>
</td>
<td style="padding:0cm 0cm 0cm 0cm">
<p class="MsoNormal">Peer Ueberholz <a href="mailto:peer@ueberholz.de" target="_blank"></a><a href="mailto:peer@ueberholz.de" target="_blank"><peer@ueberholz.de></a><u></u><u></u></p>
</td>
</tr>
<tr>
<td style="padding:0cm 0cm 0cm 0cm" nowrap valign="top">
<p class="MsoNormal" style="text-align:right" align="right"><b>CC: <u></u><u></u></b></p>
</td>
<td style="padding:0cm 0cm 0cm 0cm">
<p class="MsoNormal"><a href="mailto:nengo-user@ctnsrv.uwaterloo.ca" target="_blank"></a><a href="mailto:nengo-user@ctnsrv.uwaterloo.ca" target="_blank">nengo-user@ctnsrv.uwaterloo.ca</a>
<a href="mailto:nengo-user@ctnsrv.uwaterloo.ca" target="_blank"><nengo-user@ctnsrv.uwaterloo.ca></a><u></u><u></u></p>
</td>
</tr>
</tbody>
</table>
<p class="MsoNormal" style="margin-bottom:12.0pt"><u></u> <u></u></p>
<pre>Hello Peer,<u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<pre>Thank you for the question! I'm not quite sure exactly what you're<u></u><u></u></pre>
<pre>trying to compute here, though -- if the neurons are representing some<u></u><u></u></pre>
<pre>vector x, then that means that their activity a_i should itself be a<u></u><u></u></pre>
<pre>function of x. And if that's the case, then I'm not quite sure what<u></u><u></u></pre>
<pre>it would mean to compute a weighted average when the weighting factor<u></u><u></u></pre>
<pre>a_i is itself a function of x. Do you want to control a_i<u></u><u></u></pre>
<pre>independently of x_i? Or are they supposed to be so tightly coupled?<u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<pre>Could you give a little more context of what you're trying to do here?<u></u><u></u></pre>
<pre> What are the inputs and outputs that you want? One possible thing<u></u><u></u></pre>
<pre>that's coming to mind is that you've got two inputs: the vector x and<u></u><u></u></pre>
<pre>the vector a, and you want to compute \sum_i^n a_i x_i / \sum_1^n a_i.<u></u><u></u></pre>
<pre>This isn't quite what you asked, as I now have a_i and x_i as just two<u></u><u></u></pre>
<pre>different things being represented by the neurons, rather than a_i<u></u><u></u></pre>
<pre>being a direct measure of the activity of the neurons. But if that's<u></u><u></u></pre>
<pre>what you wanted, then this is how it could be done in nengo:<u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<pre>----------<u></u><u></u></pre>
<pre>import nengo<u></u><u></u></pre>
<pre>import numpy as np<u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<pre>model = nengo.Network()<u></u><u></u></pre>
<pre>with model:<u></u><u></u></pre>
<pre> D = 4<u></u><u></u></pre>
<pre> stim_x = nengo.Node([0]*D) # the values to average<u></u><u></u></pre>
<pre> stim_a = nengo.Node([0.3]*D) # the weights to apply when doing the average<u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<pre> ens = nengo.Ensemble(n_neurons=2000, dimensions=D*2)<u></u><u></u></pre>
<pre> nengo.Connection(stim_x, ens[:D])<u></u><u></u></pre>
<pre> nengo.Connection(stim_a, ens[D:])<u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<pre> result = nengo.Ensemble(n_neurons=50, dimensions=1)<u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<pre> def weighted_average(x):<u></u><u></u></pre>
<pre> a = x[D:]<u></u><u></u></pre>
<pre> x = x[:D]<u></u><u></u></pre>
<pre> return sum(a*x)/sum(a)<u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<pre> # this should give sample data that you want the neurons<u></u><u></u></pre>
<pre> # to be good at<u></u><u></u></pre>
<pre> def make_sample():<u></u><u></u></pre>
<pre> x = np.random.uniform(-1,1,size=D)<u></u><u></u></pre>
<pre> a = np.random.uniform(0,1,size=D)<u></u><u></u></pre>
<pre> return np.hstack([x, a])<u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<pre> nengo.Connection(ens, result, function=weighted_average,<u></u><u></u></pre>
<pre> eval_points=[make_sample() for i in range(4000)])<u></u><u></u></pre>
<pre> --------------<u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<pre>One slightly uncommon thing that's being done in that code is the<u></u><u></u></pre>
<pre>eval_points, which I'm using to make sure that nengo doesn't try to<u></u><u></u></pre>
<pre>optimize the neural connections across the whole space, which would<u></u><u></u></pre>
<pre>include *negative* values for a_i. Trying to optimize over the whole<u></u><u></u></pre>
<pre>space is often too difficult and unneeded, so we tend to try to just<u></u><u></u></pre>
<pre>optimize over the part of the space that's needed (as given by the<u></u><u></u></pre>
<pre>make_sample() function).<u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<pre>Still, I'm not quite sure this solves your problem, but maybe it's a<u></u><u></u></pre>
<pre>step in the right direction. Let me know how I'm misinterpreting<u></u><u></u></pre>
<pre>things and I can make another attempt at solving the problem!<u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<pre>:)<u></u><u></u></pre>
<pre>Terry<u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<pre><u></u> <u></u></pre>
<pre><u></u> <u></u></pre>
<pre><u></u> <u></u></pre>
<pre>On Mon, Feb 8, 2016 at 6:49 PM, Peer Ueberholz <a href="mailto:peer@ueberholz.de" target="_blank"><peer@ueberholz.de></a> wrote:<u></u><u></u></pre>
<pre>> Hi<u></u><u></u></pre>
<pre>><u></u> <u></u></pre>
<pre>> I want to compute a normalized average of a variable x over n ensembles of<u></u><u></u></pre>
<pre>> neurons<u></u><u></u></pre>
<pre>><u></u> <u></u></pre>
<pre>> \bar{x} = \sum_i^n a_i x_i / \sum_1^n a_i<u></u><u></u></pre>
<pre>><u></u> <u></u></pre>
<pre>> where a_i is the activity of each ensemble, \sum_1^n a_i the sum over the<u></u><u></u></pre>
<pre>> activities of the n ensembles and x_i the value of a variable x assigned to<u></u><u></u></pre>
<pre>> ensemble i.<u></u><u></u></pre>
<pre>><u></u> <u></u></pre>
<pre>> To implement the sum is possible, although not entirely straightforward.<u></u><u></u></pre>
<pre>> However, the division seems to represent a more serious problem. A possible<u></u><u></u></pre>
<pre>> solution that I've tried is to avoid division is to use logarithms, but this<u></u><u></u></pre>
<pre>> doesn't appear to help much either. Does anyone know a simple method to<u></u><u></u></pre>
<pre>> compute this quantity in Nengo?<u></u><u></u></pre>
<pre>><u></u> <u></u></pre>
<pre>><u></u> <u></u></pre>
<pre>> Thanks,<u></u><u></u></pre>
<pre>><u></u> <u></u></pre>
<pre>> Peer<u></u><u></u></pre>
<pre>><u></u> <u></u></pre>
<pre>> _______________________________________________<u></u><u></u></pre>
<pre>> nengo-user mailing list<u></u><u></u></pre>
<pre>> <a href="mailto:nengo-user@ctnsrv.uwaterloo.ca" target="_blank">nengo-user@ctnsrv.uwaterloo.ca</a><u></u><u></u></pre>
<pre>> <a href="http://ctnsrv.uwaterloo.ca/mailman/listinfo/nengo-user" target="_blank">http://ctnsrv.uwaterloo.ca/mailman/listinfo/nengo-user</a><u></u><u></u></pre>
<pre><u></u> <u></u></pre>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
</div>
</blockquote></div><br></div>