[nengo-user] Answer from and to the Nengo group
Peer Ueberholz
peer at ueberholz.de
Wed Feb 10 18:33:20 EST 2016
Hi Terry,
Thank you very much for the fast response. You are right, I should have
explained the problem more carefully! However, I'm a new Nengo user and
don't have much experience.
What I want to do is similar to the simulation Brian Fischer did about
10 years ago, that is, to simulate the auditory system of a barn owl
(B.J. Fischer and J.L. Pena (2011), Owl's behaviour and neural
representation predicted by Bayesian inference, Nature Neuroscience 14
(8): 1061-1067). To find the direction a sound comes from, the auditory
system has a field of neurons, each associated with a different angle.
A first approximation would be to test which neuron has the highest
activity: for example, if it is the neuron associated with 40 degrees,
this gives the direction.
However, a better approximation would be to multiply the activities of
all the neurons by their associated angles and divide by the total
activity of all neurons.
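To illustrate the two read-outs outside of Nengo, here is a plain-numpy sketch with made-up tuning curves and activities (the angles and widths are hypothetical, not the owl data):

```python
import numpy as np

# hypothetical field of 9 neurons tuned to angles -80..80 degrees (20-degree steps)
angles = np.linspace(-80.0, 80.0, 9)

# example activities peaking at the neuron tuned to 40 degrees
a = np.exp(-0.5 * ((angles - 40.0) / 20.0) ** 2)

# first approximation: angle of the most active neuron
winner = angles[np.argmax(a)]

# better approximation: activity-weighted average of the angles
estimate = np.sum(a * angles) / np.sum(a)
```

The winner-take-all read-out gives exactly 40 degrees here, while the weighted average lands slightly below it because the field of angles is truncated asymmetrically around the peak.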
But the problem already appears in a simpler example. Here I have
implemented two ways to divide two numbers: the first is direct division,
and the second uses the logarithm. Using the logarithm is better, but in
both cases I can get wildly wrong results, quite often also negative,
depending on the radius of the ensembles. (At one point I used LIFRate
neurons to reduce the fluctuations, but the results don't depend on the
kind of neurons.)
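For reference, the identity behind the second approach is a / b = exp(log a - log b) for positive a and b; a quick plain-numpy check, independent of any neural model:

```python
import numpy as np

a, b = 10.0, 2.0
direct = a / b                              # ordinary division
via_log = np.exp(np.log(a) - np.log(b))     # division via logarithms

# both equal 5.0; the intermediate log values (about 2.30 and 0.69)
# stay in a much smaller range than the raw inputs, which suits an
# ensemble with a fixed radius
```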
The code for this example is:
--------------------------------------
import numpy as np
import matplotlib.pyplot as plt

import nengo

model = nengo.Network(label='Log sums')
with model:
    input_1 = nengo.Node(output=10)
    input_2 = nengo.Node(output=2)

    v = nengo.Ensemble(50, dimensions=2, radius=12.0)
    nengo.Connection(input_1, v[0])
    nengo.Connection(input_2, v[1])

    # create a 2-D population representing the logs of the two numbers
    def logf(x):
        if x == 0:
            return 0
        else:
            return np.log(x)

    vlog = nengo.Ensemble(50, dimensions=2, radius=8.0)
    nengo.Connection(input_1, vlog[0], function=logf)
    nengo.Connection(input_2, vlog[1], function=logf)

    # compute the division directly
    def quot(x):
        if x[1] == 0:
            return 0
        else:
            return float(x[0]) / float(x[1])

    out = nengo.Ensemble(800, dimensions=1, radius=10)
    nengo.Connection(v, out, function=quot, synapse=0.1)

    # compute the division using the logarithm
    def diff(x):
        return x[0] - x[1]

    outlog = nengo.Ensemble(800, dimensions=1,
                            neuron_type=nengo.neurons.LIFRate(tau_rc=0.01,
                                                              tau_ref=0.002),
                            radius=6)
    nengo.Connection(vlog, outlog, function=diff, synapse=0.1)

    def exp(x):
        return np.exp(x)

    out_exp = nengo.Ensemble(800, dimensions=1, radius=12)
    nengo.Connection(outlog, out_exp, function=exp, synapse=0.1)
-----------------------------------------
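As an aside, a guarded version of quot (a plain-Python sketch, not yet part of the model above) would avoid the singularity at x[1] = 0 and keep the output within the radius of the out ensemble:

```python
import numpy as np

def quot_clipped(x, radius=10.0):
    # guard the singularity at x[1] == 0 and keep the output inside
    # the representable range of the downstream ensemble
    if abs(x[1]) < 1e-3:
        return 0.0
    return float(np.clip(x[0] / x[1], -radius, radius))
```

With this version, quot_clipped([10.0, 2.0]) still gives 5.0, while an input like [10.0, 0.1], whose true quotient of 100 lies far outside the radius, is clipped to 10.0 instead of forcing the decoder to chase an unrepresentable target.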
Unfortunately I am departing on leave for 5 weeks from the end of this
week. As I’m travelling, I won’t be able to work on the problem over the
next couple of weeks, so please excuse the delay in my responses until
my return.
Thank you very much for your help, which I greatly appreciate.
Best regards,
Peer
-------- Forwarded Message --------
Subject: Re: [nengo-user] normalized average of a variable x over n ensembles of neurons
Date: Tue, 9 Feb 2016 10:31:04 -0500
From: Terry Stewart <terry.stewart at gmail.com>
To: Peer Ueberholz <peer at ueberholz.de>
CC: nengo-user at ctnsrv.uwaterloo.ca
Hello Peer,
Thank you for the question! I'm not quite sure exactly what you're
trying to compute here, though -- if the neurons are representing some
vector x, then that means that their activity a_i should itself be a
function of x. And if that's the case, then I'm not quite sure what
it would mean to compute a weighted average when the weighting factor
a_i is itself a function of x. Do you want to control a_i
independently of x_i? Or are they supposed to be so tightly coupled?
Could you give a little more context of what you're trying to do here?
What are the inputs and outputs that you want? One possible thing
that's coming to mind is that you've got two inputs: the vector x and
the vector a, and you want to compute \sum_{i=1}^n a_i x_i / \sum_{i=1}^n a_i.
This isn't quite what you asked, as I now have a_i and x_i as just two
different things being represented by the neurons, rather than a_i
being a direct measure of the activity of the neurons. But if that's
what you wanted, then this is how it could be done in nengo:
----------
import nengo
import numpy as np

model = nengo.Network()
with model:
    D = 4
    stim_x = nengo.Node([0]*D)    # the values to average
    stim_a = nengo.Node([0.3]*D)  # the weights to apply when doing the average

    ens = nengo.Ensemble(n_neurons=2000, dimensions=D*2)
    nengo.Connection(stim_x, ens[:D])
    nengo.Connection(stim_a, ens[D:])

    result = nengo.Ensemble(n_neurons=50, dimensions=1)

    def weighted_average(x):
        a = x[D:]
        x = x[:D]
        return sum(a*x)/sum(a)

    # this should give sample data that you want the neurons
    # to be good at
    def make_sample():
        x = np.random.uniform(-1, 1, size=D)
        a = np.random.uniform(0, 1, size=D)
        return np.hstack([x, a])

    nengo.Connection(ens, result, function=weighted_average,
                     eval_points=[make_sample() for i in range(4000)])
--------------
One slightly uncommon thing that's being done in that code is the
eval_points, which I'm using to make sure that nengo doesn't try to
optimize the neural connections across the whole space, which would
include *negative* values for a_i. Trying to optimize over the whole
space is often too difficult and unneeded, so we tend to try to just
optimize over the part of the space that's needed (as given by the
make_sample() function).
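To see why this matters, here is a from-scratch numpy sketch of the decoder optimization with toy rectified-linear tuning curves (an illustration of the idea, not Nengo's actual internals): solving the least-squares problem only over the region you care about gives a better fit there than solving over the whole space.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy rate neurons: a_i(x) = max(0, g_i * (e_i * x + b_i))
n = 50
gains = rng.uniform(0.5, 2.0, n)
encoders = rng.choice([-1.0, 1.0], n)
biases = rng.uniform(-1.0, 1.0, n)

def activities(x):
    # map sample points x to a (len(x), n) matrix of firing rates
    return np.maximum(0.0, gains * (np.outer(x, encoders) + biases))

def solve_decoders(samples, f):
    # least-squares fit so that activities(samples) @ d approximates f(samples)
    d, *_ = np.linalg.lstsq(activities(samples), f(samples), rcond=None)
    return d

f = lambda x: x ** 2
region = np.linspace(0.0, 1.0, 200)    # the part of the space we care about

d_full = solve_decoders(np.linspace(-1.0, 1.0, 1000), f)  # optimize everywhere
d_part = solve_decoders(np.linspace(0.0, 1.0, 1000), f)   # optimize on [0, 1]

err_full = np.mean((activities(region) @ d_full - f(region)) ** 2)
err_part = np.mean((activities(region) @ d_part - f(region)) ** 2)
# err_part <= err_full: restricting the eval points specializes the decoders
# for the region where the inputs actually live
```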
Still, I'm not quite sure this solves your problem, but maybe it's a
step in the right direction. Let me know how I'm misinterpreting
things and I can make another attempt at solving the problem!
:)
Terry
On Mon, Feb 8, 2016 at 6:49 PM, Peer Ueberholz <peer at ueberholz.de> wrote:
> Hi
>
> I want to compute a normalized average of a variable x over n ensembles of
> neurons
>
> \bar{x} = \sum_{i=1}^n a_i x_i / \sum_{i=1}^n a_i
>
> where a_i is the activity of each ensemble, \sum_{i=1}^n a_i the sum over the
> activities of the n ensembles, and x_i the value of a variable x assigned to
> ensemble i.
>
> To implement the sum is possible, although not entirely straightforward.
> However, the division seems to pose a more serious problem. A possible
> way I've tried to avoid the division is to use logarithms, but this
> doesn't appear to help much either. Does anyone know a simple method to
> compute this quantity in Nengo?
>
>
> Thanks,
>
> Peer
>
> _______________________________________________
> nengo-user mailing list
> nengo-user at ctnsrv.uwaterloo.ca
> http://ctnsrv.uwaterloo.ca/mailman/listinfo/nengo-user