From lmichaels at mail.usf.edu  Sat Jul 29 09:55:29 2017
From: lmichaels at mail.usf.edu (Laurence Michaels)
Date: Sat, 29 Jul 2017 09:55:29 -0400
Subject: [nengo-user] Getting/interpreting neural weights
Message-ID:

Hi All,

This is Larry. I did the coin-flipping task at this summer's brain camp (which I loved). I'm still working on my project and have confirmed that my model consistently underestimates Alternations at a Probability of Alternation around 0.6. (This is the same region where people have a bias and underestimate as well.)

Now I want to figure out why. My current theory is that the learning process is not stable, i.e. does not converge, under this substantial uncertainty. To investigate, I've created a probe to look at the neural weights. I have a two-dimensional input ensemble (200 neurons) connected via PES learning to a one-dimensional prediction ensemble (100 neurons). When I use a probe on the connection to get the weights, the probed data has 3 dimensions:

- dimension 0 is time during learning; the range of this index depends on the 'sample_every' parameter
- dimension 1 only has a range of 1 (value = 0)
- dimension 2 has a range of 200 (the size of my input ensemble)

I expected the weights to have 3 dimensions of (time, 200 for the input ensemble, 100 for the output ensemble), so I seem to be misunderstanding something. What are the definitions of the dimensions? Any thoughts or advice?
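In case it helps, the relevant part of my model looks roughly like the sketch below (the names, the input and error nodes, the learning rate, and the probe interval are simplified placeholders rather than my exact script):

import nengo

with nengo.Network() as model:
    # Stand-ins for the real signals in my model (placeholders only)
    stim = nengo.Node([0.0, 0.0])       # 2-D input signal
    error = nengo.Node(size_in=1)       # prediction error, computed elsewhere in the real model

    inp = nengo.Ensemble(n_neurons=200, dimensions=2)    # 2-D input ensemble
    pred = nengo.Ensemble(n_neurons=100, dimensions=1)   # 1-D prediction ensemble

    nengo.Connection(stim, inp)
    conn = nengo.Connection(inp, pred,
                            function=lambda x: 0.0,            # start from a zero prediction
                            learning_rule_type=nengo.PES())
    nengo.Connection(error, conn.learning_rule)                # error signal drives PES

    # Probe the connection's weights during learning
    weight_probe = nengo.Probe(conn, 'weights', sample_every=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)

# Prints (n_samples, 1, 200) -- I expected something like (n_samples, 200, 100)
print(sim.data[weight_probe].shape)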
Best,

Larry

From tbekolay at gmail.com  Sat Jul 29 10:03:46 2017
From: tbekolay at gmail.com (Trevor Bekolay)
Date: Sat, 29 Jul 2017 10:03:46 -0400
Subject: [nengo-user] Getting/interpreting neural weights
In-Reply-To:
References:
Message-ID:

Hey Larry! Good to hear from you again :)

Could you post your question on the Nengo forum at https://forum.nengo.ai/ ? We're in the process of phasing out this mailing list so that this kind of info is easy to google.

- Trevor