<div dir="ltr">
<p>Hi All,</p>
<p>This is Larry. I did the coin-flipping task at this summer's brain camp (which I loved). I'm still working on my project, and I've confirmed that my model consistently underestimates Alternations at a Probability of Alternation around 0.6. (This is the same region where people show a bias and underestimate it as well.)</p>
<p>Now I want to figure out why. My current theory is that the learning process is not stable (does not converge) under this much uncertainty. To investigate, I've added a probe to inspect the neural weights. I have a two-dimensional input ensemble (200 neurons) connected via PES learning to a one-dimensional prediction ensemble (100 neurons). When I probe the connection to get the weights, the probed data has 3 dimensions:</p>
<p>- dimension 0 is time during learning; the range of this index depends on the 'sample_every' parameter.</p>
<p>- dimension 1 has a range of only 1 (value = 0).</p>
<p>- dimension 2 has a range of 200 (the size of my input ensemble).</p>
<p>I expected the weights to have 3 dimensions of (time, 200 for the input ensemble, 100 for the output ensemble), so I seem to be misunderstanding something. What are the definitions of these dimensions? Any thoughts or advice?</p>
<p>Best,</p>
<p>Larry</p>
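For context, here is a minimal NumPy sketch (not Nengo's actual internals; the learning rate and activity values are placeholders) of the PES-style decoder update. PES adapts *decoders* rather than a full neuron-to-neuron weight matrix: the learned array maps pre-ensemble activities (200 neurons) directly into the connection's output space (1 dimension), giving shape (size_out, n_neurons) = (1, 200), which is consistent with the probed dimensions described above.

```python
import numpy as np

# Illustrative PES-style decoder update (an assumption-laden sketch,
# not Nengo source code). The learned quantity has shape
# (size_out, n_neurons) = (1, 200), so sampling it over time yields
# an array of shape (timesteps, 1, 200) -- matching the probe.

rng = np.random.default_rng(0)

n_neurons = 200   # pre (input) ensemble neurons
size_out = 1      # dimensionality of the prediction
kappa = 1e-4      # learning rate (placeholder value)

decoders = np.zeros((size_out, n_neurons))

n_steps = 50
history = np.empty((n_steps, size_out, n_neurons))  # probe-like record

for t in range(n_steps):
    activities = rng.random(n_neurons)  # stand-in for neuron activities
    target = 0.6                        # stand-in target value
    decoded = decoders @ activities     # current 1-D prediction
    error = decoded - target            # error signal driving learning
    # PES-style update: adjust decoders against the error,
    # weighted by each neuron's activity
    decoders -= (kappa / n_neurons) * np.outer(error, activities)
    history[t] = decoders

print(history.shape)  # (50, 1, 200): (time, size_out, pre n_neurons)
```

In other words, dimension 1 of the probed array is the connection's output dimensionality (1 here), not the 100 neurons of the downstream ensemble.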
</div>