TFNN – Hebbian Rewrite

Just a quick update today, no pictures.

I took out the old, incorrect synapse-alteration routines today and replaced them with new ones that match Hebbian learning's "fire together, wire together" rule. I actually used a different, more efficient algorithm than the one I'd originally planned, so there's no drop in speed at all. It did increase the size (in bytes) of each neuron, but at this point speed is more of a problem than space.
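The post doesn't show the actual routine, but the classic Hebbian rule it names can be sketched like this. The data layout, parameter names, and values here are all illustrative guesses, not the TFNN code: a synapse between two neurons is strengthened when both fire in the same step, and slowly decays otherwise.

```python
def hebbian_update(weights, pre_fired, post_fired, plasticity=0.05, decay=0.01):
    """Apply one Hebbian step to a weight matrix, in place.

    weights[i][j] is the synapse from pre-neuron i to post-neuron j.
    pre_fired / post_fired are lists of booleans for this time step.
    """
    for i, pre in enumerate(pre_fired):
        for j, post in enumerate(post_fired):
            if pre and post:
                # Fire together, wire together: strengthen the link.
                weights[i][j] += plasticity
            else:
                # Gradual degradation keeps unused weights from lingering.
                weights[i][j] -= decay * weights[i][j]
    return weights
```

The multiplicative decay term is one common choice; a fixed subtraction or a hard weight cap would work just as well for keeping weights bounded.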

Now that this is finished, all the underlying functionality is (to my knowledge) correct. From this point onward it's simply a matter of testing different methods of construction with different values for threshold rates, plasticity level, degradation amount, physical placement, and so on.
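That kind of testing is naturally a parameter sweep. A minimal sketch, assuming a few plausible ranges (the actual values and parameter names are guesses, not taken from the project):

```python
from itertools import product

# Hypothetical tuning ranges for the parameters the post lists.
threshold_rates = [0.5, 0.7, 0.9]
plasticity_levels = [0.01, 0.05]
degradation_amounts = [0.001, 0.01]

# Every combination becomes one trial configuration to run the net with.
configs = list(product(threshold_rates, plasticity_levels, degradation_amounts))
```

With 3 x 2 x 2 values this yields 12 configurations; adding physical-placement strategies would just be one more axis in the product.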

Also, I would like to take a day to just sit down with the code and see if I can make it any more efficient. Right now, when I start generating neural matrices at the tens-of-thousands level with a high synaptic density, the whole thing grinds nearly to a halt, though that may be OpenGL rendering the graphical representation of the net rather than the net itself eating the time. I will test it with the graphics routine disabled and see how it does.
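The isolation test described above boils down to timing the update loop with and without the draw step. A hedged sketch of the idea, with stand-in functions in place of the real network and OpenGL code:

```python
import time

def step_network(weights):
    # Stand-in for one synaptic update pass over the matrix.
    return [[w * 0.999 for w in row] for row in weights]

def render(weights):
    # Stand-in for the OpenGL drawing pass; here it just burns time.
    time.sleep(0.001)

def time_run(steps, weights, draw):
    """Return wall-clock seconds for `steps` iterations, with or without drawing."""
    start = time.perf_counter()
    for _ in range(steps):
        weights = step_network(weights)
        if draw:
            render(weights)
    return time.perf_counter() - start
```

Comparing `time_run(n, w, draw=False)` against `time_run(n, w, draw=True)` shows which side dominates; if the gap is large, the graphics pass is the bottleneck, not the net.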

But regardless, more efficient code is always a good thing. I know a few spots where I used a little more memory than I needed and added a few extra steps; I can shave it down a bit.

Everything’s going great, though! Once I activated the Hebbian routines, activity no longer followed a systematic pattern, or at least I couldn’t see one, which I believe is a very, very good thing.
