TFNN – Associative Learning

This was cool – I had my first successful test of associative learning last night with a test matrix.

Just to explain what associative learning is for a second – behaviorist and neurophysiological studies have shown the following: a stimulus not normally associated with an action can become associated with it if that stimulus occurs simultaneously with another stimulus that IS associated with the action. E.g., if you shock a rat’s foot, the amygdala will process this and send a message to motor control to jump away. If you make a sound – a clap or a beep, whatever – the amygdala doesn’t perceive that as a threat, so the rat does nothing. However, if you continually clap at the same time you shock the rat’s foot, the rat’s amygdala begins to associate the pathways involved in hearing a clap with those involved in receiving an electric shock, and hence in the future JUST a clap will cause the rat to jump in the air, as it thinks pain is coming.

On a neurophysiological level, in the lateral amygdala (in this example), there is a preexisting STRONG synaptic connection between the portion of the brain that receives the electric shock and the part of motor control that causes the rat to jump. There is a WEAK connection between the auditory thalamus and cortex and that same part of motor control – normally a clap wouldn’t cause the rat to jump.

However, due to Hebbian learning (triggering of NMDA receptors causes an influx of calcium, which causes a genetic reaction that strengthens the synapse between the pre- and postsynaptic neurons), whenever the POSTsynaptic neuron fires off as a RESULT of a presynaptic neuron, the synapse between the two neurons is STRENGTHENED. Normally the connection between the auditory thalamus and motor control via the LA is WEAK – not enough on its own to cause the amygdala neurons to fire off. However, if the rat receives a shock at the same time as it hears a clapping noise, then both the STRONG and the WEAK connections are fired over, which triggers the postsynaptic neuron. Since the WEAK connection was a cause, albeit a small one, that led the postsynaptic neuron to fire, its connection is now STRENGTHENED, so now (well, after a few times) it’s a STRONG connection like the shock neurons’, and clapping causes the rat to jump without a shock to the feet.
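To make the mechanism concrete, here's a minimal sketch of that rule in Python – the class names, firing threshold, and learning rate are all my own assumptions for illustration, not the actual TFNN code:

```python
# Minimal sketch of the Hebbian rule described above (names and
# constants are assumptions, not TFNN internals).

FIRING_THRESHOLD = 1.0   # assumed: post neuron fires when summed input crosses this
LEARNING_RATE = 0.25     # assumed: linear strengthening step per firing

class Synapse:
    def __init__(self, weight):
        self.weight = weight

def step(inputs, synapses):
    """inputs[i] is 1 if presynaptic neuron i fired this tick, else 0."""
    total = sum(inp * s.weight for inp, s in zip(inputs, synapses))
    fired = total >= FIRING_THRESHOLD
    if fired:
        # Hebbian update: every presynaptic neuron that contributed to
        # this firing gets its synapse strengthened (linearly).
        for inp, s in zip(inputs, synapses):
            if inp:
                s.weight += LEARNING_RATE
    return fired
```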

Anyway, that was the explanation – and I successfully tested this out in a mini-matrix last night. I created a 3-neuron matrix: 2 input neurons, one output neuron. I connected the first input neuron to Milo’s left touch sensor, the second input neuron to Milo’s right touch sensor, and the output neuron to Milo’s speaker. I then created a STRONG synaptic link between neuron 1 and neuron 3, and a WEAK synaptic link between neuron 2 and neuron 3. Hence, touching Milo’s left antenna would cause him to beep, but touching his right one did nothing. Then I began touching both his left and right antennae simultaneously a few times. The weak synapse on the right side grew stronger, so after this I was able to touch JUST his right antenna and Milo would beep – he had grown to associate beeping with his right antenna, whereas before he only associated beeping with his left antenna.
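Here's a hypothetical re-run of the Milo experiment, reusing the Synapse and step() sketch from above. The weights and trial count are made up – the point is just that the weak link crosses the firing threshold after a few paired touches:

```python
# Neuron 1 = left antenna (STRONG link), neuron 2 = right antenna
# (WEAK link), neuron 3 = speaker. All numbers are illustrative.
left  = Synapse(weight=1.0)   # strong: left touch alone beeps
right = Synapse(weight=0.2)   # weak: right touch alone does nothing

print(step([0, 1], [left, right]))   # right antenna only -> False (silence)

for _ in range(4):                   # touch both antennae together a few times
    step([1, 1], [left, right])

print(right.weight)                  # ~1.2 – the weak link grew past threshold
print(step([0, 1], [left, right]))   # right antenna only -> True (beep!)
```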

Awesome stuff!

Also, I have to run, but here are a few things I realize I’ve forgotten to include in the neuron functionality (I keep saying I’m done – I’m not even going to say that anymore, every day I realize more stuff I want to do).

FIRST: I need to fix the Hebbian algorithm as outlined above. Due to the way the update is coded, it strengthens the synaptic links in an exponential fashion right now instead of a linear one. It’s easy to fix, I just need to do it.
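Purely a guess at what that bug looks like, but the usual culprit is a multiplicative update, which compounds into exponential growth, where an additive one stays linear:

```python
def strengthen_exponential(weight, factor=1.25):
    return weight * factor   # w, 1.25w, 1.5625w, ... compounds exponentially

def strengthen_linear(weight, delta=0.25):
    return weight + delta    # w, w + 0.25, w + 0.5, ... grows linearly

w_exp = w_lin = 1.0
for _ in range(5):
    w_exp = strengthen_exponential(w_exp)
    w_lin = strengthen_linear(w_lin)
print(round(w_exp, 2), w_lin)   # ~3.05 vs 2.25 – the gap keeps widening
```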

Also, I want to incorporate non-Hebbian learning like habituation and sensitization. Habituation will be easy (from what I gather) – it’s simply a depletion of neurotransmitters such as glutamate from the neuron, so the neuron becomes less effective after repeated use, the overall effect being desensitization.
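A rough sketch of what habituation might look like in code – a transmitter pool that depletes with every firing and slowly refills at rest. All names and constants here are assumptions, not TFNN internals:

```python
class HabituatingSynapse:
    def __init__(self, weight, cost=0.15, recovery=0.02):
        self.weight = weight
        self.pool = 1.0           # fraction of transmitter available
        self.cost = cost          # depletion per firing
        self.recovery = recovery  # replenishment per quiet tick

    def transmit(self, fired):
        if fired:
            out = self.weight * self.pool          # response weakened by depletion
            self.pool = max(0.0, self.pool - self.cost)
            return out
        self.pool = min(1.0, self.pool + self.recovery)
        return 0.0

s = HabituatingSynapse(weight=1.0)
print([round(s.transmit(True), 2) for _ in range(5)])
# [1.0, 0.85, 0.7, 0.55, 0.4] – same stimulus, weaker response each time
```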

Sensitization requires the creation of axoaxonic connections (axons that form synapses with other axons). As of right now, the TFNN isn’t built to handle this situation – but the awesome part is, the code that stores the synaptic gap information can easily be modified to fit pretty much ANY situation. So regardless of what lies on the other side of a synapse, the TFNN can handle it, which is pretty awesome. Had to pat myself on the back for that engineering. 😉
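Here's one way that "anything on the other side of a synapse" design could look – the synapse just holds a reference to any object with a receive() method, so a dendrite or another axon (for axoaxonic sensitization) both work. Purely illustrative, not the actual TFNN types:

```python
from typing import Protocol

class SynapticTarget(Protocol):
    """Anything that can sit on the far side of a synaptic gap."""
    def receive(self, signal: float) -> None: ...

class Dendrite:
    def __init__(self):
        self.input = 0.0
    def receive(self, signal: float) -> None:
        self.input += signal           # ordinary axodendritic input

class Axon:
    def __init__(self):
        self.modulation = 1.0
    def receive(self, signal: float) -> None:
        # Axoaxonic input modulates this axon's own output – the basic
        # machinery sensitization would need.
        self.modulation += signal
    def output(self, strength: float) -> float:
        return strength * self.modulation

class GenericSynapse:
    def __init__(self, target: SynapticTarget, weight: float):
        self.target = target
        self.weight = weight
    def fire(self, signal: float) -> None:
        self.target.receive(signal * self.weight)

axon = Axon()
GenericSynapse(axon, weight=0.5).fire(1.0)   # axoaxonic: boosts the axon
print(axon.output(1.0))                      # 1.5 instead of 1.0
```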

Anyway, enough for now.
