A Verification of SynthNet’s Ion Handling

The following graphs demonstrate SynthNet’s substance and electrochemical engine.

For each graph, we set up a virtual soma with typical ion concentrations for a mammalian neuron. Specifically:

Intra/Extra Na: 18mM/145mM
Intra/Extra K: 140mM/3mM
Intra/Extra Cl: 7mM/120mM
Intra/Extra Ca: 100nM/1.2mM

First, we verify that GHK properly reduces to the Nernst equation and that equilibrium potential is calculated correctly. For this test, we isolate the ion in question by removing the permeability of all other ions across the cellular membrane. We then record the membrane potential and ensure it matches the equilibrium potential for that ion’s electrochemical gradient.

I forgot to change the scale over, so potential is shown in volts – remember the factor of 1,000 to convert to mV.

For Sodium, we should get +56mV (Verified!)

For Potassium, we should get -102mV (Verified!)

For Chloride, we should get -76mV (Verified!)

For Calcium, we should get +125mV (Verified!)
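
For reference, here is a minimal sketch of the Nernst calculation behind those targets (Python purely for illustration; this is not SynthNet’s code), assuming a temperature of 37°C and the concentrations listed above:

```python
import math

# Constants; temperature assumed to be 37 C (310.15 K).
R, F, T = 8.314, 96485.0, 310.15

def nernst_mV(z, c_in, c_out):
    """Equilibrium potential in mV for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Concentrations from the table above; only the in/out ratio matters,
# so any consistent unit is fine.
print(f"Na: {nernst_mV(+1, 18, 145):+.1f} mV")         # post target: +56 mV
print(f"K:  {nernst_mV(+1, 140, 3):+.1f} mV")          # post target: -102 mV
print(f"Cl: {nernst_mV(-1, 7, 120):+.1f} mV")          # post target: -76 mV
print(f"Ca: {nernst_mV(+2, 100e-9, 1.2e-3):+.1f} mV")  # post target: +125 mV
```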

So at this point, we’ve verified that GHK correctly reduces to Nernst for single ions. Now we need to test that GHK works correctly with multiple ions, so we set up typical permeability ratios for our neuron. Specifically, PK:PNa:PCl:PCa = 1.00:0.04:0.45:0.000001.

For these ratios, we should see around -70mV, which is typical for many neurons, including the dorsal lateral geniculate nucleus of the thalamus, and close for many others. (Verified!)
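
And a similar sketch of the standard monovalent GHK voltage equation with those permeability ratios (again illustrative Python, not SynthNet’s code; SynthNet itself uses a divalent-ion modification, but Ca’s contribution is negligible at a relative permeability of 0.000001):

```python
import math

R, F, T = 8.314, 96485.0, 310.15  # assumed 37 C

def ghk_voltage_mV(P_K, P_Na, P_Cl, K_i, K_o, Na_i, Na_o, Cl_i, Cl_o):
    """Monovalent GHK voltage equation; the anion (Cl) enters with in/out swapped."""
    num = P_K * K_o + P_Na * Na_o + P_Cl * Cl_i
    den = P_K * K_i + P_Na * Na_i + P_Cl * Cl_o
    return 1000.0 * (R * T) / F * math.log(num / den)

# PK:PNa:PCl = 1.00:0.04:0.45 with the concentrations from the table above.
print(f"{ghk_voltage_mV(1.00, 0.04, 0.45, 140, 3, 18, 145, 7, 120):.1f} mV")
# roughly -75 mV at 37 C, i.e. in the ballpark of a typical resting potential
```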

Now, switching over to verifying the functionality of GHK flux, we set up an experiment where we again isolate a single ion type, but this time mimic voltage-clamp experiments by turning off GHK voltage calculation on our membrane and setting it to a static voltage. We then start the calculations with deliberately incorrect intracellular and extracellular ion concentrations. If GHK flux is working properly, the ionic concentrations should converge to their respective homeostatic values for the specified membrane potential.

For Potassium, we clamp the voltage at -102mV – we should see concentrations even out at Intra/Extra K: 140mM/3mM (Verified!)

For Calcium, we clamp the voltage at +125mV – we should see concentrations even out at Intra/Extra Ca: 100nM/1.2mM (Verified!)
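
To make the pass/fail criterion concrete, here is a rough sketch of the GHK flux equation for the potassium clamp (illustrative Python with an arbitrary permeability, not the emulator’s code): the flux is zero when the concentrations already sit at their homeostatic values for the clamped potential, and otherwise its sign pushes them back toward those values:

```python
import math

R, F, T = 8.314, 96485.0, 310.15  # assumed 37 C

def ghk_flux(P, z, Vm, c_in, c_out):
    """GHK flux equation, efflux positive. With P in m/s, Vm in volts, and
    concentrations in mol/m^3, the result is in mol/(m^2*s)."""
    u = z * Vm * F / (R * T)
    return (P * z * z * Vm * F * F / (R * T)
            * (c_in - c_out * math.exp(-u)) / (1.0 - math.exp(-u)))

Vm = (R * T / F) * math.log(3.0 / 140.0)  # clamp exactly at E_K (about -0.103 V)
P = 1e-8                                  # arbitrary illustrative permeability

print(ghk_flux(P, +1, Vm, 140.0, 3.0))    # ~0: already at homeostatic values
print(ghk_flux(P, +1, Vm, 150.0, 3.0))    # > 0: excess internal K leaks out
print(ghk_flux(P, +1, Vm, 130.0, 3.0))    # < 0: a deficit pulls K back in
```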

So ionic flux calculations look spot on too! With both potential and flux working properly, the engine provides enough functionality for the purposes of our emulator (currently, anyway).

I’ll leave off with a fun graph of running substance calculations over time with no ionic pumps in place to maintain homeostasis. I had to use LiveGraph for this one as Excel doesn’t allow this many graph points, and I don’t know how to turn on the legend – Green/Pink: K, Purple/Yellow: Na, Blue/Cyan: Cl, Ca not really visible, bottom is voltage. Next time I’ll have graphs of action potentials, fun stuff.

SynthNet, the Start of a Neural Emulator

If you’re anything like me, or many of the programmers and hardware hackers out there, you have a deep urge to constantly be creating something. While this presents the opportunity to try new and fun stuff, it can also be a curse in that sometimes it’s hard to complete projects before jumping into a new one. I constantly have this issue, and in general I’ve tried to be good about not starting a new project before completing my existing one. And if you’ve known me for any period of time, you know there is one project that is the big one for me – the one that I’ve been working on for years, and the one that really drives me as a computer scientist – my quest to fully emulate the biological neural network (easy, right?). Well, after years of constantly putting it aside while working on other projects, for the last four months I’ve been very good about focusing on it.

Goodbye TFNN, Hello SynthNet

The problem with emulating the biological brain is that it is extremely complicated, to say the least, and there is still a huge amount about neuroscience that we don’t understand. However, there is also a huge amount of information we DO understand. I’ve had the disadvantage of not having a formal education in the biological sciences, let alone the specifics of neurophysiology. Because of that, the process of emulating it has been difficult for me. I have had to do a lot of catch-up research to match what the average graduate would know. This is very apparent looking at the work I’ve done now compared to earlier versions of the emulator (TFNN) – you can see as much going back through older blog entries on this site. I am by no means an expert now, but I was even less of one back then. In the last year or two, I’ve really hit the books and tried to learn everything I can. And in doing so, I’ve learned that I got so much wrong before that it was easier to start over than to try to repair what I had. And with that comes the newest revision of the emulator, SynthNet.

What SynthNet Does So Far

At this point, SynthNet does the following:

  1. Emulates the major cellular structures, such as the neuron soma, dendrites and dendritic arbors, axons, terminals/boutons, synapses, etc. – each with the full functionality (when applicable) of the following:
  2. Physical properties such as position, surface area, and cellular membranes.
  3. The ability to contain substances, including ions such as Sodium, Potassium, Chloride, and Calcium, as well as neurotransmitters and modulators such as Glutamate, Serotonin, Norepinephrine, etc., both intracellularly and extracellularly.
  4. For all substances, the current concentration (with resolution to nanomoles), homeostatic concentration, and valence (for ions) are stored.
  5. Cellular membranes contain channels, both to the extracellular space, as well as gap junctions to the intracellular space of other cellular structures.
  6. Each channel stores its permeability, which substance it is permeable to, and tag information for synaptic tagging or other secondary-messenger processes.
  7. Both leak channels and active pumps are supported
  8. Channels can also have gates, including voltage gates, inactivation gates, and ligand gates. Voltage gates activate at a specified membrane potential, inactivation gates close either voltage or ligand gates after a certain amount of time, and ligand gates open in response to a specific concentration of a specific substance (a rough sketch of these channel and gate records follows this list).
  9. Membrane voltage is calculated using the Goldman-Hodgkin-Katz Voltage Equation modified for the inclusion of divalent ions (this may need a little tweaking though, converting this over to make use of Spangler’s equation from Ala J Med Sci, 9:218-223, 1972)
  10. Ion flux across the membrane is calculated using the Goldman-Hodgkin-Katz Flux Equation, with a membrane surface area coefficient.
  11. All substance flux is virtually processed in an N+1 parallel fashion across all neurons simultaneously
  12. The emulation of myelin sheaths via the elimination of channels/permeability in specific axonal segments, and an increase in intracellular trans-segment permeability across axonal segments.
  13. CSV export functionality for analysis within Excel, LiveGraph, or other tools
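
As a purely hypothetical illustration of the kind of state described in items 5 through 8 (these are not SynthNet’s actual classes; every name here is invented), the channel and gate records might look something like this:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical record layouts, only to illustrate the state described in
# items 5-8 of the list above; SynthNet's real classes will differ.

@dataclass
class Gate:
    kind: str                      # "voltage", "inactivation", or "ligand"
    threshold_mV: float = 0.0      # activation potential for voltage gates
    ligand: Optional[str] = None   # substance a ligand gate responds to
    ligand_conc: float = 0.0       # concentration that opens the ligand gate
    close_after_ms: float = 0.0    # delay used by inactivation gates
    is_open: bool = False

@dataclass
class Channel:
    substance: str                 # which substance this channel passes
    permeability: float            # permeability for that substance
    is_pump: bool = False          # leak channel vs. active pump
    gap_junction_to: Optional[str] = None  # other cellular structure, if any
    tag: Optional[str] = None      # synaptic tagging / second-messenger info
    gates: List[Gate] = field(default_factory=list)
```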

So at this point, it handles ions and substances as a whole pretty well, calculating flux along a substance’s electrochemical gradient fairly accurately (for our purposes). We can set up typical ion concentrations for a mammalian neuron, set up leak, pump, and voltage-gated channels, and trigger action potentials with the expected results (still tweaking some of the values).

To Do:

What we don’t have yet, but will have:

  1. The regulation of extracellular substances via astroglia. This is the next thing I’m working on
  2. Any kind of protein synthesis or activation, such as kinase phosphorylation. After I get some of the glial cell work done, this will be the next big addition to the emulator. This is critical for the mediation of Hebbian plasticity and other types of learning. The genetic engine of the emulator will allow any sequence of instructions to be run under the specified protein activation – so this will cover everything from the addition of AMPA receptors due to NMDA receptor activation, to neurite growth due to nitric oxide as a retrograde messenger, and the entire neurogenesis process as a whole. Very excited to get started on this.
  3. Visualization engine, as a kind of virtual fMRI, for the purposes of graphical analysis
  4. A separate engine to mutate genetic code across generations for the purposes of natural selection (more on this later, a whole different phase of the project)
  5. A lot of other details, those are the biggies for now

TFNN – Virtual DNA

You may be wondering what TFNN is – it stands for Temporal Frame Neural Network, an artificial intelligence project of mine to accurately simulate the biological brain. Sadly, I haven’t worked on it in a few years for a number of reasons – reasons that were good at the time (and I wouldn’t change, the whole learning experience thing), but ones that aren’t so important now.

What the Heck Happened?

Jumping off topic a bit, but relevant to why I’m starting back up again: I realized a few weeks ago that I’m not really happy with the way my life is going. I mean, don’t get me wrong, overall things are great and I’m doing okay, but there is definitely something off. There are a few reasons, but one of the biggies was that I was always doing things I felt obligated to do and never did things for fun anymore. I was always taking on a project to advance somehow, and never did it for the art or to enjoy it. I was always working hard, but honestly not really wanting the outcome, so it would never really go anywhere. I’m not a businessman – I don’t like or want to play the game (there’s another article in here about not always turning your hobbies into something you get paid for, but that’s for another day). There’s nothing wrong with being a businessman, mind you – I’m just not one.

So I made the decision to just stop worrying about “succeeding” in these classic ways that are good for some people, but not for me. I learned something big from Shredz64 – I will never make any money off the project, but I had an incredible amount of fun doing it, and I have made so many connections with people because of it – it’s just amazing. I want to keep doing that all the time – I want to make things – not worry about marketing or selling them – I just want to create and share. I’ll save the rest of my thoughts for another post, but the bottom line is, I’ve already started working a ton more on my projects and I’ve been much happier because of it.

Back to Virtual DNA

SO, that being said, I recently made a 10-hour drive to and from Toronto, and it gave me a lot of time to think. Some of that thought was dedicated to the TFNN project. While the “neurophysiology” of the TFNN works great on a neuron and connection level, the overall issue remains in how those synaptic connections are made – their configuration. Biology has a great thing going for it with DNA that controls neural development – during the neurulation phase, when the neuroectoderm forms, a lot of things happen, but at the end of the day, through migration, axon pathfinding, and some other tricks, neurons are placed into their proper locations and form appropriate connections. Regardless of the nature vs. nurture argument, there is definitely prewiring that is done. It’s the reason why a cat will never develop the ability to speak Romanian and why rabbits breathe without being trained to do so – it’s millions and millions of years of neurological evolution packed into a double helix.

Therein lies the problem – I have the materials with TFNN, but no blueprint I can use to construct something. I can make very small and specific networks, or very large, random ones, but neither of those will accomplish the goal of creating animal intelligence. So a blueprint is needed. Life has DNA, but what does TFNN have?

Use Real DNA?

My first idea to conquer this issue (as outlined on the project page) was to use some of the sequence databases that are available online – there are a couple of species with fairly complete nucleotide sequence documentation available. I won’t even bother mentioning all the reasons why this was never going to work, because the biggest reason is that I’m not a molecular geneticist, and while I have a good understanding of how DNA works, I don’t come close to having enough understanding to use DNA sequencing information to form a TFNN. It’s another project I would love to start one day to build my understanding of the process, but not right now.

Let the Turing Machine Do What it Does Best

What I decided on the car ride was that instead of using real DNA, it would be more realistic (relatively speaking) to create a virtual (accelerated) environment where evolution could take place and form virtual DNA. The TFNN already has rudimentary functionality for building neural networks from a list of instructions, so this is doable. Here’s the very lofty plan:

  1. Flesh out the matrix class inside of TFNN to construct neural networks as defined by encoded, segmented bit sequences. This will be some work, but I have a good idea of how to accomplish it. There are already class members that control size, synaptic density, geography, and even connection-specific formation within the neural matrix – the bit sequence needs to drive these member functions. The purpose of something encoded like a bit sequence, as opposed to human-readable scripts, is to allow for easy engineering of the mutation capabilities necessary for evolution.
  2. Find a lightweight, open source graphics/physics engine. There are a few of them out there for games – it doesn’t need to look good or even come close to being the most advanced one available; it just needs to support a number of attributes common to our world, such as mass, gravity, displacement, etc. The key is to be as lightweight as possible – we don’t want to eat up CPU maintaining the world; we need all the cycles we can get for TFNN processing.
  3. Engineer a method of recharging a LEGO NXT robot (Bit, my little LEGO robot, will be the subject in these experiments) that can be initiated and completed by the robot itself. There are a number of ways to accomplish this; something tactile is preferred to force movement – something like a magnetic connector with DC current. It would also need to produce a distinct stimulus to indicate it is a source of “nourishment,” so to speak, such as an audible tone at a specific frequency.
  4. Create a VDNA (virtual DNA, easier to type) sequence to form a neural network that dictates motor control to guide the robot to its “feeding station”. It doesn’t need any logic beyond a straight path to the source. I’ve created simple neural networks like these before, and it is doable.
  5. Within the physics engine, model an environment that very simply and basically mirrors a real-world environment. The goal is by no means to cover every possible physical scenario that could exist in the real world; it’s to offer enough obstacles and stimuli that evolution can take place, while using obstacles that are common to the environment the robot will operate in. Also include feeding stations.
  6. Build an engine to generate instances of TFNNs using VDNA sequences and process them. Connect them to virtual robots modeled after the NXT LEGO robot and place them in the virtual world. Also include functionality to take the VDNA of a specific instance and spawn a new instance of the virtual robot. We could do this asexually, or start with a neural configuration that drives two robots to touch in a manner that shares VDNA for virtual reproduction – I haven’t decided on this one yet. Regardless, new VDNA is subject to random mutation or corruption in the bit sequence (a toy sketch of this mutation step follows this list).
  7. Build in parameters that cause death in the virtual robots as well as prevent premature reproduction – most importantly that reproduction doesn’t take place if nourishment isn’t obtained.
  8. Run this simulation until results are obtained
  9. Take VDNA from successful virtual robot, generate instance of TFNN, connect to Bit and watch the fun
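
To make the mutation step in item 6 concrete, here is a toy sketch of corrupting a VDNA bit sequence during reproduction (the function name and flip rate are invented for illustration):

```python
import random

def mutate_vdna(vdna: bytes, flip_rate: float = 1e-4) -> bytes:
    """Copy a VDNA bit sequence, flipping each bit with a small probability.
    The flip rate is an arbitrary illustrative value, not a tuned parameter."""
    out = bytearray(vdna)
    for i in range(len(out)):
        for bit in range(8):
            if random.random() < flip_rate:
                out[i] ^= 1 << bit  # corrupt this bit
    return bytes(out)

# A child's VDNA is a slightly corrupted copy of its parent's.
parent = bytes(random.getrandbits(8) for _ in range(1024))
child = mutate_vdna(parent)
```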

I can’t complain about being bored, that’s for sure – I will post here as I go. It may lead nowhere, but I’m extremely interested to see the results – even if it completely fails, it will still be fun science.

TFNN – Major changes

I thought I’d sit down and update – it’s not that I haven’t been working a lot on TFNN, I just haven’t had a chance to sit down and actually write about it!

Firstly, I implemented crude, neuron-global neuromodulator code a week or so ago. It worked under my very specific test cases, but it didn’t really accurately model how dopamine, serotonin, or norepinephrine function on the whole. I realized there was a lot of neuron-global code that really should have been axon-terminal/synaptic-cleft/postsynaptic-receptor specific. I can’t write too much about it, but yesterday I rewrote a lot of the code dealing with neuromodulators and synapse processing so it more closely deals with activity on the receptor level and not on the neuron level. I ran test cases with both an inhibitory and an excitatory neuromodulator, and both were successful.

Right now, however, neuromodulators blindly increase or decrease the effect of a neurotransmitter. I would like to include code that discerns between a glutamate excitatory reaction and a GABA inhibitory reaction and selectively affects only one.
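
One way to picture that receptor-level bookkeeping (a toy model with invented names and numbers, not the TFNN code): each postsynaptic receptor carries a transient modulation factor keyed to the transmitter it responds to, so an excitatory modulator could scale glutamate receptors while leaving GABA receptors untouched:

```python
# Toy model of receptor-level neuromodulation; names and values are illustrative.
class Receptor:
    def __init__(self, transmitter: str, weight: float):
        self.transmitter = transmitter   # e.g. "glutamate" or "GABA"
        self.weight = weight             # baseline synaptic weight
        self.modulation = 1.0            # transient factor, decays back to 1.0

    def effective_weight(self) -> float:
        return self.weight * self.modulation

def apply_modulator(receptors, target_transmitter: str, factor: float):
    """Scale only the receptors for one transmitter, leaving the rest alone."""
    for r in receptors:
        if r.transmitter == target_transmitter:
            r.modulation *= factor

receptors = [Receptor("glutamate", 0.8), Receptor("GABA", 0.6)]
apply_modulator(receptors, "glutamate", 1.5)   # an excitatory modulator
print([round(r.effective_weight(), 2) for r in receptors])
# glutamate response boosted to 1.2; GABA response stays at 0.6
```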

TFNN – Neuromodulators

A quick update while I’m thinking about it – next time I sit down with the code, I want to add a section to emulate the functionality of dopamine cells like those found within the ventral tegmental area, as well as other neuromodulators. This is actually a major enhancement and something to give careful thought to before proceeding. At first I intended TFNN matrices to operate without global or semi-globalized synaptic modulation – i.e., the TFNN matrix would operate purely on the “mechanical nature” of electrochemical reactions in axodendritic, axosomatic, and axoaxonic connections – no globalized chemical reactions within the system.

The more I study though, the more I realize how important dopamine and other neuromodulators are in the prefrontal cortex regions. Via message controlled signals, these modulators can facilitate GABA reactions, and hence temporarily “quiet” certain systems, allowing for concentration. I have a feeling that without dopamine emulation matrices would fall prey to a ubiquitous ADD of sorts, and perhaps fail to mold meaningful neural configurations in deeper matrices due to an overload of traffic on neural bridges coming from sensory thalami and cortices.

At first, when I was kicking it around, I was thinking of just modifying the axoaxonic connection code to introduce a negative change to synaptic weights and having that emulate dopamine secretion. This isn’t accurate though, as dopamine is a modulator, not a permanent change to the synaptic weights.

I think this may call for another variable to be introduced into the neuron, one that keeps track of the modulators currently affecting it. That means more space – but I also realize I have an unused integer in the neuron that I used during debug sessions; I’ll remap that for dopamine and other modulator use. I may use it, or another variable in the connection, to track glutamate supply to emulate habituation effects as well. It will add very little additional calculation time.

It’s amazing how large the TFNN neuron has grown in complexity from when I first completed the code until now.

TFNN – Another step down

Another quick update – I fixed some synapse timing issues in the Temporal Frame engine and finished up the axoaxonic code this weekend. I had a successful test of sensitization as well, demonstrating the non-Hebbian learning capabilities of a neural matrix. Due to axoaxonic connections, a presynaptic neuron can now cause a direct increase in the synaptic weight of the postsynaptic neuron’s axon terminal (this postsynaptic neuron itself being a presynaptic neuron in another relationship).

The test was performed by generating a 3-neuron matrix. Milo’s left touch sensor was sent as input into neuron 1, while Milo’s right touch sensor was sent as input into neuron 2. Neuron 1 was connected via an axoaxonic connection to neuron 2’s axon terminal – the axon terminal creating the synapse between neuron 2 and neuron 3 in a standard axodendritic configuration. Neuron 3’s output was sent to Milo’s speaker.

The threshold of neuron 3 was set higher than the synaptic weight between neurons 2 and 3, so if Milo’s right antenna was pressed he would not beep. However, upon touching Milo’s left antenna a few times, the phenomenon of sensitization increased the synaptic weight between neurons 2 and 3, and subsequent presses of Milo’s right antenna alone were enough to cause Milo to beep, as the synaptic weight had now grown strong enough to pass neuron 3’s threshold.
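
A stripped-down model of that test (a toy reconstruction with made-up numbers, not the actual TFNN engine or Milo’s interface) looks something like this:

```python
# Toy reconstruction of the 3-neuron sensitization test; values are invented.
w_23 = 0.6          # synaptic weight from neuron 2 to neuron 3
threshold_3 = 1.0   # neuron 3's firing threshold
facilitation = 0.2  # weight boost per firing of the axoaxonic (neuron 1) input

def press_left_antenna():
    """Neuron 1 fires; its axoaxonic connection strengthens the 2->3 synapse."""
    global w_23
    w_23 += facilitation

def press_right_antenna() -> bool:
    """Neuron 2 fires; Milo beeps only if the 2->3 weight clears the threshold."""
    return w_23 >= threshold_3

print(press_right_antenna())   # False: the right antenna alone does nothing yet
press_left_antenna()
press_left_antenna()
print(press_right_antenna())   # True: sensitization pushed w_23 past threshold
```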

Cool stuff!

TFNN – Axoaxonic Issue

I realized something today when I started fleshing out axoaxonic connections a bit – something about a flaw in the Temporal Frame engine itself. I can’t go too much into it, but I don’t think I would have noticed it if I hadn’t been thinking about axoaxonic connections, so I’m glad things worked out the way they did. It’s a fairly easy fix, so that’s good.

Also, I read a few papers on QBT (Quantum Brain Theory), and it seems like most reputable neurophysiologists don’t really buy it, and from what I’ve read I don’t really buy it either. The scale of the electrochemical reactions just doesn’t seem to leave any room for the very microscopic effects of quantum mechanics, even if microtubules are a place where the magic could happen.

So, first things first, mend the engine, then I can go ahead and add habituation and sensitization effects.

TFNN – Associative Learning

This was cool, I had the first successful test of associative learning last night with a test matrix.

Just to explain what associative learning is for a second – behaviorist and neurophysiological studies have shown the following: a stimulus not normally associated with an action can become associated with it if that stimulus occurs simultaneously with another stimulus that IS associated with the action. E.g., if you shock a rat’s foot, the amygdala will process this and send a message to motor control to jump away. If you make a sound – a clap or a beep, whatever – the amygdala doesn’t perceive that as a threat, so the rat does nothing. However, if you continually clap at the same time you shock the rat’s foot, the rat’s amygdala begins to associate the pathways involved in hearing a clap with those involved in receiving an electric shock, and hence in the future, JUST a clap will cause the rat to jump in the air as it thinks pain is coming.

On a neurophysiological level, in the lateral amygdala (in this example), there is a preexisting STRONG synaptic connection between the portion of the brain that receives the electric shock and the part of motor control that causes the rat to jump. There is a WEAK connection between the auditory thalamus and cortex and the part of motor control that causes the rat to jump – normally a clap wouldn’t cause it to jump.

However, due to Hebbian learning (triggering of NMDA receptors causing an influx of calcium that causes a genetic reaction to strengthen the synapse between the pre- and postsynaptic neurons), whenever the POSTsynaptic neuron fires off as a RESULT of a presynaptic neuron, the synapse between the two neurons is STRENGTHENED. Normally the connection between the auditory thalamus and motor control via the LA is WEAK and not enough to cause the amygdala neurons to fire off. However, if the rat receives a shock at the same time as it hears a clapping noise, then both the STRONG and the WEAK connections are fired over – which triggers the postsynaptic neuron. Since the WEAK connection was a cause, albeit a small one, of the postsynaptic neuron firing, its connection is now STRENGTHENED, so now (well, after a few times) it’s a STRONG connection like the shock pathway, and clapping causes the rat to jump without a shock to the feet.
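
In toy form (illustrative constants, not the TFNN implementation), the rule boils down to this: when presynaptic firing contributes to a postsynaptic spike, bump that synapse’s weight by a fixed linear amount (which is also the behavior the fix mentioned further down is aiming for):

```python
# Toy Hebbian update with a linear increment; constants are invented.
LEARN_RATE = 0.15
W_MAX = 1.0

def hebbian_update(weight: float, pre_fired: bool, post_fired: bool) -> float:
    """Strengthen a synapse only when the presynaptic neuron contributed to a
    postsynaptic spike (the NMDA-style coincidence condition)."""
    if pre_fired and post_fired:
        weight = min(W_MAX, weight + LEARN_RATE)
    return weight

# The weak clap->jump synapse strengthens only during paired clap+shock trials.
w_clap = 0.2
for _ in range(4):
    w_clap = hebbian_update(w_clap, pre_fired=True, post_fired=True)
print(round(w_clap, 2))   # 0.8: far stronger than the initial 0.2
```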

Anyway, that was the explanation – and I successfully tested this out in a minimatrix last night. I created a 3-neuron matrix: two input neurons and one output neuron. I connected the first input neuron to Milo’s left touch sensor, I connected the second input neuron to Milo’s right touch sensor, and I connected the output neuron to Milo’s speaker. I then created a strong synaptic link between neuron 1 and neuron 3, and a WEAK synaptic link between neuron 2 and neuron 3. Hence, touching Milo’s left antenna would cause him to beep, but touching his right one did nothing. Then I began touching both his left and right antennae simultaneously a few times. The synapse in the right-antenna pathway grew stronger, so after this I was able to touch JUST his right antenna and Milo would beep – he had grown to associate beeping with his right antenna, whereas before he only associated beeping with his left antenna.

Awesome stuff!

Also, I have to run, but there are a few things I realize I’ve forgotten to include in the neuron functionality (I keep saying I’m done – I’m not even going to say that anymore; every day I realize more stuff I want to do).

FIRST: I need to fix the Hebbian algorithm as outlined above. Due to technical programming stuff it strengthens the synaptic links in an exponential fashion right now instead of a linear one. It’s easy to fix, I just need to do it.

Also, I want to incorporate non-Hebbian learning like habituation and sensitization. Habituation will be easy (from what I gather): it’s simply a depletion of neurotransmitters such as glutamate from the neuron, so the neuron becomes less effective after repeated use – the overall effect being desensitization.
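
A quick sketch of that depletion idea (invented values, not the TFNN code): each firing spends some transmitter, and the synapse’s effect scales with whatever is left:

```python
# Toy habituation model: repeated firing depletes transmitter, so the same
# synapse produces a weaker effect each time; values are illustrative only.
glutamate = 1.0        # fraction of transmitter currently available
RELEASE_COST = 0.2     # fraction spent per firing
                       # (a slow replenishment term would restore it over time)

def fire(weight: float) -> float:
    global glutamate
    effect = weight * glutamate                    # less transmitter, less effect
    glutamate = max(0.0, glutamate - RELEASE_COST)
    return effect

print([round(fire(0.8), 2) for _ in range(5)])     # response fades with each use
```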

Sensitization requires the creation of axoaxonic connections (axons that form synapses with other axons). The TFNN wasn’t originally built to handle this situation – but the awesome part is, the code that stores the synaptic gap information can easily be modified to fit pretty much ANY situation. So regardless of what lies on the other side of a synapse, the TFNN can handle it, which is pretty awesome. Had to pat myself on the back for that engineering. 😉

Anyway, enough for now.

TFNN – Terminology update

Just a quick note – I need to think up a name for the visualization routine. The problem is it doesn’t strictly show the same activity as a functional MRI scan, but it also doesn’t show the same activity as a PET or SPECT scan. Something to think about; not really a big deal in the grand scheme of things.

TFNN – Functionality Addon

Also, while I’m thinking about it, I would like to include functionality for postsynaptically generated neurotrophins leading to axon growth and branching within the presynaptic neurons. This will be pretty easy: in the code used to increase synaptic weight, I can also set up new synaptic connections to geographically neighboring neurons. I’m not sure at what rate to do this, though – the literature suggests it doesn’t happen as often or as quickly as synaptic reinforcement.
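
A rough sketch of how that might hook into the weight-strengthening code (the network class, names, and rates here are all hypothetical, and as noted above the right branching rate is unclear):

```python
import random

BRANCH_PROBABILITY = 0.05   # deliberately low: branching should happen less
                            # often and more slowly than synaptic reinforcement

class ToyNetwork:
    """Minimal stand-in for a neural matrix with 1-D 'geographic' positions."""
    def __init__(self):
        self.weights = {(0, 2): 0.4}                   # (pre, post) -> weight
        self.positions = {0: 0.0, 1: 1.0, 2: 1.1, 3: 5.0}

    def neighbors_of(self, n, radius=1.5):
        return [m for m in self.positions
                if m != n and abs(self.positions[m] - self.positions[n]) < radius]

def strengthen_synapse(net, pre, post, amount=0.1):
    """The usual weight increase, plus an occasional neurotrophin-style branch
    from the presynaptic axon to a geographic neighbor of the postsynaptic cell."""
    net.weights[(pre, post)] = net.weights.get((pre, post), 0.0) + amount
    if random.random() < BRANCH_PROBABILITY:
        for m in net.neighbors_of(post):
            if m != pre and (pre, m) not in net.weights:
                net.weights[(pre, m)] = 0.05           # weak new synapse
                break

net = ToyNetwork()
for _ in range(50):
    strengthen_synapse(net, 0, 2)
print(net.weights)   # 0->2 reinforced; a weak 0->1 branch has likely appeared
```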