Caenorhabditis elegans. A tiny, 1 mm-long transparent roundworm with only 959 cells and 302 neurons. It was the first organism to have its entire genome sequenced and the only organism to have its connectome fully mapped. You may have heard about this worm on the news back in January, when scientists from the OpenWorm Project built a digital recreation of the worm's brain and used it to control a Lego robot.
Essentially, they took data from studies that had mapped the brain's connectome (a map of all the neurons: how they are connected, which neurotransmitters they use, whether they are sensory or motor neurons, and so on; I was able to browse the connectome database, and it is indeed quite amazing) and used it to create virtual neurons on a computer. The robot's sensors send a stimulus to the appropriate virtual sensory neuron, which then passes signals to other neurons until a motor neuron receives a signal to trigger a muscle, which the robot interprets as a command to move its wheels.
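The sensor-to-wheels pipeline described above can be sketched in a few lines. This is a toy illustration, not OpenWorm's actual code: the neuron names, weights, and threshold are all made up, and the real simulation is far richer.

```python
# Toy sketch of stimulus propagation through a weighted connectome.
# A stimulus enters at a sensory neuron and spreads until motor
# neurons accumulate signal. All names and weights are hypothetical.

# neuron -> list of (target neuron, connection weight)
CONNECTOME = {
    "nose_sensor": [("inter_1", 1.0)],
    "inter_1": [("motor_left", 0.6), ("motor_right", 0.6)],
}
MOTOR_NEURONS = {"motor_left", "motor_right"}

def propagate(stimulus_neuron, strength, threshold=0.5):
    """Breadth-first signal propagation; returns motor neuron activations."""
    activations = {stimulus_neuron: strength}
    frontier = [stimulus_neuron]
    motor_output = {}
    while frontier:
        neuron = frontier.pop(0)
        if activations[neuron] < threshold:
            continue  # below threshold: no action potential, signal stops here
        for target, weight in CONNECTOME.get(neuron, []):
            activations[target] = activations.get(target, 0.0) + activations[neuron] * weight
            if target in MOTOR_NEURONS:
                motor_output[target] = activations[target]
            else:
                frontier.append(target)
    return motor_output

print(propagate("nose_sensor", 1.0))  # both wheel motors receive signal
```

In the robot, a final step would translate the motor activations into wheel commands.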
The OpenWorm Project’s ultimate goal is a fully virtual worm that can be used to run experiments and learn more about the organism and about neuroscience. To those who scoff and think this is way too advanced to be true: it’s actually remarkably possible. Back in 2004, a group from Hiroshima began the Virtual C. elegans project and released two papers on how their simulation retracted from virtual prodding. Of course, there are issues with neuron weighting: even though we know how neurons are connected, how strongly they are connected can differ between organisms and is incredibly hard to measure. The group tried to get around this by using machine learning to tune the weights until the virtual worm produced the desired behaviour. Although that might be considered cheating, it isn’t too far-fetched, especially given how learning works in real life.
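The weight-tuning workaround can be illustrated with a deliberately tiny example: search for a synaptic weight that reproduces a target behaviour. This is a toy hill-climb on a one-neuron model of my own invention, not the Hiroshima group's actual method.

```python
import random

def behaviour(weight, stimulus=1.0):
    # Stand-in for the simulated worm: output is just weighted stimulus.
    return weight * stimulus

def fit_weight(target_output, steps=2000, seed=0):
    """Randomly perturb the weight, keeping changes that reduce the
    gap between simulated and desired behaviour."""
    rng = random.Random(seed)
    weight = 0.0
    best_error = abs(behaviour(weight) - target_output)
    for _ in range(steps):
        candidate = weight + rng.uniform(-0.1, 0.1)
        error = abs(behaviour(candidate) - target_output)
        if error < best_error:
            weight, best_error = candidate, error
    return weight

print(fit_weight(target_output=0.75))  # converges near 0.75
```

The real problem involves thousands of weights and a full body simulation, but the principle is the same: when the weights can't be measured, optimize them against observed behaviour.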
Anyways, by now you might be wondering what this has to do with ethics and philosophy. If you were given a human brain, a super powerful computer, and a way to determine that brain's connectome and neuron weighting, you could theoretically simulate the brain on a computer. That's all wonderful and jolly, you might say, but it isn't possible with today's technology, so how is it useful or related to ethics? The point is that if you had the technology, this would be possible.
When you think about it, the brain is nothing more than a mushy computer. Give it a stimulus and it produces a logical output that can, in theory, be predicted. A stimulus causes a neuron to fire an action potential, a small voltage difference between the inside and outside of the neuron (caused by chemistry I don't want to go into at the moment), down its axon toward its synapses. A synapse is the junction between two neurons, separated by a tiny gap. When the action potential reaches the end of the axon, it triggers the release of a neurotransmitter, a chemical that travels across the gap to the start of the next neuron. At the dendrite (the receiving end of the next neuron), the concentration (which depends on neuron weighting, among other things) and the type of neurotransmitter determine the potential sent toward the center of the neuron. The potentials from all the synapses add up (or subtract) at the center of the neuron, and if the total voltage is above a certain threshold, it fires its own action potential to the next neurons, and so forth. If you have all the information about these neurons, the connectome, and the neuron weighting, you can model this using a real computer.
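The summation-and-threshold behaviour just described reduces to a few lines of code. This is the usual simplification to a weighted-sum ("integrate-and-fire" style) model; the weights and inputs below are made up for illustration.

```python
def neuron_fires(synaptic_inputs, threshold=1.0):
    """Sum excitatory (+) and inhibitory (-) synaptic potentials;
    fire an action potential if the total crosses the threshold."""
    total = sum(weight * active for weight, active in synaptic_inputs)
    return total >= threshold

# Two active excitatory synapses (0.6, 0.7) and one inactive inhibitory one (-0.5):
inputs = [(0.6, 1), (0.7, 1), (-0.5, 0)]
print(neuron_fires(inputs))  # 0.6 + 0.7 = 1.3 crosses the threshold: True
```

Chain enough of these together according to the connectome and you have, in crude outline, the kind of simulation OpenWorm is building.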
On the other hand, you could say that a human is like a computer program that is just super complicated. Going down that train of thought, however, has significant ramifications. Imagine we made a computer simulation of you. Now imagine we ran experiments on that simulation: kicked you in the shin, made you write a test, or watched how you'd react in a breakup (replicating a real-life environment at that fidelity is, of course, not possible with today's computing power or any foreseeable computer). Basically, an OpenWorm Project for humans. Theoretically, if we put the simulation in the exact same environment under the exact same stimulus, it should react exactly the way you would. Now you could say, "well, if I knew that x would be the outcome, I would choose not to react that way," but knowing the outcome would itself change the environment, the stimulus, and the connectome weighting. If we updated the program to the new conditions, it would still react the same way as you. It's hard to accept that all your thoughts and decisions happen because that's the way your neurons are wired, but that's just kind of how it is.
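The determinism claim above can be stated as code: a simulated brain is a pure function of its wiring and its stimulus, so identical inputs give identical outputs on every run. Everything here is illustrative, standing in for an unimaginably larger simulation.

```python
def simulated_brain(weights, stimulus):
    # Stand-in for a full neural simulation: deterministic given its inputs.
    return [w * stimulus for w in weights]

weights = [0.2, 0.8, 1.5]  # fixed "connectome weighting"

run1 = simulated_brain(weights, stimulus=2.0)
run2 = simulated_brain(weights, stimulus=2.0)
print(run1 == run2)  # same wiring + same stimulus -> same reaction: True
```

Changing either argument (the "environment" or the "wiring") changes the output, which is exactly the point about knowing the outcome in advance: the knowledge itself is a new input.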
Again, this thought experiment has even more serious ramifications: crime and punishment. Technically (even though this is a practically useless argument), you can't really blame somebody for their actions; it would be like being angry at a computer because your code doesn't work. People's brains are wired a certain way by their environment, learning, and stimuli. However, it's useless to use this as an argument to get everyone out of jail, since you could also argue that it would be our responsibility to "fix" those connectomes: making people learn differently changes their neuron weighting, so that their brain/program produces a more desirable output for a given stimulus (such as not killing somebody over a parking space). So rehabilitation as a response to crime still makes sense.
However, one could argue that punishing criminals as revenge, such as long-term prison sentences or capital punishment, makes absolutely no sense in this framework. If a computer program doesn't work, people don't usually destroy the computer (or lock it in a vault forever); that would accomplish nothing: you lose the computer, and the program still doesn't work. On the other hand, you could also argue that because we don't yet have methods to "fix" every program that produces an undesirable output, especially since we haven't even been able to effectively treat many mental health disorders, throwing people in jail is our only option to keep other people safe and to deter others from committing similar crimes.
At the end of the day, science and engineering are meant to improve our understanding of our surroundings and ourselves in order to better our lives. Some may argue that science takes a cold, rational approach to problems, and that this can strip away the human experience of life, leaving it without ethics or emotion. However, as the OpenWorm Project demonstrates, that very same approach can sometimes make us even more compassionate and understanding than emotion itself.