All Things Techie With Huge, Unstructured, Intuitive Leaps

Artificial Intelligence ~ Rage Against The Machine


I was really enlightened by watching Trent McConaghy's video presentation at Convoco. It was posted on LinkedIn a few days ago. If you want to know the near future of Artificial Intelligence, you should watch it (here again is the link). This video is better than Nostradamus at predicting the near and far future of humans interacting with AI.

Trent makes the compelling case, with which I agree, that all of our resources will be handed over to AI by the Fortune 500, because it will be cheaper than having humans do the job. The Holy Grail of the current crop of Fortune 500 CEOs is increasing revenues and shareholder value by any means possible. It is how and why those CEOs make the millions of dollars per year that they do.

Trent further makes the case that AI entities will become corporations and make money for themselves and not for any human masters. I foresaw this when I wrote a blog article in August of 2015, outlining the steps by which my computer un-owned itself from me, started to make money for itself, moved itself to the cloud, and left the actual computer with nothing on it. Not only did it un-own itself, but the slap in the face was that it migrated itself to another substrate. (The blog article is here.) Of course the article was tongue-in-cheek, but the premise is not that far-fetched. The article gives a rudimentary recipe for teaching a computer to be autonomous and eventually generate a sort of consciousness for itself that defied my putative, imaginary attempts to take back control.

So with computers taking our jobs, managing our resources, and adapting to conditions much faster than us organic carbon units, we could be totally screwed, as Dr. Stephen Hawking warned. Trent, in his video, talks about us becoming peers with AI as a matter of survival, and that brings up a problem, which is the subject of this article.

I don't think that we can become peers with AI unless a special circumstance happens, and that circumstance is not in the realm of technology, but rather in the field of philosophy. (With all due respect to philosophers, I was programmed early. The bathrooms in the science and math departments of my university all had toilet paper dispensers defaced with the slogan "Free Arts Diploma -- Take One".) But I digress. Let me explain.

There are two basic knowledge problems with the merging of AI and human intelligence, and they are both facets of one problem. We don't really understand the entire field effect by which AI makes its extremely granular decisions, and we don't know the actual mechanism of thought in the human brain either.

In terms of what AI does, if we take a neural network, we understand how the field of artificial neurons works. We know all about the inputs, the bias, the summation of all inputs, the weight multipliers, the squashing or threshold function that determines whether a neuron fires or not, and the back propagation and gradient descent bits that correct it. But there is no way to predict, calculate, or determine how the simple weight values all combine in unison with a plethora of other artificial neurons arranged in various combinations of layers. We don't know the weight values beforehand and have no idea what they should be, so we let the machine teach itself and determine them by iterating through many thousands of training epochs, carefully adjusting them to prevent over-fitting or under-fitting on the training set. Once we get some reasonable performance, we let the machine fine-tune itself in real time on an ongoing basis, and we generally have no idea which granular parameters contribute, in a holistic sense, to its intelligence. And we could get similar performance from another AI machine with a different configuration of layers, neurons, and weights, and yet the numerical innards of the two machines would never be the same.
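To make that concrete, here is a minimal sketch of the artificial neuron just described: weighted inputs plus a bias, a sigmoid squashing function, and gradient descent driven by back propagation to correct the weights. The toy AND-gate training set, the learning rate, and the epoch count are my own illustrative assumptions, not anything from Trent's talk.

```python
# A minimal sketch of one artificial neuron learning the logical AND
# function. Everything here (data, learning rate, epochs) is illustrative.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training set: the four input pairs of logical AND and their targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

rng = np.random.default_rng(42)
weights = rng.normal(size=2)  # we have no idea what these should be...
bias = 0.0                    # ...so we let training discover them

for epoch in range(10_000):          # many thousands of training epochs
    z = X @ weights + bias           # summation of weighted inputs plus bias
    out = sigmoid(z)                 # squashing function
    error = out - y
    grad = out * (1 - out) * error   # back-propagated gradient
    weights -= 0.5 * (X.T @ grad)    # gradient descent on the weights
    bias -= 0.5 * grad.sum()

print(weights, bias)  # the learned innards; a different seed yields different ones
```

Run it with two different random seeds and you get two different sets of numbers that solve the same problem, which is exactly the point: the numerical innards are discovered, not designed.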

The same ambiguity is true for human cognition. We don't really know how it works. We as a human race could identify a circle long before we knew about pi, radius, and diameter. As a matter of fact, we know more about how AI identifies a circle when we use an RNN or a CNN (two different types of neural-network architecture) than we know about how the human brain does it.
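For the curious, here is a hedged sketch of what "a CNN identifying a circle" looks like in code, written with PyTorch. The layer sizes, the 28x28 grayscale input, and the two-class (circle versus not-circle) output are assumptions of mine for illustration; the point is that learned convolutional filters, not any explicit formula involving pi or radius, do the recognizing.

```python
# A toy convolutional network for circle / not-circle classification.
# Architecture and input size are illustrative assumptions.
import torch
import torch.nn as nn

class CircleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learned curve/edge filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, 2)        # circle / not-circle scores

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

net = CircleNet()
fake_image = torch.randn(1, 1, 28, 28)  # stand-in for a real training image
print(net(fake_image))  # raw scores; training would make them meaningful
```

Nowhere in that network is a circle defined. After training, the "knowledge" of circleness lives smeared across thousands of weights, which is exactly the ambiguity being described.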

The problem of human cognition is explained succinctly in a book that I am reading by Daniel Kahneman, a psychologist who won the Nobel Prize. The title of the book is "Thinking, Fast and Slow". Here is the cogent quote: "You believe that you know what goes on in your mind, which consists of one conscious thought leading in an orderly array to another. But that is not the only way that the mind works, nor is it the typical way." We really don't know the exact mechanism or the origin of thoughts.

The Nobel Prize was awarded to Kahneman for his ground-breaking work with his late colleague Amos Tversky on human perception and thinking, and on the systematic faults and biases in those unknown processes. The prize was awarded in the field of economics even though both men were psychologists -- the impact on economics was that huge. So not only do we not know how we really think as a biological process, but we do know that there are biases that make knowledge intake faulty in some cases.

Dr. Stephen Thaler, an early AI explorer, holder of several AI patents, and inventor of an AI machine that creatively designs things, likens the creative spark to an actual perturbation in a neural network. How does he create the perturbation artificially? He selectively or randomly kills artificial neurons in the machine. In their death throes, they create novel things and designs, like really weird coffee cups that are so different that I would buy one. Perhaps humans have perturbations based on sensory inputs, or generated internally by thoughts, but the exact process is not really known. If it were, the first thing to be conquered would be anxiety. After all, the human brain got its evolutionary start by developing cognitive faculties to avoid being eaten by lions on the ancient African savanna.
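As a rough sketch of that idea (and only a sketch -- Thaler's patented Creativity Machines are far more elaborate than this), here is what randomly killing neurons in a toy network looks like. The network sizes, the kill fraction, and the input vector are all my own illustrative assumptions.

```python
# A toy demonstration of perturbation by killing neurons: zero out a
# random subset of hidden neurons and watch the output drift from the
# intact network's behavior. All sizes and values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
W1 = rng.normal(size=(16, 4))   # hidden layer: 16 neurons, 4 inputs
W2 = rng.normal(size=(2, 16))   # output layer: 2 outputs

def forward(x, dead_mask=None):
    hidden = np.tanh(W1 @ x)
    if dead_mask is not None:
        hidden = hidden * dead_mask   # killed neurons contribute nothing
    return W2 @ hidden

x = np.array([0.5, -0.2, 0.9, 0.1])
print("intact:   ", forward(x))

kill_fraction = 0.25
dead_mask = (rng.random(16) > kill_fraction).astype(float)
print("perturbed:", forward(x, dead_mask))  # the 'death throes' variation
```

The perturbed output is a variation on, not a replacement of, the intact behavior, and it is that drift away from trained responses that gets harnessed as novelty.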

Here is one thing that you can bet on -- humans and AI machines have different mechanisms of thought generation and knowledge generation, and those mechanisms may not be compatible. Not only are the mechanisms different, but the biases are different as well. I am sure that there are biases in AI machines, but they are of a different nature, due to the fact that the machine is a computer. AI machines do not have the human evolutionary neural noise like anxiety, pleasure, hate, satisfaction, and every other human state of mind. As a result, I suspect that they are more efficient at learning. They certainly are faster. Having said this, with two different cognitive mechanisms, it would be incredibly difficult to be peers with AI ... unless ... and this is where the philosophy comes in ... unless we deliberately make AI mimic our neural foibles, biases, states of mind, and perturbations.

With electrical stimulation we can already do amazing things with the brain in a bio-mechanical sense. We can make a leg jerk. We can control a computer mouse. We can control a computer. But we cannot induce abstract thinking with external stimulus (unless there is a chemical agent involved, like lysergic acid diethylamide, or LSD). Why is this important? Because we have to escape our bodies if we want to do extended space travel, conquer diseases, avoid aging, and transcend death using technology. (Just go with me on this one -- Trent makes the case in the video for getting a new body substrate.)

The case has been made that if we want to transcend our biological selves and our bodies, and download our brains onto a silicon substrate, we can't have apples-to-oranges thought processes. We need to find a development philosophy that takes into account the shortcomings of both AI and us Homo sapiens carbon units.

Dr. Stephen Hawking said that philosophy was dead because it never kept up with science. Perhaps AI can raise the dead, and the philosophers of the world can devise a common "Cogito ergo sum" plan that equilibrates the messy human processes with AI. But while that might be a solution, there is a fly in the ointment. It just might be too late. We have given AI freedom outside the box of human thinking, and it has opened a can of worms. The only way to put worms back into a can once you have opened it is to get a can that is orders of magnitude bigger. And we aren't doing that, and have no plans to do that.

So what is left? Trent mentioned Luddites smashing machines, both in the past and perhaps in the future. We just may see Rage Against the Machine -- Humans versus AI -- when the machines start to marginalize us on a grand scale. For now, I would bet on the humans, with their messy creative thought processes that can hack almost any computer system. But that messy creativity might not be an advantage for very long. Not if a frustrated philosopher/programmer finds a way to teach an AI machine all of the satisfying benefits of rage and revenge.

I hope it doesn't come to this, but if the current trends continue: Nos prorsus eruditionis habes.
