I was surprised the other day to see a news blurb that a movie based on Raymond Kurzweil's 2005 book, The Singularity Is Near, was due for release this summer. I must assume the eponymous film will be loosely based on the book, because the book is a work of speculative philosophy–not fiction in the traditional sense.
The Singularity referred to is the supposed tipping point when technology no longer will require human beings in order to exist or advance. Artificial intelligences will evolve to the point where they can improve and perpetuate themselves continually. Hand in hand with the development of such über-artificial intelligence is a concurrent development of super technologies to support these new “beings.” The implications of such an event are profound and probably would challenge the very future of the human race.
Some speculate a result of the Singularity would be the ability somehow to “download” an existing human intelligence into a super or strong artificial intelligence and achieve a type of immortality. I would hope operating systems will have evolved significantly by the time of the Singularity; I don't know how many blue screens and core dumps my intellect would survive.
The Turing Test
Before we even can postulate something such as the Singularity, we need to determine whether artificial intelligence machines can ever truly approach intelligence in the sense of the self-aware intellect we, as humans, experience. It has been almost 60 years since Alan Turing published “Computing Machinery and Intelligence” (1950), the paper that is the basis of the so-called Turing Test of a machine's ability to demonstrate intelligence. A human interrogator or judge poses questions to two entities: a machine and a human. The questions are posed and answered in a natural language, and the “conversations” take place using text only, so there are no auditory or visual cues to influence the judge. If the interrogator cannot reliably determine which entity is human and which is machine, the machine is said to have passed the test. Machines have passed some versions of a Turing Test for almost 20 years.
So, are the machines that pass the Turing Test intelligent? Not in any real sense of the word. All a machine must do to pass a Turing Test is to imitate a human–that is, the machine must reliably answer text questions in a way a human would. Imitation is not intelligence; it may be flattery, but it is only imitation. Machines that pass Turing Tests operate with sets of rules so they can appear human–one machine even was programmed to make typing “mistakes” in order to appear more human. A machine operating with a set of rules does not make for intelligence. But this does raise a very interesting question: Just how does one determine whether another entity is intelligent?
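To see how thin that imitation can be, consider a toy sketch (my own, in Python, not any actual contest entry) of a rule-driven responder: a handful of canned patterns, a fallback when nothing matches, and even the occasional deliberate typo to seem more human.

```python
import random
import re

# A few canned pattern -> response rules, in the spirit of early chatbots.
# The rules and the typo trick are illustrative only; real contest entries
# are far more elaborate, but the principle is the same: lookup, not thought.
RULES = [
    (re.compile(r"\bhow are you\b", re.I), "Not bad, a little tired today. You?"),
    (re.compile(r"\byour name\b", re.I),   "People call me Sam. What's yours?"),
    (re.compile(r"\bweather\b", re.I),     "It has been gray all week here."),
]
FALLBACKS = ["Hmm, why do you ask?", "I'm not sure I follow.", "Tell me more."]

def add_typo(text, rate=0.05):
    """Occasionally swap adjacent letters so the output looks hastily typed."""
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and random.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def respond(question):
    for pattern, answer in RULES:
        if pattern.search(question):
            return add_typo(answer)
    return add_typo(random.choice(FALLBACKS))

if __name__ == "__main__":
    for q in ["How are you today?", "What's your name?", "Do you like Bach?"]:
        print(f"Judge: {q}")
        print(f"Machine: {respond(q)}")
```

Everything it “says” is a table lookup; nothing in it understands the question.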
How Do We Determine Intellect?
I certainly am aware of my own intellect–my own self-awareness, my own inner self–and recognize myself as a thinking being. The question remains, however: How can I determine that another entity is a thinking being? Unfortunately, there are no easy answers. I remember as a child being fascinated by that apparent conundrum: Was my conscious experience the same as everyone else's, or was it somehow unique? That led me to ponder even simpler questions, such as whether my perception of the color green was different from that experienced by other beings. Ultimately, these questions are unanswerable and unknowable. The only knowledge we can have of other intelligences is what we are able to observe, and we then make assumptions based on those observations. This is, after all, the root of empirical science–assumptions based on observations. We observe other beings, note that their actions are similar to our own, and thus assume they are intelligent beings.
In that context, the Turing Test has some validity. If all we can know of another entity is what we observe, and what we observe is identical to what we would expect of an intelligent being, then we can conclude the entity we are observing is intelligent. Right? I don't think so (no pun intended). Doing what an intelligent being can do may be a sign of intelligence, but it does not determine intelligence. The question really is whether machines can think, not whether machines can act as if they can think.
Deep Blue
There are many human activities that are quantifiable, and those activities can be performed by digital computers. Consider the game of chess. Deep Blue, a massively parallel computer capable of analyzing 200 million chess positions a second, eventually defeated Grandmaster Garry Kasparov. It was inevitable that a machine could, and would, win a game based on an immutable set of rules.
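Deep Blue's evaluation function and custom hardware are far beyond the scope of a column like this, but the core idea, exhaustive search over a fixed set of rules, fits in a few lines of Python. The sketch below is my own toy example: a minimax routine that plays perfect tic-tac-toe. Only the scale of the search, not the principle, separates it from examining 200 million chess positions a second.

```python
# A toy minimax search over tic-tac-toe: a complete, rule-driven game player.
# This is my own illustration of brute-force game search, not Deep Blue's code.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for the player to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = -2, None
    opponent = "O" if player == "X" else "X"
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)
        board[m] = " "
        score = -score  # the opponent's best outcome is our worst
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

if __name__ == "__main__":
    score, move = minimax([" "] * 9, "X")
    # With perfect play from both sides the game is a draw (score 0).
    print(f"Best opening move: square {move}, score: {score}")
```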
But once again, this is not a demonstration of intelligence. Game playing in the human sense is interesting because humans do not function as machines. Consider the “game of the century” that took place in 1956. Thirteen-year-old Bobby Fischer sacrificed his queen on move 17 only to force checkmate more than 20 moves later. That sacrifice actually was set up because his opponent made a minor mistake on move 11. That particular game would not have played out as it did if one of the players had been a machine. Machines–chess playing or otherwise–are not capable of using emotion (or error) to influence their decisions. The value of sacrificing a queen relies on the emotional reaction of your opponent.
Interestingly, Deep Blue was reprogrammed between matches with Kasparov. The machine had been defeated twice with the same trap before it was reprogrammed to avoid it. Kasparov himself believed that human intervention took place during one of the games–a charge the creators of Deep Blue have denied.
Dualism or Materialism
The underlying philosophical question to all this has to do with dualism and materialism and the relationship between mind and matter. The two schools of thought have been debated for centuries. Is the mind purely physical–a series of physical and chemical events that taken together constitute intelligence–or is there, beyond the obvious physical events that take place in a living being, some additional something that makes us the intelligent beings we are?
The supposed nonphysical aspect of the mind often is couched in religious terms, but it does not need to be. In fact, mixing it with religion probably has done more harm than good in terms of the advancement of science. Most science is based on the assumption that everything is knowable by human beings. That assumption is not only arrogant but self-destructive. Science certainly can help us understand the world as we know it and observe it, but that does not mean there is nothing we simply are unable to know.
Or, for that matter, nothing we are unable to observe. We can describe multidimensional universes using mathematics, but we are unable to “visualize” those multidimensional universes or even understand the implications of their possible existence. If we allow for the possibility that there are some things that are unknowable–at least at this time–then it is not unreasonable to postulate that physical phenomena alone may not totally describe what we know as intelligence. If we accept that there is a nonphysical component of mind or intelligence, then we also must accept the conclusion that artificial intelligences are not possible and the Singularity never will happen. So, what can we conclude if we reject dualism and accept that intelligence is just a complex physical process?
The Singularity
The occurrence of the Singularity depends on a number of postulates. The first is that computers will continue to evolve–in ways we do not yet even understand. The second is that the intelligent mind is knowable. The human brain contains on the order of 100 billion neurons and arguably is the least understood organ in the human body. The map of the physical brain itself is embedded in the human genome, but the mind itself is a constantly evolving thing we do not yet have the ability to describe.
Present-day computer science is founded on sets of rules based on on/off switches. Binary decision trees do not allow for distinctly human intelligent phenomena such as intuition or the “aha” moment. A computer certainly has the ability to create millions of contrapuntal lines of music, but there is no way for a digital computer to make the purely human judgment that some particular lines of music are pleasing. A computer can create something such as “The Art of the Fugue,” but it never can know that it is beautiful, that it is art.
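To make that concrete, here is a toy sketch of my own of rule-driven melody generation. The “rules” below are a drastic simplification, nothing like a real counterpoint engine, but they make the point: the program can churn out lines forever, and nothing in it can say whether any of them is pleasing.

```python
import random

# Generate a melodic line under a few mechanical constraints: stay in the
# C major scale, move mostly by step, end on the tonic. These rules are a
# toy simplification for illustration only.
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, C4..C5

def generate_line(length=8, seed=None):
    rng = random.Random(seed)
    line = [rng.choice(SCALE)]
    for _ in range(length - 2):
        i = SCALE.index(line[-1])
        # Prefer stepwise motion; allow an occasional leap of a third.
        step = rng.choice([-2, -1, -1, 1, 1, 2])
        line.append(SCALE[max(0, min(len(SCALE) - 1, i + step))])
    line.append(60)  # cadence on the tonic
    return line

if __name__ == "__main__":
    # Millions of such lines could be produced; judging them stays with us.
    for n in range(3):
        print(generate_line(seed=n))
```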
Digital computers must have rules by which they operate, and any system that works with predefined rules will not be truly intelligent, even if that system begins to change the rules by which it operates. The universe is not digital. There is not an on/off state for every physical event; quantum physics postulates many different states for atomic and subatomic particles. We use digital computers today because we do not yet know how to build other types of computers successfully. We digitize music and video because it is an easy and practical way to reproduce them, but the sound produced by a violin is not digital–it is analog. Digital computers have revolutionized our world for the better, but they are incapable of becoming intelligent agents, although they certainly can act like them.
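The point about digitizing sound can be made concrete with a short sketch of my own. It “samples” a pure 440 Hz tone at the CD-audio rate and rounds each sample to one of a fixed number of 16-bit levels; what gets stored is a list of integers that approximates the continuous waveform a violin string actually produces, never the waveform itself.

```python
import math

SAMPLE_RATE = 44100            # samples per second (CD audio)
BIT_DEPTH = 16                 # bits per sample (CD audio)
LEVELS = 2 ** (BIT_DEPTH - 1)  # quantization levels per polarity

def sample_tone(freq_hz=440.0, duration_s=0.001):
    """Digitize a pure tone: discrete instants, each rounded to an integer level."""
    n_samples = int(SAMPLE_RATE * duration_s)
    digitized = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE                            # discrete point in time
        analog = math.sin(2 * math.pi * freq_hz * t)   # the "continuous" value
        digital = round(analog * (LEVELS - 1))         # discrete amplitude level
        digitized.append(digital)
    return digitized

if __name__ == "__main__":
    samples = sample_tone()
    print(len(samples), "integer samples stand in for one millisecond of tone")
    print(samples[:10])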
Moore's Law on Steroids
There is another assumption or postulate that accompanies present-day thinking about the Singularity, and that is known as the law of accelerating returns. Like Moore's Law, it states that, based on what we have observed to date, technology will advance at an exponentially increasing rate. One of the predictions based on the law of accelerating returns puts the year 2020 as the time when personal computers will have the same processing power as the human brain. By the 2040s, we will be able to move our minds into totally cybernetic bodies. You will note the Singularity depends not just on computers evolving at a hyperkinetic rate but on all technologies doing so.
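The arithmetic behind such predictions is ordinary compounding, and it is worth seeing how sensitive it is to the starting assumptions. The figures in the sketch below (a PC at 10^11 operations per second, a brain estimated at 10^16, a doubling time of 18 months) are illustrative assumptions of mine, not measurements; change any one of them and the predicted date moves by years.

```python
import math

def years_to_close_gap(start_ops, target_ops, doubling_time_years):
    """How long a fixed doubling period takes to close a performance gap."""
    doublings_needed = math.log2(target_ops / start_ops)
    return doublings_needed * doubling_time_years

if __name__ == "__main__":
    # Assumed figures, for illustration only: PC at 1e11 ops/s, brain at 1e16,
    # performance doubling every 1.5 years.
    years = years_to_close_gap(1e11, 1e16, 1.5)
    print(f"About {years:.0f} years of steady doubling to close a 100,000x gap")
```

Which is precisely why the predicted dates shift so much from one set of estimates to the next.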
So What?
What are the real implications of something such as the Singularity? If we really could develop an ultra-intelligent, self-aware machine that surpassed the intellectual capabilities of human beings and that also could perpetuate itself, what would the consequences be for the human race? I suspect they would not be favorable. The only reason we still manage to exist as a race on this planet is that we have more or less adopted ethical rules that have kept us, thus far, from mutual destruction. I doubt an ultra-intelligent machine would have much use for ethics. In fact, if it really were all that smart, it probably would look at the way we have virtually destroyed the planet and annihilate the race. I know I probably would. Or it might decide just to keep us around to run the power plants and reboot them every so often.