It has been about 40 years since the first microprocessors appeared. Before that, computers were assembled from integrated circuits; before that, from discrete transistors; and reaching even further back in time, vacuum tubes were the heart of the processor.
The conceptual architecture of computers has not changed in that time. Combine electronic switches to create a NAND gate, connect a few NAND gates to allow Boolean processing, and you have the building blocks for a computer. String a lot of these things together and you can create anything from a smart thermostat to a supercomputer.
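To make that claim concrete, here is a minimal sketch in Python (the function names are my own, chosen purely for illustration) showing the other basic Boolean gates built from nothing but NAND:

```python
# A minimal sketch: the basic Boolean gates built from nothing but NAND.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)               # NAND of a signal with itself inverts it

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))         # invert the NAND output

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))   # De Morgan: a OR b = NOT(NOT a AND NOT b)

# Quick truth-table check that the derived gates behave as expected.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
```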
Microprocessors provide us with the ability to create devices with large numbers of switches (processing power) in a small space—as Moore's Law has made us well aware. The ubiquity of devices with microprocessors has muddied the distinction between computers and personal electronic devices.
Anthropomorphic Machines?
At its most basic, a computer is a machine with some sort of central processor capable of performing binary operations. It also has interfaces that allow external "things" to interact with that CPU, and it has a way of storing the data the CPU manipulates.
Those of us who live in developed nations are constantly surrounded by, and continually interacting with, computing devices. The terminology we use to describe them implies that they have human-like qualities. We use smart phones. We talk about intelligent systems. We want computers to have human qualities.
The computer HAL in 2001: A Space Odyssey exhibited human-like emotions. When it was threatened with being shut down because of aberrant behavior, it reacted by attempting to eliminate its enemies. Attributing emotions to a machine makes for great entertainment, but it doesn't mirror reality. Likewise, attributing intelligence to a machine doesn't mirror reality. Computers are capable of performing amazing things, but they are not intelligent—in the sense that human beings are intelligent.
Turing Test
In 1950 Alan Turing introduced his well-known eponymous test in his paper "Computing Machinery and Intelligence." Contrary to the commonly held view, the Turing test is not a test for machine intelligence. It is a test to determine whether a machine can imitate human behavior and, by implication, human intelligence. Turing posed the question, "Can machines think?" and then proposed the test, in which a judge attempts to determine which of two entities is a machine and which is human by asking a series of questions. The implication is that if one cannot distinguish the machine from the human, the machine is "thinking."
The Turing test is interesting. Annual competitions like the Loebner Prize demonstrate the continually increasing sophistication of computers. But the Turing test does not prove (or, for that matter, disprove) that a machine can think. What it can prove is that a particular machine may be able to imitate human behavior. Imitating a thing does not make an entity become that thing—unless you are an existentialist.
Cogito Ergo Sum
Thinking is a uniquely human process. The term intelligence was originally used to describe attributes of the human mind—as distinguished from inanimate objects and lower life forms. As humans we tend to attribute intelligence to other mammals. We say that dolphins are smart and cows are not. It is interesting that those animals we eat—fish, cows, pigs, sheep—we characterize as less intelligent beings. "Smart" animals like dogs, cats, dolphins, monkeys, apes, and so on are generally left out of our food chain—generally but not universally. We are horrified by cultures that consume these intelligent animals.
I don't intend to begin a debate over the relative intelligence of various species. I suspect that most behavior we classify as intelligent is just a combination of instinctual and learned behavior coupled with a bit of wishful thinking. I love my dogs, and even though they may appear to be more intelligent than my three-month-old grandson, I suspect that he is the smarter beast.
All species that we attribute intelligence to have some form of brain—a collection of neural cells that controls body functions and serves as the center of thought. Although our knowledge of how the brain "works" is limited, we do know that it consists of many (up to 200 billion) neurons. Each neuron has the ability to interact with up to 100,000 other neurons through connections known as synapses, or synaptic junctions. Each synapse has a number of distinguishable levels of interaction. And all of that adds up to a very large number (trillions) of possible states.
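As a rough, purely illustrative back-of-the-envelope calculation, using nothing beyond the upper-bound figures above:

```python
# Back-of-the-envelope arithmetic using the upper-bound figures cited above.
neurons = 200e9               # up to 200 billion neurons
synapses_per_neuron = 100e3   # each interacting with up to 100,000 others

total_connections = neurons * synapses_per_neuron
print(f"{total_connections:.0e} potential connections")   # about 2e16

# Even if every connection could take only two levels, the number of distinct
# configurations would be 2 raised to that 2e16 -- a state space far beyond
# anything a register file can hold, which is the point being made here.
```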
Brains and CPUs
Since we are so prone to anthropomorphism, we tend to liken a central processing unit to a mammalian brain. The parallels are there: neurons can be thought of as something like registers or RAM, and synapses as the connections between those registers. Like the brain, a CPU has a large number of states. So, thinking anthropomorphically, we make the leap to assuming that a computer is capable of thought or intelligence.
Disregarding other human beliefs—that we possess a soul or something that transcends the actual physical and chemical reactions that take place in a brain—there remains a major difference between the inner workings of a brain and a digital computer. Digital computers operate entirely on binary bits of data. Every state that a computer is capable of creating or acting on must be reducible to 0s and 1s.
Every piece of information is nothing but a collection of on and off states. Contrast that with a synapse that has 200-plus distinguishable types of interaction, and those 200 billion neurons each connected to thousands of others. Information is stored in the brain at the molecular level. There is a seemingly infinite number of states for a bit of data stored in the brain. It is not just "on" or "off." The brain plays tricks with data. It may be accessible today, then disappear, only to reappear years later. If my mainframe did that, I would be getting a new mainframe. With a brain, that is just SOP.
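A trivial snippet makes the reduction to on and off states concrete; the text and number used here are my own examples, chosen only for illustration:

```python
# Everything a digital computer stores ultimately reduces to on/off bits.
text = "HAL"
bits = " ".join(f"{byte:08b}" for byte in text.encode("ascii"))
print(bits)           # 01001000 01000001 01001100 -- three letters, 24 bits

number = 2001
print(f"{number:b}")  # 11111010001 -- the same story for numbers
```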
Rules To Live By
Computers are deterministic. They operate within the boundaries of a set of rules that are defined as a series of Boolean states. There is no rule set for a brain. The condition that exists within a brain is a constantly changing and evolving construct built with chemical, biological, and physical components. It may be that at the atomic and sub-atomic levels there exist deterministic rules governing behavior, but we have no knowledge that this is the case.
Quantum physics is a theory that attempts to create an understandable (that is, understandable by humans) framework around the observed or implied behavior of elementary particles. We construct such theories in the hope that we can ultimately understand the universe, but there is no certainty that we are even capable of understanding how things work, and even less certainty that a general theory of everything would be deterministic.
As it now stands, quantum theory is based on probability, not certainty. That is a convoluted way of saying the brain—human or bovine—doesn't operate on a defined set of binary rules. It is the essential unpredictability of the thought process that makes it so unique, just as it is the essential predictability of digital computer systems that makes them such a valuable tool.
Predictability Is Good
When Apollo 13 was returning to earth after the explosion in the service module, one of the major concerns was the inability to restart the guidance computer. That forced the crew to make trajectory calculations by hand and fly using a manual dead-reckoning approach—basically, point the capsule toward earth and fire a rocket.
That unavailable guidance system was a 16-bit integrated circuit system with 2048 words of core memory and 36K words of ROM. My phone has more computing power. Yet as primitive as it was, it was the preferred option. If there had been sufficient power, that guidance system would have brought the capsule home. We need to accept computing systems for what they are and not try to humanize them. Computers do not think—they act in a defined manner on the data available to them.
Big Blue
IBM has always delighted in creating computing systems that are able to defeat a human opponent in some intellectual game. Deep Blue was able to beat chess grandmaster Garry Kasparov back in the 1990s. No surprise there. At any given point a chess game has a large but finite number of possible moves. A grandmaster may be able to project 20 moves ahead, but those moves are largely based on previous games and strategies that they have studied and memorized. They are not the result of analyzing every possible set of moves.
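Deep Blue's actual search was far more sophisticated than anything that fits here, but the brute-force flavor of machine chess can be suggested with a toy calculation. The branching factor of roughly 35 used below is a commonly quoted estimate, not a figure from this column:

```python
# Toy illustration of how quickly the chess game tree explodes when every
# possible move is considered, assuming roughly 35 legal moves per position.
BRANCHING_FACTOR = 35

for plies in (2, 4, 6, 10, 20):
    positions = BRANCHING_FACTOR ** plies
    print(f"{plies:2d} half-moves ahead: ~{positions:.2e} positions")

# A machine prunes and evaluates a slice of this tree very quickly; a
# grandmaster leans on memory and pattern recognition instead.
```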
Then there is Watson—the Jeopardy-playing supercomputer. Of course, it isn't a single computer; it is racks of servers running in parallel, generating 80 teraflops of processing power (80 trillion operations a second) and drawing on 15 terabytes of data, the equivalent of a million books. Once again, a very impressive display of computing power.
It took four years and tons of resources to get it right. And right it was: it easily defeated the reigning human Jeopardy champions.
One-Trick Ponies
Deep Blue and Watson were special-purpose systems, built to perform specific functions, and they are good at those specific functions. A human mind is not a single-purpose mechanism. It can play chess as well as Jeopardy. It can do the calculations necessary to hit a 100 MPH fastball and a major league curveball. It can effectively argue either side of a debate. It can understand the elegance and complexity of a Bach fugue as well as delight in a Keith Richards guitar riff. I understand that the technology put together for Watson is being repurposed for medical research. I applaud that, but we are just creating another special-purpose system.
Grids
Computer grids are efficient ways of supporting multiple processes running across multiple machines. They are not general-purpose systems though; they are general-purpose platforms. They still require sets of instructions (programs) to tell them what to do with all those resources.
The human brain is ready to go. It comes with its own massively parallel (and serial and random) infrastructure and its own operating system. It doesn't require an external intelligence to generate an instruction set. It is perfectly capable of generating its own.
The printing press put thousands of monks, who had copied manuscripts by hand, out of work. That doesn't make the printing press monk-like, nor does it even imply that the resulting product is better—just that the machine and the process are different.
Different is good. It took a massively parallel computing system to discover a prime number with some 12 million digits, but it took a single man to prove Fermat's last theorem. That is the difference between number crunching and thinking, between computing machines and the human mind. One is efficient; the other is elegant.
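For the curious, the record primes referred to here are Mersenne primes, and the number crunching behind them is essentially the Lucas-Lehmer test. A toy version for small exponents is sketched below; the real searches use heavily optimized large-integer arithmetic, and this snippet is purely illustrative:

```python
# Lucas-Lehmer test: 2**p - 1 is prime exactly when s(p-2) == 0 modulo 2**p - 1,
# where s(0) = 4 and s(i) = s(i-1)**2 - 2. This is the kind of pure number
# crunching that record-prime hunts run at enormous scale.
def lucas_lehmer(p: int) -> bool:
    if p == 2:
        return True               # 2**2 - 1 = 3 is prime; handled separately
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents below 30 whose Mersenne numbers are prime: [2, 3, 5, 7, 13, 17, 19]
print([p for p in range(2, 30) if lucas_lehmer(p)])
```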
Please address comments, complaints, and suggestions to the author at [email protected].