The dawn of artificial intelligence has led to much consternation about whether this is a good or bad development. But, in order to better understand the technology and the implications, we must first define what we mean by intelligence. What does it mean to be intelligent? What makes us intelligent beings?
The definition of intelligence, provided by Google, is “the ability to acquire and apply knowledge and skills.” But this seems a little vague and inadequate. Robots used in manufacturing already apply knowledge and skills—things programmed into them by human operators. And “ability to acquire” isn’t too clear either. So we need to break this down further.
Intelligence is the ability to usefully process information.
That ability has several components.
First, there is the ability to interface with a broader space or an external environment. If there is no information to take in, there is nothing to intelligently process. Our senses are what connect us to the physical world and part of how we navigate through life. An internal model of the outside domain starts with information gathering, or interface.
Second, intelligence requires memory, the capacity to remember past successes and failures. Much of what counts as human intelligence is a collection of procedures and formulas we obtained from others through language. This rote learning isn’t actually intelligence—memory and knowledge alone aren’t intelligence—but it is definitely part of the foundation. Memory is one component of IQ we can exercise and expand.
Third, intelligence is an ability to recognize patterns, to accurately extrapolate beyond the data and draw the correct conclusions. The reality is that our ‘intelligence’ is mostly a process of trial and error, often spanning generations, which leads to advancement in technology and thought. All one needs to do is observe how major inventions came to be, and the many flops along the way, to realize we’re more like blind rats running through a maze, bumping into walls until we find an opening to pass through.
Fourth, intelligence is an ability to create good models and recombine existing ideas. Nikola Tesla was a genius, and not only because of his knowledge. No, he could use what he knew to construct an apparatus in his brain, which he could then build in the real world. What set Tesla apart is that his imagination wasn’t fanciful. Indeed, anyone can proclaim that “there should be,” but it takes something else entirely to accurately extrapolate.
Finally, intelligence has an aim. Truly, a pile of knowledge is worth less than a pile of manure if it can’t be usefully applied. And if something is useful, that is to say there is an underlying meaning or purpose. To be intelligent there must be some agency or will to drive it. Curiosity is one of the things that sets us apart; it moves us forward. Questions like “what is beyond that mountain?” or “how high does the sky go?” push innovation.
Intelligence is knowledge and abilities that are useful to something. Useful to us. And that really becomes a question about our own consciousness.
Intelligence Failure
“Primitive life is relatively common, but intelligent life is very rare. Some say it has yet to appear on planet Earth.”
Stephen Hawking
Another way to define intelligence is to explore what it is not. Encyclopedias hold knowledge, stored in human language, but a book on a shelf has no logos. It is the writer and reader who provide reason to the words, via their own interpretation or intended use, which is something that can’t be contained in ink on the page. There are many people who are full of knowledge, but it is largely trivial because they lack the ability to put it to good use.
Another problem is perception. Even our physical eyes provide a very selective and distorted view of the world. We do not see everything and, in fact, can literally miss the gorilla in the room if our focus is occupied elsewhere. Many can’t comprehend their own limitations; they are guided through the evidence by confirmation bias rather than good analysis. We really can connect the dots any which way, see patterns in what is truly random noise, and errant perception is difficult to correct once entrenched.
Intelligence must be about knowledge and theories that can be usefully applied. The intuitions that help us navigate mundane tasks do not necessarily help us draw correct conclusions in more abstract areas. People can persist in being wrong in matters that can’t be readily tested and falsified. Any processor is only as good as the data that is entered and the depth of the interpretative matrix through which it is sifted and measured. Even a slight error in one of the pillars of a thought, no matter how good the rest of the material is, can lead to an entirely failed structure.
Thoughts are structures only as good as their base assumptions.
Slowness is also a synonym for a lack of intelligence. That is to say, in order to be useful, information must be processed in a timely manner. Missing context and cues also leads to poor understanding, like Drax protesting the claim that metaphors “go over his head” with, “Nothing goes over my head. My reflexes are too fast. I would catch it.” It does not matter how much information you process if the conclusions are inaccurate or arrive too late for the circumstances. Conversely, wittiness and a good sense of humor, which depend on fast and contextual processing, are signs that a person is intelligent.
Intelligence is a continuum. We can have more or less of it. But measures like IQ don’t really mean that much; a person with a high IQ isn’t necessarily smart or wise. A Mensa membership doesn’t mean you’ll make good decisions or be free of crackpot ideas. Sure, it will probably help a person navigate academia and be more verbose in arguments, but it is not going to free someone of bias, nor does it mean they’re rational. This is why true intelligence needs to be about useful application.
Deus Ex Machina
Deus ex machina, literally “god from the machine,” refers to a plot device in which something arrives unexpectedly to solve a seemingly unsolvable problem and allow the story to proceed.
Ex Machina is also the title of a great movie which explores questions about artificial intelligence, with an android named Ava, her creator Nathan, and a software engineer named Caleb. Caleb, selected by Nathan, is there to perform a Turing test and is eventually manipulated by Ava, who uses his feelings for her as a means to escape. It is a sobering story about human vulnerability and the limits of our intelligence—Caleb’s human compassion (along with his sexual preferences) is exploited.
Ava
However, this kind of artificial intelligence does not exist. Yes, various chat bots are able to mimic human conversation. But this is not Ava talking to Caleb. There is no real self-awareness or observer behind the lines of code. It is, rather, a program that follows rules. Sure, it may be sophisticated enough to fool many people. But it is not a sentient being with agency; it is augmented human intelligence. They have essentially created a mannequin, not a man. Despite these bots being able to manufacture statements which sound like intelligence, they lack the capacity for consciousness.
A true Ava would require more than the mere ability to interact convincingly with humans; it would take the “ghost inside the machine,” that is to say, something that duplicates our own singular experience of the present moment or has a mind’s I. This level of artificial intelligence doesn’t seem possible until we crack the code of our own self-awareness, and that is a mystery yet to be solved. Even if you do not believe in things like immaterial spirit or detached soul, there is likely some special quality to the structure of our brains which creates this synthesis.
Without some kind of quantum leap, this A.I. technology will be an amplifier of the values of its creators, an intelligence built in their image and to serve them. It will not uncover objective truth or be a perfect moral arbiter. Nor will it be our undoing as a species. It will be a reflection of us and our own aims. It has no reason for its being apart from us. No consciousness, survival instinct or true being besides that of those utilizing it to extend their own.
There are many alt-right types who use IQ statistics to distinguish between groups of people, and yet they themselves do not seem to grasp statistics or even understand what IQ actually measures. In this they suggest their own lack of intelligence. And, given that their use of IQ is most often directed at those whom they deem to be inferior races and is what makes them feel superior, this is deliciously ironic.
Yes, certainly IQ does matter. But it matters in the same way that hitting a golf ball or bench pressing does as a measure of overall athleticism. Sure, it differentiates natural ability between those with equal training, and yet it says very little about the inborn abilities of those coming from vastly different circumstances. In other words, I can out-bench many bigger men who never saw a gym. But not because they couldn’t outperform me if they put the same time in. And, likewise, the kind of intelligence that IQ tests measure is built on practice.
So, basically, without a multivariate analysis, the results of IQ tests tell us very little. A person can score high because they are genetically gifted. They could score high because they had a stable home, good nutrition, and high-quality education. And, like Koreans getting taller on average, a lower average IQ today does not mean the same will be true tomorrow or if all circumstances were equal. In fact, IQ scores have been rising generation by generation (this is called the “Flynn effect”) and not necessarily because people are actually getting smarter than their grandparents.
No, IQ tests tend to focus on a kind of abstract reasoning that has no practical application for prior generations or those who are raised outside of an advanced economic system. My ability to reason through engineering problems may unlock earning potential in a very controlled environment and yet doesn’t mean I would survive a day in the Amazon basin or on the streets of Rio. So this assumption that my test scores prove something about my superiority is basically nonsense.
Sure, not everyone has the mental capacity to solve differential equations. But that doesn’t mean everyone who couldn’t solve them prior to Isaac Newton and Gottfried Leibniz was an idiot.
The really crazy thing about racial supremacist mid-wits (or at least those I know of European ancestry) is that they will so often make fun of the pointy-headed intellectuals (those who outscore them in IQ while lacking street smarts) only to turn around and use IQ statistics to create a racial pecking order. I mean, if IQ is a reason for some to rule, why do these same people turn to wild conspiracy theories to explain why many Ashkenazi Jews are disproportionately successful (academically) and in positions of power or influence? Why not just assume they are the next stage of human evolution?
The truth is culture and environment have a large part to play in our development. What is prioritized in homes and communities can make a huge difference in outcomes. If my dad was an attorney and I was sent to a prep school, I would probably be more likely to score higher and go further in pursuit of a professional career. Alternatively, if I was raised in a place where everyone was obsessed with track speed and achieving celebrity status, I doubt I would’ve grown up playing with Legos or visiting various museums with my parents. My own 97th percentile IQ was likely built on experience as much as anything else.
Lastly, it is worth noting that outliers do not tell us a whole lot. Interestingly enough, men are both smarter and dumber than women, and this has to do with statistical distribution, or how the bell curve works: two groups can share the same average while one shows more diversity within the category, and therefore more outliers at both extremes. Or, put otherwise, some Kenyans being excellent long-distance runners doesn’t mean all are, and this superiority of some Kenyans will tell us even less about those on the other end of the African continent. Too often we look at the cream of the crop (or bad actors) as an indication of the whole, and yet group statistics never tell us about individuals.
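To make the bell-curve point concrete, here is a minimal sketch in Python, using invented numbers rather than real test data: two groups with identical averages but different spreads land very different numbers of members at the extremes.

```python
import random

# Toy illustration with invented numbers (not real test data): two
# groups share the SAME average, but one is more spread out.
random.seed(42)
wider    = [random.gauss(100, 15) for _ in range(100_000)]
narrower = [random.gauss(100, 12) for _ in range(100_000)]

def tails(scores, low=70, high=130):
    """Count scores in the far-low and far-high tails."""
    return sum(s < low for s in scores), sum(s > high for s in scores)

print("wider spread   (low, high):", tails(wider))
print("narrower spread (low, high):", tails(narrower))
# The wider-spread group lands more members at BOTH extremes,
# even though the two averages are identical.
```

The averages are the same; only the spread differs, and the tails are where all the headlines about outliers come from.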
Air travel has become safer than ever, due in large part to the increase in automated systems in the cockpit. However, with this advanced technology comes a downside: an otherwise perfectly functional aircraft (i.e., mechanically sound) with competent operators can be lost because of a small electronic glitch somewhere in the system.
This issue was discussed at length in response to the crash of Air France flight 447, an Airbus A330, in 2009, when an issue with an airspeed indicator and automated systems led to pilot confusion—which, in the end, resulted in a plunge into the ocean and the loss of all 228 people on board. The pilots were ultimately responsible for not responding in the correct way (they were in a stall and needed to push the nose down to recover lift), and yet the reason for their failure is as complex as the automated systems that were there to help them manage the cockpit.
One of the more common questions asked in cockpits today is “What’s it doing now?” Robert’s “We don’t understand anything!” was an extreme version of the same. Sarter said, “We now have this systemic problem with complexity, and it does not involve just one manufacturer. I could easily list 10 or more incidents from either manufacturer where the problem was related to automation and confusion. Complexity means you have a large number of subcomponents and they interact in sometimes unexpected ways. Pilots don’t know, because they haven’t experienced the fringe conditions that are built into the system. I was once in a room with five engineers who had been involved in building a particular airplane, and I started asking, ‘Well, how does this or that work?’ And they could not agree on the answers. So I was thinking, If these five engineers cannot agree, the poor pilot, if he ever encounters that particular situation . . . well, good luck.” (“Should Airplanes Be Flying Themselves?,” The Human Factor)
More recently this problem of complexity has come back into focus after a pair of disasters involving Boeing 737 MAX 8 and 9 aircraft. Initial reports have suggested that an automated system on the aircraft malfunctioned—pushing the nose down at low altitude on take-off as if responding to a stall—with catastrophic consequences.
It could very well be something as simple as one sensor going haywire. It could very well be that everything else on the aircraft was functioning properly except this one small part. If that is the case, it is certainly not something that should bring down an aircraft, and it would not have in years past when there was an actual direct mechanical linkage between pilot and control surfaces. But now, since automated systems can override pilot inputs and take away some of the intuitive ‘feel’ of things in a cockpit, the possibility is very real that the pilots simply did not have enough time to sift through the possibilities of what was going wrong, switch to a manual mode, and prevent disaster.
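To illustrate the general point only (this toy sketch makes no claim to represent Boeing’s actual flight-control logic, and the threshold values are invented), compare automation that trusts a single sensor with logic that cross-checks redundant sensors and hands control back to the crew when they disagree:

```python
# Toy sketch only: this is NOT Boeing's actual flight-control logic,
# and the threshold values are invented for illustration.

STALL_AOA = 15.0  # hypothetical angle-of-attack limit, in degrees

def trim_single_sensor(aoa: float) -> str:
    """Fragile design: act on one sensor with no sanity check."""
    return "NOSE_DOWN" if aoa > STALL_AOA else "NEUTRAL"

def trim_redundant(aoa_left: float, aoa_right: float,
                   max_disagreement: float = 5.0) -> str:
    """Safer design: if redundant sensors disagree, trust neither
    and hand control back to the crew instead of acting."""
    if abs(aoa_left - aoa_right) > max_disagreement:
        return "DISENGAGE_AND_ALERT_CREW"
    return "NOSE_DOWN" if (aoa_left + aoa_right) / 2 > STALL_AOA else "NEUTRAL"

# One vane goes haywire during a normal climb (true AoA around 5 degrees):
print(trim_single_sensor(45.0))   # NOSE_DOWN, acting on bad data
print(trim_redundant(45.0, 5.0))  # DISENGAGE_AND_ALERT_CREW
```

The point is not these invented numbers but the design choice: automation that can act on one bad input, and override the pilots while doing so, turns a single cheap part into a single point of failure.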
The FAA, following the lead of China and the Europeans, has decided to ground the entire fleet of Boeing 737 MAX 8 and 9 aircraft pending the results of the investigations. This move on the part of regulators will probably be a big inconvenience for air travelers. Nevertheless, after two incidents, and hundreds dead, it is better to take the precaution and get to the bottom of the issue.
President Trump’s off-the-cuff Twitter response, basically stating “the complexity creates danger,” was met with the usual ridicule from those who hate the man and apparently do not understand hyperbole. (It is ironic that some, who likely see themselves as sophisticated, have yet to see through Trump’s putting-it-in-simple-layman’s-terms shtick.) However, technically incorrect is not the same as totally wrong, and there is absolutely nothing ridiculous about the general point being made—there are unique (and unforeseeable) problems that come with complex systems.
The “keep it simple, stupid” mantra (aka: KISS principle) is not without merit in an age where our technology is advancing beyond our ability to control it. If a minor glitch in a system can lead to a major disaster, that is dangerous complexity and a real problem that needs to be addressed. Furthermore, if something as simple as flight can be made incomprehensible, even for a trained professional crew, then imagine the risk when a system is too complicated for humans alone to operate—say, for example, a nuclear power plant?
Systems too complex for humans to operate?
On the topic of dangerous complexity, I’m reminded of the meltdown of reactor two at Three Mile Island and the series of small human errors leading up to the big event. A few men, who held the fate of a wide swath of central Pennsylvania in their hands, made a few blunders in diagnosing the issue with serious consequences.
Human operators aren’t even able to comprehend the enormous (and awful) potential of their errors in such circumstances—they cannot fear to a magnitude proportional to the possible fallout of their actions—let alone respond correctly to the cascade of blaring alarms when things do start to go south:
Perrow concluded that the failure at Three Mile Island was a consequence of the system’s immense complexity. Such modern high-risk systems, he realized, were prone to failures however well they were managed. It was inevitable that they would eventually suffer what he termed a ‘normal accident’. Therefore, he suggested, we might do better to contemplate a radical redesign, or if that was not possible, to abandon such technology entirely. (“In retrospect: Normal accidents“. Nature.)
The system accident (also called the “normal” accident by Yale sociologist Charles Perrow, who wrote a book on the topic) is when a series of minor things go wrong together, or combine in an unexpected way, and eventually lead to a cataclysmic failure. This “unanticipated interaction of multiple factors” is what happened at Three Mile Island. It is called ‘normal’ because people, put in these immensely complex situations, revert to their normal routines and (like a pilot who has the nose of his aircraft inexplicably pitch down on a routine take-off) lose (or simply lack) the “narrative thread” necessary to properly respond to an emerging crisis.
Such was the case at Three Mile Island. It was not gross misconduct on the part of one person, nor a terrible flaw in the design of the reactor itself; rather, it was a series of minor issues that led to operator confusion and a number of small mistakes that soon snowballed into something gravely serious. The accident was a result of the complexity of the system and our difficulty predicting how various factors can interact in ways that lead to failure, and it is something we can expect as systems become more and more complex.
And increased automation does not eliminate this problem. No, quite the opposite: it compounds the problem by adding another layer of management that clouds our ability to understand what is going on before it is too late. In other words, with automation, not only do you have the possibility of mechanical failure and human error, but you also have the potential for the automation itself failing, and failing in a way that leaves the human operators too perplexed to sort through the mess of layered systems and unable to respond in time. As the list of interactions between various systems grows, so does the risk of a complex failure.
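A back-of-the-envelope calculation suggests why. Even counting only the pairwise interactions between components, the number of ways things can combine grows quadratically with the size of the system:

```python
from math import comb

# Counting only pairwise interactions between components; the
# higher-order combinations Perrow worried about grow even faster.
for n in (5, 10, 50, 100, 500):
    print(f"{n:>4} components -> {comb(n, 2):>6} possible pairwise interactions")
```

Ten components give 45 pairs; five hundred give 124,750. No test program, and no operator’s mental model, can cover every combination.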
[As a footnote, nuclear energy is cleaner, safer and far more reliable than wind and solar farms. And, in the same way that it is safer to fly than to drive, despite perceptions to the contrary, the dangers of nuclear are simply more obvious to the casual observer than the alternatives. So, again, with the fierce opposition to nuclear power by those who are unwittingly promoting less effective and more dangerous solutions, the human capacity to make good decisions when faced with the ambiguous problems created by the interaction of various complex systems does certainly come into question.]
Has modern life become dangerously complex?
There is no question that technological advancement has greatly benefited this generation in many ways, and few would really be willing to give up modern convenience. That said, this change has not come without a cost. I had to think about that reality over the past few weeks while doing a major overhaul of how we manage information at the office and considering how quickly years of work could vanish into thin air. Yes, I suppose that paper files, like the Library of Alexandria that burned, are always susceptible to flames or other destructive forces of nature. But at least fire (unlike the infamous “blue screen of death”) is a somewhat predictable phenomenon.
Does anyone know why the Bluetooth in my car syncs up sometimes and not always?
Or why plugging my Android phone into the charger causes my calls in Facebook Messenger to hiccup (i.e., disconnect and reconnect multiple times) sometimes but not always?
I’m sure there is a reason hidden somewhere in the code, a failed interaction between several components in the system, but it would take an expert to get to the bottom of the issue. That’s quite a bit different from the times when the problem was the rain and the solution was cutting down a few trees to create a shelter. It was also true in the early days of machines—a somewhat mechanically inclined person could maintain and repair their own automobile. However, the complicating factor of modern electronics has put this do-it-yourself option out of reach for all but the most dedicated mechanics.
Life for this generation has also become exponentially more complex than it was for prior generations, when travel was as fast as your horse and you were watching your crops grow rather than checking your Facebook feed every other minute. It is very easy, as individuals, to be overwhelmed by information overload. The common man is increasingly in over his head in dealing with the technological onslaught. We have become increasingly dependent on technology that we cannot understand ourselves and that fails spontaneously, without warning, at seemingly the most inopportune times.
Advanced modern technology represents a paradigm shift as much as the invention of the automobile was a revolution for personal transportation. We have gone from analog to digital—a change that has opened a whole new realm of possibilities but also comes with a new set of vulnerabilities that go beyond the occasional annoyance of a computer crash. We really have no idea how the complexity of the current system will fare against the next Carrington Event (a solar storm that caused widespread damage and disruptions to the electric grid in 1859—a time of very basic and sturdy technology), nor are we able to foresee the many other potential glitches that could crash the entire system.
It is easy to be lulled into thinking everything will be okay because it has been so far. But that is a false security in a time of complex systems that are extremely sensitive and vulnerable. Like a pilot of a sophisticated airliner failing to comprehend the inputs, or the flustered operators of a nuclear reactor when the alarm bells ring, our civilization may be unable to respond when the complex systems we now rely on fail in an unexpected way that we could not predict. It is not completely unlikely that a relatively small glitch could crash the entire system and lead to a collapse of the current civilization. That is the danger of complexity: having systems that are well beyond our ability to fix should they fail in the right way at the wrong time.
The last human invention will be too complex to control and could be our demise…
Computers far exceed the human capacity to process information. We’ve come a long way from Deep Blue versus Garry Kasparov in the 90s, and the gap between man and machine has continued to grow wider since our best representatives were surpassed. Yet, while vastly faster, computers have long only been able to do what they were programmed to do, and thus their intelligence is limited by the abilities of their human programmers.
However, we are on the cusp of a development in this technology with implications far beyond the finite capacity of the human mind to grasp. We could very soon couple the processing speed of a computer with a problem-solving ability similar to that of a human. Except, unlike us, limited by our brain size and relatively slow processing speed, this “machine learning” invention (a video on the progress so far) could continue to expand its own intellectual abilities.
Machine learning is a massive paradigm shift from the programmed computers we currently use. It would lead to super-intelligence beyond our ability to fathom (literally), and it could no more be stopped by us than we could be controlled by a monkey. Imagine something that is always a hundred steps beyond any scenario we could imagine and has less in common with us (in terms of raw intelligence) than we do with an ant—would it have any reason to treat us better than we treat bacteria?
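To make that paradigm shift concrete, here is a deliberately tiny sketch (a toy example, not a claim about how any real system is built): the first function is programmed with an explicit rule, while the second is given only examples and fits itself to them by gradient descent.

```python
# The first function was explicitly programmed; the "learner" below is
# given only examples and discovers the same rule (y = 2x + 1) itself.

def programmed(x: float) -> float:
    return 2 * x + 1  # a human wrote this rule into the machine

data = [(x, 2 * x + 1) for x in range(-10, 11)]  # examples, not rules
w, b, lr = 0.0, 0.0, 0.01                        # learner starts blank

for _ in range(2000):            # plain gradient descent
    for x, y in data:
        err = (w * x + b) - y    # how wrong is the current guess?
        w -= lr * err * x        # nudge the weight to reduce error
        b -= lr * err            # nudge the bias to reduce error

print(programmed(7), round(w * 7 + b, 2))  # both print ~15: one rule
                                           # was given, one was learned
```

The difference matters: a programmed machine can only ever be as good as its programmer’s rules, while a learning machine can, in principle, keep improving on its own.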
There was a time when I would not have believed that artificial intelligence was possible in my lifetime, and a time after that when I would’ve thought it was something we could control. That was naive; artificial intelligence would, at the very least, be unpredictable and almost totally unstoppable once the ball got rolling. It could see us as a curiosity and solve cancer in a few nanoseconds simply because it could—or it could kill us off for basically the same reason. Hopefully, in the latter case, it would see our extermination as not being worth the effort and move on to far greater things.
It remains to be seen whether artificial intelligence will solve all of our problems or see us as a problem and remove us from the equation. This is why very intelligent men who love science and technological advancement, like Elon Musk, are fearful. Like the atomic age, it is a Pandora’s box that, once opened, cannot be closed again. But unlike a fission bomb that is dependent on human operators, this is a technology that could shape a destiny for itself—an invention that could quite possibly make us obsolete, hardly even worth a footnote in history, as it expanded across our planet and into the universe.
In fact, this technology is already in your smartphone; it enables facial recognition and language translation. It also helps you pick a movie on Amazon by predicting what might interest you based on your prior choices.
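As a minimal sketch of that “predicting what might interest you” idea (not Amazon’s actual algorithm; the data and method here are invented for illustration), a recommender can be as simple as scoring unseen titles by their overlap with what you already liked:

```python
# Invented movie data and a deliberately crude method (genre overlap);
# real recommenders are far more sophisticated than this sketch.
movies = {
    "Ex Machina":   {"sci-fi", "drama"},
    "Blade Runner": {"sci-fi", "drama", "noir"},
    "The Notebook": {"romance", "drama"},
    "2001":         {"sci-fi"},
}

def recommend(liked):
    """Rank unseen movies by how many genres they share with
    the titles the user already liked."""
    liked_genres = set().union(*(movies[title] for title in liked))
    unseen = (title for title in movies if title not in liked)
    scored = [(title, len(movies[title] & liked_genres)) for title in unseen]
    return sorted(scored, key=lambda pair: -pair[1])

print(recommend(["Ex Machina"]))
# [('Blade Runner', 2), ('The Notebook', 1), ('2001', 1)]
```

Scale that basic idea up with millions of users and learned (rather than hand-listed) features, and you have the quiet, everyday face of the technology.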
Artificial intelligence technology could be our future. It could be that last invention that can finally manage all of these dangerous complex systems that modern convenience is so dependent upon and allow us to return to our simple pleasures. Or it could be a dangerous complexity in and of itself, something impossible to control, indifferent to our suffering and basically (from a human perspective) the greatest evil we ever face in the moments before it ensures our extinction.
Artificial super-intelligence will be complexity beyond our control, a dangerous complexity, and comes with risks that are humanly unimaginable. It could either solve all of our problems in dealing with disease and the complexity of our current technology—or it could make our woes exponentially greater and erase our civilization from the universe in the same way we apply an antibiotic to a pathogen. It is not ridiculous or absurd to think a little about the consequences before flipping the “on” switch of our last invention.
Should we think about simplifying our lives?
It is important, while we still reign supreme as the most inventive, intelligent and complex creatures on this planet, that we consider where our current trajectory will lead. Technological advancement has offered us unique advantages over previous generations but has also exposed us to unique stresses and incredible risks as well. Through technology, we have gained the ability to go to the moon and also to destroy all life on this planet with the push of a button.
Our technologies have always come as two-edged swords, with a good side and a bad side. Discovering how to use fire, for example, provided us with warmth on a winter night and eventually internal combustion engines, but fire has often escaped our containment, destroyed our property, cost countless lives, and polluted the air. Rocks, likewise, became useful tools in our hands and increased our productivity in dramatic fashion, but then also became weapons to bash in the skulls of other humans. For every positive development, there seem to be corresponding negative consequences, and automation has proved to be no different.
The dramatic changes of the past century will likely seem small by comparison to what is coming next, and there really is no way to be adequately prepared. Normal people can barely keep up with the increased complexity of our time as it is; we are already being manipulated by our own devices—scammers use our technology against us (soon spoof callers, using neural networks, will be able to perfectly mimic your voice or that of a loved one for any nefarious purpose they can imagine) and it is likely big corporations will continue to do the same. Most of us will only fall further behind as our human weaknesses are easily used against us by computer algorithms and artificial intelligence.
It would be nice to have the option to reconsider our decisions of the past few decades. Alas, this flight has already departed; we have no choice but to continue forward, hope for the best, and prepare for the worst. We really do need to consider, along with the benefits, the potential cost of our increased dependence on complex systems and automation. And there is good reason to think (as individuals and also as a civilization) about the value of simplifying our lives. It is not regressive or wrong to hold back a little on complexity and go with what is simple, tried, and true.