Since day one, I've kept an anecdote about a competitor's failure in the back of my head as motivation to do better. This company, believed by some to make a high-quality product, had sent out a truss package that was plagued with problems, and I made it my goal to avoid that sort of thing as much as possible.
Truss designer may be my job description, but things aren't always as they seem: high-level creativity is not a requirement for most of what I do on a daily basis, and, in reality, my job is loss prevention. My role is, first of all, to make an effective and efficient design that doesn't waste material or add unnecessary cost. Second, and more importantly, I must always meet the specifications of the customer and avoid truss design failure.
In an ideal world, my work would be spread out evenly, revisions after quotes (especially those involving complex layouts) would be punishable by death, there would be no phone to interrupt, and I would have all day to create perfection. In the real world, unfortunately, there are trade-offs; it is deluge or drought (all deluge lately), and things do not always go as they should.
Anticipating an Opportunity to Impress
Anyhow, the office where I work has an open design, and this feature, combined with my eavesdropping, gives me a preview of what is to come. I overheard the phone conversation: someone needed trusses yesterday. Apparently he was a loyal customer of one of our competitors, and someone over there had dropped the ball. The salesman (my boss) assured him that we could make the trusses in the next couple of days.
Upstairs, I anticipated an opportunity to knock one out of the park and geared myself up for the fast turnaround time. The quicker I could finish the design work, the sooner the saw guys could get cutting; everything would hinge on my ability to churn something out quickly, and I was determined not to be the gum in the works.
But what I (along with my boss) did not anticipate was that this was not a simple run of common trusses. No, it was an extremely complex design: a flat roof with angled walls, equipment loads, and parapets, something that would normally take a day or two to design. Adding to the mess was the fact that I had to match the competitor's prints, which is not ideal.
Some customers lack appreciation for the design process and seem to think that we can just click a couple of keys and *bee-boop* out pops a truss layout. In reality, for things to go well, everything must be entered in a particular order (using a program that takes pleasure in crashing at the most inopportune times), and thus it is actually easier to work from scratch than to try to copy individual trusses from another manufacturer.
The Ace Up My Sleeve
Fortunately, I recognized this layout as something I had done before, back when contractors were bidding on the job. However, my own design had been true to the architect's drawings, which is probably why we didn't get the project to begin with, so I stripped off the trusses I had designed earlier and went to work duplicating the cheaper (incorrect) version of the layout brought in by this contractor.
The competition's webbing was atrocious; they had obviously done nothing to optimize the generic web pattern spit out by the software, and it could be vastly improved with a little effort. It is amazing to me how many other truss companies' prints I see like this, where the 'designer' clearly let the programming do all the work. I take pride in my ability to go above and beyond; if I could not take a bit of pride in my work, I would find something else to do.
I was halfway through my masterpiece, trying to work at warp speed while also checking all the right boxes, when I realized something. Oh no! The trusses on the competitor's prints were all an inch shorter than mine! Some contractors prefer it this way so that the trusses are easier to position in the field (or maybe because they hate truss designers?); for me it was simply another thing that could go wrong, and it meant going back through every truss I had already designed.
Finally, about an hour after lunch, I finished the last of the individual truss designs, took one last look at the profiles in the printer queue, and sent them down. My coworkers, the guys in sales, generally do a good job of reviewing my work and confirming the plans with the customers before production.
Pride Cometh Before the Fail
However, it was at that time I made a terrible mistake. I went on Facebook and, with a slight amount of tongue-in-cheek, crowed:
I would be embarrassed to send out the truss prints that I’ve seen from some of our competitors, plain embarrassed.
I was feeling very good about what I had just accomplished. I had prints ready to go in record time, and they were aesthetically pleasing to me. The contractor, I hoped, would appreciate my work and maybe reconsider his allegiances as well. I mean, we bailed him out. He had somehow been dropped off our competitor's schedule, we got him what he needed, and that was something to be proud of—not to mention that we beat them on price as well.
But pride cometh before the design failure, and my moments of reveling were short-lived. After the trusses went out, a day or so later, my boss received a call, and it wasn't good. I had screwed up. The truss lengths were not correct; some had to be cut down and would need repair prints. Apparently, in my haste, I had trimmed some of the trusses twice, some once, and others not at all. It was something I could easily have caught had I taken one look down the line of trusses in the 3D rendering, and it reflected poorly on my efforts.
Besides that, the plans he gave us had a small note near a dimension for the angled wall. For whatever reason, the architect (without changing the actual dimension) had decided to change the angled wall slightly, and that's where the really big mess was. It would require specially engineered repair prints from Dallas. I feverishly went to work determining what each new length would be for the dozen different-length trusses on that wall.
Fortunately, the competitor's truss prints, which I was supposed to copy, were wrong as well. Evidently, they too had missed the amendment made in a note on the plans. Unfortunately, I had missed the note and made a serious mistake of my own besides. Yes, they were able to use the trusses (an inch of difference isn't a big deal ultimately), but it was the mental mistake that kept me up at night days later and soured my mood for the next week. It was a mistake and a missed opportunity to impress.
There is certainly always blame to go around for a failure like this. The contractor should have checked the prints, and my boss could have insisted on this precaution before producing and shipping the trusses. But in the end, what matters as far as I am concerned is what I did wrong and how to ensure it does not happen again. It is my job, as truss designer and loss prevention specialist, to ensure this problem is solved—to make the changes to the process necessary to guarantee there is no repeat of the same mistake.
A Final Analysis of the Failure
In short, I bungled my part because I rushed. Taking one more look at the wall dimensions might have been enough to spot the note about the change of angle and prompt a clarifying question. An extra thirty seconds of review at the end of the design process would have been enough to find my own mistake. I should not expect my coworkers or the contractor to cover for my own incompetence, especially not on a complex layout where they lack the same resources. Design is my job, not theirs, and that means doing it right the first time.
Furthermore, design is a process that should never be rushed, and it is my job to push back against pressure as needed. Sure, the sales guys do sometimes over-promise. And, yes, it is always my great pleasure to deliver results on or ahead of schedule. But it is also my responsibility to set a pace where I am comfortable working, where I am able to do things correctly the first time, and where the company avoids the cost and embarrassment of a truss design failure.
In my work, one small mistake can outweigh the hundreds of things done right. On this particular layout, for instance, I had all but two of dozens of parameters right. But the customer will only ever remember the hassle that came as a result of those two overlooked details. We will likely never get a second chance with this contractor after what happened, and that's on me—for failing at my most basic duty as truss designer.
Air travel has become safer than ever, and that is due, in large part, to the increase in automated systems in the cockpit. However, this advanced technology comes with a downside: an otherwise perfectly functional aircraft (i.e., mechanically sound) with competent operators can be lost because of a small electronic glitch somewhere in the system.
This issue was discussed at length in response to the crash of Air France flight 447, an Airbus A330, in 2009, when an issue with an airspeed indicator and automated systems led to pilot confusion—which, in the end, resulted in a plunge into the ocean and the loss of all 228 people on board. The pilots were ultimately responsible for not responding in the correct way (they were in a stall and needed to push the nose down to recover lift), and yet the reason for their failure is as complex as the automated systems that were there to help them manage the cockpit.
One of the more common questions asked in cockpits today is “What’s it doing now?” Robert’s “We don’t understand anything!” was an extreme version of the same. Sarter said, “We now have this systemic problem with complexity, and it does not involve just one manufacturer. I could easily list 10 or more incidents from either manufacturer where the problem was related to automation and confusion. Complexity means you have a large number of subcomponents and they interact in sometimes unexpected ways. Pilots don’t know, because they haven’t experienced the fringe conditions that are built into the system. I was once in a room with five engineers who had been involved in building a particular airplane, and I started asking, ‘Well, how does this or that work?’ And they could not agree on the answers. So I was thinking, If these five engineers cannot agree, the poor pilot, if he ever encounters that particular situation . . . well, good luck.” (“Should Airplanes Be Flying Themselves?,” The Human Factor)
More recently, this problem of complexity has come back into focus after a pair of disasters involving Boeing 737 MAX 8 and 9 aircraft. Initial reports have suggested that an automated system on the aircraft malfunctioned—pushing the nose down at low altitude on take-off as if responding to a stall—with catastrophic consequences.
It could very well be something as simple as one sensor going haywire. It could very well be that everything else on the aircraft was functioning properly except this one small part. If that is the case, it is certainly not something that should bring down an aircraft, and it would not have in years past, when there was an actual direct mechanical linkage between pilot and control surfaces. But now, since automated systems can override pilot inputs and take away some of the intuitive 'feel' of things in a cockpit, the possibility is very real that the pilots simply did not have enough time to sift through the possibilities of what was going wrong, diagnose the issue, switch to a manual mode, and prevent disaster.
The FAA, following the lead of China and the Europeans, has decided to ground the entire fleet of Boeing 737 MAX 8 and 9 aircraft pending the results of the investigations. This move on the part of regulators will probably be a big inconvenience for air travelers. Nevertheless, after two incidents and hundreds dead, it is better to take the precaution and get to the bottom of the issue.
President Trump's off-the-cuff Twitter response, basically stating that "the complexity creates danger," was met with the usual ridicule from those who hate the man and apparently do not understand hyperbole. (It is ironic that some, who likely see themselves as sophisticated, have yet to see through Trump's putting-it-in-simple-layman's-terms shtick.) However, technically incorrect is not the same as totally wrong, and there is absolutely nothing ridiculous about the general point being made: there are unique (and unforeseeable) problems that come with complex systems.
The "keep it simple, stupid" mantra (aka the KISS principle) is not without merit in an age when our technology is advancing beyond our ability to control it. If a minor glitch in a system can lead to a major disaster, that is dangerous complexity and a real problem that needs to be addressed. Furthermore, if something as simple as flight can be made incomprehensible, even for a trained professional crew, then imagine the risk when a system is too complicated for humans alone to operate—say, for example, a nuclear power plant?
Systems too complex for humans to operate?
On the topic of dangerous complexity, I'm reminded of the meltdown of reactor two at Three Mile Island and the series of small human errors leading up to the big event. A few men, who held the fate of a wide swath of central Pennsylvania in their hands, made a few blunders in diagnosing the issue, with serious consequences.
Human operators aren't even able to comprehend the enormous (and awful) potential of their errors in such circumstances—they cannot fear to a magnitude proportional to the possible fallout of their actions—let alone respond correctly to the cascade of blaring alarms when things do start to go south:
Perrow concluded that the failure at Three Mile Island was a consequence of the system's immense complexity. Such modern high-risk systems, he realized, were prone to failures however well they were managed. It was inevitable that they would eventually suffer what he termed a 'normal accident'. Therefore, he suggested, we might do better to contemplate a radical redesign, or if that was not possible, to abandon such technology entirely. ("In retrospect: Normal accidents," Nature)
The system accident (also called the "normal" accident by Yale sociologist Charles Perrow, who wrote a book on the topic) occurs when a series of minor things go wrong together, or combine in an unexpected way, and eventually lead to a cataclysmic failure. This "unanticipated interaction of multiple factors" is what happened at Three Mile Island. It is called 'normal' because people, put in these immensely complex situations, revert to their normal routines and (like a pilot whose aircraft's nose inexplicably pitches down on a routine take-off) lose (or just plain lack) the "narrative thread" necessary to respond properly to an emerging crisis.
Such was the case at Three Mile Island. It was not gross misconduct on the part of one person, nor a terrible flaw in the design of the reactor itself; rather, it was a series of minor issues that led to operator confusion and a number of small mistakes that soon snowballed into something gravely serious. The accident was a result of the complexity of the system and our difficulty predicting how various factors can interact in ways that lead to failure, and it is something we can expect more of as systems become more and more complex.
And increased automation does not eliminate this problem. No, quite the opposite: it compounds the problem by adding another layer of management that clouds our ability to understand what is going on before it is too late. In other words, with automation, not only do you have the possibility of mechanical failure and human error, but you also have the potential for the automation itself to fail, and to fail in a way that leaves the human operators too perplexed to sort through the mess of layered systems and unable to respond in time. As the list of interactions between various systems grows, so does the risk of a complex failure.
[As a footnote, nuclear energy is cleaner, safer, and far more reliable than wind and solar farms. In the same way that it is safer to fly than to drive, despite perceptions to the contrary, the dangers of nuclear are simply more obvious to the casual observer than those of the alternatives. So, given the fierce opposition to nuclear power from those who are unwittingly promoting less effective and more dangerous solutions, the human capacity to make good decisions, when faced with the ambiguous problems created by the interaction of various complex systems, certainly does come into question.]
Has modern life become dangerously complex?
There is no question that technological advancement has greatly benefited this generation in many ways, and few would really be willing to give up modern convenience. That said, this change has not come without a cost. I thought about that reality over the past few weeks while doing a major overhaul of how we manage information at the office and considering how quickly years of work could vanish into thin air. Yes, I suppose that paper files, like the Library of Alexandria, are always susceptible to flames or other destructive forces of nature. But at least fire (unlike the infamous "blue screen of death") is a somewhat predictable phenomenon.
Does anyone know why the Bluetooth in my car syncs up sometimes and not always?
Or why plugging my Android phone into the charger causes my calls in Facebook Messenger to hiccup (i.e., disconnect and reconnect multiple times) sometimes but not always?
I'm sure there is a reason hidden somewhere in the code, a failed interaction between several components in the system, but it would take an expert to get to the bottom of the issue. That's quite a bit different from the times when the problem was the rain and the solution was cutting down a few trees to create a shelter. The same was true in the early days of machines—a somewhat mechanically inclined person could maintain and repair his own automobile. However, the complicating factor of modern electronics has put this do-it-yourself option out of reach for all but the most dedicated mechanics.
Life for this generation has also become exponentially more complex than it was for prior generations, when travel was as fast as your horse and you watched your crops grow rather than checking your Facebook feed every other minute. It is very easy, as individuals, to be overwhelmed by information overload. The common man is increasingly in over his head in dealing with the technological onslaught. We have become increasingly dependent on technology that we cannot understand ourselves and that fails spontaneously, without warning, at seemingly the most inopportune times.
Advanced modern technology represents a paradigm shift as much as the invention of the automobile was a revolution for personal transportation. We have gone from analog to digital—a change that has opened a whole new realm of possibilities but also comes with a new set of vulnerabilities that go beyond the occasional annoyance of a computer crash. We really have no idea how the complexity of the current system would fare against the next Carrington Event (a solar storm that caused widespread damage and disruption to telegraph systems in 1859—a time of very basic and sturdy technology), nor are we able to foresee the many other potential glitches that could crash the entire system.
It is easy to be lulled into thinking everything will be okay because it has been so far. But that is a false security in a time of complex systems that are extremely sensitive and vulnerable. Like the pilot of a sophisticated airliner who fails to comprehend his instruments, or the flustered operators of a nuclear reactor when the alarm bells ring, our civilization may be unable to respond when the complex systems we now rely on fail in an unexpected way that we could not predict. It is not completely unlikely that a relatively small glitch could crash the entire system and lead to the collapse of our current civilization. That is the danger of complexity: having systems well beyond our ability to fix should they fail in the right way at the wrong time.
The last human invention will be too complex to control and could be our demise…
Computers far exceed the human capacity to process information. We've come a long way since Deep Blue versus Garry Kasparov in the '90s, and the gap between man and machine has only grown wider since our best representatives were surpassed. Yet, while vastly faster, computers have long been able to do only what they were programmed to do, and thus their intelligence is limited by the abilities of their human programmers.
However, we are on the cusp of developing technology whose implications are far beyond the finite capacity of the human mind to grasp. We could very soon couple the processing speed of a computer with a problem-solving ability similar to that of a human. Except, unlike us, limited by our brain size and relatively slow processing speed, this "machine learning" invention could continue to expand its own intellectual abilities.
Machine learning is a massive paradigm shift from the programmed computers we currently use. It would lead to a super-intelligence beyond our ability to fathom (literally), one that could no more be stopped by us than we could be controlled by a monkey. Imagine something that is always a hundred steps beyond any scenario we could imagine and that has less in common with us (in terms of raw intelligence) than we do with an ant. Would it have any reason to treat us better than we treat bacteria?
There was a time when I would not have believed that artificial intelligence was possible in my lifetime, and a time after that when I thought it was something we could control. That was naive. Artificial intelligence would, at the very least, be unpredictable and almost totally unstoppable once the ball got rolling. It could see us as a curiosity and solve cancer in a few nanoseconds simply because it could—or it could kill us off for basically the same reason. Hopefully, in the latter case, it would see our extermination as not worth the effort and move on to far greater things.
It remains to be seen whether artificial intelligence will solve all of our problems or see us as a problem and remove us from the equation. This is why very intelligent men who love science and technological advancement, like Elon Musk, are fearful. Like the atomic age, it is a Pandora's box that, once opened, cannot be closed again. But unlike a fission bomb, which is dependent on human operators, this is a technology that could shape a destiny for itself—an invention that could quite possibly make us obsolete, hardly even worth a footnote in history, as it expanded across our planet and into the universe.
In fact, this technology is already in your smartphone: it enables facial recognition and language translation. It also helps you pick a movie on Amazon by predicting what might interest you based on your prior choices.
Artificial intelligence technology could be our future. It could be that last invention that can finally manage all of these dangerous complex systems that modern convenience is so dependent upon and allow us to return to our simple pleasures. Or it could be a dangerous complexity in and of itself, something impossible to control, indifferent to our suffering and basically (from a human perspective) the greatest evil we ever face in the moments before it ensures our extinction.
Artificial super-intelligence will be complexity beyond our control, a dangerous complexity, and it comes with risks that are humanly unimaginable. It could solve all of our problems in dealing with disease and the complexity of our current technology—or it could make our woes exponentially greater and erase our civilization from the universe the way we apply an antibiotic to a pathogen. It is not ridiculous or absurd to think a little about the consequences before flipping the "on" switch of our last invention.
Should we think about simplifying our lives?
It is important, while we still reign supreme as the most inventive, intelligent and complex creatures on this planet, that we consider where our current trajectory will lead. Technological advancement has offered us unique advantages over previous generations but has also exposed us to unique stresses and incredible risks as well. Through technology, we have gained the ability to go to the moon and also to destroy all life on this planet with the push of a button.
Our technologies have always come as two-edged swords, with a good side and a bad side. Discovering how to use fire, for example, provided us with warmth on a winter night and eventually internal combustion engines, but fire has often escaped our containment, destroyed our property, cost countless lives, and created air pollution. Rocks, likewise, became useful tools in our hands and increased our productivity in dramatic fashion, but they also became weapons for bashing in the skulls of other humans. For every positive development, there seem to be corresponding negative consequences, and automation has proved to be no different.
The dramatic changes of the past century will likely seem small by comparison to what is coming next, and there really is no way to be adequately prepared. Normal people can barely keep up with the increased complexity of our time as it is; we are already being manipulated by our own devices. Scammers use our technology against us (soon spoof callers, using neural networks, will be able to perfectly mimic your voice or that of a loved one for any nefarious purpose they can imagine), and it is likely that big corporations will continue to do the same. Most of us will only fall further behind as our human weaknesses are easily used against us by computer algorithms and artificial intelligence.
It would be nice to have the option to reconsider our decisions of the past few decades. Alas, this flight has already departed; we have no choice but to continue forward, hope for the best, and prepare for the worst. We really do need to weigh, along with the benefits, the potential cost of our increased dependence on complex systems and automation. And there is good reason to think (as individuals and as a civilization) about the value of simplifying our lives. It is not regressive or wrong to hold back a little on complexity and go with what is simple, tried, and true.
On March 30th, 2015, I published a blog post, "Sailing Beyond Safe Waters," to express my determination to go beyond the safe harbor of religious tradition and cultural obligation and truly live in faith.
That was when my blog was viewed only by my family and close friends. In the time since, my audience has grown exponentially and my life has taken some completely unexpected turns. Perhaps needless to say, I did leave that safe harbor (where some continue their cross-harbor pleasure cruises, make-believing that they've entered the ocean of faith) and have charted territories completely new to me.
In the past couple years I’ve experienced the high seas of fear and doubt. There have been those moments of terror and panic too, when the winds howled, threatening to overwhelm the very timbers of my being, and the waves of a hopeless reality crashed hard. But I continued on, determined to break through, clinging to hope, and facing down the impossible.
Then there was that night when the main mast of my determination snapped, my ship of faith was capsized by a rogue wave, and all seemed lost. The debris of my dreams lay scattered across a swath of water a mile long and wide. It was the very thing those harbor pilots (who fancy themselves seafarers for having seen the mouth of the harbor once or twice) had warned me against when I set out. If they cared, their faithless answers were vindicated by my failure.
However, my distress calls did not go unanswered. I was not alone on this ocean, and there were those, who had also left their own safe harbors by necessity or choice, determined not to let my journey end. It was with their help that some of the more important items strewn about (things like meaning and purpose) were recovered from the wreckage.
Those on the ocean either know their need of others or they perish in the first big storm they encounter. It is only in trials and tribulations that you learn who your true friends are in a world of imposters. I've learned, by sailing beyond safe waters, that the only opinions that truly matter are those of the people who are there for you in times of crisis.
Finding the wind for your sails…
The time since then has been one of trying to rebuild my identity around something more staid and of continuing to do the things I've done right. I've been able to take some personal inventory and think about the things that matter most to me.
My life, all things considered, hasn't been bad. I have rental property, a great job, freedom to travel, ideas for the future, and a precious bhest. That said, despite being enthralled by the beauty of Orthodoxy, it has been difficult to recover that basic faith—the faith that took me out of the safe harbor into the expanse of the deep—and I'm not sure it is something that can be recovered.
I've been towed along by the obligations of life and a commitment to love, the impossible love in particular. I've used the freedom of not having to navigate the waters of romance (having basically settled that question) to take on some other challenges. I've found that it is much easier for me to take risks now, which has led to some small investments and the exploration of others.
Still, the bigger pieces of my new life (post-storm) have yet to fall into place. The sails are unfurled on this vessel of faith, a vessel now shared with someone else, but we wait in the doldrums of the present, scrubbing the decks repeatedly (or, rather, doing the dishes and chores for a household of one) while hoping for that favorable wind that will lift us out of this purgatory, of a life neither completely here nor fully there, and finally carry us to the paradise over the horizon.
It is important to be ready for when the wind returns, to have the capacity to take full advantage of that moment. The hard part is having the right mentality about the present reality to get to that moment and be ready to be underway—sailing again.