Words and Wars — Why Musk Terrifies the Establishment

Some of us are old enough to remember the playground taunt, “Sticks and stones may break my bones, but words can never hurt me.” That denial of the power of words, of course, was merely a way to disempower a bully, and quite a bit more effective than crying for mommy in most circumstances.

In this age of online censorship and newly invented categories of offense, it is difficult to maintain the claim that words have no impact on us. Being called a “racist” or “domestic terrorist” does matter: it can come with serious social consequences and be used as a pretext for punishing political opponents. No laughing matter.

We are governed by words. If we see a red sign emblazoned with the letters S-T-O-P, we tend to comply (at least partially) without much thought. And, whether you want to comply or not, because of written laws you’ll end up giving the IRS a significant portion of your income. Words can and do hurt your wallet; they limit opportunity and shape outcomes.

We are steered, used by others to their own ends, through description, framing and narrative. For example, whether a deadly conflict is described as a “military intervention” (Yemen) or as an “invasion” and “aggression” (Ukraine) has little to do with any substantive difference and everything to do with how propagandists wish us to perceive the event.

The context provided, what is or is not reported, changes the moral equation.

Those who control social media platforms understand the power of words.  They know that awareness is induced through language and that narrative matters.  This is why they have taken such interest in curtailing speech and the dissemination of information.  Even if corrupted by partisanship, many of them likely see this as their responsibility or a moral obligation.

Unfortunately, regardless of intent, these self-appointed gatekeepers have failed. The same people who routinely “fact-check” hyperbole and satire, and who even banned people for suspecting a lab origin of the pandemic, have yet to identify the Russian collusion narrative as false. The most egregious act was Twitter using bogus reasons to suspend the account of the New York Post for sharing the Biden laptop bombshell on the eve of the 2020 Presidential vote. Talk about election interference!

Elon Musk’s announcement of his ownership of a significant stake in Twitter, and his subsequent buyout of the far-left’s favorite social media platform, has shaken up the political establishment. Elizabeth Warren, a powerful US Senator who leveraged a fiction about her Native American heritage to attain her own privileged position (and who is somehow worth $67 million herself), had this to say:

Strange how she now speaks up about things potentially “dangerous to democracy,” but not when Big Tech was using the pretense of “community standards” to ban content creators, including a former President, for challenging their ideological agenda and narratives. Sure, they could always conjure excuses or hide behind “Twitter is a private business; if you don’t like it, start your own internet,” all while disingenuously suing individuals who defied their demands. But now the truth comes out; suddenly it is all about democracy:

Credentialism much? I guess we should trust the privileged elites who trust the corporate system instead?

To those of us who have faced algorithmic demotion and punitive measures for our wrong-think, for doing things like posting the actual flag of Ukraine’s Azov battalion or an ironically intended quote of Hitler praising censorship, there is appreciation for Musk as a free speech advocate. To those who use the word “democracy” as an excuse to trample rights, this represents an enormous threat to their ability to control the narrative.

Those of us who have been paying close attention know why Yahoo News, along with other far-left online publishers, shut down their comment sections. Sure, they may say this was to prevent misinformation, but the reality is that commenters would often post factual rebuttals or additional context that undermined the narrative of the article. It was always about control, not protection.

The war of words is as important as the one that involves tanks, bombs and guns. It was propaganda and censorship, as much as physical means, that enabled the Nazis to put Jews in camps. This is why Russophobia, the demonization and cancellation of a whole ethnic group over things the US-led imperial establishment does itself, is so troubling. President Obama was not accused of war crimes for a brutal AC-130 attack on an Afghan hospital, despite dozens of verified casualties. Why is that?

It is, of course, how the story is presented that makes all of the difference. If a writer wants a leader to appear incompetent, they might use words like “bungled” in the description. If they wish to spin it as positive, they’ll say “setbacks” and dwell on framing the cause as righteous instead. Those who want the public to support one side of the Ukrainian conflict will downplay or even completely ignore important context, like NATO expansion, the violent overthrow of Ukraine’s democratically elected government in 2014, and the merciless shelling of the Donbass region.

And this is why Musk promising to restore freedom of speech on Twitter is such a big deal, especially to the current power brokers. The military-industrial complex, which owns the corporate media and many of our politicians, stands to lose billions in revenue if they can’t convince the gullible masses that Vladimir Putin is literally Hitler for leading a US-style “regime change” effort in his own neighborhood.

I mean, how will US political families, like the quid pro quo Bidens, continue to make their millions in kickbacks (the Burisma/Hunter scandal) if Ukraine’s energy is back under Russian control again?

This is why they’ll fight tooth and nail to keep the presentation of the story as one-sided as possible. They do not want us to hear the facts that might raise questions. They only want us to have their prepackaged “don’t say gay” strawman version of their enemies, presented by the late-night funnyman for ridicule, rather than allow a truly informed debate.

Unlike the ignorant many who accept narratives at face value, the elites with government and corporate power understand that the world is run by ideas. That is how wars are won.

Dangerous Complexity: What To Do About the Complex Problem of Complexity?

Air travel has become safer than ever, due in large part to the increase of automated systems in the cockpit. However, this advanced technology comes with a downside: an otherwise perfectly functional aircraft (i.e., mechanically sound) with competent operators can be lost because of a small electronic glitch somewhere in the system.

This issue was discussed at length in response to the crash of Air France flight 447, an Airbus A330, in 2009, when an issue with an airspeed indicator and the automated systems led to pilot confusion—which, in the end, resulted in a plunge into the ocean and the loss of all 228 people on board. The pilots were ultimately responsible for not responding in the correct way (they were in a stall and needed to push the nose down to recover lift), and yet the reason for their failure is as complex as the automated systems that were there to help them manage the cockpit.

It is this problem with advanced electronics that is summarized as a “systemic problem with complexity” in the quote below:

One of the more common questions asked in cockpits today is “What’s it doing now?” Robert’s “We don’t understand anything!” was an extreme version of the same. Sarter said, “We now have this systemic problem with complexity, and it does not involve just one manufacturer. I could easily list 10 or more incidents from either manufacturer where the problem was related to automation and confusion. Complexity means you have a large number of subcomponents and they interact in sometimes unexpected ways. Pilots don’t know, because they haven’t experienced the fringe conditions that are built into the system. I was once in a room with five engineers who had been involved in building a particular airplane, and I started asking, ‘Well, how does this or that work?’ And they could not agree on the answers. So I was thinking, If these five engineers cannot agree, the poor pilot, if he ever encounters that particular situation . . . well, good luck.” (“Should Airplanes Be Flying Themselves?,” The Human Factor)

More recently this problem of complexity has come back into focus after a couple of disasters involving Boeing 737 MAX 8 and 9 aircraft. Initial reports have suggested that an automated system on the aircraft malfunctioned—pushing the nose down at low altitude on take-off as if responding to a stall—with catastrophic consequences.

It could very well be something as simple as one sensor going haywire. It could very well be that everything else on the aircraft is functioning properly except this one small part. If that is the case, it is certainly not something that should bring down an aircraft, and it would not have in years past, when there was a direct mechanical linkage between pilot and control surfaces. But now, since automated systems can override pilot inputs and take away some of the intuitive ‘feel’ of things in a cockpit, the possibility is very real that the pilots simply did not have enough time to sift through what was going wrong, diagnose the issue, switch to a manual mode, and prevent disaster.
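
To make that single-point-of-failure concern concrete, here is a minimal toy sketch in Python of an automated trim routine that blindly trusts one angle-of-attack sensor. This is an invented illustration of the general failure mode, not Boeing’s actual MCAS logic; every name, number and threshold in it is hypothetical:

```python
# Toy model: an automated trim routine that trusts a single sensor.
# If that one sensor fails high, the automation keeps commanding
# nose-down trim and overrides the pilot, even in normal flight.

STALL_AOA_DEGREES = 15.0  # assumed stall threshold for this sketch

def automated_trim_command(aoa_sensor_reading, pilot_trim_input):
    """Return a trim command; the automation overrides the pilot
    whenever the (single) sensor claims the nose is too high."""
    if aoa_sensor_reading > STALL_AOA_DEGREES:
        return -2.5  # push the nose down, ignoring the pilot
    return pilot_trim_input

# A healthy aircraft, but the sensor is stuck at a bogus 25 degrees:
for second in range(5):
    command = automated_trim_command(aoa_sensor_reading=25.0,
                                     pilot_trim_input=+1.0)
    print(f"t={second}s: pilot wants +1.0, automation commands {command}")
```

A design that cross-checked two independent sensors, or limited how often the automation could repeat the nose-down command, would not fail this way; the point is simply how one small bad input can dominate an otherwise healthy system.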

The FAA, following the lead of China and the Europeans, has decided to ground the entire fleet of Boeing 737 MAX 8 and 9 aircraft pending the results of the investigations. This move on the part of regulators will probably be a big inconvenience for air travelers. Nevertheless, after two incidents, and hundreds dead, it is better to take the precaution and get to the bottom of the issue.

https://twitter.com/realDonaldTrump/status/1105471621672960000

President Trump’s off-the-cuff Twitter response, basically stating “the complexity creates danger,” was met with the usual ridicule from those who hate the man and apparently do not understand hyperbole. (It is ironic that some, who likely see themselves as sophisticated, have yet to see through Trump’s putting-it-in-simple-layman’s-terms shtick.) However, technically incorrect is not the same as totally wrong, and there is absolutely nothing ridiculous about the general point being made—there are unique (and unforeseeable) problems that come with complex systems.

The “keep it simple, stupid” mantra (aka: KISS principle) is not without merit in an age where our technology is advancing beyond our ability to control it. If a minor glitch in a system can lead to a major disaster, that is dangerous complexity and a real problem that needs to be addressed. Furthermore, if something as simple as flight can be made incomprehensible, even for a trained professional crew, then imagine the risk when a system is too complicated for humans alone to operate—say, for example, a nuclear power plant?

Systems too complex for humans to operate?

On the topic of dangerous complexity, I’m reminded of the meltdown of reactor two at Three Mile Island and the series of small human errors leading up to the big event. A few men, who held the fate of a wide swath of central Pennsylvania in their hands, made a few blunders in diagnosing the issue with serious consequences.

Human operators aren’t even able to comprehend the enormous (and awful) potential of their errors in such circumstances—they cannot feel fear in proportion to the possible fallout of their actions—let alone have the ability to respond correctly to the cascade of blaring alarms when things do start to go south:

Perrow concluded that the failure at Three Mile Island was a consequence of the system’s immense complexity. Such modern high-risk systems, he realized, were prone to failures however well they were managed. It was inevitable that they would eventually suffer what he termed a ‘normal accident’. Therefore, he suggested, we might do better to contemplate a radical redesign, or if that was not possible, to abandon such technology entirely. (“In retrospect: Normal accidents,” Nature.)

The system accident (also called the “normal” accident by Yale sociologist Charles Perrow, who wrote a book on the topic) is when a series of minor things go wrong together, or combine in an unexpected way, and eventually lead to a cataclysmic failure. This “unanticipated interaction of multiple factors” is what happened at Three Mile Island. It is called ‘normal’ because people, put in these immensely complex situations, revert to their normal routines and (like a pilot who has the nose of his aircraft inexplicably pitch down on a routine take-off) lose (or just plain lack) the “narrative thread” necessary to properly respond to an emerging crisis situation.

Such was the case at Three Mile Island. It was not gross misconduct on the part of one person, nor a terrible flaw in the design of the reactor itself, but rather a series of minor issues that led to operator confusion and a number of small mistakes that soon snowballed into something gravely serious. The accident was a result of the complexity of the system and of our difficulty predicting how various factors can interact in ways that lead to failure, and it is something we can expect more of as systems become more and more complex.

And increased automation does not eliminate this problem. No, quite the opposite: it compounds the problem by adding another layer of management that clouds our ability to understand what is going on before it is too late. In other words, with automation, not only do you have the possibility of mechanical failure and human error, but you also have the potential for the automation itself failing, and failing in a way that leaves the human operators too perplexed to sort through the mess of layered systems and unable to respond in time. As the list of interactions between various systems grows, so does the risk of a complex failure.
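
A little arithmetic shows why the risk grows faster than the part count: with n subsystems there are n(n−1)/2 possible pairwise interactions, so even a tiny per-interaction failure chance adds up quickly. The numbers in this short Python sketch are invented purely for illustration:

```python
# Toy illustration: pairwise interactions grow roughly with the
# square of the subsystem count, and so does the chance that at
# least one interaction misbehaves.

P_BAD_INTERACTION = 0.0001  # assumed per-interaction failure chance

for subsystems in (10, 50, 200):
    interactions = subsystems * (subsystems - 1) // 2
    p_any_failure = 1 - (1 - P_BAD_INTERACTION) ** interactions
    print(f"{subsystems:>3} subsystems -> {interactions:>5} pairwise "
          f"interactions, P(at least one bad) = {p_any_failure:.1%}")
```

Going from 10 to 200 subsystems multiplies the parts by 20 but the interactions by more than 400, which is the sense in which complexity itself, not component quality, becomes the dominant risk.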

[As a footnote, nuclear energy is cleaner, safer and far more reliable than wind and solar farms. And, in the same way that it is safer to fly than to drive, despite perceptions to the contrary, the dangers of nuclear are simply more obvious to the casual observer than those of the alternatives. So, again, given the fierce opposition to nuclear power by those who are unwittingly promoting less effective and more dangerous solutions, the human capacity to make good decisions when faced with the ambiguous problems created by the interaction of various complex systems does certainly come into question.]

Has modern life become dangerously complex?

There is no question that technological advancement has greatly benefited this generation in many ways, and few would really be willing to give up modern convenience. That said, this change has not come without a cost. I had to think of that reality over the past few weeks while doing a major overhaul of how we manage information at the office and considering how quickly years of work could vanish into thin air. Yes, I suppose that paper files, like the Library of Alexandria that burned, are always susceptible to flames or other destructive forces of nature. But at least fire (unlike the infamous “blue screen of death”) is a somewhat predictable phenomenon.

Does anyone know why the Bluetooth in my car syncs up sometimes and not always?

Or why plugging my Android phone into the charger causes my calls in Facebook Messenger to hiccup (i.e., disconnect and reconnect multiple times) sometimes but not always?

I’m sure there is a reason hidden somewhere in the code, a failed interaction between several components in the system, but it would take an expert to get to the bottom of the issue. That’s quite a bit different from the times when the problem was the rain and the solution was cutting down a few trees to create a shelter. It was also true in the early days of machines: a somewhat mechanically inclined person could maintain and repair their own automobile. However, the complicating factor of modern electronics has put this do-it-yourself option out of reach for all but the most dedicated mechanics.

Life for this generation has also become exponentially more complex than it was for prior generations, when travel was as fast as your horse and you were watching your crops grow rather than checking your Facebook feed every other minute. It is very easy for individuals to be overwhelmed by information overload. The common man is increasingly in over his head in dealing with the technological onslaught. We have become increasingly dependent on technology that we cannot understand ourselves and that fails spontaneously, without warning, at seemingly the most inopportune times.

Advanced modern technology represents a paradigm shift as much as the invention of the automobile was a revolution for personal transportation. We have gone from analog to digital—a change that has opened a whole new realm of possibilities but also comes with a new set of vulnerabilities that go beyond the occasional annoyance of a computer crash. We really have no idea how the complexity of the current system would fare against the next Carrington Event (a solar storm that caused widespread damage and disruption to telegraph systems in 1859, a time of very basic and sturdy technology), nor are we able to foresee the many other potential glitches that could crash the entire system.

It is easy to be lulled into thinking everything will be okay because it has been so far. But that is a false security in a time of complex systems that are extremely sensitive and vulnerable. Like the pilot of a sophisticated airliner failing to comprehend the inputs, or the flustered operators of a nuclear reactor when the alarm bells ring, our civilization may be unable to respond when the complex systems we now rely on fail in an unexpected way that we could not predict. It is not completely unlikely that a relatively small glitch could crash the entire system and lead to a collapse of the current civilization. That is the danger of complexity: having systems that are well beyond our ability to fix should they fail in the right way at the wrong time.

The last human invention will be too complex to control and could be our demise…

Computers far exceed the human capacity to process information. We’ve come a long way from Deep Blue versus Garry Kasparov in the ’90s, and the gap between man and machine has only grown wider since our best representatives were surpassed. Yet, while vastly faster in their abilities, computers have long been able to do only what they were programmed to do, and thus their intelligence has been limited by the abilities of their human programmers.

However, we are on the cusp of a development in this technology with implications far beyond the finite capacity of the human mind to grasp. We could very soon couple the processing speed of a computer with a problem-solving ability similar to that of a human. Except that, unlike us, who are limited by our brain size and relatively slow processing speed, this “machine learning” invention (a video on the progress so far) could continue to expand its own intellectual abilities.

Machine learning is a massive paradigm shift from the programmed computers we currently use. It would lead to super-intelligence beyond our ability to fathom (literally), and we could no more stop it than a monkey could control us. Imagine something that is always a hundred steps beyond any scenario we could imagine and has less in common with us (in terms of raw intelligence) than we do with an ant—would it have any reason to treat us better than we treat bacteria?

There was a time when I would not have believed that artificial intelligence was possible in my lifetime, and a time after that when I thought it was something we could control. That was naive; artificial intelligence would, at the very least, be unpredictable and almost totally unstoppable once the ball got rolling. It could see us as a curiosity and solve cancer in a few nanoseconds simply because it could—or it could kill us off for basically the same reason. Hopefully, in the latter case, it would see our extermination as not being worth the effort and move on to far greater things.

It remains to be seen whether artificial intelligence will solve all of our problems or see us as a problem and remove us from the equation. This is why very intelligent men who love science and technological advancement, like Elon Musk, are fearful. Like the atomic age, it is a Pandora’s box that, once opened, cannot be closed again. But unlike a fission bomb, which is dependent on human operators, this is a technology that could shape a destiny for itself—an invention that could quite possibly make us obsolete, hardly even worth a footnote in history, as it expands across our planet and into the universe.

Whatever the case, we will soon have an answer…

Neural nets, the key component to artificial super-intelligence, are already here…

In fact, they are in your smartphone already: they enable facial recognition and language translation, and they help you pick a movie on Amazon by predicting what might interest you based on your prior choices.
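
To give a feel for what that kind of prediction involves, here is a minimal sketch in Python of a single “neuron” learning a viewer’s taste from past choices instead of following hand-written rules. Everything here, from the genre scores to the viewing history, is invented purely for illustration:

```python
import math
import random

random.seed(0)

# Invented history: each movie is (action_score, romance_score),
# paired with whether the viewer liked it (1) or not (0).
history = [((0.9, 0.1), 1), ((0.8, 0.2), 1),
           ((0.1, 0.9), 0), ((0.2, 0.8), 0)]

w = [0.0, 0.0]  # learned weights, one per feature
b = 0.0         # learned bias
rate = 0.5      # learning rate

def predict(x):
    """Probability the viewer likes a movie with features x."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Learning: repeatedly nudge the weights to shrink prediction error.
for _ in range(2000):
    x, liked = random.choice(history)
    error = predict(x) - liked
    w[0] -= rate * error * x[0]
    w[1] -= rate * error * x[1]
    b -= rate * error

print(f"new action film: {predict((0.95, 0.05)):.0%} chance of a 'like'")
print(f"new romance:     {predict((0.05, 0.95)):.0%} chance of a 'like'")
```

Real recommendation systems stack many thousands of such units over far richer features, but the principle, adjusting weights to fit past behavior rather than programming explicit rules, is the same.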

Artificial intelligence technology could be our future. It could be that last invention that can finally manage all of these dangerous complex systems that modern convenience is so dependent upon and allow us to return to our simple pleasures. Or it could be a dangerous complexity in and of itself, something impossible to control, indifferent to our suffering and basically (from a human perspective) the greatest evil we ever face in the moments before it ensures our extinction.

Artificial super-intelligence will be complexity beyond our control, a dangerous complexity, and it will come with risks that are humanly unimaginable. It could either solve all of our problems in dealing with disease and the complexity of our current technology—or it could make our woes exponentially greater and erase our civilization from the universe in the same way we apply an antibiotic to a pathogen. It is not ridiculous or absurd to think a little about the consequences before flipping the “on” switch of our last invention.

Should we think about simplifying our lives?

It is important, while we still reign supreme as the most inventive, intelligent and complex creatures on this planet, that we consider where our current trajectory will lead. Technological advancement has offered us unique advantages over previous generations but has also exposed us to unique stresses and incredible risks. Through technology, we have gained the ability to go to the moon and also to destroy all life on this planet with the push of a button.

Our technologies have always come as two-edged swords, with a good side and a bad side. Discovering how to use fire, for example, provided us with warmth on a winter night and eventually internal combustion engines, but fire has often escaped our containment, destroyed our property, cost countless lives, and created air pollution. Rocks, likewise, became useful tools in our hands and increased our productivity in dramatic fashion, but they also became weapons for bashing in the skulls of other humans. For every positive development, there seem to be corresponding negative consequences, and automation has proved to be no different.

The dramatic changes of the past century will likely seem small by comparison to what is coming next, and there really is no way to be adequately prepared. Normal people can barely keep up with the increased complexity of our time as it is; we are already being manipulated by our own devices—scammers use our technology against us (soon spoof callers, using neural networks, will be able to perfectly mimic your voice or that of a loved one for any nefarious purpose they can imagine), and it is likely big corporations will continue to do the same. Most of us will only fall further behind as our human weaknesses are easily used against us by computer algorithms and artificial intelligence.

It would be nice to have the option to reconsider our decisions of the past few decades. Alas, this flight has already departed; we have no choice but to continue forward, hope for the best, and prepare for the worst. We really do need to weigh, along with the benefits, the potential costs of our increased dependence on complex systems and automation. And there is good reason to think (as individuals and also as a civilization) about the value of simplifying our lives. It is not regressive or wrong to hold back a little on complexity and go with what is simple, tried and true.