“The intelligence chiefs stressed that Russia was attempting to undermine Western democracies through its ongoing information war waged by humans and automated computer programs known as bots on websites like Twitter, Facebook, Reddit, and Google.”
~Time Magazine, 13 February 2018
“As society becomes ever more computerized, the programmer becomes its unacknowledged legislator.”
~Nicholas Carr, The Glass Cage
During the first week of January, 1812, “a great number of men armed with pistols, hammers, and clubs, entered the dwelling house of George Ball, a framework knitter of Lenton, disguised with masks and handkerchiefs over their faces.” These men, following on the heels of numerous other similar attacks in the preceding days and weeks, proceeded to beat Mr. Ball, and “wantonly and feloniously broke and destroyed five stocking frames standing in the work shop, four of which belonged to George Ball, and one frame, 40 gauge, belonging to Mr. Francis Braithwaite, hosier, Nottingham.”
The men were Luddites, and they were responding to the unnerving and disenfranchising appearance of automation in their lives, in the form of stocking frames for the knitting of textiles. This new technology, which the Luddites believed would eventually push them out of work and into poverty, also threatened the old orders of apprenticeship, and in the view of those blackening their faces and smashing frames in the night, it was an economic movement rapidly transforming into a revolutionary force.
But automation, and the efficiencies it provides for producers and consumers, and the profit it generates for business, generally wins the day.
Over time, many of the Luddites were hanged or sent away to penal colonies. Some were acquitted for lack of evidence. And though the figure of Ned Ludlam was probably a fiction, the Luddite Rebellion – which at one time employed more British soldiers than the fight against Napoleon – was a first manifest swing at resistance to industrialization and the encroachment of machines into the deeper reaches of daily life: employment, artistry, and traditional craftsmanship.
One of the greater realizations of the last century, and the early years of the 21st century, is that while technology evolves quickly — sometimes overnight — human beings are locked into an evolutionary framework of biology that is much, much slower. And as our machine creations — from lawnmowers to supercomputers — seep ever deeper into our lives — beating us at chess, scheduling our days, influencing elections — many are rightly concerned that we are indeed becoming, as Marx wrote, merely “a living appendage of the lifeless mechanism.”
That tension is real. It is also frustrating and dangerous, and it is terribly important to consider as we face questions about models of interaction with our own creations — particularly software — into the future.
One goal of global software companies is to create an environment of “pervasive assistance”. We know that because Justin Rattner, Chief Technology Officer of Intel, has said so. The long-term consequence of that is to make software’s presence and manipulative influence in our lives invisible. And when it becomes invisible it also becomes unquestionable, unconfrontable, and uncontrollable.
Another goal of software companies is to break down naturally evolved human behaviors meant to protect us from each other. No longer is T.S. Eliot’s notion of “preparing a face to meet the faces that we meet,” a normal human function and key ingredient of intelligent and planned survival. According to Mark Zuckerberg, founder and CEO of Facebook, “You have one identity. The days of you having a different image for your work friends or co‐workers and for the other people you know are probably coming to an end pretty quickly…having two identities for yourself is an example of a lack of integrity.”
The presumption behind Zuckerberg’s statement is, in equal parts, ignorant, arrogant, and remarkable, and would reduce virtually every human relationship to a robotic exchange of algorithms mediated by software. Hence, Facebook. Literature and the other arts would become meaningless, as, I think, a sound argument might be made they are rapidly and regrettably becoming. When stories are reduced to mere algorithmic readings of complex human emotion and environmental inputs, we will have succeeded in sucking the soul out of life.
Which Yuval Noah Harari, in Homo Deus, says has already happened. “Homo Sapiens,” he writes, “is not going to be exterminated by a robot revolt. Rather, Homo Sapiens is likely to upgrade itself step by step, merging with robots and computers in the process, until our descendants will look back and realize that they are no longer the kind of animal that wrote the Bible, built the Great Wall of China and laughed at Charlie Chaplin’s antics.”
Harari argues that human beings have an innate desire to achieve immortality, and that having evolved out of, or away from, beliefs in the divine, the human response in the modern era has been necessarily to seek immortality through science and technology. It is this desire, he argues, that underwrites efforts at cryogenics, genetic cloning, and the merging of technology with the human body and mind.
“The rise of modern science and industry brought about the next revolution in human‐animal relations. During the Agricultural Revolution humankind silenced animals and plants, and turned the animist grand opera into a dialogue between man and gods. During the Scientific Revolution humankind silenced the gods too. The world was now a one‐man show. Humankind stood alone on an empty stage, talking to itself, negotiating with no one and acquiring enormous powers without any obligations. Having deciphered the mute laws of physics, chemistry and biology, humankind now does with them as it pleases.”
~Yuval Noah Harari, Homo Deus
It isn’t difficult to imagine what Rattner’s “pervasive assistance” looks like because it is already here, and it is fundamentally changing how we live on the planet. Google Maps is an excellent example.
Say you are about to visit a city. Calling up a Google Map on your phone or your computer will present you with a vision of the city that Google wants you to see — or thinks, based on an algorithmic judgment of your likes and habits, that you want to see.
Based on what you eat, certain restaurants will appear. If you buy books, a bookstore will appear. If you have searched for gas stations, they will appear, and so on. This sort of pervasive assistance is part of a trend toward “neuro‐ergonomics” in software design, programs designed to integrate so seamlessly into our thinking that we hardly notice that we have ceded our own responsibilities for discovery to a software program.
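The habit-based filtering described above can be sketched in a few lines. This is a hypothetical illustration, not Google’s actual algorithm; the category names, weights, and threshold are all invented:

```python
# A minimal, hypothetical sketch of habit-based place filtering.
# Places whose category scores below the threshold simply never appear.

def rank_places(places, user_habits, threshold=0.5):
    """Score each place by how heavily the user engages with its
    category, and surface only those the algorithm deems 'relevant'."""
    ranked = []
    for place in places:
        score = user_habits.get(place["category"], 0.0)
        if score >= threshold:
            ranked.append((score, place["name"]))
    # Highest-scoring places come first; the rest are invisible.
    return [name for score, name in sorted(ranked, reverse=True)]

habits = {"restaurants": 0.9, "bookstores": 0.7, "gas_stations": 0.2}
places = [
    {"name": "Cafe Roma", "category": "restaurants"},
    {"name": "Page One Books", "category": "bookstores"},
    {"name": "QuickFuel", "category": "gas_stations"},
]
print(rank_places(places, habits))  # the gas station never appears
```

The point of the sketch is the silent omission: the user sees a ranked list and has no way of knowing what the threshold filtered out.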
This is a fine example of the degenerative effects of computer automation, which flies in the face of what science calls the “Generation Effect.” The Generation Effect tells us that the more work the mind must do, the more work it is capable of doing.
And certainly we are in the midst of an unparalleled surrender of our own curiosity and pioneering awareness — seen all around us in the humorous form of people running into telephone poles or toppling into water fountains while rubbing their smartphones — as we cede the once‐critical ownership of awareness and self‐worth to manipulation by a machine and its algorithms.
“We believe that we have built a perhaps limitless power of comprehension into computers and other machines, but our minds remain as limited as ever. Our trust that machines can manipulate to humane effect quantities that are unintelligible and unimaginable to humans is incorrigibly strange.”
~Wendell Berry, It All Turns on Affection
Nicholas Carr, in his excellent book “The Glass Cage,” poignantly highlights the dangers of technology when he writes: “Automation severs ends from means. It makes getting what we want easier, but it distances us from knowing.” And it is this distance from knowing that is the truly dangerous part of the equation. At its furthest ends, it can obliterate the knowing portion of the equation entirely, and nothing is worse than not even knowing that you do not know.
Automation, at this stage in its evolution, already touches every facet of our lives. Many analysts have attributed the wild market gyrations of early 2018 to the “algorithmic manipulations of high-speed traders.” Algorithm-based trading strategies are programmed to respond within milliseconds to market conditions, far outpacing a human’s ability to react competitively. When those algorithms are triggered, massive numbers of shares are bought or sold in a blink, and the market can quickly find itself beaten into a corner.
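A threshold-triggered trading rule of the kind described above can be sketched very simply. The numbers, trigger level, and function names here are hypothetical, chosen only to illustrate how a program can dump a position with no human in the loop:

```python
# Hypothetical sketch of a threshold-triggered sell rule.
# All figures are invented for illustration.

def react(price_history, position, drop_trigger=0.02):
    """Sell the entire position if the latest tick falls more than
    drop_trigger (2%) below the previous one."""
    if len(price_history) < 2 or position == 0:
        return 0  # nothing to react to
    prev, last = price_history[-2], price_history[-1]
    if (prev - last) / prev > drop_trigger:
        return -position  # dump every share in one automated order
    return 0  # hold

# A 3% drop fires the rule instantly; thousands of such programs
# selling at once is what can beat the market into a corner.
print(react([100.0, 97.0], position=5000))  # -> -5000
```

Note that the rule has no notion of why the price fell; it reacts to the number alone, in milliseconds, which is precisely what a human cannot do.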
When, during the recent correction, I spoke to my financial advisor, I asked him if there was a certain Dow‐Jones number at which I should become concerned. His answer was revelatory: “It should probably be more of an emotional trigger than a specific number,” he said, which I maintain is probably true but leaves my family vulnerable to the behaviors of predatory software in the grand scheme.
“Taking the misanthropic view of automation, Google has come to see human cognition as creaky and inexact, a cumbersome biological process better handled by a computer.”
~Nicholas Carr, The Glass Cage
The idea of an emotional trigger to market conditions is one thing, but it also hints at the question of morality in machines, which is an enormous problem for software engineers. Carr points out the problems with programming, say, a driverless car to understand the difference between a child chasing a ball into the street and a small dog doing the same. How does the car react? What do insurance companies have to say about how the car reacts? How will our machines “calculate their way out of moral dilemmas”?
And of course there is the lingering question of LARs, or Lethal Autonomous Robots, which are probably the future of the battlefield, and which will need to make battlefield decisions of enormous complexity.
“The only way for robots to become truly moral beings would be to follow our example and take a hybrid approach, both obeying rules and learning from experience. But creating a machine with that capacity is far beyond our technological grasp,” writes Carr. “Before that happens, though, we’ll need to figure out how to program computers to display ‘supra‐rational faculties’ — to have emotions, social skills, consciousness, and a sense of ‘being embodied in the world.’ We’ll need to become gods, in other words.” See Harari.
And finally, Carr says, correctly, that “The first shot freely taken by a robot will be a shot heard round the world. It will change war, and maybe society, forever.”
“Sex‐Dolls Brothel Opens in Spain and Many Predict Sex‐Robot Tourism Soon To Follow.”
“Sex with humans could soon be a thing of the past.”
In the words of Arthur C. Clarke, “We’ve designed a system that discards us.” That discarding can already be seen almost anywhere we choose to look, and it was the original fear driving the Luddites to smash stocking frames, men whose fears of displacement and unemployment were, in the end, realized.
At the current pace of development and disenfranchisement of human capacity, one might be forgiven for wondering at what point a modern version of the Luddites packs a van full of explosives and attempts to drive it through the gates of Google, or Apple, or Intel. It is interesting to realize that today’s version of the truly dangerous underground radical is anyone — marvelous people like my wife, for instance — who makes a conscious choice to remain disconnected, unplugged, separate from the pervasive and utterly controlling worldwide digital nervous system.
“The Robots are Coming for Garment Workers”
“There’s only one problem: most of the alternatives higher up the value chain, like electronics, are automating as well.”
~The Wall Street Journal, 2.16.2018
I will only be surprised if that sort of modern Luddite Rebellion doesn’t happen in my lifetime. I don’t expect it to win out, but I expect that at some point it will begin to manifest, and I expect that in another century or two someone will be writing about the quaint little rebellion against software that happened way back when.
Which points to another problem with pervasive automation: it will begin to erode our ability to properly interpret history. We will, I would argue, eventually be incapable of reproducing, or even understanding, the conditions of antiquity. We will find ourselves so far removed from the day-to-day circumstances of an untechnical world that, when we try to interpret its decision making, we will essentially be looking at alien creatures.
To a large degree we do that already when looking at the bones and burials of our ancestors, with whom, at this point, we still have far more in common than future generations will with us.
We spend a lot of time thinking about and interpreting the past, a necessary function to help explain how it is we have arrived at this particular point in history. But it occurs to me that I have not spent nearly enough time thinking about the future — except through a kind of emotional extrapolation‐projection exercise while sifting through a maze of collated assumptions — and often with a kind of weighty dread that what is in the offing can only be terrifying.
I’m hopeful that thinking more about the future, and in particular the role of technology in that future, might in some way inform my desire to find a balance between technology as a tool and technology as an invisible and inescapable crutch. Which it may very well become for our grandchildren, or their grandchildren, as software developers and engineers seek every day, inexorably, to combine the computer’s contemporary omnipresence with a future omniscience.
“The value of a well‐made and well‐used tool lies not only in what it produces for us, but what it produces in us,” Carr writes. We can see what pervasive technology produces for us, and some of it is extremely beneficial. But given that 1 in 5 Americans take either depression or anxiety medications, I think we also have a glimpse into what all of this pervasive technology produces in us.
We can guard against the designs of software engineers and futurists, if we are so inclined, by refusing to allow technology to become “enshrouded in abstraction,” by resisting the trend of “inscrutable technology becom(ing) an invisible technology.” Because, as Carr writes, “At that point, the technology’s assumptions and intentions have infiltrated our own desires and actions. We no longer know whether the software is aiding us or controlling us. We’re behind the wheel, but we can’t be sure who’s driving.”