Nobody seems to know, from sea to shining sea, what is wrong with our fair Republic. Added to the toxic fumes of disorientation and confusion, we are also witnessing sparks of anger and frustration – and not just in the US, but from London to Paris, from Berlin to Brussels. This is true while at the same time the traditional sources of authority and leadership — from the ideologically gridlocked clown‐show in the US Congress to the disaster of Brexit, from the cratering relevance of traditional religions to the tenuous vision of NATO, from University cry closets to school districts issuing breathing balls and zenergy chimes — don’t seem to have any answers.
It’s quite clear at this point, under the spell of a worldwide fin de siècle funk, that we are on the precipice of falling into a profoundly different world, a fact which is deeply unsettling to hundreds of millions of people, even if they can’t quite articulate what it is that bothers them.
A couple of weeks ago I wrote a column for the newspaper called “The Great American Freak Out.” In the piece I toyed with an idea that has been floating around in my head for a while — the relatively new phenomenon of American adults losing their marbles in sudden and spectacular failures of self‐discipline. I wrote: “If the now ubiquitous American Freak Out is evidence of anything, perhaps it is a symptom of our lives on the new frontier. Maybe it’s happening because we are culturally marooned, neither here nor there just yet, but rather groaning through the death agonies of the old myths that once sustained us, while fighting savagely over the invention and control of the new myths we will eventually live by.”
I was thinking of human beings as free‐agents when I wrote that, and I think there is some truth in it, but what I wasn’t considering was the notion that our 18th century ideas about free‐will, and human free‐agency, are increasingly inadequate to address the realities of 21st century life on earth. What I think I’ve discovered since writing that piece is that many of the people who are involved in these Freak‐Out dramas have been hacked. They are no longer free‐agents. They actually have no recognizable free‐will whatsoever. They are bio‐technical zombies.
So it was serendipitous last week when I cracked Yuval Noah Harari’s latest masterpiece, 21 Lessons for the 21st Century. Readers of my work on this site know that I have referred to Harari at length in the past, and I do that because he is one of the very few historians and philosophers at work today who is demanding we pay attention to the present.
Harari’s principal effort is to force us to pay attention to what is happening TODAY, and to come to our own conclusions about whether the relentless mining of the past is going to work anymore to build a reliable trail into the future. Harari believes, and I think correctly, that we are now experiencing a kind of industrial revolution that is so revolutionary, so game‐changing, that our traditional philosophical underpinnings and the institutions they created are woefully unprepared to address what is actually happening. Furthermore, Harari stresses that the individual who doesn’t understand the power of Artificial Intelligence at work in our culture cannot possibly understand what it is that we are looking at, right now, directly in front of us. And if we can’t see the forest for the trees, how can we expect to make intelligent decisions going forward?
A recent string of Congressional hearings — where various Big Data bigshots like Mark Zuckerberg have been called onto the carpet to account for data breaches — is most revealing because of the questions that aren’t being asked. And those questions aren’t being asked because Congress, and most every average American, has no real grasp on the power that algorithms and AI are asserting over our daily living and in the direction of modern life. Harari argues that the reason we aren’t asking the right questions is that we are now in that strange arena where we are essentially “philosophically impoverished.” The questions we have always asked, and the institutions we have looked to for answers, are no longer sufficient to the task of providing answers in a world whose decision‐making loops are increasingly dominated by Big Data and AI algorithms.
Harari makes extremely powerful and compelling arguments in defense of his thesis, beginning with an explanation of what AI is, versus what it isn’t, who is using it, and how it is being used to capture and hold the attention of billions of people on earth — and then to influence their thinking by subtle suggestion.
Most importantly, Harari wants his readers to understand that our ideas about free‐will, and human agency, are under bombardment. Harari argues, and I think convincingly, that the human mind is not sacrosanct and inviolable. Not anymore. The sad fact is that in the 21st century the human mind can be hacked, and it is in fact now being hacked on a pervasive scale.
Tristan Harris, a former “design ethicist” at Google and founder of the Center for Humane Technology, stands alongside Harari at the forefront of trying to wake the world up to the forces that are helping to create the widespread disorientation and disillusionment in our culture — and of teaching how we might adapt these technologies to our lives in positive ways. Harris told The Wire that by using big data algorithms and supercomputers directed at each and every user, “You (he means AI) can precisely target a lie directly to the people who are most susceptible.”
This is undeniably so, and its effects are everywhere. What concerns Harris and Harari, and should probably be of great concern to all of us, are the individual effects of invisible supercomputers pointed directly at each and every one of us, and then the downstream effects those will unavoidably have on the liberal underpinnings of western civilization. If you live downstream from DuPont, don’t wake up surprised when you have cancer caused by the Teflon chemicals coursing through your veins. Western liberalism has thus far proven remarkably elastic in the face of enormous political and economic challenges, but Harari and Harris argue that this newest test may be the most formidable of all. And, worryingly, they are not convinced it will survive.
“Democracy assumes that human feelings reflect a mysterious and profound ‘free will,’ that this ‘free will’ is the ultimate source of authority, and that while some people are more intelligent than others, all humans are equally free. Like Einstein and Dawkins, an illiterate maid also has free will, and therefore on election‐day her feelings – represented by her vote – count just as much as anybody else’s…This reliance on the heart might prove to be the Achilles’ heel of liberal democracy. For once somebody (whether in Beijing or in San Francisco) gains the technological ability to hack and manipulate the human heart, democratic politics will mutate into an emotional puppet show.”
It would be hard to argue that this isn’t already the case. Consider the recent “Smirking Boy” incident in front of the Lincoln Memorial, in which a group of Black Nationalist crazies, a busload of smarmy Catholic school boys, and a handful of Native American activists kicked off one of the larger and more ridiculous emoti‐political media shitstorms in recent memory. And much of that cyclone of stupidity was driven by AI — supercomputers, design engineers, and control rooms — aware of and responding to the emotional and political preferences of 2 billion Facebook accounts, and countless hundreds of millions of newsfeeds and Snapchats and Instagram clients. The Smirking Boy incident created its own weather system and then rained garbage on the entire world for days.
This episode also pulls back the curtain on the pervasive lie that social media and other networks were meant to create interactive communities across the spectrum, and would draw people into conversations that expanded the mind and our ability to connect on the human plane. To an extraordinary degree, the opposite is true. Social and other forms of media, having hacked millions of minds, now carry their users off on strong digital currents, eventually depositing them on islands of ignorance and loneliness. News outlets, which were once a defense against ignorance, now serve only to stoke the fires and perpetuate the problems. So much so that the consumer ends up like a marooned man walking endlessly around the same island, seeing the same palm tree, the same forlorn beach, the same vast ocean, the same skulking seagulls. According to Harris, “70% of 1.9 billion users of YouTube are watching videos chosen (for them) by an algorithm. That’s more people than follow Islam.”
It’s no wonder that, under these conditions, the ability to exchange thoughts in a face‐to‐face marketplace appears to be a thing of the past. Under these isolated conditions and in these engineered environments, it is no wonder that children can’t talk to their parents, parents can’t talk to their children, students can’t talk to teachers, teachers can’t talk to students, parents can’t talk to teachers, teachers can’t talk to parents, neighbors never talk to each other at all, nobody can talk to the cops, and Congress is incapable of listening. Even television news panels, stacked with powdered wigs and big brains, devolve into shouting matches, so much so that a disinterested observer can’t even understand what they are saying.
Nobody can talk to anybody because everybody has been sucked away by algorithms meant to charge the dopamine response in their brains, and they are almost incapable of entertaining information that comes from outside of the bubble that AI has created for them based on their own preferences. The effects of digital isolation are so bad that an entire industry now exists merely to re‐teach human beings how to sit down in the same room and have a conversation.
This is a real thing that is happening right now. Pervasively. Every day. It is also a potential death sentence.
Harari backstops the thought: “Soon authority might shift again – from humans to algorithms…Just as divine authority was legitimized by religious mythologies, and human authority was justified by the liberal story, so the coming technological revolution might establish the authority of Big Data algorithms, while undermining the very idea of individual freedom.”
Because the real game in modern life is data. Who has it, and who controls it. How do we live in a future where human doctors are eventually replaced by AI doctors — because AI doctors are extremely inexpensive, don’t require ten years of schooling, and make fewer diagnostic mistakes? What happens when insurance companies refuse to insure anyone who does not sign up for an AI doctor? Or when they refuse to insure a driver who won’t buy a self‐driving car, given that networked autonomous vehicles will have far fewer accidents? How do workers unite to protect themselves in mushroom industries driven by algorithms that bloom for ten years and disappear? How does one even live without surrendering one’s own data to unaccountable Big Data corporations who buy and sell information about people? Who regulates this? Who is accountable? Where is Congress? Does individual agency matter anymore in an AI world where integrated supercomputers collate data faster and better than human beings, and where policy makers increasingly rely on them for economic theory or the design of political platforms?
It’s tempting to just spit out a wad of tobacco and utter an ironic Pshaw, but ignoring these real questions isn’t going to make them go away. This is the world that is being created all around us, at increasing speed, and every single one of us has to live in it. And it’s leaving a lot of what we thought we knew about life in the dust.
“In ancient times land was the most important asset in the world, politics was a struggle to control land, and if too much land became concentrated in too few hands, society split into aristocrats and commoners. In the modern era machines and factories became more important than land, and political struggles focused on controlling these vital means of production. If too many of the machines became concentrated in too few hands, society split into capitalists and proletarians. In the twenty‐first century, however, data will eclipse both land and machinery as the most important asset, and politics will be a struggle to control the flow of data. If data becomes concentrated in too few hands, humankind will split into different species.
“The race to obtain the data is already on, headed by data giants such as Google, Facebook, Baidu, and Tencent. So far, many of these giants seem to have adopted the business model of ‘attention merchants.’ They capture our attention by providing us with free information, services, and entertainment, and they then resell our attention to advertisers. Yet the data giants probably aim far higher than any previous attention merchant. Their true business isn’t to sell advertisements at all. Rather, by capturing our attention they manage to accumulate immense amounts of data about us, which is worth more than any advertising revenue. We aren’t their customers – we are their product.”
The truth is startling. What’s more startling is that from a very young age Americans and other children around the world are being sucked into the data pool, whether they know it or not, and whether or not it is any good for them. And nobody seems to know how to stop the machine long enough to get off. In fact, there may very soon come a point where getting off isn’t even an option.
“Ordinary humans will find it very difficult to resist this process. At present, people are happy to give away their most valuable asset – their personal data – in exchange for free email services and funny cat videos. It’s a bit like African and Native American tribes who unwittingly sold entire countries to European imperialists in exchange for colorful beads and cheap trinkets. If, later on, ordinary people decide to try to block the flow of data, they might find it increasingly difficult, especially as they might come to rely on the network for all their decisions, and even for their healthcare and physical survival.”
I’ve said it before, I’ll say it again: Outlaws and Indians.
We are already much closer to this new reality than we think; it exists in recognizable infant stages — such as the ongoing hubbub about Russian use of social media during the last election, the widespread use of bots and memes, and the buying and selling of data by large corporations. And it’s clear that the early stages of this paradigm shift underwrite a great deal of the confusion and desperation we are seeing all around us. Harari writes: “In 1938 humans were offered three global stories to choose from, in 1968 just two, and in 1998 a single story seemed to prevail. In 2018 we are down to zero. No wonder that the liberal elites, who dominated much of the world in recent decades, are in a state of shock and disorientation…To be suddenly left without any story is terrifying. Nothing makes any sense.”
And by liberal elites Harari is not talking about leftist professors in the LaLa Empires of University campuses – he’s talking about all of us in the western digital world.
We can begin to make sense of the confusion if we become more aware of what this newest industrial revolution portends, how it affects us individually, and begin to shape our lives with the full knowledge of its potential for harm. We should also be aware of its potential for good. Because that exists also. Either way, there are going to be unavoidable and massive effects, and we are beginning to see those too. My personal concern is how to minimize becoming part of the unforeseen collateral damage — to avoid becoming one with the bomb crater.
We have long heard stories that AI will eventually push people out of the workforce. This is true, and will likely make many humans suddenly irrelevant. Harari suggests that it will not only make many humans irrelevant in the workplace, but that on a worldwide scale it may also eventually create a near‐permanent “useless class”, that is, a caste of human beings completely unable to participate in a ubiquitously AI workplace. Whereas in the past an unemployed farmer might take a job in a factory, by “2050 a cashier or textile worker losing her job to a robot will not be able to start working as a cancer researcher, as a drone operator, or as part of a human‐AI banking team. She won’t have the necessary skills.”
The pervasiveness of AI may in fact deprive the world of many of its “fall‐back” employment options. Imagine the effect of billions of unemployed, and unemployable people.
“Not only does AI stand poised to hack humans and outperform them in what were hitherto uniquely human skills, but it also enjoys uniquely nonhuman abilities, which make the difference between AI and a human worker one of a kind rather than merely of degree. Two particularly important nonhuman abilities that AI possesses are connectivity and updatability…What we are facing is not the replacement of millions of individual human workers by millions of individual robots and computers; rather, individual humans are likely to be replaced by an integrated network…Notwithstanding the danger of mass unemployment, what we should worry about even more is the shift in authority from humans to algorithms, which might destroy any remaining faith in the liberal story and open the way to the rise of digital dictatorships…Once AI makes better decisions than we do about careers and perhaps even relationships, our concept of humanity and of life will have to change. Humans are used to thinking about life as a drama of decision‐making. Liberal democracy and free‐market capitalism see the individual as an autonomous agent constantly making choices about the world…What will happen to this view of life as we increasingly rely on AI to make decisions for us? At present we trust Netflix to recommend movies, and Google Maps to choose whether we turn right or left. But once we begin to count on AI to decide what to study, where to work, and whom to marry, human life will cease to be a drama of decision‐making. Democratic elections and free markets will make little sense…Democracy in its present form cannot survive the merger of biotech and infotech. Either democracy will successfully reinvent itself in a radically new form or humans will come to live in ‘digital dictatorships.’”
He goes on to point out that these new dictatorships will look entirely different from anything we have seen thus far. They will “be as different from Nazi Germany as Nazi Germany was different from Ancien Régime France.”
So this is our challenge. Here at RIR we are proud of our close‐hold on the lessons offered by the past. But we are fools to an extraordinary degree if we don’t stay out in front of what is happening in the world right now. This is our responsibility as we captain the ships of our families, serve as advisors to our friends, and seek to be men and women of influence in our communities. We can’t do that if we don’t know what’s on the trail ahead of us. And we are fortunate that big minds like Harari and Harris are out in front, scouting the territory, and returning to the campfire to offer ways to navigate our interaction with the supercomputers aimed at us from the other side of our television, computer, and mobile phone screens. Because the power of AI resides in its ability to infect us in the same way that smallpox blankets infected Native Americans. It’s largely invisible, and in the moment it may even feel like a good way to stay warm through the winter. But this sort of trade remains exceedingly dangerous, and may even prove fatal if we are not very, very careful.