By making the Internet as weightless and as frictionless as possible, we made our lives easier. But now, the whole world is suffering the consequences, says Jaron Lanier.
In the early days of the popularization of the Internet, there was a debate about whether to make online digital experiences seem either casual and weightless, or serious, with costs and consequences. And there ended up being a massive desire to create the illusion of weightlessness.
In the service of weightlessness, Internet retailers would not pay the same sales taxes as brick-and-mortar ones. Cloud companies wouldn’t have the same responsibilities to monitor whether they were making money off copyright violations or forgeries. Accountability was recast as a burden or a friction, and since it costs money, an affront to weightlessness. The Internet would be designed as minimally as possible, so that entrepreneurs could experiment. It provided no hook for persistent personal identity, no way to conduct transactions and no way to know whether anyone else was who they claimed to be. All of those necessary functions would eventually have to be fulfilled by private businesses like Facebook.
The result was a mad rush to corral users at any cost, even at the cost of caution and quality. We ended up with an uncharted, ad hoc Internet. We made our lives easier at the time, but the whole world is paying a heavy price many years later.
One of the consequences of the weightless Internet — first emerging in Usenet’s alt.* groups — was an explosion of cruel nonsense. Nothing could be earned other than attention, and no one had a stake in being civil. Today, one of the biggest problems for virtual reality is that the immediately obvious customer base willing to spend money is gamers, and gaming culture has been going through misogynist convulsions, a phenomenon known as Gamergate. Complaints about how women are portrayed in games are drowned out by blithering barrages of hate speech. When a feminist game designer is promoted, the response is bomb threats and personal harassment. Women who dare to participate in gaming culture take real risks, unless they can adopt a persona that puts men first. Gamergate has left a trail of ruined lives. And yet, needless to say, the perpetrators feel they are the victims.
For years, Gamergate was only a plague within digital culture, but by 2016 its legacy was influencing elections, particularly one in the US. Gamergate turned out to be a prototype, rehearsal and launching pad for the alt-right. The kinds of problems that used to inflame only obscure reaches of Usenet now torment everyone. For instance, everyone, including the president, is upset about “fake news.” Even the term itself was quickly made fake; it was deliberately overused until its meaning was reversed within only a few months of its appearance. It became the way a grumpy American administration referred to real news.
Fortunately, more precise terms are available. For example, it’s been reported that the founder of the virtual reality company purchased by a social media company for a couple of billion dollars called the planting of sadistic online confabulations engineered to go viral “s—tposting” and “meme magic.” It was further reported that he spent serious lucre incentivizing the activity during the 2016 election. S—tposting is clearly distinct from low-quality journalism or dumb opinion. It’s one of those rare forms of speech that shuts down speech instead of increasing it. It’s like playing loud, annoying music in the cell of a captured enemy combatant until he breaks. It clogs conversations and minds so that both truth and considered opinion become irrelevant.
There have been widespread calls — from across the political spectrum — for the tech companies to do something about the prevalence of this kind of speech. Google acted first, and despite initial reluctance, Facebook followed. The companies now attempt to flag these posts, and they refuse to pay the sources. It’s worth trying, but I wonder whether this approach addresses the core issues.
Consider how odd it is that the whole society, not just in the US but globally, has to beg a few tightly controlled corporations to allow usable space for sincere news reporting. Isn’t there something strange, perilous and unsustainable about that, even if those corporations are enlightened and respond positively for now? Do we really want to privatize the gatekeeping of our public space for speech? Even if we do, do we want to do so irrevocably? Who knows who will be running Facebook when the founder is gone? Do billions of users really have the ability to coordinate a move off a service like that in protest? If not, what leverage is there? Are we choosing a new kind of government by another name, but one that represents us less?
The attempts by the tech companies to battle s—tposting comprise a fascinating confrontation between the new order of algorithms and the old order of financial incentives. Old and new have a lot in common. The most enthralled proponents perceive each not merely as a technology invented by people but as a superhuman living thing. In the case of financial incentives, the elevation occurred in the 18th century, when Adam Smith celebrated the “invisible hand.” In the case of algorithms, something similar occurred in the mid-1950s, when the term “artificial intelligence” was coined.
The prevalence of s—tposting and other degradations is fueled by the invisible hand, while the antidote is thought to be divined by artificial intelligence. So we are now witnessing a professional wrestling match between the old made-up god and the new one. But what if the new demigod cannot knock out the old demigod? Maybe social media companies need to change how they make money. Maybe anything short of that is just a hopeless propping up of algorithms that will always be toppled by tides of financial incentives.
Just to be clear, I don’t think ethical algorithms or ethical filtering can work, given the current level of our scientific understanding. Such fixes will just be gamed and turned into more manipulation, nonsense and corruption. If the way to protect people from AI is more AI, like supposed algorithms with ethics, then that amounts to saying that nothing will be done, because the very idea is the heart of nonsense. It’s the fantasy of a fantasy.
There’s no scientific description of an idea in a brain at this time, so there’s no way to even frame what it would be like to embed ethics into an algorithm. All algorithms can do now is compound what natural people do, as measured by our impressive global spying regime over the Internet. But just for the sake of argument, suppose the tech companies’ attempts to fix s—tposting with so-called artificial intelligence turn out to be spectacularly successful. Suppose that the filtering algorithms are so excellent that everyone comes to trust them. Even then, the underlying economic incentives would remain the same.
The likely outcome would be that the next best way to drive cranky traffic would come to the fore, with much the same overall result. As an example of a next-best source of crankiness, consider how Russian intelligence services have been identified by US intelligence as meddlers in the US elections. The method was not just to s—tpost but to “weaponize” WikiLeaks to selectively distribute information that harmed only one candidate.
Suppose the tech companies implement ethical filters to block malicious selective leaking. Next up might be the subconscious generation of paranoia toward someone or something, in order to lock in attention. If the companies implement filters to prevent that, there will always be some other method-in-waiting.
How much control of our society do we want to demand from algorithms? Where does it end? Remember, well before we urged the tech companies to do something about fake news, we had demanded they do something about hate speech and organized harassment. The companies started booting certain users, but did society become any more temperate as a result?
At some point, even if moral automation can be implemented, it might still be necessary to appeal to the old demigod of economic incentives. There are alternatives to the current economics of social media. For instance, people could get paid for their content on Facebook and pay for content from others, and Facebook could take a cut. We know that might work because something like it was tried in experiments such as Second Life.
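To make the arithmetic of that alternative concrete, here is a minimal sketch of how a paid-content split could work. The price, the 30 percent cut and every name below are invented for illustration; they are not taken from the book, from Facebook or from any real platform’s terms.

```python
# Minimal sketch of a two-sided content market with a platform cut.
# The 30% rate, the price and all names are hypothetical illustrations only.

PLATFORM_CUT = 0.30  # hypothetical fraction of each purchase the platform keeps

def settle_purchase(price_cents: int, cut: float = PLATFORM_CUT) -> tuple[int, int]:
    """Split one content purchase between the creator and the platform."""
    platform_fee = round(price_cents * cut)      # platform's share, in cents
    creator_payout = price_cents - platform_fee  # remainder goes to the creator
    return creator_payout, platform_fee

# A reader pays 50 cents for a post: the creator earns 35, the platform keeps 15.
creator, platform = settle_purchase(50)
print(f"creator earns {creator} cents, platform keeps {platform} cents")
```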
There are undoubtedly other solutions to consider as well. I advocate an empirical approach. We should be brave about trying out solutions, such as paying people for their data, but also brave about accepting results, even if they are disappointing.
We must not give up.
Excerpted with permission from the new book Dawn of the New Everything: Encounters with Reality and Virtual Reality by Jaron Lanier. Published by Henry Holt and Co. Copyright © 2017 Jaron Lanier.