As the Electronic Frontier Foundation has pointed out, there are also serious implications for security: If ISPs look to sell consumer data, “internet providers will need to record and store even more sensitive data on their customers, which will become a target for hackers.” Even if they anonymize your sensitive data before they sell it to advertisers, they need to collect it first—and these companies don’t exactly have a perfect track record in protecting consumer data. In 2015, for example, Comcast paid $33 million as part of a settlement for accidentally releasing information about users who had paid the company to keep their phone numbers unlisted, including domestic violence victims.
This is all made much more difficult for consumers by the dearth of broadband competition. More than half of Americans have only one option for a broadband provider, or none at all, so if you don’t like your ISP’s data collection policies, chances are you won’t be able to do much about it, and providers know that. It’s highly unlikely that providers, particularly the dominant companies, will choose to forgo those sweet advertising dollars in order to secure their customers’ privacy when they know those customers don’t have much choice. […]
All is not completely lost. Your ISP still has to allow you to opt out of having your data sold, so you can call them or go online to find out how to do that. (If you do that, let us know how it went.) But today’s news is devastating for privacy overall. Consumers could have had more control over their privacy; your data could have been safer. Things could have been better if Congress had done what it usually does and done nothing. Instead, it made things worse for anyone who doesn’t run an internet company or an advertising agency.
With a community of almost two billion people, it is less feasible to have a single set of standards to govern the entire community so we need to evolve towards a system of more local governance.
–Mark Zuckerberg, Building Global Community [via Recode]
There’s so much in the manifesto that smarter people than me will hash over, but this stood out to me, appearing as it does about three-quarters of the way through a polemic advocating for Facebook’s centrality to the building of a truly global community. I’ve no idea how this claim will be translated into algorithmic practice. The general tenor of that section of the manifesto gives the impression that what Zuckerberg means is that individuals will still (sort of) control what they see, but those settings will be refined by Facebook’s programmers to set regional norms for community standards. But in a global community, how are locality and region going to be defined? In a digital space where people choose their associations, how will Facebook determine boundaries? To what extent will cookies, likes, and reposts determine new forms of subcommunity identity? If Facebook is successful in its global agenda, will nation-states morph into digitally facilitated forms of groupthink?

Zuckerberg seems determined not to contribute to the atomization of society via his particular social media platform (and it’s clear that he’s wrestled with this issue pretty extensively), but what checks and balances do Zuckerberg and his army of programmers intend to build into the code? Zuckerberg also intends to grow the Facebook community; if 2 billion makes it “less feasible to have a single set of standards,” what happens when Facebook hits 3 billion?

Zuckerberg claims at the outset of the manifesto that the goal is “building the long term social infrastructure to bring humanity together.” I feel like there’s a lot of slippage between terms like “community,” “government,” “standards,” and “infrastructure” throughout–as there tends to be in any extended political conversation–but very little acknowledgement of who or what comprises this infrastructure. It’s fine and dandy to insist that the sociability of people is the nucleus of Facebook. And that’s sort of true.
But it’s also true that Facebook remains a private company whose product is a patented digital system, its language known only to Zuckerberg and his employees. Facebook is infrastructure, even social infrastructure in a capacious sense of the word. But Zuckerberg seems to entertain seriously the idea that it’s the users who are driving the formation of the community even as he promotes the role of the Facebook corporate entity in giving it shape and function. What does locality look like in a global village whose infrastructure is housed in Silicon Valley, yet whose fiber-optic materials and electronic signals remain almost literally invisible to the eyes of the people who “live” there?
A model of education tied to platforms rather than institutions may seem liberating at first — “I can learn everything I need to know at Khan Academy!” — but that sense of liberation will continue only insofar as users train themselves to ask the questions the platforms already know how to answer, and think the thoughts that the platforms are prepared to transmit.
Very few people will see any of this as problematic, and only those very few will look to work outside the shaping power of the dominant platforms. This means that such institution-building as they manage will have to happen on a small scale and within limited geographical areas. As far as I’m concerned that’s not the worst thing that could happen.
But the majority will accommodate themselves to the faceless inflexibility of platforms, and will become less and less capable of seeing the virtues of institutions, on any scale. One consequence of that accommodation, I believe, will be an increasing impatience with representative democracy, and an accompanying desire to replace political institutions with platform-based decision-making: referendums and plebiscites, conducted at as high a level as possible (national, or in the case of the EU, transnational). Which will bring, among other things, the exploitation of communities and natural resources by people who will never see or know anything about what they are exploiting.
–Alan Jacobs, platforms and institutions
I have come to believe that it is impossible for anyone who is regularly on social media to have a balanced and accurate understanding of what is happening in the world. To follow a minute-by-minute cycle of news is to be constantly threatened by illusion. So I’m not just staying off Twitter, I’m cutting back on the news sites in my RSS feed, and deleting browser bookmarks to newspapers. Instead, I am turning more of my attention to monthly magazines, quarterly journals, and books. I’m trying to get a somewhat longer view of things — trying to start thinking about issues only when some of the basic facts about them have been sorted out. Taking the short view has burned me far too many times; I’m going to try to prevent that from happening ever again (even if I will sometimes fail). And if once in a while I end up fighting a battle in a war that has already ended … I can live with that.
–Alan Jacobs, recency illusions
Look, I think it’s important to understand that these minimization procedures are taken very seriously, and all other agencies that are handling raw signals intelligence are essentially going to have to import these very complex oversight and compliance mechanisms that currently exist at the NSA.
Within the NSA, those are extremely strong and protective mechanisms. I think people should feel reassured that the rules cannot be violated—certainly not without it coming to the attention of oversight and compliance bodies. I am confident that all of the agencies in the U.S. intelligence community will discharge those very same obligations with the same level of diligence and rigor, adhering to both the spirit and the letter of the law.
–Susan Hennessey, interviewed by Kaveh Waddell for the Atlantic
Many of Rid’s tales unfold in the Defense Department and in the General Electric factory in Schenectady, New York, where Vietnam-driven businessmen, engineers, and government men created (unsuccessful) prototypes of robot weapons, and where Kurt Vonnegut sets his first novel, the cybernetics-inspired Player Piano. It turns out, although Rid does not say this in so many words, that science fiction has been as instrumental in the rise of the digital as any set of switches. Consider, for example, the creation of the Agile Eye helmet for Air Force pilots who need to integrate “cyberspace” (their term) with meatspace. The officer in charge reports, according to Rid, “We actually used the same industrial designers that had designed Darth Vader’s helmet.” This fluid movement between futuristic Hollywood design, science fiction, and the DOD is a recurring feature of Rise of the Machines. Take the NSA’s internal warning that “[l]aymen are beginning to expect science fiction capabilities and not scientific capabilities” in virtual reality. Or Rid’s account of the so-called “cypherpunks” around Timothy May. Their name was cribbed from the “cyberpunk” science fiction genre (“cypher” refers to public-key encryption), and they were inspired by novels like Vernor Vinge’s True Names (1981), one title on the movement’s recommended reading list, a list on which not a single nonfiction text figures.
–Leif Weatherby, The Cybernetic Humanities
In the accounts given by philosophers like Bernard Stiegler, the human stands on the point of vanishing entirely; we become something incidental to a total technological system. As he points out, a human being without any technological prostheses is nothing, an unsteady sac of flesh defined only by what it doesn’t have: no shelter, no protection, no society. We create tools, but technical apparatuses and their milieus advance according to their own logic, and these non-living objects have their own strange form of life. Our brains developed to control our hands; human consciousness itself was only the by-product of a technical evolution that moved from flint-knapping to the hammer to the virtual bartender; its real job isn’t to perform any particular task but to perpetuate itself. “Robots,” he writes, are “seemingly designed no longer to free humanity from work but to consign it either to poverty or stress.” Whatever illusion of predominance we had is fading: For others, like Benjamin Bratton, the real political subject is no longer a human individual but a “user,” which can be any kind of biological or digital assemblage. With production automated according to algorithmically generated targets, with the vast majority of all written language taking the form of spam and junk code, this system has less and less use for us—even as a moving part—with every passing day.
Web Summit is where humanity rushes towards its extinction.
Yet the following sentence–more of an aphorism, really–declares simply, “Web Summit is where humanity rushes towards its extinction.” I emphasize those words because they suggest that we are actively moving–as opposed to being moved–toward self-annihilation.
To me, this rhetorical confusion is pretty significant. At the end of the essay, Kriss documents a meet-and-greet at a local watering hole. He laments that human sociality has been transmogrified into ever more affective labor.
A human enjoyment as basic as getting drunk together had been transformed into something else; everyone was still at work, being pulled along by the logic of whatever it is that they’d collectively invented. In a corner of one bar, a muted TV was showing the presidential election on CNN: state by state slowly turning red, a grinning goblin creeping closer to the brink of power. People around me were worried; they thought that a nuclear-armed Donald Trump might lead to the end of humanity. For all the tech industry’s claims to be the leading edge of tomorrow, these people were still thinking in terms of a very old world. The end of humanity had already arrived; it was everywhere around us.
But what or who, exactly, had done the transforming? The “logic of whatever it is that they’d collectively invented”? Kriss’s refusal to name names makes a sort of sense, given that a main theme of his essay is the way our society is increasingly a “system terrifyingly self-sustaining and utterly opaque.” That systemic opacity makes naming names difficult, perhaps impossible; or perhaps, for his rhetorical purposes, naming them openly and clearly is simply undesirable. Look at that passage again. States, one by one, turn red, apparently of their own accord, as if the vote tallies themselves are “a grinning goblin.” Is Donald Trump the goblin? Or does Kriss simply fear to name the agent that is actually responsible for turning those states red: humanity? States don’t just turn red of their own accord, even on CNN election maps. Voters vote, and the color reflects their choice.
I honestly can’t tell if Kriss is being imprecise as a matter of rhetorical strategy or because he’s not carefully thinking through the implications of his premise. In a lot of ways, it seems politically easier to say, “The end of humanity had already arrived.” It lets us off the hook for having to take responsibility for, and a measure of control over, our technosphere. It also gives people like Kriss permission to dismiss and demean those people whose goals and purposes are opaque to him.
Kriss can’t understand why tech entrepreneurs and venture capitalists would choose to congregate in such a chaotic way. He doesn’t get the appeal of running a 90% chance of failure as an app innovator. The idea of using a social occasion as a milieu for professional networking appears vaguely insidious. Lurking behind it all is some Illuminati-esque entity called Technology (or maybe the Tech Industry?), whose agency and motives are obscure and sinister, possibly apocalyptic. And: surely real people could not choose to vote for Trump of their own volition. There must be something compelling them. Kriss (willfully?) ignores the fact that people and their choices remain utterly central to the maintenance of any and all tools that comprise our systems.
I’m largely sympathetic to Kriss’s critique, I think. Most of the systems that hold our social world together mystify me in many profound ways. People themselves constantly disappoint and mystify me, too. But I do think it’s a categorical error to ascribe agential vitality to “systems” or “technology” without doing at least minimal definitional work. Where does human agency end and systemic agency begin? What is the nature of this “strange form of life” possessed by non-living objects? Are all non-living objects possessed by the same strange life-force, and do they all exert it the same way upon humanity? Is Kriss overwhelmed by The Tech Industry at the Web Summit, or is he primarily overwhelmed by the apparently chaotic society of its attendees–the people who’ve chosen to work there?
Most importantly, doesn’t Kriss fall into the ancient trap of self-fulfilling prophecy? Non-living systems built by people enervate human societies precisely to the extent that humanity cedes agential authority to its tools. When the crash happens, it’s often because people start thinking that their tools will take care of themselves. Worse, cataclysms often happen because people start confusing other people with tools. Kriss seems to lament that transformation; he also empowers and perpetuates it. Instead of trying to understand how and why people would be so gung-ho about valorizing their tools (rightly or wrongly), he speculates that the tools have simply gotten the better of their masters. And instead of trying to understand how and why people would choose to vote for a grinning goblin (rightly or wrongly), he intimates that they’ve simply already sacrificed their humanity. “Web Summit is a hyper-concentrated image of our entire world, and the panic and confusion that is to come,” Kriss says, because society’s “structure is one of increasing chaos.” Perhaps. Probably. Or there’s a pattern there that Kriss can’t see because he refuses to recognize it as an extension of his own humanity.
The founders of Twitter are to our discursive culture what Robert Estienne — the guy who divided the Bible up into verses — is to biblical interpretation. Is it possible, when faced with Paul’s letter to the Ephesians divided into verses, to keep clearly in mind the larger dialectical structure of his exposition? Sure. But it’s very hard, as generations of Christians who think that they can settle an argument by quoting a verse, a verse that might not even be a complete sentence, have demonstrated to us all. Becoming habituated to tweet-sized chunks of thought is damaging to one’s grasp of theology and social issues alike.
–Alan Jacobs, against tweetstorms