Introducing “iGen.”

From Jean M. Twenge’s recent essay in The Atlantic:

The more I pored over yearly surveys of teen attitudes and behaviors, and the more I talked with young people like Athena, the clearer it became that theirs is a generation shaped by the smartphone and by the concomitant rise of social media. I call them iGen. Born between 1995 and 2012, members of this generation are growing up with smartphones, have an Instagram account before they start high school, and do not remember a time before the internet. The Millennials grew up with the web as well, but it wasn’t ever-present in their lives, at hand at all times, day and night. iGen’s oldest members were early adolescents when the iPhone was introduced, in 2007, and high-school students when the iPad entered the scene, in 2010. A 2017 survey of more than 5,000 American teens found that three out of four owned an iPhone.

The advent of the smartphone and its cousin the tablet was followed quickly by hand-wringing about the deleterious effects of “screen time.” But the impact of these devices has not been fully appreciated, and goes far beyond the usual concerns about curtailed attention spans. The arrival of the smartphone has radically changed every aspect of teenagers’ lives, from the nature of their social interactions to their mental health. These changes have affected young people in every corner of the nation and in every type of household. The trends appear among teens poor and rich; of every ethnic background; in cities, suburbs, and small towns. Where there are cell towers, there are teens living their lives on their smartphone.

Twenge supplies a wealth of correlational data strongly linking smartphone use to a number of generationally distinct patterns in what she calls “iGen.” Among the more worrying findings, she documents the rise of cyberbullying among young people, especially among girls. Then this:

Social-media companies are of course aware of these problems, and to one degree or another have endeavored to prevent cyberbullying. But their various motivations are, to say the least, complex. A recently leaked Facebook document indicated that the company had been touting to advertisers its ability to determine teens’ emotional state based on their on-site behavior, and even to pinpoint “moments when young people need a confidence boost.” Facebook acknowledged that the document was real, but denied that it offers “tools to target people based on their emotional state.”

At no time in human history have we possessed tools more finely attuned to the art of manipulating the psychology of masses of people. These tools are supremely scalable. The same platforms that can target a demographic of heterogeneous millions can individualize their content to reach, perhaps, a niche demographic of dozens. Taken in the context of Mark Zuckerberg’s utopian manifesto from earlier this year, the existence of the “boost” document ought to give us serious pause.

Allow me to go one step further. Scientists based in Portland, Oregon, recently succeeded in using the gene-editing technique CRISPR/Cas9 to correct a mutation in human embryos that would have caused hypertrophic cardiomyopathy. This is an incredible victory for medical science. But as I’ve said before, I’ve read Margaret Atwood’s Oryx and Crake. You should, too.

We have the tools to shape and reshape the human experience on a very literal level. On the genetic level, CRISPR is but the first feeble step toward technology whose power will enable us to program our own genetic makeup on scales previously imagined only in science fiction. Similarly, the algorithms of social media sites like Facebook have the potential to shape their users’ desires, feelings, and perceptions in ways that are simultaneously microscopically managed and macroscopically unpredictable. I strive to make these observations not in a spirit of alarm or histrionics but in the mindset of sober assessment. If, despite my attempts at sobriety, you feel alarmed… well, good.

Not alone in the universe

When people are searching for meaning, their minds seem to gravitate toward thoughts of things like aliens that do not fall within our current scientific inventory of the world. Why? I suspect part of the answer is that such ideas imply that humans are not alone in the universe, that we might be part of a larger cosmic drama. As with traditional religious beliefs, many of these paranormal beliefs involve powerful beings watching over humans and the hope that they will rescue us from death and extinction.

–Clay Routledge, Don’t Believe in God? Maybe You’ll Try U.F.O.s

Routledge ends with this: “The Western world is, in theory, becoming increasingly secular — but the religious mind remains active. The question now is, how can society satisfactorily meet people’s religious and spiritual needs?”

To betray our nature.

Anthropocene describes what we are doing to our environment, while posthuman is largely phenomenological, a condensed articulation of what it’s like to live in a world where we are constantly making and remaking ourselves, especially via biotechnology. And surely there is some truth in these points, but I want to suggest that the apparent disjunction obscures a deeper unity. A world in which we remake our environment and ourselves is a world that does not feel human to us. We do not know how to play the role of gods, and on some level perceive that to act as gods is to betray our nature.

–Alan Jacobs, Anthropocene theology

The tortoise, the hare, and American science

“Put simply, privatization will mean that more ‘sexy,’ ‘hot’ science will be funded, and we will miss important discoveries since most breakthroughs are based on years and decades of baby steps,” said Kelly Cosgrove, an associate professor of psychiatry at Yale University. “The hare will win, the tortoise will lose, and America will not be scientifically great.”

–Adrienne LaFrance, Scientists Brace for a Lost Generation in American Research

Ice Age Solastalgia

The mammoth’s extinction may have been our original ecological sin. When humans left Africa 70,000 years ago, the elephant family occupied a range that stretched from that continent’s southern tip to within 600 miles of the North Pole. Now elephants are holed up in a few final hiding places, such as Asia’s dense forests. Even in Africa, our shared ancestral home, their populations are shrinking, as poachers hunt them with helicopters, GPS, and night-vision goggles. If you were an anthropologist specializing in human ecological relationships, you may well conclude that one of our distinguishing features as a species is an inability to coexist peacefully with elephants.

But nature isn’t fixed, least of all human nature. We may yet learn to live alongside elephants, in all their spectacular variety. We may even become a friend to these magnificent animals. Already, we honor them as a symbol of memory, wisdom, and dignity. With luck, we will soon re-extend their range to the Arctic. […]

Nikita and Sergey seemed entirely unbothered by ethical considerations regarding mammoth cloning or geoengineering. They saw no contradiction between their veneration of “the wild” and their willingness to intervene, radically, in nature. At times they sounded like villains from a Michael Crichton novel. Nikita suggested that such concerns reeked of a particularly American piety. “I grew up in an atheist country,” he said. “Playing God doesn’t bother me in the least. We are already doing it. Why not do it better?”

–Ross Andersen, Welcome to Pleistocene Park

Curiosity over partisanship?

In other words, curiosity seems to be the pin that bursts our partisan bubbles, allowing new and sometimes uncomfortable information to trickle in. Nothing else works like curiosity does, the authors point out—not being reflective, or good at math, or even well-educated.

Instead, they write, it’s “individuals who have an appetite to be surprised by scientific information—who find it pleasurable to discover that the world does not work as they expected … [who] expose themselves more readily to information that defies their expectations.”

–Olga Khazan, How to Overcome Political Irrationality About Facts

Residents and transients

The scientists who study these communities are microbial ecologists, and like ecologists of the macro world, they like to think about the interactions between different organisms. But this research is still new, and this way of thinking is only just starting to make it into the world of clinical microbiology, which is still focused on defeating the bad microbes. “We noticed there is kind of a division between the clinical human skin microbiology research and this more recent emergence of microbial ecology,” says Roo Vandegrift, an ecologist who co-authored the recent paper. “There wasn’t very much crosstalk between those segments of the scientific literature.”

Even the language the two groups use is different. Microbial ecologists tend to divide microbes into how they behave in communities: are they residents or are they transients? Clinical microbiologists divide them up based on whether they’re harmful: commensal or pathogenic? You could say the two sets of vocabulary roughly map onto one another. Resident microbes tend to be commensal (the bacteria are harmless to you but they’re benefitting from living on you) and the transients are the pathogens that make you sick when they appear.

–Sarah Zhang, What Is the Right Way to Wash Your Hands?

Every week

To be clear, these E. coli with mcr-1 found in China were still susceptible to antibiotics other than colistin, but if a bacterium with genes that make it resistant to every other drug then picks up mcr-1, you get the nightmare scenario: a pan-resistant bacterium. These drug-resistant infections usually happen in people who are already sick or have weakened immune systems.

The Lancet report immediately kicked off a flurry of activity in microbiology labs around the world. And soon enough, the reports started coming out. In December, scientists reported finding mcr-1 in Denmark. Then Germany and Vietnam and Spain and the United States and on and on. It seems like every month is a new mcr-1 development, I said to James Spencer, a microbiologist at the University of Bristol who collaborated with the Chinese scientists on the original Lancet paper. “Every week,” he corrected me. By the time anyone had figured out mcr-1’s existence, it had already spread around the world.

–Sarah Zhang, Resistance to the Antibiotic of Last Resort is Silently Spreading

Science fiction capabilities, not scientific capabilities

Many of Rid’s tales unfold in the Defense Department and in the General Electric factory in Schenectady, New York, where Vietnam-driven businessmen, engineers, and government men created (unsuccessful) prototypes of robot weapons, and where Kurt Vonnegut sets his first novel, the cybernetics-inspired Player Piano. It turns out, although Rid does not say this in so many words, that science fiction has been as instrumental in the rise of the digital as any set of switches. Consider, for example, the creation of the Agile Eye helmet for Air Force pilots who need to integrate “cyberspace” (their term) with meatspace. The officer in charge reports, according to Rid, “We actually used the same industrial designers that had designed Darth Vader’s helmet.” This fluid movement between futuristic Hollywood design, science fiction, and the DOD is a recurring feature of Rise of the Machines. Take the NSA’s internal warning that “[l]aymen are beginning to expect science fiction capabilities and not scientific capabilities” in virtual reality. Or Rid’s account of the so-called “cypherpunks” around Timothy May. Their name was cribbed from the “cyberpunk” science fiction genre (“cypher” refers to public-key encryption), and they were inspired by novels like Vernor Vinge’s True Names (1981), one on a list of recommended books for the movement on which not a single nonfiction text figures.

–Leif Weatherby, The Cybernetic Humanities

You don’t even need the pretense of methodology.

“The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to call you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

–Audrey Watters, The Best Way to Predict the Future is to Issue a Press Release