Sherry Turkle and the pharmacology of phones

Sherry Turkle’s recent piece in The New York Times, “Stop Googling. Let’s Talk,” appears to take on the key points of her latest book, Reclaiming Conversation (also reviewed in the NYT). Turkle reports on a decline in empathy, particularly among younger people, which she asserts is a result of emerging technologies: social media and especially smartphones. While she cites some research in support of this claim (research which itself only suggests there might be a connection between technology and decreased empathy), Turkle also says, “In our hearts, we know this, and now research is catching up with our intuitions.” An interesting rhetorical appeal, since research so often demonstrates counter-intuitive discoveries.

But here’s a more interesting line from Turkle: “Our phones are not accessories, but psychologically potent devices that change not just what we do but who we are.” Indeed, though the distinction between doing and being is not so easily made or maintained. The point, though, is that we are changing. We’ve always been changing, though maybe now we are in a period of more rapid change. She writes that “Every technology asks us to confront human values. This is a good thing, because it causes us to reaffirm what they are.” And I wonder at the choice of “reaffirm.” Why reaffirm? Because human values are never changing? Why not discover or construct?

Continue reading Sherry Turkle and the pharmacology of phones

Algorithm objects: people are the things they do

We do things. It’s an interestingly Latourian idiomatic expression, a kind of dancer-and-the-dance moment. And in the moment of that linguistic confusion, we become those things: consumers, workers, believers, lovers, and so on. Not in a permanent sense, though; we are always moving from one thing we do to another. One of the things we do, increasingly and often without much thought, is interact with algorithms. Sadly there’s no convenient “-er word” for that, but it is a thing we do nonetheless.

In a recent Atlantic article, Adrienne Lafrance reports that “Not even the people who write algorithms really know how they work.” What does she mean by that? Basically that no one can tell you exactly why you get the particular results that you get from a Google search or why Facebook shows you one set of status updates rather than another. (I notice I get a very different set of updates in my phone’s Facebook app than I do in my web browser.) And of course that goes on and on, into the ads that show up on the websites you visit, the recommendations made to you by Amazon or Netflix and other such sites, etc.
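To make that opacity a little more concrete, here’s a toy sketch of how a personalized feed ranker might work. Everything here is hypothetical (the signal names, the weights, the histories), not any real platform’s code: the point is only that each item’s position depends on a weighted mix of per-user signals, so two devices with different interaction histories can see different orderings from identical code.

```python
def score(signals, weights):
    """Weighted sum of whatever engagement signals a user produced for an item."""
    return sum(w * signals.get(name, 0.0) for name, w in weights.items())

def rank_feed(items, user_history, weights):
    """Order items by their personalized score, highest first."""
    return sorted(items, key=lambda item: score(user_history.get(item, {}), weights),
                  reverse=True)

# Entirely made-up signal weights and per-device interaction histories.
weights = {"friend_affinity": 2.0, "recency": 1.0, "past_clicks": 1.5}
items = ["status_a", "status_b"]
phone_history = {"status_a": {"past_clicks": 3.0}}
web_history = {"status_a": {"recency": 1.0},
               "status_b": {"friend_affinity": 2.0, "past_clicks": 1.0}}

print(rank_feed(items, phone_history, weights))  # the phone surfaces status_a first
print(rank_feed(items, web_history, weights))    # the browser surfaces status_b first
```

Multiply this handful of signals by thousands, let the weights be machine-learned and continuously updated, and the difficulty of explaining any particular ordering, even for the engineers who built the system, becomes apparent.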

Continue reading Algorithm objects: people are the things they do

microaggression, victimhood, and digital culture

Take as evidence these two recent articles in The Atlantic by Conor Friedersdorf, “The Rise of Victimhood Culture” and “Is ‘Victimhood Culture’ a Fair Description?” These articles take up research by sociologists Bradley Campbell and Jason Manning (“Microaggression and Moral Cultures“). As Campbell and Manning observe:

In modern Western societies, an ethic of cultural tolerance – and often incompatibly, intolerance of intolerance – has developed in tandem with increasing diversity. Since microaggression offenses normally involve overstratification and underdiversity, intense concern about such offenses occurs at the intersection of the social conditions conducive to the seriousness of each. It is in egalitarian and diverse settings – such as at modern American universities – that equality and diversity are most valued, and it is in these settings that perceived offenses against these values are most deviant.

They also make the fairly obvious observation (which I’d like to explore further in a moment) that

As social media becomes ever more ubiquitous, the ready availability of the court of public opinion may make public disclosure of offenses an increasingly likely course of action. As advertising one’s victimization becomes an increasingly reliable way to attract attention and support, modern conditions may even lead to the emergence of a new moral culture.

However, the part of the article that becomes the focus of Friedersdorf’s articles comes at the end. Campbell and Manning contend that historically we have had an “honor culture,” in which people typically resolve disputes unilaterally, often through violence (think duels), and a “dignity culture,” in which people turn to third parties (e.g., courts) to resolve disputes but tend to ignore microaggressions. Today, they argue, we find ourselves in what they term a “victimhood culture,” which is

Continue reading microaggression, victimhood, and digital culture

Genre, media formats, and evolution

Mackenzie Wark has a useful extended discussion of Lev Manovich’s Software Takes Command. If you haven’t read Manovich’s book, Wark’s discussion offers some great insights into it. I think Manovich’s argument for software studies is important for the future of rhetoric, though admittedly my work has long operated at points of intersection between rhetoric and media study.

But here’s one way of thinking about this. How do we explain the persistence of the “essay,” not only in first-year composition but as the primary genre of scholarly work in our field and really across the humanities? Indeed we might take this question more broadly and wonder about the persistence of scholarly genres across disciplines and beyond. That is, we might ask why the genres of the scientific scholarly article, the newspaper article, the novel, and so on have not changed much in the wake of digital media.

Or maybe we should ask about the photograph.

Manovich's media lab image wall.

It’s likely that you have some recent family photos hanging somewhere in your house. They were probably taken with a digital camera, maybe even with your smartphone. But sitting in that frame, they probably don’t look very different from photos that would have hung there thirty years ago. The photo may not reveal to you the complete transformation of the composition process that led to its production. That transformation has erased some photographic capacities that were available to chemical film and do not exist for digital images. However, as we know, most of those compositional activities are now simulated in software. Additionally, many new capacities have emerged for photographs, most notably, at least for the everyday user, the capacity to share images online.

Continue reading Genre, media formats, and evolution

rhetorical throughput

One of the projects I have been regularly pursuing (and I’m certainly not alone in this) is investigating the implications of rhetoric’s disciplinary-paradigmatic insistence on a symbolic, anthropocentric scope of study and entertaining the possibilities of rethinking those boundaries. I’ve been employing a mixture of DeLanda, Latour, and other “new materialist/realist/etc.” thinkers, always with the understanding that these theories don’t fit neatly together and with the understanding that I’m not in the business of building a comprehensive theory of the world.

I’m interested in rethinking how rhetoric works, in the hope of finding a new way to approach living in a digital world.

So take for example this recent piece of research from Experimental Brain Research, “Using space and time to encode vibrotactile information: toward an estimate of the skin’s achievable throughput” (paywall) by Scott Novich and David Eagleman, or perhaps just watch Eagleman’s TED talk where he asks “Can we create new senses for humans?”

Continue reading rhetorical throughput

academic “quit pieces” and related digital flotsam

Before I get into this, I should try to make a few things clear. This post isn’t about the structural problems facing higher education right now (issues of cost and access, the changing cultural-economic role of universities nationally and globally, or shifts in media-information technologies that are reshaping our work). It’s not even about the increasing politicization of those problems as they become bullet points in campaign stump speeches or the subject of legislation. No, this post is really about the rhetorical response to these exigencies among academics and in the higher education press (and as the two become difficult to separate).

So I am willing to accept that things are as bad as they have ever been in higher education... well, at least for a century? Of course, Bill Readings published The University in Ruins in the nineties, detailing the increasing corporatization of the university. In the eighties, when I was an undergrad, students on my campus protested in the hundreds or thousands over a variety of issues related to apartheid, the CIA on campus, and, yes, tenure and rising tuition. Of course, as the song calls us to remember, students in 1970 were shot and killed by the National Guard at Kent State, resulting in a national student strike. Maybe the Golden Age of the American university was in the 50s, when women were English majors, commie professors were pursued by senators, and non-white students had their own colleges. Look, I assume you all know this history at least as well as I do. So what’s my point? It’s not that “the more things change the more they stay the same.” I’m willing to accept as a premise that things are worse now than they have been in the last half century, as long as we are all also willing to accept that there is hardly some ideal moment to point back to either.

My interest in this post is in the rhetorical responses to this situation, specifically our near-viral interest in “quit pieces.”

Continue reading academic “quit pieces” and related digital flotsam

faculty at work

This is one of those posts where I find myself at a strange intersection among several seemingly unrelated articles.

The first three clearly deal with academic life, while the last two address topics near and dear to faculty but without addressing academia.

The Rees, Scott, and Gilbert pieces each address aspects of the perceived, and perhaps real, changing role of faculty in curriculum. Formalized assessment asks faculty to articulate their teaching practices in fairly standardized ways and offer evidence that, if not directly quantitative, at least meets some established standards for evidence. It doesn’t necessarily change what you teach or even how you teach, but it does require you to communicate about your teaching in new ways. (And it might very well put pressure on you to change your teaching.) The Scott piece ties into this with the changing demographics and motives of students and increased institutional attention to matters of retention and time to degree. While most academics are likely in favor of more people getting a chance to go to college and being successful there, Scott fears these goals put undue pressure on the content of college curriculum (i.e., dumb it down). Clearly this is tied to assessment, which is partly how we discover such problems in the first place. It’s tough if you want your class to be about x, y, and z, but assessment demonstrates that students struggle with x, y, and z and probably need to focus on a, b, and c first.

Though Rees sets himself at a different problem, I see it as related. Rees warns faculty that flipping one’s classroom by putting lecture content online puts one at risk. As he writes:

When you outsource content provision to the Internet, you put yourself in competition with it—and it is very hard to compete with the Internet. After all, if you aren’t the best lecturer in the world, why shouldn’t your boss replace you with whoever is? And if you aren’t the one providing the content, why did you spend all those years in graduate school anyway? Teaching, you say? Well, administrators can pay graduate students or adjuncts a lot less to do your job. Pretty soon, there might even be a computer program that can do it.

It’s quite the pickle. Even if we take Rees’ suggestion to heart, those superstar lectures are already out there on the web. If a faculty member’s ability as a teacher is no better than an adjunct’s or TA’s, then why not replace him/her? How do we assert the value added by having an expert tenured faculty member as a teacher? That would take us back to assessment, I fear.

Like many things in universities, we’re living in a reenactment of 19th century life here. If information and expertise are in short supply, then you need to hire these faculty experts. If we measure expertise solely in terms of knowing things (e.g. I know more about rhetoric and composition, and digital rhetoric in particular, than my colleagues at UB) then I have to recognize that my knowledge of the field is partial, that there’s easy access to this knowledge online, and that there are many folks who might do as good a job as I do with teaching undergraduate courses in these areas (and some who would be willing to work for adjunct pay). I think this is the nature of much work these days, especially knowledge work. Our claims to expertise are always limited. There’s fairly easy access to information online, which does diminish the value of the knowledge we embody. And there’s always someone somewhere who’s willing to do the work for less money.

It might seem like the whole thing should fall apart at the seams. The response of faculty, in part, has been to demonstrate how hard they work, how many hours they put in. I don’t mean to suggest that faculty are working harder now than they used to; I’m not sure either way. The Gilbert, Scott, and Rees articles would at least indicate that we are working harder in new areas that we do not value so much. Tim Wu explores this phenomenon more generally, finding it across white collar workplaces from Amazon to law firms. Wu considers that Americans might just have some moral aversion to too much leisure. However, he settles on the idea that technologies have increased our capacity to do work and so we’ve just risen (or sunken) to meet those demands. Now we really can work virtually every second of the waking day. Unfortunately Wu doesn’t have a solution; neither do I. But assessment is certainly a by-product of this phenomenon.

The one piece of possibly good news comes from Steven Johnson, whose analysis reveals that the decline of the music industry (and related creative professions), predicted by the appearance of Napster and other web innovations, hasn’t happened. Maybe that’s a reason to be optimistic about faculty as well. It at least suggests that Rees’ worries may be misplaced. After all, faculty weren’t replaced by textbooks, so why would they be replaced by rich media textbooks (which is essentially what the content of a flipped classroom would be)? Today people spend less on recorded music but more on live music. Perhaps the analogy in academia is not performance but interaction. That is, the value of faculty, at least in terms of teaching, is in their interaction with students, with their ability to bring their expertise into conversation with students.

Meanwhile we might do a better job of recognizing the expansion of work that Wu describes, work that ultimately adds no value for anyone. Assessment seems like an easy target. Wu describes how law firms combat one another with endless busy work as a legal strategy: i.e., burying one another in paperwork. Perhaps we play similar games of one-upmanship both among universities and across a campus. However, the challenge is to distinguish between these trends and changes in practices that might actually benefit us and our students. We probably do need to understand our roles as faculty differently.

Neoliberal and new liberal arts

In an essay for Harper’s William Deresiewicz identifies neoliberalism as the primary foe of higher education. I certainly have no interest in defending neoliberalism, though it is a rather amorphous, spectral enemy. It’s not a new argument, either.

Here are a few passages that give you the spirit of the argument:

The purpose of education in a neoliberal age is to produce producers. I published a book last year that said that, by and large, elite American universities no longer provide their students with a real education, one that addresses them as complete human beings rather than as future specialists — that enables them, as I put it, to build a self or (following Keats) to become a soul.

Only the commercial purpose now survives as a recognized value. Even the cognitive purpose, which one would think should be the center of a college education, is tolerated only insofar as it contributes to the commercial.

Now here are two other passages.

Continue reading Neoliberal and new liberal arts

digital ethics in a jobless future

What would/will the world look like when people don’t need to work or at least need to work far less? Derek Thompson explores this question in a recent Atlantic article, “The World Without Work.” It’s an interesting read, so I recommend it to you. Obviously it’s a complex question, and I’m only taking up a small part of it here. Really my interest here is not on the politics or economics of how this would happen, but on the shift in values that it would require.

As Thompson points out, to be jobless in America today is as psychologically damaging as it is economically painful. Our culture, more so than that of other industrialized nations, is built on the value of hard work. We tend to define ourselves by our work and our careers. Though we have this work-hard/play-hard image of ourselves, we actually have a hard time with leisure, spending much of our time surfing the web, watching TV, or sleeping. If joblessness leads to depression, then that makes sense, I suppose. In a jobless or less-job future, we will need to modify that ethos somehow. Thompson explores some of the extant manifestations of joblessness: makerspaces, the part-time work of Uber drivers and such, and the possibility of a digital-age Works Progress Administration. As he remarks, in some respects it’s a return to pre-industrial, 19th-century values of community, artisanal work, and occasional paid labor. And it also means recognizing the value of other unpaid work, such as caring for children or elders. In each case, not “working” is not equated with not being productive or valuable.

It’s easy to wax utopian about such a world, and it’s just as easy to spin a dystopian tale. Both have been done many times over. There is certainly a fear that the increasing precarization of work will only serve to further exacerbate social inequality. Industrialization required unions and laws to protect workers.  How do we imagine a world where most of the work is done by robots and computers, but people are still able to live their lives? I won’t pretend to be able to answer that question. However, I do know that it starts with valuing people and our communities for more than their capacity to work.

I suppose we can look to socialism or religion or gift economies or something else from the past as providing a replacement set of values. I would be concerned, though, that these would pose problems similar to those of our current values in adapting to a less-job future.

Oddly enough, academia offers a curious possibility. In the purest sense, the tenured academic as a scholar is expected to pursue his/her intellectual interests and be productive. S/he is free to define those interests as s/he might, but the products of those pursuits are freely accessible to the community. In the less-job future I wonder if we might create a more general analog of that arrangement, where there is an expectation of contribution but freedom to define that contribution.

Of course it could all go horribly wrong and probably will.

On the other hand, if we are unwilling to accept a horrible fate, then we might try to begin understanding and inventing possibilities for organizing ourselves differently. Once again, one might say that rhetoricians and other humanists might be helpful in this regard. Not because we are more “ethical,” but because we have good tools and methods for thinking through these matters.



hanging on in quiet desperation is the English way

The song refers to the nation, of course, and I’m thinking of a discipline where perhaps we are not so quiet.

Here are two tangentially related articles, and both are tangentially related to English, so many tangents here. First, an article in Inside Higher Ed about UC Irvine’s rethinking of how they will fund their humanities PhD programs: a 5+2 model where the last two years are a postdoctoral teaching fellowship. Irvine’s English department hasn’t adopted it (maybe they will in the future), but it is an effort to address generally the challenges of humanities graduate education that many disciplines, including our own, face. In the second article, really an editorial in The Chronicle, Eric Johnson argues against the perception (and reality) that college should be a site of workforce training. It is, in other words, an argument for the liberal arts, but it is also an argument for more foundational (i.e., less applied, commercial) scientific research.

These concerns interlock: a greater demand for liberal arts education would create a job market that could relieve some of the pressure on humanities graduate programs.

Here’s a kind of third argument. Let’s accept the argument that specialized professionalizing undergraduate degrees are unfair to students. They place all the risk on the students who have to hope that their particular niche is in demand when they graduate, and, in fact, that it stays in demand. In this regard I think Johnson makes an argument that everyone (except perhaps the corporations that are profiting) should agree with: that corporations should bear some of the risk/cost of specialized on-the-job-training, since they too are clearly profiting.

Maybe we can apply some of that logic to humanities graduate programs and academic job markets. I realize there’s a difference between undergraduate and graduate degrees, and that the latter are intended to professionalize. But does that professionalization have to be so hyper-specialized to meet the requirements of the job market? I realize that from the job search side, it makes it easier to narrow the field of applicants that way. And since there are so many job seekers out there, it makes sense to demand specific skills. That’s why corporations do it. I suppose you can assume it’s a meritocratic system, but we don’t really think that, do we? If we reimagined what a humanities doctoral degree looked like, students could easily finish one in 3 or 4 years. No, they wouldn’t be hyper-specialized, and yes, they would require on-the-job-training. But didn’t we just finish saying that employers should take on some of that burden?

Here’s the other piece… even if one accepts the argument (and I do) that undergrads should not be compelled to pursue specialized professionalizing degrees, it does not logically follow that they should instead pursue a liberal arts education that remains entrenched in the last century.

In my view, rather than creating more hyper-specialized humanities PhDs, all with the hope that their special brand of specialness will be hot at the right time so that they can get tenure-track jobs where they are primed to research and teach in their narrow areas of expertise, we should produce more flexible intellectuals: not “generalists,” mind you, but adaptive thinkers and actors. Certainly we already know that professors often teach outside of their specializations, in introductory courses and other service courses in a department. All of that is still designed to produce a disciplinary identity. This new version of doctoral student would not have been fashioned by a mini-me pedagogy; they wouldn’t identify with a discipline that requires reproducing itself.

So what kind of curriculum would such faculty produce? It’s hard to say exactly. But hopefully one that would make more sense to more students than what is currently on offer. One that would offer more direct preparation for a professional life after college without narrowly preparing students for a single job title. In turn, doctoral education could shift to prepare future faculty for this work rather than the 20th-century labors it currently addresses. I can imagine that many humanists might find such a shift anti-intellectual, because, when it comes down to it, they might imagine they have cornered the market on being intellectual. Perhaps they’re right. On the other hand, if being intellectual leaves one cognitively hamstrung and incapable of change, a hyper-specialized hothouse flower, then in the end it’s no more desirable than the other forms of professionalization that we are criticizing.