faculty at work

This is one of those posts where I find myself at a strange intersection among several seemingly unrelated articles.

The first three clearly deal with academic life, while the last two address topics near and dear to faculty but without addressing academia.

The Rees, Scott, and Gilbert pieces each address aspects of the perceived, and perhaps real, changing role of faculty in the curriculum. Formalized assessment asks faculty to articulate their teaching practices in fairly standardized ways and to offer evidence that, if not directly quantitative, at least meets some established standards. It doesn’t necessarily change what you teach or even how you teach, but it does require you to communicate about your teaching in new ways. (And it might very well put pressure on you to change your teaching.) The Scott piece ties into this with the changing demographics and motives of students and increased institutional attention to matters of retention and time to degree. While most academics are likely in favor of more people getting a chance to go to college and being successful there, Scott fears these goals put undue pressure on the content of the college curriculum (i.e., dumb it down). Clearly this is tied to assessment, which is partly how we discover such problems in the first place. It’s tough if you want your class to be about x, y, and z, but assessment demonstrates that students struggle with x, y, and z and probably need to focus on a, b, and c first.

Though Rees sets himself a different problem, I see it as related. Rees warns faculty that flipping one’s classroom by putting lecture content online puts one at risk. As he writes:

When you outsource content provision to the Internet, you put yourself in competition with it—and it is very hard to compete with the Internet. After all, if you aren’t the best lecturer in the world, why shouldn’t your boss replace you with whoever is? And if you aren’t the one providing the content, why did you spend all those years in graduate school anyway? Teaching, you say? Well, administrators can pay graduate students or adjuncts a lot less to do your job. Pretty soon, there might even be a computer program that can do it.

It’s quite the pickle. Even if we take Rees’ suggestion to heart, those superstar lectures are already out there on the web. If a faculty member’s ability as a teacher is no better than an adjunct’s or TA’s, then why not replace him/her? How do we assert the value added by having an expert tenured faculty member as a teacher? That would take us back to assessment, I fear.

Like many things in universities, we’re living in a reenactment of 19th-century life here. If information and expertise are in short supply, then you need to hire these faculty experts. If we measure expertise solely in terms of knowing things (e.g., I know more about rhetoric and composition, and digital rhetoric in particular, than my colleagues at UB), then I have to recognize that my knowledge of the field is partial, that there’s easy access to this knowledge online, and that there are many folks who might do as good a job as I do teaching undergraduate courses in these areas (and some who would be willing to work for adjunct pay). I think this is the nature of much work these days, especially knowledge work. Our claims to expertise are always limited. There’s fairly easy access to information online, which does diminish the value of the knowledge we embody. And there’s always someone somewhere who’s willing to do the work for less money.

It might seem like the whole thing should come apart at the seams. The response of faculty, in part, has been to demonstrate how hard they work, how many hours they put in. I don’t mean to suggest that faculty are working harder now than they used to; I’m not sure either way. The Gilbert, Scott, and Rees articles would at least indicate that we are working harder in new areas that we do not value so much. Tim Wu explores this phenomenon more generally, finding it across white-collar workplaces from Amazon to law firms. Wu considers that Americans might just have some moral aversion to too much leisure. However, he settles on the idea that technologies have increased our capacity to do work and so we’ve simply risen (or sunk) to meet those demands. Now we really can work virtually every second of the waking day. Unfortunately Wu doesn’t have a solution; neither do I. But assessment is certainly a by-product of this phenomenon.

The one piece of possibly good news comes from Steven Johnson, whose analysis reveals that the decline of the music industry (and related creative professions) predicted with the appearance of Napster and other web innovations hasn’t happened. Maybe that’s a reason to be optimistic about faculty as well. It at least suggests that Rees’ worries may be misplaced. After all, faculty weren’t replaced by textbooks, so why would they be replaced by rich media textbooks (which is essentially what the content of a flipped classroom would be)? Today people spend less on recorded music but more on live music. Perhaps the analogy in academia is not performance but interaction. That is, the value of faculty, at least in terms of teaching, lies in their interaction with students, in their ability to bring their expertise into conversation with students.

Meanwhile we might do a better job of recognizing the expansion of work that Wu describes: work that ultimately adds no value for anyone. Assessment seems like an easy target. Wu describes how law firms combat one another with endless busywork as a legal strategy, i.e., burying one another in paperwork. Perhaps we play similar games of one-upmanship both among universities and across a campus. However, the challenge is to distinguish between these trends and changes in practice that might actually benefit us and our students. We probably do need to understand our roles as faculty differently.

Neoliberal and new liberal arts

In an essay for Harper’s, William Deresiewicz identifies neoliberalism as the primary foe of higher education. I certainly have no interest in defending neoliberalism, though it is a rather amorphous, spectral enemy. It’s not a new argument, either.

Here are a few passages that give you the spirit of the argument:

The purpose of education in a neoliberal age is to produce producers. I published a book last year that said that, by and large, elite American universities no longer provide their students with a real education, one that addresses them as complete human beings rather than as future specialists — that enables them, as I put it, to build a self or (following Keats) to become a soul.

Only the commercial purpose now survives as a recognized value. Even the cognitive purpose, which one would think should be the center of a college education, is tolerated only insofar as it contributes to the commercial.

Now here are two other passages.


digital ethics in a jobless future

What would/will the world look like when people don’t need to work or at least need to work far less? Derek Thompson explores this question in a recent Atlantic article, “The World Without Work.” It’s an interesting read, so I recommend it to you. Obviously it’s a complex question, and I’m only taking up a small part of it here. Really my interest here is not on the politics or economics of how this would happen, but on the shift in values that it would require.

As Thompson points out, to be jobless in America today is as psychologically damaging as it is economically painful. Our culture, more so than those of other industrialized nations, is built on the value of hard work. We tend to define ourselves by our work and our careers. Though we have this work hard/play hard image of ourselves, we actually have a hard time with leisure, spending much of our time surfing the web, watching TV, or sleeping. If joblessness leads to depression, then that makes sense, I suppose. In a jobless or less-job future, we will need to modify that ethos somehow. Thompson explores some of the extant manifestations of joblessness: makerspaces, the part-time work of Uber drivers and such, and the possibility of a digital-age Works Progress Administration. As he remarks, in some respects it’s a return to pre-industrial, 19th-century values of community, artisanal work, and occasional paid labor. And it also means recognizing the value of other unpaid work, such as caring for children or elders. In each case, not “working” is not equated with not being productive or valuable.

It’s easy to wax utopian about such a world, and it’s just as easy to spin a dystopian tale. Both have been done many times over. There is certainly a fear that the increasing precarization of work will only serve to further exacerbate social inequality. Industrialization required unions and laws to protect workers.  How do we imagine a world where most of the work is done by robots and computers, but people are still able to live their lives? I won’t pretend to be able to answer that question. However, I do know that it starts with valuing people and our communities for more than their capacity to work.

I suppose we can look to socialism or religion or gift economies or something else from the past as providing a replacement set of values. I would be concerned though that these would offer similar problems to our current values in adapting to a less-job future.

Oddly enough, academia offers a curious possibility. In the purest sense, the tenured academic as a scholar is expected to pursue his/her intellectual interests and be productive. S/he is free to define those interests as s/he might, but the products of those pursuits are freely accessible to the community. In the less-job future I wonder if we might create a more general analog of that arrangement, where there is an expectation of contribution but freedom to define that contribution.

Of course it could all go horribly wrong and probably will.

On the other hand, if we are unwilling to accept a horrible fate, then we might try to begin understanding and inventing possibilities for organizing ourselves differently. Once again, one might say that rhetoricians and other humanists might be helpful in this regard. Not because we are more “ethical,” but because we have good tools and methods for thinking through these matters.


hanging on in quiet desperation is the English way

The song refers to the nation, of course, and I’m thinking of a discipline where perhaps we are not so quiet.

Here are two tangentially related articles, both of which are tangentially related to English, so many tangents here. First, an article in Inside Higher Ed about UC Irvine’s rethinking of how it will fund its humanities PhD programs: a 5+2 model in which the last two years are a postdoctoral teaching fellowship. Irvine’s English department hasn’t adopted it (maybe it will in the future), but it is an effort to address generally the challenges of humanities graduate education that many disciplines, including our own, face. In the second article, really an editorial in The Chronicle, Eric Johnson argues against the perception (and reality) that college should be a site of workforce training. It is, in other words, an argument for the liberal arts, but it is also an argument for more foundational (i.e., less applied, less commercial) scientific research.

These concerns interlock: an expanded demand for liberal arts education would create a job market that could relieve some of the pressure on humanities graduate programs.

Here’s a kind of third argument. Let’s accept the argument that specialized professionalizing undergraduate degrees are unfair to students. They place all the risk on the students, who have to hope that their particular niche is in demand when they graduate and, in fact, that it stays in demand. In this regard I think Johnson makes an argument that everyone (except perhaps the corporations that are profiting) should agree with: that corporations should bear some of the risk/cost of specialized on-the-job training, since they too are clearly profiting.

Maybe we can apply some of that logic to humanities graduate programs and academic job markets. I realize there’s a difference between undergraduate and graduate degrees, and that the latter are intended to professionalize. But does that professionalization have to be so hyper-specialized to meet the requirements of the job market? I realize that from the job-search side, it makes it easier to narrow the field of applicants that way. And since there are so many job seekers out there, it makes sense to demand specific skills. That’s why corporations do it. I suppose you can assume it’s a meritocratic system, but we don’t really think that, do we? If we reimagined what a humanities doctoral degree looked like, students could easily finish one in three or four years. No, they wouldn’t be hyper-specialized, and yes, they would require on-the-job training. But didn’t we just finish saying that employers should take on some of that burden?

Here’s the other piece… even if one accepts the argument (and I do) that undergrads should not be compelled to pursue specialized professionalizing degrees, it does not logically follow that they should instead pursue a liberal arts education that remains entrenched in the last century.

In my view, rather than creating more hyper-specialized humanities PhDs, all with the hope that their special brand of specialness will be hot at the right time so that they can get tenure-track jobs where they are primed to research and teach in their narrow areas of expertise, we should produce more flexible intellectuals: not “generalists,” mind you, but adaptive thinkers and actors. Certainly we already know that professors often teach outside of their specializations, in introductory courses and other service courses in a department. All of that is still designed to produce a disciplinary identity. This new version of doctoral students wouldn’t have been fashioned by a mini-me pedagogy; they wouldn’t identify with a discipline that requires reproducing.

So what kind of curriculum would such faculty produce? It’s hard to say exactly. But hopefully one that would make more sense to more students than what is currently on offer. One that would offer more direct preparation for a professional life after college without narrowly preparing students for a single job title. In turn, doctoral education could shift to prepare future faculty for this work rather than the 20th-century labors it currently addresses. I can imagine that many humanists might find such a shift anti-intellectual because, when it comes down to it, they might imagine they have cornered the market on being intellectual. Perhaps they’re right. On the other hand, if being intellectual leaves one cognitively hamstrung and incapable of change, a hyper-specialized hothouse flower, then in the end it’s no more desirable than the other forms of professionalization that we are criticizing.

It turns out that the Internet is a big place

I suppose this is coincidentally a follow-up of sorts on my last post. It might also be “a web-based argument for the humanities” of a sort. We’ll see.

On The Daily Beast, Ben Collins asks the musical question “How Long Can the Internet Run on Hate?” One might first be inclined to answer, “I don’t know, but we’re likely to find out.” However, on reflection, one might pause: hold on, does the Internet run on hate? I don’t think I need to summarize Collins’ argument, as we all know what he’s on about here. If one wasn’t sure, then the comments following the article would at least give one a taste.

So a couple observations.

1. The Internet cannot be separated all that easily from the rest of culture. One might as well ask how long can civilization run on hate (the answer? apparently a good long while). Obviously the Internet did not invent hate. Does it make us hate more? Or does it simply shine a light in the dark corners of humanity’s hatred? Probably both.


expression is not communication

I’ve been struck with a patch of Internet curmudgeon syndrome of late: spending too much time on Facebook probably. One of the ongoing themes of my work as a digital rhetorician is the observation that we do not know how to behave in digital media ecologies. That observation is not a starting point for a lesson on manners (though we certainly get enough of those too!). Instead, it’s a recognition of the struggle we face in developing digital-rhetorical practices.

Those of us who were online in the 90s (or earlier) certainly remember flame wars on message boards and email lists. This was the start of trolling, a behavior now familiar to us all, which in some respects I think has mutated and become monetized as clickbait. Of course trolls are just looking to get a rise out of you, and it may be hard to tell the difference from the outside, but some of these incendiary conversations were genuine disagreements. I know I was part of some very heated exchanges as a grad student on our department email list. Eventually you realize that you’re not in a communication situation; instead, you’re part of a performance where the purpose is not to convince the person you’re putatively speaking to but to make that person look foolish in front of a silent audience that has been subjected to your crap by being trapped on the same email list with you. That changes one’s entire rhetorical approach, especially when you realize that the bulk of that captive audience isn’t captive at all but simply deleting emails.


the humanities’ dead letter office

Adeline Koh writes “a letter to the humanities” reminding them that DH will not save the humanities (a subject I’ve touched on at least once). Of course I agree, as I agree with her assertion that we “not limit the history of the digital humanities to humanities computing as a single origin point.” Even the most broadly articulated “DH” will not save the humanities, because saving is not the activity that the humanities require: ending maybe, but more generously changing, evolving, mutating, etc.

Koh’s essay echoes earlier arguments made about the lack of critical theory in DH projects (narrowly defined). As Koh writes:

throughout the majority of Humanities Computing projects, the social, political and economic underpinnings, effects and consequences of methodology are rarely examined. Too many in this field prize method without excavating the theoretical underpinnings and social consequences of method. In other words, Humanities Computing has focused on using computational tools to further humanities research, and not to study the effects of computation as a humanities question.

But “digital humanities” in the guise of “humanities computing,” “big data,” “topic modelling,” (sic) “object oriented ontology” is not going to save the humanities from the chopping block. It’s only going to push the humanities further over the precipice. Because these methods alone make up a field which is simply a handmaiden to STEM.

I have no idea what object oriented ontology is doing in that list. Maybe she’s referring to object oriented programming? I’m not sure, but the philosophical OOO is not a version of DH. However, its inclusion in the list might be taken as instructive in a different way. That is to say that I was maybe lying when I said I had no idea what OOO is doing on this list alongside a couple DH tropes. It is potentially a critical theorist’s list of enemies (though presumably any such list would be incomplete without first listing other competing critical theories at the top). And this really brings one to the core of Koh’s argument:

So this is what I want to say. If you want to save humanities departments, champion the new wave of digital humanities: one which has humanistic questions at its core. Because the humanities, centrally, is the study of how people process and document human cultures and ideas, and is fundamentally about asking critical questions of the methods used to document and process. (emphasis added)

So “humanistic questions” are “critical questions.” As I read it, part of what is going on in these arguments is an argument over method. As Koh notes, DH is a method (or collection of methods, really, even in its most narrow configuration). But “critical theory” is also a collection of methods. As the argument goes, if the humanities is centrally defined by critical-theoretical methods then any method that challenges or bypasses those methods would be deemed “anti-humanistic.”

I’ve spent the bulk of my career failing the critical theory loyalty litmus test, so I suppose that’s why I am unsympathetic to this argument. Not because my work isn’t theoretical enough! One can always play the theory one-upmanship game and say “my work is too theoretical. It asks the ‘critical questions’ of critical theory.” But actually I don’t think there’s a hierarchy of critical questions, though there clearly is a disciplinary paradigm that prioritizes certain methods over others, and from within that paradigm DH (and apparently OOO as well, while we’re at it) might be viewed as a threat. The rhetorical convention is to accuse such threats of being “anti-theoretical,” of being complicit with the dominant ideology (like STEM), or, perhaps worse, of being ignorant dupes of that ideology.

I can certainly account for my view that critical-theoretical methods are insufficient for the purposes of my research. That said, I have no issue with others undertaking such research. The only thing I really object to is the claim that a critical-theoretical humanities serves as the ethical conscience of the university.  If the argument is that scholars who use methods different from one’s own are “devaluing the humanities” then I question the underlying ethics of such a position.

I’m not sure if the humanities need saving or if the critical-theoretical paradigm of the humanities needs saving or if it’s not possible to distinguish between these two. I’m not part of the narrow DH community that is under critique in this letter. I’m not part of the critical-theoretical digital studies community that Koh is arguing for. And I’m not part of the other humanities community that is tied to these central critical-humanistic questions.

I suppose in my view, digital media offers an opportunity (or perhaps creates a necessity) for the humanities to undergo a paradigm shift. I would expect that paradigm shift to be at least as extensive as the one that ushered in critical theory 30-40 years ago and more likely will be as extensive as the one that invented the modern instantiation of these disciplines in the wake of the second industrial revolution. I’m not sure if the effect of such a shift can be characterized as “saving.” But as I said, I don’t think the humanities needs saving, which doesn’t necessarily mean that it will continue to survive, but only that it doesn’t need to be saved.

writing in the post-disciplines

Or, the disorientation of rhetoric toward English Studies…

In her 2014 PMLA article “Composition, English, and the University,” Jean Ferguson Carr makes a strong argument for the value of rhetoric and composition for literary studies in building the future of English Studies. She pays particular attention to composition’s interests in “reading and revising student writing,” “public writing,” “making or doing,” and using “literacy as a frame.” As I discussed in a recent post, there’s a long history of making these arguments for the value of composition in English, an argument whose proponents one assumes would welcome MLA’s recent gestures toward inclusiveness. Of course the necessity of these arguments, including Carr’s, stems from the fact that the question “what is the value of composition to English?” has mostly been answered with “nothing” or “not much,” at least beyond the pragmatic value of providing TAships for literary studies PhD students.

I’m more interested in the opposite question though, “what’s the use of literary studies to rhetoric/composition?” It’s not a question Carr really concerns herself with, mentioning only in passing that “a more intentional and articulated relationship between composition and English is still mutually beneficial,” though she doesn’t offer much evidence for this. Presumably she (rightly) identifies her audience as literary scholars for whom this question would likely never arise. However, I think the answer might be similar: nothing, or not much, at least beyond the pragmatic value that the institutional security of an established English department might provide. And with that security wavering, well…


“this will revolutionize education”

I picked up on this from Nick Carbone here. It’s a video by physics educator Derek Muller (who I think I’ve written about before here, but I can’t seem to find it if I did). There are actually two videos.

They share a common theme. The first deals with the long history of claims that various technologies will “revolutionize education.” In debunking those claims, Muller argues for the important role of the teacher as a facilitator of the social experience of education and for an understanding of learning as a dialogic experience, though he doesn’t quite put it in those terms. The second video discusses research he has done on using video to instruct students in physics (he has a YouTube channel now with around 1M subscribers). Similar to the first video, he finds that a video that enacts a dialogue and works through common misconceptions, while being more confusing and demanding more attention from the viewer, ultimately results in more learning.


what to do when a professional organization tries to embrace you

Yesterday, at least in my disciplinary corner of the online world, there was a fair amount of discussion about the Chronicle of Higher Education’s report on the Modern Language Association’s upcoming officer elections, which will ultimately result in someone from the field of rhetoric becoming MLA president. I was interviewed and briefly quoted for the article, so I thought I’d be a little more expansive here.

In the most cynical-pragmatic terms, one imagines that MLA can see that rhetoric faculty are underrepresented among its members, so they are an obvious potential market. One can hardly blame an organization for trying to grow its membership, so what does it have to do to appeal to those potential consumers?  In more generous terms, MLA might view itself as having some professional-ethical obligation to better represent all of the faculty it lays claim to when it asserts itself as representing “English.” It specifically names “writing studies” in its mission, though not rhetoric or composition. Of course it is always a happy coincidence when the pragmatic and the ethical are in harmony.
