Writing for the Natural Sciences
In this class, I tackled the challenge of having students conduct an empirical study in my classroom even though I'm not a scientist. It went swimmingly. I developed this course as an extension of my institution's "Writing for" series of junior-level courses, which offers specialized writing instruction in various fields like the Social Sciences or the Humanities. We surveyed the College of Science and Engineering about their students' writing needs, and then I designed a course that meets the needs of students in the Natural Sciences—biology, chemistry, physics, and so on.

In keeping with Moskovitz and Kellogg’s call for “inquiry-based writing,” I wanted students to be able to learn about scientific writing as an integral part of the process of empirical study. However, I did not have the luxury of attaching my writing class to a science lab class, so I had to come up with another solution that did not require me to spend half the semester teaching science and/or statistics instead of teaching writing. In short, I wanted an experiment that was extremely simple (but not intended for middle schoolers), that could be done in a classroom equipped only with desks and chairs (but that required specific materials), and that produced genuine data (but was straightforward enough that students could get a sense of patterns in the data without having to perform statistical analysis). This proved virtually impossible to find Out There. My search results just kept oscillating between proto-science “experiments” for children that did not produce any actual data (including many that claimed they were for adults) and “simple” experiments that assumed the practitioners already had a specialized knowledge that I simply couldn’t assume of my class, which would be made up of people in very different majors. So I made one up.

The class was piloted in Spring 2023 as a themed section of Technical Writing. It was based around my experiment, Trashketball.
Participants (i.e., students in the class) threw differently weighted balls of paper into trashcans, while researchers (i.e., also those same students) made observations to see if weight impacts accuracy. There were many other details to hash out, but in the end I had a simple experiment that did not require any specialized disciplinary knowledge, that could be performed in our classroom with materials we could make or cheaply purchase, that we could run many trials of in just a couple of days, and—thanks to the demographic data we collected—that also allowed students to write Not The Same Paper As Each Other. (In other words, one student might be more interested in whether ball weight impacts accuracy differently for those over a certain height, while another student might be more curious whether weight impacts accuracy differently for students with different athletic backgrounds.)

Although I wrote the study design in its entirety in preparation for the class, I did include an assignment where I removed some of the details and asked students to make those choices as a class. For instance, my initial design had the participant standing 5 feet from the trashcan, then 10 feet, then 15 feet. When students were allowed to make this design choice, though, we ended up with 5 feet, then 7.5 feet, then 10 feet. Their contribution to the study’s design allowed them to experience the ways that writing is shaped by those scientific choices and, in retrospect, how their choices might have been shaped by a better understanding of the writing task they would undertake after the study was over.
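To make the "Not The Same Paper" idea concrete, here is a minimal sketch of the kind of pattern-spotting a student could do by slicing the trial data along a demographic variable. Every number, name, and threshold below is invented for illustration; the actual class worked from its own collected data, not from code.

```python
# Hypothetical Trashketball trial records: (participant height in inches,
# ball weight category, whether the throw landed). All data is invented.
trials = [
    (70, "light", True),  (70, "heavy", False),
    (62, "light", True),  (62, "heavy", True),
    (73, "light", False), (73, "heavy", False),
    (65, "light", True),  (65, "heavy", True),
]

def hit_rate(rows):
    """Fraction of the given throws that landed in the trashcan."""
    return sum(1 for *_, hit in rows if hit) / len(rows)

# One student's angle: does ball weight matter differently for taller
# participants? (68 inches is an arbitrary illustrative cutoff.)
tall = [t for t in trials if t[0] >= 68]
short = [t for t in trials if t[0] < 68]

for label, group in [("tall", tall), ("short", short)]:
    for weight in ("light", "heavy"):
        subset = [t for t in group if t[1] == weight]
        print(label, weight, hit_rate(subset))
```

A different student could reuse the same tally with a different grouping variable (athletic background, say), which is exactly what lets each paper ask its own question of the shared dataset.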
Students wrote in their reflections things like “The methods section got difficult when I had to explain X because there were a lot of details that didn’t make a lot of difference; if I were doing this again, I think I would have simplified that part of the experiment.” This is a lesson learned much more thoroughly through the “mistake” than it ever could have been learned through taking notes on a lecture that tells you to simplify things where possible.

So, in Unit 1, we finalized the design of the experiment and ran it, while learning how to keep a good lab notebook. In Unit 2, we developed an IMRaD-style Lab Report based on the experiment, drafting and workshopping one section per week until the paper was submitted in the 11th week (of a 15-week class). In Unit 3, while I was grading Lab Reports so that students could revise them if they wanted, the students translated their Lab Reports into Conference Posters. We also touched on several other science writing genres in this unit, like proposals and press releases.

In Fall 2023, I will be running the class again under its own newly assigned course number. This time, I expect to have mostly (or maybe all!) science majors instead of the mostly technology and engineering majors I had last spring. I’ve made some tweaks from the pilot run (primarily with the order of readings/coverage), but it was so successful the first time around that I’m planning to largely run it back again. One not-so-minor change: we may be including ANOVA. I think I can handle it, and more importantly, I think they can handle it.
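For readers curious what adding ANOVA would actually involve, here is a sketch of the one-way F statistic computed by hand on invented Trashketball-style numbers (hits out of ten throws for three hypothetical ball weights). In practice a class would likely lean on a library such as SciPy rather than coding the formula, but the bare computation shows there is no deep magic here.

```python
# A one-way ANOVA F statistic, written out longhand. All data is invented.

def f_statistic(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical hits-out-of-10 per participant, by ball weight
light = [6, 7, 5, 8, 6]
medium = [7, 8, 8, 9, 7]
heavy = [4, 5, 3, 5, 4]

F = f_statistic([light, medium, heavy])
print(round(F, 2))
```

A large F (compared against an F distribution's critical value) suggests the weight groups really do differ in accuracy, which is the one inference the class would need the statistic for.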
The Rhetoric of Science Communication
Next summer, I hope to run a Selected Topics in Writing on the subject of “science communication”—not the writing scientists do for other scientists, but the writing that scientists (and others) do for non-scientists. While working on the scientific writing class, I found myself often wanting to fold in science communication. There are many reasons why a practicing scientist would benefit from the ability to explain their work to people outside their field ($$$!!), but bringing in science communication got more and more difficult as I worked through the nuances. I ended up settling on a single week devoted to science comm, and that week focused almost exclusively on why a scientist ought to care about how their work is discussed in popular media.

In a class devoted entirely to exploring science communication, I’m imagining a few different angles from which we can approach the topic. The term ‘science communication’ tends to encompass both science popularization (think Carl Sagan or Neil deGrasse Tyson) and science journalism (think Rachel Carson or Brian Deer). In the first group are those folks who are taking on the role of ambassadors for science, hyping up the findings, processes, and products of science in order to increase recruitment as well as public support and funding. In the second group are those who are taking on the role of watchdog, performing third-party investigations in order to uncover corruption, fraud, and negligence. These roles have a complex and overlapping history, and even today we often see science reporting that seeks to hype alongside science popularization that seeks to correct misinformation. "Science communication" refers to this entire rhetorical ecosystem. There are three interlocutors at work here: scientists, journalists/writers, and readers/the public.
One way of organizing a class on science comm might be to focus a unit on the ways that journalists talk to the public about science, another unit on the ways that scientists talk to journalists about science, and a third unit on the ways that scientists talk directly to the public about science. Major assignments might include a rhetorical analysis of a person, event, company, or other anchoring focus that takes all three of these angles into consideration. Students might additionally choose between projects that perform one of these subgenres: a science lab press release, a hype video or article, or a critical investigation. The class would serve as an introduction to rhetorical theory by way of science studies—Bruno Latour, Emily Martin, Paul Feyerabend, and Jeanne Fahnestock, for example. This structure allows the class to serve students who are going into the sciences, students who are going into journalism, and students who are interested in science and its role in the world.
An experiment in ✨contract grading✨
I am taking the plunge into contract grading. A colleague first introduced me to the concept via Asao B. Inoue’s book Labor-Based Grading Contracts, in which Inoue describes a pro-labor, antiracist pedagogy that champions the labor students perform over the products they produce. One might think of this as going truly all-in on “process over product.” You should really read the book, which is wonderful, but the short version is that student work is not assessed on quality at all, but on completion. Yes, on every assignment. The instructor, of course, determines in advance what counts as “complete,” and if a student fulfills those requirements, they receive a mark of Satisfactory (or similar), and if they miss those requirements through lateness, incompletion, or simply not turning something in, they get a mark of Unsatisfactory (or similar). A final grade of B is achieved by staying under the maximum number of Unsatisfactory marks, usually in different categories. For example, here is the difference between a B and a C in my summer class, in which there are 4 Major Assignments, 3 Peer Review sessions, and 14 Engagement tasks:

A final grade of B (default):
- All Major Assignments submitted and marked S
- No more than 1 late/incomplete Major Assignment
- All peer review sessions submitted and marked S
- At least 11 Engagement tasks submitted and marked S

A final grade of C:
- All Major Assignments submitted and marked S
- No more than 2 late/incomplete Major Assignments
- At least 2 peer review sessions submitted and marked S
- At least 10 Engagement tasks submitted and marked S

What really drew me to this grading schema is that it largely reflects how I’ve been grading my students for years, but is much more transparent about it. One of my oldest mantras in the classroom is “Your job is to do the tasks you’re asked to do with good-faith effort and engagement.
My job is to make sure that the tasks I ask you to do will give you an opportunity to learn something.” Learning writing is a process fundamentally opposed to test-based instruction. Another way of saying this is that writing students must feel safe to try-but-fail. The act of scoring a final product will always carry with it a testing logic—at least in the US, where such logic permeates students’ very concept of education. Writing instructors have been wrestling with this problem for years; for me, the solution has turned out to be a rejection of scoring altogether. There are many reasons, including practical ones, why this makes more sense than it may first appear.

First, this grading system takes great strides toward a classroom that is welcoming and inclusive for English as an Additional Language students, first-generation college students, students whose first dialect is considered nonstandard, students with disabilities and neuroatypicality, and students with other non-classroom-based challenges, such as caregiving. It creates an environment where it is the work, effort, and practice that is valued, and it acknowledges that different students may have different goals in this regard.

Second, it produces a student-instructor relation in which the instructor serves as a guide while the student retains agency. They get to own their writing decisions without fear of “losing points” for their choices. They get to approach every assignment as something to try, not something to have already mastered by the due date. They get to receive comments on their work not as an explanation of the ways they did not meet expectations, but as the thoughts and advice of someone further along in the process. Note that each of these is an example of testing logic: losing points, mastery, failing to meet expectations… I have had students who received essay grades in the upper 90s ask me what they lost points on!
While my knee-jerk response was “You gotta be kidding me,” the truth is that on reflection, they aren’t wrong to think this way. If I’m scoring their papers with marks like 83%, then the implication is clearly that there is, in fact, a potential 100% and they failed to meet it. I am the unreasonable one if I insist that a 100% is not a real possibility for an essay but a 98% is.

Finally, this system in no way precludes “standards” or “rigor.” Each assignment still has requirements that students must meet in order for the work to be considered Satisfactory. These naturally will include things like length, structure or genre, research quality/quantity, formatting and other disciplinary conventions, and so on, but they can also include things like “making a contribution to the scholarly conversation” or “re-imagining the problem in light of recent events” or “demonstrating the film analysis skills discussed in Unit 2” or whatever else the class calls for. I expect a different set of skills on display in my First-Year Writing classes than I do in my junior-level specialty classes, and the criteria for a Satisfactory reflect that.

As of this writing, my first section to utilize the contract grading system is kicking off. I’ve always maintained that being scared spitless is not conducive to writing good papers, and I’m happy to report that, at least in the first few reading responses that have been submitted so far, the relief is palpable. We’ll see how it goes.
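One of the quiet virtues of a contract is that the final-grade rules are mechanical enough to write down as a function. Here is a sketch of the B and C tiers from the summer-class example; the function name and structure are my own, and the contract only spells out those two tiers, so everything below them is left as a single catch-all.

```python
# A minimal sketch of the contract's grade tallies: 4 Major Assignments,
# 3 Peer Review sessions, 14 Engagement tasks. Thresholds mirror the
# B and C criteria in the text; the function itself is illustrative.

def final_grade(major_s, major_late, peer_s, engagement_s):
    """Map counts of Satisfactory (S) marks onto the contract's letter grades."""
    # B (default): all 4 Majors S, at most 1 late/incomplete Major,
    # all 3 peer reviews S, at least 11 of 14 Engagement tasks S.
    if major_s == 4 and major_late <= 1 and peer_s == 3 and engagement_s >= 11:
        return "B"
    # C: all 4 Majors S, at most 2 late/incomplete Majors,
    # at least 2 peer reviews S, at least 10 Engagement tasks S.
    if major_s == 4 and major_late <= 2 and peer_s >= 2 and engagement_s >= 10:
        return "C"
    return "below C"  # the source only defines the B and C tiers

print(final_grade(major_s=4, major_late=0, peer_s=3, engagement_s=12))  # → B
```

The point of writing it this way is the transparency the text describes: a student can check their own standing against the contract at any moment, with no hidden rubric arithmetic.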