2021 Reading

Inspired by Brian’s 2020 list, I decided to keep a list of all the books I read in 2021. I wish I had done this in 2020, since the lack of live sports meant that I read a ton that year. You can see evidence of the return of baseball and soccer in the 2021 list.

I only counted books that I read in their entirety – abandoned books and skimmed books (for work) didn’t count.

  1. City of Saints and Madmen, Jeff Vandermeer
  2. Asymmetry, Lisa Halliday
  3. Kent State: Four Dead in Ohio, Derf Backderf
  4. How to Write One Song, Jeff Tweedy
  5. Gestures of Concern, Chris Ingraham
  6. Black Futures, Kimberly Drew and Jenna Wortham
  7. Your Black Friend, Ben Passmore
  8. Detransition, Baby, Torrey Peters
  9. Tokyo Ueno Station, Yu Miri
  10. The Future of Another Timeline, Annalee Newitz
  11. The Hare, Melanie Finn
  12. No One is Talking About This, Patricia Lockwood
  13. If You Kept a Record of Sins, Andrea Bajani (trans. Elizabeth Harris)
  14. The Gloaming, Melanie Finn
  15. Golem, Nick Montfort
  16. White Dialogues, Bennett Sims
  17. A Little Devil in America, Hanif Abdurraqib
  18. Hummingbird Salamander, Jeff Vandermeer
  19. Haints Stay, Colin Winnette
  20. Minor Feelings, Cathy Park Hong
  21. Crying in H Mart, Michelle Zauner
  22. A Questionable Shape, Bennett Sims
  23. Acrobat, Nabaneeta Dev Sen (trans. Nandana Dev Sen)
  24. A Door Behind a Door, Yelena Moskovich
  25. The Blurry Years, Eleanor Kriseman
  26. The Incantations of Daniel Johnston, Ricardo Cavolo and Scott McClanahan
  27. I Sexually Identify as an Attack Helicopter, Isabel Fall
  28. (Re)Born in the USA, Roger Bennett
  29. The Mushroom at the End of the World, Anna Lowenhaupt Tsing
  30. A Children’s Bible, Lydia Millet
  31. The Drop Edge of Yonder, Rudolph Wurlitzer
  32. Transparent Designs, Michael Black
  33. Cloud Ethics, Louise Amoore
  34. The Sympathizer, Viet Thanh Nguyen
  35. Klara and the Sun, Kazuo Ishiguro
  36. A Mouthful of Air, Amy Koppelman
  37. Palaces, Simon Jacobs
  38. The Underneath, Melanie Finn
  39. Giving Voice: Mobile Communication, Disability, and Inequality, Meryl Alper
  40. My Heart is a Chainsaw, Stephen Graham Jones
  41. Digital Black Feminism, Catherine Knight Steele
  42. Ennemonde, Jean Giono (trans. Bill Johnston)
  43. Darryl, Jackie Ess
  44. The Loneliness of the Long Distance Cartoonist, Adrian Tomine
  45. Black Skin, White Masks, Frantz Fanon
  46. The Undercommons, Stefano Harney and Fred Moten
  47. Glitch Feminism, Legacy Russell
  48. Piranesi, Susanna Clarke
  49. Experiments in Imagining Otherwise, Lola Olufemi 
  50. Grievers, Adrienne Maree Brown
  51. Harlem Shuffle, Colson Whitehead
  52. Three to Kill, Jean-Patrick Manchette
  53. Barn 8, Deb Olin Unferth

About one book per week isn’t too bad, I guess. Nearly all the academic books in this list are from my Digital Inequality class, and some of the fiction came from two subscriptions: Two Dollar Radio and Archipelago.

This year I learned that I’ll likely read anything by Melanie Finn and that I need to track down all of Jean-Patrick Manchette’s stuff (fun!).

Some favorites this year: Piranesi, A Children’s Bible, and Digital Black Feminism.

I end the year in the middle of two books, so I guess those will be the first on the 2022 list:

Dance of the Infidels: A Portrait of Bud Powell, Francis Paudras
Shriek: An Afterword, Jeff Vandermeer

What can we learn from the Fediverse even if we remain inside of the Corporate Internet?

This piece in the Atlantic about “how to put out democracy’s dumpster fire” turns to a range of experts who are rethinking the motivational structure and design of the Internet. It starts with a discussion of Tocqueville’s Democracy in America and harkens back to a day when the U.S. thrived thanks to its various “associations”: groups of workers or other like-minded citizens gathering to do the work of democracy. The history told in this piece is over-simplified, and this critique from Jeff Jarvis is fair:

Jeff Jarvis’ critique of the “moral panic” that defines much of the Atlantic article.

However, I’m most interested in the article’s turn to a “new generation” of people who are trying to reimagine the Internet:

“A new generation of internet activists, lawyers, designers, regulators, and philosophers is offering us that vision, but now grounded in modern technology, legal scholarship, and social science. They want to resurrect the habits and customs that Tocqueville admired, to bring them online, not only in America but all across the democratic world.”

That generation includes a range of folks, including J. Nathan Matias, Ethan Zuckerman, Eli Pariser, Talia Stroud, and many others. The article also discusses Pol.is, a platform that I’ve made note of in a previous post. However, there’s no mention of the fediverse, and I think it would be interesting to set those efforts alongside those mentioned in the article. The fediverse is driven by folks who are trying to build a new infrastructure, while the Atlantic appears mostly interested (with some exceptions) in those studying the existing infrastructure.

Obviously, there is room (and a need) for both, though I’ve been thinking of those managing federated servers as doing something like the “vernacular” work in this area – developing on-the-ground strategies. Those strategies are doing theory by way of software, codes of conduct, web development, server maintenance, etc. I wonder if these research centers are studying this (often niche) portion of the internet or if the focus is primarily on how to rethink the corporate Internet that we all live inside of.

As I think more about this project that I want to get off the ground, I am interested primarily in the applications emerging out of the fediverse as models for a range of activities. Yes, I am interested in understanding how a federated model might be enacted by more people – by people who are outside of the very tech-savvy groups that currently run and maintain federated services. But I’m also interested in understanding if those tools and practices of these fediverse communities might be something that could be enacted on the corporate Internet or if the mere fact that a service runs through servers that are extracting and monetizing data essentially kills any possibility of learning from federated networks.

Taiwan’s approach to broken social media systems

A former student passed along this story about Taiwan’s approach to building more trust between citizens and the government. It just so happens that I’m part of a panel proposal for the Association of Internet Researchers conference that includes Misti Yang, a researcher studying Taiwan’s approach. I would imagine I’ll learn much more about this situation from Misti, but for now I thought I’d jot down a few thoughts about this particular writeup.

When Taiwan’s government experienced a breakdown in trust with citizens (a situation that came to a head during the Sunflower Movement), it turned to “civic hackers.”

Taiwan’s civic hackers were organized around a leaderless collective called g0v (pronounced “gov zero.”) Many believed in radical transparency, in throwing opaque processes open to the light, and in multi-stakeholderism, the idea that everyone who is affected by a decision should have a say in it. They preferred establishing consensus to running lots of majority-rule votes. These were all principles, incidentally, that parallel thinking about how software should be designed — a philosophy that g0v had begun to apply to the arena of domestic politics.

g0v thought the problem in Taiwan was linked to a disconnect between politicians and the public – there was no clear way of gauging public sentiment and no good way of crafting some kind of consensus. Social media spaces, unsurprisingly, were of little use, since their algorithms (built on “engagement”) amplify more extreme content that increases division. So, they built a new digital space:

The hackers’ answer was called vTaiwan. (The “v” stands for virtual.) A mixed-reality, scaled listening exercise, it was an entirely new way to make decisions. The platform invites citizens into an online space for debate that politicians listen to and take into account when casting their votes. Government would start a new vTaiwan process on a political question it was deliberating, and Taiwanese people from across the full spectrum of opinion would join one another to discuss it online.

But vTaiwan wasn’t just a replication of corporate social media platforms. Instead, it used Pol.is, a platform that essentially amplifies consensus-building statements and hides trolling and flaming. The focus is on finding points of agreement.

My initial response, before knowing too much about the intricacies of Pol.is, is that this searching for consensus might come at the expense of important counter arguments. Consensus can often mean just this: the silencing of opinions that are important but have perhaps not gained enough traction or political momentum to get a foothold. I’ll be interested to learn more about how Pol.is navigates this problem.
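To make the consensus-over-engagement idea concrete, here is a toy sketch of how a Pol.is-style ranking might work. The real platform derives opinion groups statistically from voting patterns; here the groups, statements, and votes are all invented for illustration, and the score is simply the lowest agreement rate any group gives a statement.

```python
# Toy sketch of Pol.is-style consensus ranking. Hypothetical data:
# the real platform clusters voters into opinion groups from their
# vote patterns; here the groups are hard-coded for illustration.

# votes[group][statement] -> list of +1 (agree) / -1 (disagree)
votes = {
    "group_a": {"fund parks": [1, 1, 1, -1], "ban cars": [1, 1, 1, 1]},
    "group_b": {"fund parks": [1, 1, -1, 1], "ban cars": [-1, -1, -1, 1]},
}

def agreement(ballots):
    """Fraction of a group's ballots that agree with a statement."""
    return sum(v == 1 for v in ballots) / len(ballots)

def consensus_score(statement):
    # A statement ranks highly only if EVERY opinion group agrees,
    # so statements that split the groups sink instead of spreading.
    return min(agreement(g[statement]) for g in votes.values())

ranked = sorted(votes["group_a"], key=consensus_score, reverse=True)
print(ranked)  # ['fund parks', 'ban cars']
```

Note how “ban cars” has unanimous support in one group but still ranks last: scoring by minimum cross-group agreement is one simple way to privilege bridge-building statements over engagement-maximizing ones – and also one way to see how a minority position could get buried, which is exactly the worry above.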

“Groups are bad”

The “groups are bad” commonplace is well established, and it makes sense. Groups that serve to circulate misinformation and disinformation are quite obviously bad for any number of reasons. But is it “groups” that are bad? Or is the problem a set of social media companies that conceive of groups as singular, disconnected spheres? My sense is that it’s the latter.

The federated model allows for groups, but it also allows for those groups to decide together how they are connected to other groups. This is clearly not the focus of the Facebook model (or of Discord and others, at least as far as I can tell).
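As a thumbnail of what “deciding together how groups connect” can look like in practice, here is a minimal sketch loosely modeled on Mastodon-style domain moderation, where each server keeps its own block/limit decisions about peers. The class, policy names, and domains are all invented; real implementations add appeals processes, shared blocklists, and per-user controls.

```python
# Minimal sketch of per-server federation policy (hypothetical API,
# loosely modeled on Mastodon's domain blocks and limits).
from dataclasses import dataclass, field

@dataclass
class Server:
    domain: str
    blocked: set = field(default_factory=set)  # refuse all traffic
    limited: set = field(default_factory=set)  # accept, but don't amplify

    def policy_for(self, peer: str) -> str:
        """Decide how this server relates to a peer server."""
        if peer in self.blocked:
            return "reject"
        if peer in self.limited:
            return "silence"
        return "federate"

home = Server(
    "writing-class.example",
    blocked={"spam.example"},
    limited={"noisy.example"},
)
print(home.policy_for("spam.example"))      # reject
print(home.policy_for("friendly.example"))  # federate
```

The point of the sketch is where the decision lives: each community’s server holds its own lists, rather than a single platform-wide policy applying to every group at once.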

But it’s worth pausing over the “groups are bad” commonplace in order to think about what implementation of groups might actually be good. Groups offer marginalized communities protection (among other things), so insisting that they automatically lead to misinformation/disinformation, echo chambers, etc. is not the whole picture.

Federalism’s Silos

In a recent interview with NPR, Congressman Don Bacon (a Republican from Nebraska) spoke about what legislation an incoming Biden administration might work on to “unite” Congress. I was struck by this moment, when he was commenting on the new coronavirus relief bill proposed by the incoming administration:

There’s a couple of areas in this bill that will cause heartburn (laughter) on the right. I’ll give you an example. A lot of money is going towards state and local governments. And, you know, the sense on our side of the aisle is it’s helping bail out bad governance. Well, in Nebraska, we have a balanced budget, and there’s a reluctance to have our taxpayer money going to New York and Chicago, Los Angeles, if they’re not being fiscally responsible. So there’s…

There are plenty of reasons why different state and local governments spend different amounts of money and have balanced or unbalanced budgets. I’m less interested in the reasons for those discrepancies than I am in how Bacon views the people and governments of “New York and Chicago, Los Angeles.” They are connected to his Nebraska district because they are in the United States of America, but his statement doesn’t seem to recognize that. He is a member of the House “Problem Solvers Caucus,” so he would presumably be classified as a moderate. This is not a view outside the norm.

What are the limits of a system of government that encourages a congressman from Nebraska to shrug his shoulders at the suffering of residents of Chicago? The “silo” problem is often talked about as a digital problem, but this issue of not having connections between “bubbles” seems much larger. In the U.S., the federal system does not necessarily encourage bubbles or silos, but it does require conversations about how to connect the concerns of different states, municipalities, and cities with one another.

Are we connected? If so, how do we make that connection? Those connections must be maintained and cared for. They can’t be taken for granted or off-loaded to “institutions” that just run on autopilot.

Trigger Warnings, Caveat Emptor, and the Ideology of Solitary Reading

During the recent MLA conference, I presented this short paper as part of a panel on the relationship between reading and writing. The presentation was an opportunity to use some of the ideas I’ve been exploring as part of the Federated project, so I thought I’d post it here.

Trigger warnings emerged in digital spaces as a writing practice that could shape the reading practices of others. A piece of content is accompanied by a warning to the audience about what they are about to read, watch, or listen to, and that warning is meant to prepare the audience for troubling and potentially traumatic content. It is a practice rooted in the idea that reading is collaborative and that trauma can shape how we are affected by a range of content. But in recent years, critics on the left and right have derided it, and these controversies reveal an ideology that understands reading as a solitary activity. This ideology sees the reader as responsible for their encounter with the text: caveat emptor. The reader must assume responsibility for absorbing whatever a text inflicts. But this is a standardized notion of reading, one that understands all reading practices and situations as roughly the same and as solitary. Where does that ideology come from? I’m hoping to begin to answer that question in this brief presentation.


Now, please excuse the fact that I’m going to start with a viral metaphor, but you can blame Kendall Gerdes’ brilliant RSQ article and not me. In “Trauma, Trigger Warnings, and the Rhetoric of Sensitivity,” Gerdes analyzes the controversy surrounding trigger warnings that emerged around 2013. As part of that study, Gerdes points out that the trigger warning blowup often featured a rhetoric of viral infection to describe how trigger warnings moved from digital spaces like Tumblr onto college campuses:

One prominent commonplace about trigger warnings was the description of their spread to college campuses from their use on online forums and blogs (especially feminist blogs) as a virus or cancer. A related move was to place blame for the outbreak on coddled, entitled millennials, or teenagers, particularly users of the social media site tumblr.

(Gerdes 6–7)

The academy’s immune response to this “infection” was quite effective, since the concern that trigger warnings would be widely implemented (or were becoming, in the words of Jack Halberstam, “standard fare”) turned out to be overblown. As Gerdes notes, a 2015 survey of MLA and College Art Association members revealed that 0.5% of respondents said their institutions had adopted a trigger warnings policy. We could (and I promise the virus metaphor is almost done) see this whole episode as an example of what happens when a system is exposed to a less effective version of an infection, since the insistence from critics that trigger warnings were going to take over the academy and destroy academic freedom turned out to be a strawman, one that allowed us all to develop effective (and affective) antibodies. (Sorry, again.)


Trigger warnings had indeed been used in feminist and queer online spaces prior to this explosion of public discussion around 2013. And that discussion misunderstood the content warning in a variety of ways, not least of which was an insistence that the warning is a barrier to communication rather than a gateway to it. This misunderstanding positions the warning as a binary logic gate of “read” or “don’t read” rather than a preparation or even a welcoming gesture. The trigger warning has never been primarily about stopping people from encountering things – it is instead a way for people to prepare for potentially triggering material. In fact, Sarah Orem and Neil Simpkins have argued that trigger warnings are often used as


“inventive devices, inviting authors to compose more expressive, visceral writing. Trigger warnings frequently function as the starting point of a post where writers position themselves in relation to their audiences.”

Orem and Simpkins


The trigger warning offers a way into a text, image, or video, and it reveals that certain online communities must carefully construct their collaborative spaces. The trigger warning is a situated practice that understands reading and writing as activities happening in communities, not in some generic space. This insistence on specific context is key to understanding why the trigger warning was so misunderstood by many, especially academics. In a sense, we can understand the hostility to the content warning as an attempt to transform a bespoke, customized tool into something standardized and mass-produced.

The understanding of content warnings as a binary choice (read or don’t read, enter or don’t enter) was not only an overreaction by academics and others supposedly concerned about “free speech” or “snowflakes” or “academic freedom” but also a symptom of how an approach based on standardization clashes with tools and practices built for situated practices and for spaces with borders that are constantly revisited and renegotiated. Research on writing and rhetoric has long explored the collaborative nature of writing, and it has often seen digital environments such as wikis or social media platforms not as creating collaborative writing but rather as revealing how writing has always been a collective effort. Much of this research has looked to digital environments for clues as to how to theorize collaborative writing, and those same digital environments offer us clues as we consider how reading is also collaborative. Unfortunately, the trigger warning controversy indicates that we have been less open to digital culture’s cues when it comes to collaborative reading.

Interestingly, it may be that the digitization of nearly all aspects of life is partially to blame for this problem. We are now living in a moment where the logic of our digital environments so imbues (and is imbued by) everything we do – something that is beyond an infection and is more like an inflection – that we aren’t looking to digital technologies to learn about reading and writing. Instead, we are looking for the next module, app, or patch. The next killer app. When we tried to have a conversation about trigger warnings, it seemed to be a conversation about a “plug and play” tool that could be applied in a standardized way. But trigger and content warnings are not standardized practices or sets of rules to be applied. They are instead indicative of an ethics of reading, one piece of an entire suite of practices that includes trigger warnings, content warnings, avoiding some content, highlighting some content, contextualizing content differently, seeing reading and writing spaces as something constructed and maintained, and more.

Critics argued that warnings figure the reader as fragile and helpless: “What [those who reject trigger warnings] take issue with is the projection of the student as a fragile organism with no intellectual immune system and a minefield of a psyche that may explode into pieces at any moment” (Halberstam 539). While trigger warnings do ask us to care for our audience, the roots of that request are that we see reading as collaborative. The commonplace that writing is collaborative has largely settled, and while digital technology did not create this commonplace it certainly played a role. But for some reason, the trigger warning’s emergence on the scene had the opposite effect. Instead of playing a part in solidifying the notion that reading is a collaborative practice – shaped by communities, texts, technologies – the trigger warning was broadly rejected by a number of loud voices who claimed it was a threat to academic freedom and was a way of coddling those who were too sensitive and fragile. Instead of learning from tumblr about how we might understand the ways communities can be maintained and sustained, we fell back on the notion that an encounter with troubling or traumatic content either can’t or shouldn’t be avoided.

If we examine critiques of trigger warnings, we see that those critics envision this practice, developed for specific reasons and with specific goals, as a standardized practice applied everywhere, like a blanket that would be laid over all of academia, snuffing out the flames of academic freedom. In one example that is indicative of this broader pattern, Lisa Duggan imagined what would happen when trigger warnings were widely adopted and standardized: “once they become the province of student senates, administrative bodies and university policies, they run the risk of marking and targeting the courses on gender and sexuality, critical race theory, colonial and postcolonial studies” (Duggan). We should note here the valid concern that the trigger warning would be turned against the very communities that advocate for it. But this is one more reason not to treat the trigger warning as a standardized practice implemented everywhere in the same way, an approach that very few advocates seemed to be calling for.

This logical leap, which moved quickly away from an understanding of warnings as a situated practice applied in fluid ways and toward a standardized practice imposed rigidly on all, is linked to an educational infrastructure that is understood as modular and standardized. It is rooted in the assumption that education is aimed at this rhetorical figure of “the student,” and it sees education as a set of practices that must be built the same way for everyone. This is where the logic of reading as “caveat emptor” comes from – it relies on the figure of the “generic student” who is treated the same in all contexts because doing it any other way is too difficult and requires too much care and maintenance.

Trigger warnings were seen as one more solution that would be added to our standard set of scalable tools – the tools that can meet “any” need and that address a generic student, regardless of situation. However, the trigger warning operates in a different register, one that actually resists the standardization imagined by its critics. Trigger warning practices are negotiated and renegotiated in communities, not imposed from without, and their logic does not easily “click in” to the standardized infrastructure. That is because they were developed as a response to that standardized infrastructure. Digital environments imagine a generic user, and this means that they are always built for “someone else.” They are an attempt to be as generic as possible so as to be scalable. And so this software is by definition “badly made.” For some, it is especially bad – what is an inconvenience for one community is openly hostile and dangerous to another. Given many platforms’ disinterest in protecting people from abuse, communities have had to develop their own bespoke tools and practices in order to operate inside of a standardized environment. Trigger warnings and other similar practices were developed as tactical responses to these often hostile networked environments, and they were never really intended to become some broad template applied everywhere in some standard way. Unfortunately, this same logic of standardization, of the generic user, shapes our educational infrastructure.

The trigger warnings uproar was about a generic, solitary reader – an ideology of reading that doesn’t always adequately account for how we read with others, in communities. Critics actually projected this notion of reading onto those advocating for TWs, saying that trigger warning advocates see the reader as “fragile” and helpless. But trigger warnings are actually rooted in a collaborative and situated theory of reading, and that’s because they emerged in spaces that needed to be maintained and cared for, spaces with borders that had to be carefully managed. Marginalized groups on the internet need ways to carve out and maintain spaces, and this means that these communities do a great deal of work to protect themselves and their communities. They develop practices, like trigger warnings, not only to be inclusive but also to exclude the harmful, bilious trolls and harassers who are floating around the internet. These practices are custom-made, they are built inside of a standardized internet, and that very fact offers us a clue as to their true value to a range of places, including academia.

Given our current set of technical arrangements, it is easy to quickly jump to the logic of standardization, the modular thinking that shapes so much of what we do, and we did apply this same logic to trigger warnings, banishing them because they couldn’t be plugged in like an out-of-the-box solution, a Canvas application, some new piece of educational software, some five-year-old syllabus pulled from a folder and dusted off. The standardized approach to education sees all students (and readers) as roughly the same, plugged into the same boxes and courses. This generic student is expected to move through institutions of higher education that are designed the same way Facebook is, a scalable tool that is broken for everyone and is especially broken for others. But what if a course is a space that should be rethought and renegotiated constantly? There’s no question this is more work, but it is the approach suggested by trigger warnings and the communities that developed similar practices. This set of practices offers a way to resist the forces of standardization with different ethical commitments and different notions of what it means to read and write with others.

Works Cited
Duggan, Lisa. “On Trauma and Trigger Warnings, in Three Parts.” Bully Bloggers, 2014.


Gerdes, Kendall. “Trauma, Trigger Warnings, and the Rhetoric of Sensitivity.” Rhetoric Society Quarterly, vol. 49, no. 1, 2019, pp. 3–24.


Halberstam, Jack. “Trigger Happy: From Content Warning to Censorship.” Signs: Journal of Women in Culture and Society, vol. 42, no. 2, 2017, pp. 535–42.


Orem, Sarah, and Neil Simpkins. “Weepy Rhetoric, Trigger Warnings, and the Work of Making Mental Illness Visible in the Writing Classroom.” Enculturation, vol. 16, 2015.

Federated Networks and Teaching

One of the things I hope to do in this project (whatever it is) is to think about federated networks from multiple angles. In fact, I’m even thinking about dipping into discussions of federalism in political science in future posts. I want to stretch the term (but not too much) to understand different ways of organizing networks and communities.

One way I attempted to think with this term was through a course I taught last semester. The course was called “Writing New Media,” and it fulfills a general education writing requirement. The actual content of the course is less important than the fact that I tried two new things: 1) I ran asynchronous course discussions using Mastodon, a federated social networking platform; 2) I started the course with sustained discussion of the course’s “Code of Conduct” and wrote that CoC along with my students. I’d never done either of these things before, but they were connected. The use of Mastodon gave us a place to have discussions that were not linked to corporate platforms (we did use Canvas for certain course business) and that allowed us to think about how we wanted to configure that place. I had full administrative control, so we were able to design the space however we wanted. We didn’t do too much customization, but we had the ability to if we wanted it.

The CoC activity gave us the opportunity to think about the rules and values that would shape our class. We essentially drafted a constitution for the course. It helped that this was a digital writing class, but I think both of these practices would be applicable in other types of courses as well. In our class, the use of Mastodon opened up questions about how social media platforms are used and abused, and we read some CoC theory to think about what should or shouldn’t be included in such a document.

Both of these practices got me thinking about how discussions of federated networks might help us think differently about teaching and learning. For one, these practices forced all of us (me included) to think carefully about the kind of space we wanted to create. One interesting question emerged as we wrote our CoC: How would we deal with violations of the CoC? And what if one of those violations involved me? We had to imagine this scenario and develop a procedure for it (you can read our CoC here), which really changed the way I thought about my interactions with students. There was a certain vulnerability in these discussions, since I had to lay out how I would recuse myself from any CoC violations involving me.

This process very much relates to how I’ve been thinking about federated networks, which force us to think about how or whether we’re going to connect with one another. This is a messy and complex process, and it’s this complexity that I think leads many to call federated networks unrealistic. But rather than taking connection or relations for granted (as I probably have in most other courses I’ve taught), our class had to think through how we wanted those relations to look and how we wanted to manage the complex process of building and maintaining a community.

Standardized Networks and the Unscalable

From the standardization of packets of information in Internet protocols to the radio buttons we click on in social media privacy settings, the insistence on standardized tools drives Internet communications technology. These protocols and technologies worship at the altar of “scalability.” Any networked software tool attempting to gain traction (that is, venture capital) must have a plan for scalability. It must be usable by individuals, small groups, and large groups without much customization. It must operate in some kind of standardized way.

Scalability and standardization have become entirely invisible and commonplace, so much so that they are now just common sense. Who would question standardized Internet protocols? Who would argue for “unscalable” software that is built for very specific purposes? This push to standardize means that networked environments are, in essence, always built for “someone else.” They are an attempt to be as generic as possible so as to be scalable. (This also means that they are aligned with whiteness and its desire to disappear, the subject of a future post.) And so this software is by definition “badly made.” For some, it is especially bad – what is an inconvenience for one community is openly hostile and dangerous to another. Sometimes it’s even worse than this, since a convenience for one group might be actively harmful to another. Consider the fact that Discord, the chat tool initially designed for gamers but now applied to a range of other communication situations, allows different group discussions (called “servers”) to generate invitation links so that others can be easily invited to the conversation. This same function, when used by groups who are targets of harassment, can serve as a backdoor for organized raids of abusers and harassers (this is something Greg Hennis and I wrote about for an edited collection called Digital Ethics). Invitation links circulate on message boards, and bad actors use them to coordinate raids and attacks. This is just one example of how a standardized set of tools is simultaneously built for everyone and no one.

Given that we appear to be mostly locked in to certain software environments (Facebook, Google, Amazon, Alibaba, etc.), what are our options for living within and with these ill-fitting tools and environments? Our best approach may be to seek out and learn from the practices of those who have long had to work against the grain to live in an ill-fitting world. We could refer to these practices as the building of “bespoke networks” within these ill-fitting, standardized environments. They offer a glimpse at how we can reimagine networked life on a broader scale. These bespoke networks and tools exist throughout networked life, so my argument is not that we necessarily need to start from scratch to build an entirely “new” set of tools. Instead, we can turn to those who have worked to build tools and spaces that operate by different logics, logics that do not fit well in standardized networks. When such tools catch the eye of the standardized, scalable Internet, they are too often misunderstood as “add-ons” for this environment. But not everything is an app or a plug-in, especially when the tools and methods in question have been built for specific communities and purposes.

Darius Kazemi’s take on this (mentioned in my previous post) is especially helpful:

Any time I propose a new piece of software to a group of software engineers I’m asked the same question: how will it scale? We are trained as a group to ask this question. I think it’s the software equivalent of in manufacturing when someone asks “What will it cost to produce?” Since the marginal cost of producing software is effectively zero, it’s the scale, the ability for the software to be used by millions or billions of people, that becomes the limiting factor that everyone brings up.

Imagine two different software developers. One person writes a piece of software that makes the lives of one million people slightly easier. Maybe it’s better routing for navigation software and it shaves 30 seconds off the commute of a million people. Another person writes a piece of software that only ten people ever use, but it tangibly changes their lives for the better in very material ways; maybe they learn a trade that becomes a career.

One of these outcomes is not necessarily better than the other, and yet due to myriad factors, only the software with a million users is likely to get funding from entities—whether the context is for profit or not for profit.

I’d like to advance the notion that software does not have to scale, and in fact software can be better if it is not built to scale. I hope some of the examples I’ve given above have illustrated what is possible when software is used by a small number of people instead of a large number of people.

This approach to software design can be generalized to the design and maintenance of social networks, and this is also what Darius argues in his “Run Your Own Social” project. A small network of people can gather together and determine how they want to connect to others or whether they want to connect to others. They can gather under the banner of unscalability.

Beginnings

I don’t know exactly what this project is, but I can start with some stories about how I started thinking about it.

I have arrived at a half-formed research question: What does a federated model of networks (social networks primarily, but perhaps other networks as well) offer? A federated social network (or “distributed social network”) is “an Internet social networking service that is decentralized and distributed across distinct providers.” In the simplest terms, instead of sending data through servers owned by Facebook or some other company, groups can run their own social networking servers. But it’s not just a collection of “silos” (or echo chambers, or filter bubbles), because each of these smaller networks can link to one another, should they choose to do so.
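The linking between smaller networks can be sketched in code. The toy below is loosely inspired by how federation protocols such as ActivityPub deliver posts between servers, but it is not the real protocol — the class and its methods are invented for illustration. The point is structural: each group runs and stores its own timeline, and posts travel only to the peers a server has deliberately chosen.

```python
class SocialServer:
    """Toy sketch of one node in a federated social network."""

    def __init__(self, domain):
        self.domain = domain
        self.timeline = []
        self.peers = []  # other servers this one has chosen to federate with

    def federate_with(self, other):
        # Linking is a deliberate, mutual choice, not a default.
        self.peers.append(other)
        other.peers.append(self)

    def post(self, author, text):
        message = f"{author}@{self.domain}: {text}"
        self.timeline.append(message)       # stored locally first
        for peer in self.peers:             # then pushed to chosen peers only
            peer.timeline.append(message)

knitting = SocialServer("knitting.example")
cycling = SocialServer("cycling.example")
solo = SocialServer("solo.example")         # federates with no one

knitting.federate_with(cycling)
knitting.post("ada", "new pattern up!")
```

After this runs, the post appears on both the knitting and cycling timelines — because those two servers chose to link — and never reaches the unfederated third server. No central company’s server ever touches the data.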

In general terms, I am interested in what this kind of model offers our current situations, digital and otherwise. However, I’m interested in federated networks not just as a niche, technical solution to social networking software, one that we can implement with software like Mastodon, but also as a theory for working through our existing network infrastructure. I want to argue two things at once: we should push toward more federated networks, and we participate in a number of them already. Group texts, for instance, are something like a federated network. While the data itself is housed somewhere on a corporate server, there is at least some control over who is part of the conversation and how that conversation might invite others in (or not). This is stretching the notion of federated networks, but I’m interested in what that stretching might achieve.

My first real exposure to federated networks came through Darius Kazemi’s project, “Run your own social: How to run a small social network site for your friends.” I’ll return to Darius’ work in future posts, since it continues to shape my thinking about federated networks. Briefly, Darius is interested in giving people the theoretical and technical tools for creating their own small social networking site, one that does not rely on large corporations for infrastructure and that also does not sell off data in a Faustian bargain. Darius’ key insight, as I see it, is that setting up such a network does not really require being a technical whiz; the work is instead “social first and technical second.” You have to learn how to create a community that agrees on norms and a code of conduct, which is much harder than learning the technical intricacies of social networking software like Mastodon.

Soon after reading Darius’ essay, I invited him to run a “Run your own social” workshop at the R-CADE Symposium we host each year at DiSC. The pandemic meant that R-CADE was postponed, but the project stuck with me. I used some of its insights to build a small social networking site for a class I taught this past semester (more on that in future posts as well).

I am interested in how a federated model might present a middle way between massive, corporate, standardized networks like Facebook and the smaller networks that some worry will turn into “echo chambers.” Like others, I’m suspicious of the argument that echo chambers or filter bubbles are “the problem” to be solved. Instead, I think the bigger problem isn’t that we won’t connect to others who disagree but that we currently have few options for deliberating collectively about whether and how we connect to others. The value of a federated model is that it allows a smallish group to determine how they want to connect to one another and to other networks. Instead of kneeling at the altar of connectivity, insisting that it is good to connect with everyone always, the federated model calls for a community to actually think through how connectivity should happen.
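A minimal sketch of what “deliberating about connectivity” could look like: connection to another server is off by default and is enabled only after the community decides. (Mastodon’s allow-list or “limited federation” mode operates in roughly this spirit, though the majority-vote rule below is entirely invented for illustration.)

```python
class FederationPolicy:
    """Hypothetical policy: federate with no one unless the members agree."""

    def __init__(self, members):
        self.members = set(members)
        self.allowed_peers = set()  # default: connect to no other server

    def propose_peer(self, peer, votes_for):
        # Admit a peer only if a majority of members vote for it.
        valid_votes = votes_for & self.members
        if len(valid_votes) > len(self.members) / 2:
            self.allowed_peers.add(peer)
            return True
        return False

policy = FederationPolicy({"ana", "ben", "chi"})
policy.propose_peer("cats.example", votes_for={"ana", "ben"})  # 2 of 3: allowed
policy.propose_peer("spam.example", votes_for={"ben"})         # 1 of 3: rejected
```

The specific decision rule matters less than where the decision lives: in the community itself, rather than in a platform’s defaults.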

This project, whatever it is, will try to figure out what a federated model looks like, what its roots are, how we are already using it (if at all), how we might use it in the future, and more.