Over the past few months, there has been a simmering controversy over the phenomenon of “Substack Nazis”—that is, Nazis who have been able to start newsletters on the Substack platform, and whether Substack should ban them.
I have an interest in this as the proprietor of more than one Substack newsletter—but also because I think the whole debate and how it is framed shows something that is missing from our discussions about freedom of speech and tolerance.
Indiana Jones and John Stuart Mill
The discussion was touched off in late November by an article from Jonathan Katz in The Atlantic pointing to the existence of Nazis on Substack. It’s something that requires a little digging, because unlike on a social media site like Twitter, the Nazis won’t just come swarming into your mentions (or these days, be force-fed to you because they were retweeted by Elon Musk). Substack is a platform for private, subscription-based newsletters, so for the most part, you have to go out there and find them.
That’s what Katz did, concluding that “the platform has become a home and propagator of white supremacy and anti-Semitism. Substack has not only been hosting writers who post overtly Nazi rhetoric on the platform; it profits from many of them.”
At least 16 of the newsletters that I reviewed have overt Nazi symbols, including the swastika and the sonnenrad, in their logos or in prominent graphics. Andkon’s Reich Press, for example, calls itself “a National Socialist newsletter”; its logo shows Nazi banners on Berlin’s Brandenburg Gate, and one recent post features a racist caricature of a Chinese person. A Substack called White-Papers, bearing the tagline “Your pro-White policy destination,” is one of several that openly promote the “Great Replacement” conspiracy theory that inspired deadly mass shootings at a Pittsburgh, Pennsylvania, synagogue; two Christchurch, New Zealand, mosques; an El Paso, Texas, Walmart; and a Buffalo, New York, supermarket. Other newsletters make prominent references to the “Jewish Question.” Several are run by nationally prominent white nationalists; at least four are run by organizers of the 2017 “Unite the Right” rally in Charlottesville, Virginia—including the rally’s most notorious organizer, Richard Spencer.
Some Substack newsletters by Nazis and white nationalists have thousands or tens of thousands of subscribers, making the platform a new and valuable tool for creating mailing lists for the far right. And many accept paid subscriptions through Substack, seemingly flouting terms of service that ban attempts to “publish content or fund initiatives that incite violence based on protected classes.” Several, including Spencer’s, sport official Substack “bestseller” badges, indicating that they have at a minimum hundreds of paying subscribers. A subscription to the newsletter that Spencer edits and writes for costs $9 a month or $90 a year, which suggests that he and his co-writers are grossing at least $9,000 a year and potentially many times that. Substack, which takes a 10 percent cut of subscription revenue, makes money when readers pay for Nazi newsletters.
To throw a little cold water on this argument, 16 is just a tiny fraction of the number of newsletters on Substack and a tinier fraction of the platform’s revenue. But this small fraction really does include bona fide Nazis. For example, does Junior Gruppenführer Richard B. Spencer actually have a Substack newsletter? Yep, there he is.
You can see how this creates a dilemma for us Substack writers. The things that attract us to the platform—the ease of setting up a newsletter and managing all its disparate elements, the ease of taking payments from readers, and the “network effect” of attracting existing Substack readers from other publications—are the same benefits we don’t want Substack to be offering to advocates of genocide, and we know that bringing our business and our subscribers to Substack helps make this possible. Yet the value Substack offers also makes it difficult for us to walk away.
As Katz concludes:
How long will writers…be willing to stake our reputations on, and share a cut of our revenue with, a company that can’t decide if Nazi blogs count as hate speech?
Hence the debate that has been going on for the past six weeks or so. First there was a “collective letter” demanding that the company explain why it isn’t filtering out Nazis. Then there was an open letter by Substackers on the other side, arguing that “Substack Shouldn’t Decide What We Read.” Substack co-founder Hamish McKenzie declared that Substack would not block Nazis because the company is “committed to upholding and protecting freedom of expression” as opposed to “censorship.” Substack subsequently shut down a few Nazi sites, though only on the grounds that they violated existing rules against things like inciting violence, and even this was too late and too tentative for some Substack writers.
First, let’s be clear that “censorship” and “free speech” are not the issues here. People continue to get this issue wrong in the way they consistently get it wrong—even, if I might grumble for a bit, after we keep pointing out how they’re getting it wrong. “Censorship” is what happens when the government bans the expression of ideas. Freedom of speech does not require that any person or business actively support the expression of specific ideas by hosting them on its forum, or in Substack’s case, by helping someone make money from a newsletter.
This is an important point to remember. Describing Substack as a “platform” is a deceptively passive way of putting it. Substack is actually an integrated bundle of services offered to its writers, including web hosting, mailing list management, mass e-mail sending, cybersecurity and spam filtering, and marketing. There is no principle—legal or otherwise—that requires them to offer these services to everyone.
Of course, we also talk about “freedom of expression” in a non-legal, non-political sense. We talk about a “culture of free speech” that deliberately entertains a wide range of ideas. The better name for this might be “tolerance,” and I like that term because nobody would describe themselves as “tolerating” something they like. It only applies to the forbearance you extend to ideas and actions you don’t agree with.
Why would we tolerate wrong ideas? Because we regard it as valuable to have a vibrant and wide-ranging debate and specifically to hear trenchant criticisms of widely accepted notions. We regard this as valuable because the criticisms might actually be true or contain some element of truth, and because the right ideas can only prove themselves by being open to criticism and counter-argument. We also regard it as valuable in the hope that people with wrong ideas are more likely to be convinced of their errors through discussion and debate than through mere hectoring.
Here is how the liberal philosopher John Stuart Mill famously put these points:
The peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error.
If that’s the issue, however, we can immediately see some of its limits. None of this applies all that well to Nazis, because that debate has already been had, both with words and as a contest of force, and it’s hard to see what is left that we might have missed. If ever there were a viewpoint that has been thoroughly explored—and had its monstrous meaning and consequences demonstrated in blood across a continent—this is one of them. So why would we bother going out of our way to give a hearing to Nazis? Maybe we should just invoke the Indiana Jones Rule.
Moreover, Nazis are trolls. Allowing discussion as a better way of convincing people of the error of their views is an idea that has a lot of merit. But if you’ve ever had a run-in with online white nationalists, you know that they do not argue in good faith. Think what it means to commit to Nazism in this day and age, with the whole history of the 20th Century behind us. Someone who does so is not honestly confused. He’s looking for the most noxious and outrageous ideology to adopt in an attempt to vent his resentment at the world and feel a momentary illusion of importance by goading other people into anger. Like I said, it’s the psychology of the Internet troll, and we don’t have anything to gain from engaging with trolls or giving them a platform.
So why shouldn’t Substack just ban Nazis from its platform?
The Moderator’s Dilemma
The case on the other side is perhaps made most forcefully by Substacker Ben Dreyfuss, who asserts that “deplatformers” “lost the war” by overreaching and attempting to deplatform essentially everyone they don’t like. It’s not an ideal presentation of the case—long, rambling, and angry. (It is ironic, perhaps deliberately so, that his blog is called “Calm Down.”) And his big example of the supposed overreach of deplatforming is kicking Donald Trump off Twitter after he incited the January 6 insurrection—which I would consider a clear-cut case in favor of deplatforming.
But he captures the logic behind the anti-anti-Nazi side of this argument. The basic fear is that if you give the deplatformers an inch, they’ll take a mile. Let them argue that you should kick out the Nazis, and next they will demand that you ban anyone they label as a “bigot.” As Dreyfuss points out, this included a handful of transgender bloggers who quit the platform last year because it refused to drop critics of their dogmas. And then they will move on and try to ban ordinary political disagreement. (Dreyfuss traces this back to leftists being angry that Donald Trump was elected in 2016. I can assure you that it started much earlier.)
There is also a clear economic incentive for Substack. Content moderation on any significant scale is expensive and time-consuming, and Substack is still a relatively small start-up. They must fear the bad example set by Twitter in the early 2010s, when it spent a lot of money to add a whole content-moderation bureaucracy that was occasionally biased and overzealous and still couldn’t quite tamp down the monsters in the site’s collective Id.
From the perspective of writers, we fear that a platform that starts making decisions even on the obvious cases will be pushed by the most puritanical pressure groups—among the public, among its customers, even among its own employees—to start deplatforming everyone. When I came to Substack in 2021, my main reservation was that the site had become notorious as a way for anti-vaxxers and covid conspiracy theorists to make small fortunes by lying to their readers—at a real cost in human lives. But in the back of my mind, I’ll admit it made me more confident that if those people didn’t get kicked off, I would not get kicked off for my own heterodox views.
From the perspective of the platform, it’s the old Moderator’s Dilemma. Once you start blocking one group, then everybody assumes you must approve of whoever remains, and you suddenly have to draw a lot of difficult lines and generally be everyone’s intellectual babysitter.
So it is better, in this view, to tell the deplatformers to pound sand at the very beginning. Declare that you won’t ban anyone, so you won’t have to ban everyone.
But to state it this way indicates the problem. We seem to have only two modes: libertine and prig. Ban nothing or ban everything. I think we would all be better off if a prominent company set the example of upholding a reasonable standard of moderation.
Where Are the Moderators?
More broadly, that’s the thing I see missing in our current woke-versus-anti-woke debate: rationality, and the courage to stick to it.
Let me put it this way. If I asked, “Should a company help Nazis to make money spreading their ideas?” a reasonable person would answer “no.” If I asked, “Should a company police its platform comprehensively for ideological correctness and uniformity?” a reasonable person would also answer “no.” So why can’t we do what a reasonable person would do?
Why can’t we acknowledge that it’s good to tolerate and even to support a wide range of intellectual debate on a wide range of issues—but not the Nazis, because to hell with those guys? It’s important to have an “open mind,” but not so wide open that you can’t make firm judgments in the obvious cases. The Nazis are an obvious case of a false and irredeemably evil ideology, and what is more, they are universally recognized as an obvious case. This should be a no-brainer.
When it comes to the interpretation of the law, there is a well-established standard of the “reasonable man.” You cannot be held liable for negligence, for example, if you acted as a “reasonable person”—it has been updated to gender-neutral terminology—would have done. This standard has sometimes been criticized for not laying down specific, hard-and-fast rules, but this reflects the fact that every case is specific, and the jury has to make a judgment call about what a “reasonable man” would have done in those exact circumstances. In effect, it calls upon the defense to invoke rational arguments to explain the logic behind their client’s decisions—or to reveal the absence of such forethought.
When you think about it, this is what we all do in making judgments about what to read or who to subscribe to—or who we want to share a platform with. We make judgments about who is making arguments that appeal to reason, versus who is arguing emotionally, dogmatically, or “in bad faith.” And one of the values the reasonable man holds is that of rationality itself, including Mill’s argument for tolerating a very wide range of views in order to put comfortably established notions to the test. For this reason, a reasonable man recognizes that the viewpoints we consider so far beyond the pale that we don’t even want to hear them are, and ought to be, a very small list.
A reasonable man running Substack would say: “We’re going to make this one big, easy call and kick out the explicit Nazis. If you then send us a list of other people we should kick off just because you don’t like them, we will ignore it.” All that’s required is reasonableness and the courage to stick to it.
I think the overwhelming majority of Substackers would happily go along with this. People can be convinced to leave a platform en masse if it’s supporting guys who post swastikas. Like I said, that’s the Indiana Jones Rule. But start telling people we have to leave because somebody disputed a claim about gender, and that’s wrong because [insert 2,000 words of tedious logic-chopping here], and we’re going to tune you out. Precisely because the non-obvious calls are non-obvious, relatively few people will be up in arms about them. In this regard, I think Substack fell into a trap set by the deplatformers, by agreeing to fight this battle over the obvious cases.
When people try to define this as an issue of “free speech,” I point out that Section 230 of the Communications Decency Act of 1996 was written specifically to enable and encourage platforms to engage in partial moderation of content. Its goal was to eliminate the Moderator’s Dilemma—at least as a legal matter—and break down the false alternative between moderating everything and moderating nothing.
But if Section 230 allowed for platforms to engage in moderation, we should then ask: Where are the moderators?
Remember that Section 230 was part of the Communications Decency Act, and its direct goal was to encourage platforms to make the internet more family-friendly by policing smut. You will notice that this is something Substack already does. It does not allow “sex workers” on its platform, for fear of being turned into a version of OnlyFans. As a business matter, they decided that serving one undesirable kind of customer might drive away the ordinary readers they want as their core market.
In this regard, the most compelling argument for blocking Nazis from Substack is the Nazi bar scenario. The idea is to suppose that you’re running a neighborhood bar, and a guy comes in with a swastika tattoo. But it’s just one guy, and he’s not causing any trouble, so you don’t make a scene. Then he comes back with a couple of his friends, but still, they’re not causing any real trouble. And then they come back with even more friends, and suddenly you realize you’re running a Nazi bar.
This is more than just a fanciful scenario. It has already been acted out by several would-be Twitter competitors—most prominently Gab and Parler—that loudly advertised themselves as “free speech” platforms that would not kick out anyone for any reason. They quickly attracted all the flakes and crackpots and Nazis who had been kicked off every other platform—and drove away everyone else. Twitter started with a much larger base of ordinary customers, but Elon Musk has invited some Nazis to hang out with him at the bar and is currently acting out that whole scenario. Substack would do well to learn from their example.
What we need is not the complete absence of moderation on a platform, nor do we need overzealous, dogmatic moderation. What we need is reasonable moderation, moderation that does what a reasonable man would do: keep out the trolls and the Nazis, while allowing a free-flowing discussion for everyone else.
It doesn’t sound so difficult when you put it that way, and the fact that it seems hard in practice points to a cultural problem that is deeper and more important than the question of how platforms moderate content. I think people feel so philosophically lost that they are afraid even to attempt to define or stick to what is “reasonable.” And that is what is missing from our debate.
WWRMD?
Don’t worry. I am staying with Substack for now because the value proposition it offers is still good—unlike Twitter, which never paid me and now wants me to pay them. So far, the problem of the Substack Nazis is still very small and marginal. If they start to think this is a Nazi bar and swarm here in large numbers, Substack’s founders will have to bite the bullet and ban them—and if they don’t, then the rest of us will make our decisions. But for now, the cost of leaving is greater than the risk of staying.
And this will play out over the long term. The most off-putting argument by the deplatformers is that we all have to make a decision now. This story blew up just after Thanksgiving, and Hamish McKenzie made his announcement on December 19, but we were all supposed to revamp our entire business model over our Christmas cookies. That’s the old panic-the-herd mentality of “cancel culture,” and we should reject it. This is a situation that will unfold over months and years, not days and weeks.
And we should also take seriously the cost this imposes on writers. Substack is an integrated and smoothly functioning bundle of services that is not easy to manage on one’s own—and believe me, because I’ve done that for 20 years. So the other off-putting message from the deplatformers is that it is somehow easy and trivial to leave. That’s true for some of the largest Substack newsletters, for whom it is cheaper to hire someone to run the back end than it is to pay Substack its ten percent. And it’s true for some of the people who run smaller unpaid blogs that are less work to manage. For the rest of us, until a competitor comes along, there is a significant cost that will keep us from jumping ship until things get worse.
Yet this also undermines the least convincing argument offered by Substack, which is that deplatforming doesn’t work. In McKenzie’s words, “we don't think that censorship [sic] (including through demonetizing publications) makes the problem go away—in fact, it makes it worse.” Well, then why are we so worked up about not doing it? Why all the yelping about cancel culture?
The whole reason we’re concerned about deplatforming and about cancel culture is that it does have an impact. The internet is a big place, and people can find all sorts of independent options. Substack could not “decide what we read” if it tried. But leaving the big, established platforms usually comes at a cost in time, effort, and higher fees—and the loss of digital networks, as the canceled are shunted into their own bubbles where they are readily accessible only to a smaller group of readers. We should deplatform the Nazis because deplatforming works—and protect just about everybody else, also because deplatforming works.
To draw that line is not as difficult as we think, because we just need to ask: What would a reasonable man do? And what we most desperately need, in these early days of the internet, is a company that can set an example by making these reasonable decisions.