Jan 28, 2026

Tools for Anti-Conviviality

Meditations on slop, why I'm not convinced, and what to do about it.

Some people faint at the sight of blood. I am becoming increasingly nauseated at the sight of slop. Whether it’s a visceral fear of losing the thing that I’m good at to the machines, disgust at seeing craven humans bow before their technological idols, or just the toxic buildup of sludge in my veins, I’m not sure. But I’ve had enough.

The Machine is bleeding, or maybe we are. The sum of human knowledge and culture has been scraped, ingested, and sprayed across the internet. To borrow a Vervaekeism, pay attention to the words I’m using. Words are intentional, multi-valent, the result of a mind struggling to reveal itself.

The “tokens” an LLM outputs are none of these things. Although it hijacks “language” and bends it to its uses, the LLM cannot produce words, only “content” — pure, decontextualized, decompositional liquid. In other words, “slop”.

Slopitecture

Post-industrial sludge is not a good building material. Sure, you might be able to squirt it into the joints and cavities to fill gaps or glue surfaces together. You might even be able, with sufficient discipline, to create some kind of cellular superstructure to hold the slop in. But slop software has as much in common with software craftsmanship as a 3D-printed building has with architecture.

In both cases, we countenance the perversion of our discipline in the name of “efficiency”. If it reduces labor and material costs, how can we argue? While you might not want to live in a goop house yourself, at least it benefits the poor who couldn’t otherwise afford one. Stained glass, wooden beams, and brick fireplaces are relics of a time gone by. We may feel nostalgic for the past, but we must leave it behind in the name of Progress.

But do we know what we’re giving up? The gradual phasing out of wood-burning stoves means people can no longer heat their own homes. The use of industrial materials in construction means people can no longer build or repair their own houses. If I must depend on industrial inputs to keep the machine that is my home running, am I really free, or have I been “domesticated” in a new sense?

Slopware Development

Slop coding is the same kind of transition, from beauty, purpose, meaning, and elegance, to “output”. It is the displacement of human skill, ingenuity, and attention to detail by the brute force volumetric deposition of gunk.

This is the tyranny of the average. Slop code is, by definition, exactly as good as what a moderately competent developer can produce in his specific area of expertise. It will be far “better” in any given circumstance than code produced by novice programmers, programmers working in a different industry or language, or the non-programmer.

To that I say: wonderful. I’m not an elitist. If a shop owner, analyst, or mom can whack together custom software to solve their particular problem, great. I’ll even use it too: to work in a language or domain I’m not personally familiar with, stand up a prototype, write a quick utility.

But do we know what we’re giving up when we apply this style of programming? The proliferation of average is the sacrifice of the weird. TypeScript displaces Clojure. Idiosyncratic coding styles are swallowed under a wave of monotone. Deep understanding of a problem is replaced by a rat’s nest of exponential backoff, try/catch, and convergent interfaces. Minds skim across the surface of understanding like rocks on a lake. Programming pearls are cast before swine.

If you’re a professional programmer, try to go a week without using an LLM and you’ll notice just how much you rely on it. Admit it, you don’t read documentation anymore, do you? When was the last time you actually looked at a page of search results instead of just asking Kagi Assistant or Perplexity for the answer? You mean I have to use my hands to type? That’s like a baby’s toy!

This isn’t just about the atrophy of skills through disuse, although it is that. It’s about our newfound ability to just not bother. Why bother reading documentation for half an hour when the LLM already knows how to use the tool? Why bother reviewing the code when it’s probably good enough, and can be regenerated if not? Why bother designing anything when average is good enough?

But if we don’t bother, we don’t care, and no fire hose of shit can build a house when the architect is asleep in his office.

Upslopping

Call this up-skilling if you like. Tell me all about your polecats and how if you yell at them with just the right tinge of paternal disappointment they won’t light your factory on fire or delete your hard drive. Tell me about how if I just pay Anthropic $1,000 every week (you don’t really think your “pro” subscription is still going to be there after the bubble pops, do you?) I’ll be able to do the work of ten sad, unfulfilled developers instead of only one.

I can’t help but feel that all this hype is just cope for people who like to be dominated by their own tools. LLMs are suited to particular types of tasks, for which average and fast are unqualified endorsements. The kind of task which was already easy, but which is now easier. Pounding out reams of boilerplate with a single prompt is an exhilarating experience, but it’s still just reams of boilerplate.

When I look at the hype surrounding LLM-assisted coding, I don’t see people writing ground-breaking new products. I see clones of clones: derivative, unoriginal, average work. But the hard parts are still there. So you cloned Slack — now execute on a go-to-market strategy. We’ve always felt the draw to work on easy tasks and busywork instead of the important stuff. LLMs juice that impulse, making it almost impossible to resist. I can build 10 complex projects in a month! Who cares if no one uses them and I have no intention of marketing, supporting, or maintaining them?

And at the end of the day, nothing can replace code except for code. We seem to have all forgotten that code is not written for machines, it’s written for humans. Code is language — a medium of communication intended to be shared between people. This is why we have higher-level languages — because it’s a waste of your short life to sit staring at the screen of your hex editor.

This is the same argument AI evangelists will use in favor of LLM-generated code. The claim is that the “prompt” is a new kind of higher-level programming language which frees us from the drudgery of syntax. Anyone who thinks otherwise is a technophobic luddite.

But this just doesn’t hold up under scrutiny. Constraints are not limitations, they are affordances which focus the attention and intention of the builder. Whatever problems programming language design causes through the “foolish consistency” of “little minds”, it more than makes up for by establishing conventions that structure our thought. This is why programmers learn new languages — to learn to think in new ways.

The idea of “prompt as source code” sounds nice. But think about it for two seconds. Your compiler is a massively expensive thinking machine in a data center somewhere. Are you really going to spend $100 every time you want to update your specification so you can rebuild your app? Are you really going to tolerate the massive variability in output that will occur when the LLM one-shots your project from scratch over and over?

Even if we solve these problems (and we will, for example by asking the agent to do a comprehensive diff between the prompt and the code in order to make incremental, directed changes) a specification is still a specification — i.e., code.

It’s easy for programmers to underestimate just how much source code matters. Syntax is not mere ceremony; boilerplate is meaningful. We get so used to our coding conventions that they become invisible to us. Maybe some of this implicit information can be captured in a prompt, or replicated by an LLM. But for projects that aspire to maturity, the developer will always need a way to reach underneath the level of the conceptual to specify how as well as what. The failure of purist declarative programming is evidence enough of this.

For me at least, getting my hands dirty also helps me to think. If I don’t look at code and think about how it fits together, I’m never quite able to comprehend the problem I’m addressing, or what the proper solution is. I know that this is less of a thing for people with management or engineering experience, but I don’t think it can be fully discounted. The fact is, the farther you are from the code, the farther you are from the code.

Anyway, what would it look like to bring this vision of LLM-as-compiler to fruition? Well, a new programming language, of course. But this time it’s better than before, because it’s slower, more expensive, non-deterministic, and doesn’t run on your own hardware! Sounds like a winning combination to me.

Sloponomics in One Lesson

Notice what I’m not saying. I’m not saying LLMs are useless, that they are a fad, or that I won’t use them. They are an almost ideal tool for non-experts (and all of us are non-experts in most areas). What I am saying is that their use has a cost. That there are trade-offs to technical choices. That many of these trade-offs are invisible, and only show up at scale, and over time.

It’s pretty universally agreed upon at this point that we are in the midst of an AI bubble. The disruptive model of Silicon Valley has itself been disrupted, and now every tech company on earth is in a mad dash to realize the promises of the machine mind. This means that we are in a period of growth, not consolidation. In order to gain market share, everyone is moving as quickly as possible to give away as much as possible. Think about where we will be when OpenAI, or Anthropic, or Google, or Microsoft captures a decisive share of the market and consumers lose the ability to choose between providers. Someone has to pay for all those data centers.

I don’t think LLMs are going to get prohibitively expensive, pricing out these companies’ current customers. But I do think there will be an effort to cut costs by switching workloads to cheaper models, resulting in the gradual enshittification of LLM tools along with everything else in our declining dollar economy. For those of us who hand-pick our models or run them ourselves, well, we’ll pay for what we get. For the rest, it doesn’t take much imagination to anticipate how LLM providers are going to abuse their ability to intermediate our conversations with the bots, from injecting content directly into conversations or software products, to massaging our chat histories into the highest-resolution advertising (and propaganda) profile ever witnessed.

The bigger point here though is not an economic one. It’s that whether in terms of the economy, our skills, or our culture, there’s no going back. Our decisions have consequences; the technologies we create and adopt have an effect, not just in terms of the work we aim them at, but also in terms of unintended second-order consequences. People and tools exist together in a complex, ecological system. We may put our oar in to try to influence the development of this ecosystem, but the technological milieu influences us comprehensively. The technological world creates the circumstances upon which our very existence as technological man is predicated. A shift as large as the one we are undergoing now will not be inconsequential.

There are unique risks involved in this particular technological shift too. The possibility of model collapse means that the returns on efficiency of LLMs may be diminishing, or even reversing, as the machines are fed with their own output. The atrophy or abandonment of human skills plays into this too, because if organic activity is displaced by the artificial, then the amount of new content on which LLMs can be trained is rapidly declining.

Already, StackOverflow’s traffic has declined to 10% of what it was only five years ago. TailwindCSS recently laid off 75% of their team because no one is finding out about their premium offering from their documentation anymore. There’s a chance that we are entering a dark age of innovation in software. If LLMs are only able to regurgitate reconstituted ideas from the past, they can’t be considered a source of new ideas.

One counter-argument of course is that LLMs make it easy to learn new subjects, increasing the spread of knowledge. I think this argument is over-stated. LLMs can help someone get to basic competency quickly, but cannot replace deep study. Innovation still has to come from a deep well of experience, which is composed not just of knowledge, but also of pain, frustration, and nights spent awake imagining possible futures.

Another counter-argument is that LLMs will allow disciplined programmers to curate their attention, directing it toward solving interesting and hard problems. This may be the case in some resource-scarce environments like start-ups, but in most cases the gains will be marginal. Chores are rarely a real excuse, and can themselves be a source of inspiration; under-performance is more often attributable to complacency, incentives, and sources of external friction like corporate bureaucracies than mere drudgery. And for the occupational “innovators” in academia, well, they’re already pursuing the cutting edge in their micro-niche with their full attention.

The Technological Slopciety

A helpful paradigm for looking at the problems of tools and technology which I’m currently obsessed with is called “conviviality”, coined by Ivan Illich in his book Tools for Conviviality. Here’s how he defines the concept:

Tools foster conviviality to the extent to which they can be easily used, by anybody, as often or as seldom as desired, for the accomplishment of a purpose chosen by the user.

The goal of such a tool is to serve a society “in which modern technologies serve politically interrelated individuals rather than managers”. In other words — freedom.

At first glance, LLMs seem like an ideally convivial tool. Even I am excited for the possibility that laymen may be able to build better tools for themselves than any vendor might be able to offer. But there’s one key point in that definition that does not apply to a slop society: we cannot use LLMs as seldom as we might desire.

As work and communication become increasingly LLM-shaped, both through the direct adoption of LLM tools and through the consequent reshaping of public spaces to accommodate them, society itself will become less human-shaped, to the point at which even those of us who have carefully maintained our skills against atrophy will no longer have an outlet for them. Just as agricultural implements have been adapted for use by tractors rather than by farmers, so our information landscape will adapt for the alien mind of the machine, rather than for our wet eyeballs and trembling tympanic membranes.

This is already happening — a pull request recently rejected by the TailwindCSS team proposed adding text-only documentation aimed explicitly at LLMs. How long until people simply stop writing documentation for humans entirely? Might we eventually learn how to speak the LLMs' native language, and write our documentation using that? Humans may eventually lose the ability to even read without an LLM.

The trajectory of the LLM society is decisively anti-convivial. More quickly than ever, we are reaching the point in the adoption of a new technology which Illich describes as its “negative” range:

When an enterprise grows beyond a certain point on this scale, it first frustrates the end for which it was originally designed, and then rapidly becomes a threat to society itself. These scales must be identified and the parameters of human endeavors within which human life remains viable must be explored.

This is tyranny of a kind that does not come from governments, but from technology itself. It is what Illich calls “radical monopoly”:

Radical monopoly exists where a major tool rules out natural competence. Radical monopoly imposes compulsory consumption and thereby restricts personal autonomy. It constitutes a special kind of social control because it is enforced by means of the imposed consumption of a standard product that only large institutions can provide.

If we have any desire for freedom and independence in an increasingly digital society, we have to find a way to protect ourselves from this radical monopoly. Self-hosting our own LLMs will not suffice, because even they will contribute to the reshaping of society and our exclusion from it.

Rage Against The Slop

In the face of a globally advancing tsunami of machine turds, I can only offer sheer bloody-minded curmudgeonliness, like someone who pays all their expenses in cash in an effort to stave off the advance of the global financial surveillance panopticon. That is the knife that we need to bring to this gunfight.

It’s impossible to be anti-LLM, just like it is impossible to be anti-tree. The existence of trees is just a fact, and so is the existence of LLMs. Instead of thinking about LLMs on their own terms — deracinated, platonic, monadic — we should instead think in terms of instances. You cannot abolish all trees, but you can certainly cut one down, or plant one.

An individual tree differs from the idea of a tree in almost every way. The idea of a tree is determined, abstract, normative. The tree in my front yard is constantly changing, is concrete, contextual. It was a rotten elm; it is now a rotting stump.

The challenge for us in the age of the LLMs is to not give ourselves up to the apparent determinism of the machine. Stop thinking about the abstract promised or dreaded future; open your eyes and look at the LLM sitting in front of you on your desk or in your hand. Is this a tree you want to water, to fertilize, to see grow? Does it need some pruning in order to bear fruit in your life? Or does it need to be cut to the ground?

Every prompt not written, every agent not installed, every token not purchased is a decision, a vote for who you want to be, and the home you want to live in. Resistance against the machine is not a simple matter of bugging out to the back country, but of creative subversion, malicious compliance, and dynamic equilibrium. It is the design and use of new tools and technologies which mediate the old ones. And it is the refusal to give up your soul to that of the machine.