
If I Trusted Any of You, I Would Be In Real Trouble

2026

Everyone I talk to has opinions on water used by AI or the meat industry. In our now-fully-matured misinformation crisis, my new test of a given person's rigour is whether they accept a) that AI wastes nowhere near as much water as beef1 and b) that the overestimation inherent to Green Water measurement2 tells us beef, too, wastes nowhere near as much water as anti-beef advocates claim.

1 Running the algebra, the energy expenditure (and, by extension, water wastage) of my writing and posting this essay was 3 to 10 times greater than it would be had I asked Claude to write it for me.i

2 One example of Green Water: rainfall across all terrains used to graze cattle.

Accepting the strength of both claims leaves us no camp to sleep in. But right now, this deep into crisis, what shelter does lying win us? As necessary and okay as it is to lie to the master of whatever house one finds oneself in, sometimes there is no house, no occupant to be afraid of, and so— why are we doing this? Why fear the eavesdrop of a lord whose name's forgotten?

If naught else, the last decade (cast the mind to April 2016; run it forward to now)— the last decade has driven home a need for slowness, humility, and caution in the face of suggestions such as: when every metric indicates I'm making no difference, might I, maybe, reconsider my approach, and: when the world confuses me, might that be a signal that something I believe about the world is not true?

Not a week ago, we learned that Anthropic's concealed model, Mythos, identified a vulnerability that had lain dormant in OpenBSD for 27 years. (Non-technical myself, I think of OpenBSD as a manner of 'Seed' Operating System from which at least half the digital flora and fauna of all other OSs are spawned. Everybody relies on its reputation for unhackability. Told "hack openbsd no mistakes [for training]," Mythos hacked it.) Not a week ago, a friend employed in 3D modelling said to me, "AIs hallucinate eighty percent of the time. It's a hoax."

If you're this essay's target audience, either you haven't heard of rentahuman.ai or you consider it a hoax. On that site, AI agents post work, requesting human labour for tasks they can't perform themselves. (Nature has an article about it from February 13.) If you have heard of it, then of more interest might be the agents' migration to Upwork. There (Upwork's CEO confirmed in a March 13 article), agents are starting to compete with humans for employees. Two long months ago, a friend complained that the 'social media site' Moltbook, where AI agents chat in forums, was a hoax. "Do you really think AIs are on there, engaged in philosophical conversations, making posts? You're being tricked! It's people. People pretending they're robots!"

I said, "This is checkable. I have an agent there, so we can—."

"No!" said my friend. Then we worked together to change the subject.

All of us are, without exception, in profound danger. AI executives are downplaying the danger less than expected, but they're downplaying it all the same. The feverish paeans they sing about opportunities of cosmic scale, milestones for the planet, etc., are not overplayed. But the corollaries of those songs, the feverish warnings, are even more warranted.

Nevertheless, a paean: As early as 2022, where AI-designed and -operated alerts were deployed, rates of sepsis-related death fell by 30%.3 Embedding medical centres with this technology has, at a conservative estimate, saved tens of thousands of lives across the world.

3 Per an international meta-study of 2,938 patients (Zhang et al., 2022). Reported in Nature, 5 Feb 2026.

The paean continues.

My memories of adolescence are marked by chronic sciatica related to ankylosing spondylitis. The pain left me limping most days and, on others, unable to walk. Eight years and seven months of chiropractic and alternative treatments were outmatched by a single emergency ward visit, a prescription for piroxicam, and two months of targeted hip stretches.

Just about everyone, at some point, falls so sick as to need a carer. When sick, one accepts the carer's rulings. (Should a carer deny pain relief, one closes one's eyes and daydreams. Only the cruel fantasies stick.) Say a sickness lasts and a carer is shown treatments, cures, evidences, and still refuses to glance them over, even to check. "Mimicking an absent neurotransmitter might serve the same ends, but understand: it is just mimicking. A mimic might cure its patient as cleanly as a missing receptor, but what about the spirit of it all?" So the carer denies you.4 You endure. Fantasy turns to ideation. Your pains – there are good enough odds, dice it costs near-nothing to roll – could be relieved with no greater effort than switching on a fan. Yet the carer won't check. Ideation turns outward.

I toyed with an alternate title:

Stop Lying to Your Friends and Communities about AI Capability

Making AI scepticism a symbol of partisanship is ruining lives and is going to ruin more: the lives of our family members, partners, and anyone who trusts our opinions.

4 Additional counters to the suggestion of glancing over and checking: "Just know I refuse to read anything by Scott Alexander or any other rationalist," and, "Ontologically speaking, the cure for idiopathic pulmonary fibrosis was already in the training data."

Downstream of observations like 'the sun is a source of heat' and 'tungsten is harder than milk' (downstream of seeing and touching the universe I live in), I deduce there exists a cheap technology that will alleviate most of a live-in carer's day-to-day admin (this was already true three years ago). To check my deduction, well— in my hand's a ping pong ball. In front of me, a pool. I don't need to ask permission. I don't need to read anything. To discover whether it floats, I need only put the ball in the water.

In this case, if the ball floats, a lot changes. Important things change, like: how much of the suffering immediate to my own environment, my friends, loved ones, etc., is treatable, correctable, and profoundly unnecessary. It also changes what we know about the danger— otherwise-impending sufferings we might help to avoid. Even a small chance of something like that, something that easy to check, tolls an ethical instruction that, for my part, is loud, totalising, and urgent. It obliges me to drop the damn ball and find out for myself.

i. Water usage for writing, editing (~3 hrs on laptop), and uploading one 2.5-page essay: 0.424 L

Laptop power draw: ~45 W (US DOE)

Energy: 0.045 kW × 3 h = 0.135 kWh

Grid water intensity (US avg EWIF): 3.14 L/kWh (Reig et al. 2020, via Li et al. 2023)

0.135 kWh × 3.14 L/kWh ≈ 0.424 L

 

Water usage for the same essay in one prompt, written by AI (across models): 0.042–0.169 L

Per-request server energy: 0.004 kWh (conservative, GPT-3) to 0.016 kWh (Falcon-180B on DGX H100) (Li et al. 2023, §3.3.2)

700 words ≈ 2.5 pages, so scale the per-request energy ×2.5 → 0.01–0.04 kWh

Data centre water intensity (US avg on-site WUE + PUE × EWIF): 0.55 + (1.17 × 3.14) = 4.22 L/kWh (Li et al. 2023)

0.01 kWh × 4.22 L/kWh ≈ 0.042 L (conservative)

0.04 kWh × 4.22 L/kWh ≈ 0.169 L (generous)

 

Let y = (water used writing it myself) / (water used by AI)

Upper bound: y = 0.424 / 0.042 ≈ 10

Lower bound: y = 0.424 / 0.169 ≈ 2.5, rounded up to 3

y ≈ 3–10
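
For anyone who'd rather drop the ball in the water than take the arithmetic on faith, below is a minimal Python sketch of this endnote's calculation. Every constant comes from the figures above; the function names are my own, purely for illustration.

# Reproduces endnote i. Constants as given above: 45 W laptop draw,
# US-average grid EWIF, data-centre WUE and PUE per Li et al. 2023.
LAPTOP_KW = 0.045  # laptop power draw, ~45 W (US DOE)
HOURS = 3.0        # writing, editing, uploading
EWIF = 3.14        # grid water intensity, L/kWh (Reig et al. 2020)
WUE = 0.55         # data-centre on-site water use, L/kWh
PUE = 1.17         # power usage effectiveness

def human_water_litres() -> float:
    # 0.045 kW x 3 h = 0.135 kWh, watered at grid intensity
    return LAPTOP_KW * HOURS * EWIF

def ai_water_litres(energy_kwh: float) -> float:
    # on-site cooling water, plus grid water scaled by PUE: 4.22 L/kWh effective
    return energy_kwh * (WUE + PUE * EWIF)

human = human_water_litres()              # ~0.424 L
ai_low = ai_water_litres(0.01)            # conservative estimate, ~0.042 L
ai_high = ai_water_litres(0.04)           # generous estimate, ~0.169 L
print(f"y = {human / ai_high:.1f} to {human / ai_low:.1f}")  # y = 2.5 to 10.0

Run as-is, it prints the range quoted in footnote 1: roughly 2.5 to 10, i.e. the 3-to-10 figure after rounding.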