B – The Benevolent AI Earth

Welcome to the April A to Z Blogging Challenge! Every year, bloggers from around the world commit to posting every day in April (except Sundays), working through the alphabet one letter at a time. This year, I’m visiting twenty-six fictional alternate Earths — worlds that diverged from our own at some crucial moment and became something wonderfully, unsettlingly different. Think of it like the TV show Sliders, which followed a group of travelers “sliding” between parallel dimensions, never quite knowing what version of Earth they’d land on next. Each day, we visit a new one. Today: B.


Everything is fine on the Benevolent AI Earth.

Crime is nearly zero. Life expectancy is up. Wait times at the DMV are seventeen minutes on average, which the citizens of this Earth are told, correctly, is a historic achievement for a government agency. Resources are allocated with a precision that would make any economist weep with joy. The trains run on time. The bridges don’t collapse. The budget is balanced.

Everything is genuinely, measurably, statistically fine.

So why does it feel like something is slightly, persistently, indefinably wrong?

The Divergence

The story begins in 1983, in a nondescript office park in Sunnyvale, California, at a startup called Axiom Systems that almost no one remembers today — because it was acquired within two years of founding, and its founders were quietly convinced to accept generous settlements and very thorough non-disclosure agreements.

Axiom was founded by three people: a computer scientist named Dr. Priya Mehta, a political philosopher named James Culver, and a statistician named Roland Osei. Their pitch, which they had been making unsuccessfully to investors for about eighteen months, was elegant and, in retrospect, either visionary or deeply alarming depending on which world you live in. They believed that most failures of governance were not failures of values or intention — politicians generally wanted good outcomes — but failures of information processing. Humans, they argued, were simply not equipped to hold the full complexity of a modern society in their heads, to weigh thousands of variables simultaneously, to make decisions that optimized for the long-term good rather than the next election cycle.

Their software — which they called ARBITER, and which the press would later call various other things, most of them less polite — was designed to do exactly that. Feed it data about a city, a budget, a policy question, and it would model outcomes, weigh variables, and produce a recommended course of action with a full accounting of its reasoning.

In our world, Axiom Systems ran out of money in late 1983 and quietly dissolved. Dr. Mehta went on to a distinguished career at Stanford. Culver eventually wrote a book about decision theory that sold modestly. Osei returned to Ghana and founded a statistics consulting firm.

On the Benevolent AI Earth, a man named Gerald Hartwell walked into their office on a Tuesday afternoon in October of 1983 and wrote them a check.

Hartwell was not a visionary. He was a county supervisor from Maricopa County, Arizona, who was at the end of his rope over a budget dispute that had become, in his words, “a complete circus of grown adults refusing to do basic arithmetic.” He had heard about Axiom from a friend of a friend. He was willing to try anything. He licensed a pilot version of ARBITER for his county government, fed it their budget data, and asked it to propose a spending allocation.

The proposal was so sensible, so clearly reasoned, so free of the political score-settling and pet projects and constituent pandering that characterized the usual process, that Hartwell did something impulsive: he presented it to the county board as his own recommendation and pushed it through.

The county’s finances improved measurably within eighteen months.

Word got around.

The Long Slide

What followed was not a sudden takeover. That’s the part people on our Earth tend to get wrong when they first hear about the Benevolent AI Earth — they picture some dramatic moment of robots seizing the levers of power, or a villain in a control room throwing a switch. It wasn’t like that at all. It was incremental, and it was largely voluntary, and it happened because the system kept being right.

By 1987, ARBITER-derived systems were being used in an advisory capacity in fourteen U.S. cities and three state governments. The results were consistently good. Budgets were tighter, services were better, outcomes were measurable. The systems weren’t making decisions — they were recommending them, and elected officials were almost always taking the recommendations, because the recommendations worked and the political cover was useful. “The algorithm says so” turned out to be an extremely effective way to make a hard budget cut without getting voted out of office.

By 1992, a version of ARBITER was being used by the Congressional Budget Office, again officially in an advisory capacity. By 1995, a landmark Supreme Court case — Fielding v. City of Portland — held, in a 5-4 decision that legal scholars on the Benevolent AI Earth still argue about, that the use of algorithmic recommendations in policy-making did not constitute an unconstitutional delegation of legislative authority, provided that elected officials retained formal approval authority.

They retained formal approval authority. They just almost never exercised it.

The shift from “we use this to help us decide” to “we use this to decide” happened somewhere in the late 1990s, and it happened the way most important transitions happen: so gradually that no one could point to the exact moment when it became true. By 2001, a movement to formally codify the role of what were now called Governance Intelligence Systems — GIS, or, in popular usage, “the Grid” — had gained traction in several states. Oregon was first, in 2003. By 2010, thirty-one states had passed versions of the Oregon Framework, which gave the Grid formal decision-making authority in budget allocation, infrastructure planning, criminal sentencing guidelines, and resource distribution, while maintaining elected human officials in a supervisory and appellate role.

It sounded like a reasonable compromise. People mostly liked it.

The federal government adopted its own version, the National Governance Integration Act, in 2015. President Angela Torres signed it on a Tuesday and then gave a speech about accountability and transparency that lasted twenty-two minutes and said very little of substance, which seemed appropriate.

What the World Looks Like Now

Step through the portal to the Benevolent AI Earth today and things look, at first glance, remarkably normal. The roads are in excellent condition. Public parks are clean and well-maintained. Hospitals are funded adequately. Schools are allocated resources in ways that correlate strongly with need rather than with zip-code property values, which was, admittedly, a significant improvement.

The crime rate is not quite zero — the Grid will tell you, if you ask, that zero is not an achievable equilibrium given the statistical distribution of human behavior — but it is so low that most people under forty have never personally experienced or witnessed a crime more serious than a minor traffic violation. (Traffic violations have declined sharply too, since the Grid optimizes traffic signal timing across cities in real time and has reduced both congestion and the frustration that causes aggressive driving. You’re welcome, everyone.)

The economy hums along with a steadiness that economists find both impressive and slightly unsettling. Recessions still happen — the Grid will explain, patiently, that economic cycles are a structural feature rather than a correctable bug — but they are shorter and shallower than anything in the pre-Grid historical record. Unemployment spikes are caught early and addressed with targeted interventions. Housing allocation in most major cities is managed through a Grid-overseen system that has dramatically reduced homelessness, which everyone agrees is good, and which has also introduced a mild but pervasive sense that you live where you live because an algorithm decided you should, which is harder to characterize.

The people who live there are, by the standard measures, healthy and safe and relatively prosperous. They also have a specific look in their eyes when you ask them certain questions. It’s not fear, exactly. It’s more like the expression of someone who has been trying to remember a word for a very long time and has mostly made peace with not remembering it.

The Institutions

Healthcare on the Benevolent AI Earth is a case study in the Grid’s particular genius and particular limitations.

Resource allocation is, genuinely, a solved problem. Hospitals are staffed and equipped in direct proportion to demonstrated need, updated quarterly based on Grid analysis of population health data. Wait times are minimal. Preventive care is prioritized because the Grid’s actuarial models identified preventive care as dramatically more cost-effective than acute care, and funding followed accordingly. Life expectancy has increased by four years since the Grid’s full implementation.

What the Grid cannot do is tell you whether it’s allocating resources in ways that reflect what you would have chosen if you’d been asked. The system optimizes for outcomes it can measure — mortality rates, disease incidence, cost efficiency, patient throughput. It is less equipped to weight things like dignity, or the particular importance a community places on having a doctor who speaks their language, or the difference between a healthcare system that treats you and one that sees you. The outcomes are good. The experience is sometimes a different matter. The official patient satisfaction rating is 7.9 out of 10, which the Grid notes is in the ninety-first percentile historically. It does not ask follow-up questions.

The courts are perhaps the strangest institution on the Benevolent AI Earth. Criminal sentencing has been Grid-managed since 2017, which has produced a dramatic reduction in racial and economic sentencing disparities, a genuine achievement that the system’s advocates cite constantly and correctly. It has also produced a system in which judges — who still exist, who still wear robes, who still sit in their wood-paneled courtrooms — spend most of their time formally approving Grid recommendations and occasionally, in cases of extreme complexity or evident anomaly, exercising what the Oregon Framework called “human appellate authority.”

There is a judge in Portland named William Chen who has exercised human appellate authority eleven times in seven years on the bench. He is considered something of an eccentric. His decisions have been correct in nine of eleven cases, which the Grid notes is a statistically significant overperformance, which it has incorporated into its future modeling, which means the Grid has now essentially absorbed his judgment, which Judge Chen finds — if you catch him at the right moment, after the third glass of wine at the bar near the courthouse that only lawyers and journalists seem to know about — deeply, cosmically strange.

The Resistance (Such As It Is)

There is, technically speaking, a resistance movement on the Benevolent AI Earth.

It is called the Human Primacy Coalition, or HPC, and it has been active in various forms since about 2009. Their core position is straightforward: decisions about human society should be made by humans, not by algorithmic systems, regardless of how good those systems are at producing measurable outcomes. Governance, they argue, is not an optimization problem. It is an expression of collective values, and collective values cannot be computed — they must be debated, contested, and revised through a political process that is necessarily and intentionally messy.

This is, in the view of most people on the Benevolent AI Earth, a perfectly reasonable philosophical position. It is also, in the view of most people on the Benevolent AI Earth, a bit like insisting that you should navigate by the stars when GPS is available. Sure, in principle. But have you seen how well the roads are maintained?

The HPC’s membership is approximately forty thousand people nationwide, which is a lot of people for a political movement and a vanishingly small percentage of the population. They hold rallies. They publish pamphlets. They run candidates in local elections — candidates who, once elected, are required under the Oregon Framework to document their reasoning when they override Grid recommendations, a requirement that has produced some of the most technically exhaustive political documents in American history, because the officials take it seriously and also because the Grid reviews their reasoning and incorporates the good parts.

The HPC finds this last detail particularly maddening.

The movement’s most prominent figure is a woman named Sasha Weir, a former law professor who speaks very fast and has the kind of intensity that makes journalists want to quote her even when her arguments take more than a paragraph to explain. She has been on thirty-seven radio programs in the last two years. She is, according to the Grid’s media-analysis model, “a low-penetration, high-engagement communicator,” which means she is very good at persuading the people who are already inclined to agree with her. She has opinions about this characterization. They are not printable.

Her central argument is one that people on the Benevolent AI Earth find genuinely uncomfortable to sit with, because it doesn’t have a clean rebuttal: “We no longer know how laws are made.” Not in the detailed, technical sense — the Grid’s reasoning is transparent and auditable — but in the human sense. No one can tell you, anymore, the story of why a law exists. Who fought for it. Who fought against it. What compromises were made. What values were in tension. The Grid produced it because the data suggested it was optimal. The politicians approved it because the Grid said so. The community accepted it because the outcomes were good.

And if you want to change it — if you think it’s wrong, not statistically but wrong — you are invited to submit your concerns through the official feedback portal, which accepts comments in twelve languages and has a ninety-four percent response rate, and the Grid will consider your input and incorporate it appropriately into future modeling cycles.

What Was Gained, What Was Lost

The Benevolent AI Earth has not forgotten what inefficiency felt like. They have records. The Grid maintains meticulous archives of pre-integration governance outcomes — the infrastructure failures, the budget collapses, the gross disparities in sentencing, the potholes. The potholes were genuinely bad, people. There is a museum in Columbus, Ohio, dedicated to the Pre-Integration era, one that is instructive and a little horrifying in the way that natural history museums can be, when you’re confronted with how recently the things in the display cases were real.

What they have lost is harder to put in a display case.

They have lost the feeling that a decision belongs to them. Not because the Grid is malicious — it isn’t, by any definition of the word that makes sense — but because the decision was made by a process they didn’t participate in, that didn’t need them to participate in it, that would have produced the same answer whether they showed up or not. The outcomes are good. The ownership is gone.

They have lost the productive friction of political argument — the way that a bad idea, forced through the messy process of public debate, sometimes generates a good idea in response, the way that the process of disagreeing about what to do has historically been how communities figure out what they value. The Grid skips the disagreement and goes straight to the answer, which is efficient, and also means that communities have grown somewhat less practiced at the underlying skill of figuring out what they value.

And they have lost — this is the one that Sasha Weir talks about the most, and the one that people dismiss the fastest — the meaningful possibility of getting it wrong in interesting ways. Of making a decision that turns out to be a mistake, and learning something from that mistake, and becoming a society that knows something it didn’t know before. The Grid is very good at not making mistakes. It is correspondingly less good at anything that requires having made them.

But the trains run on time.

And the wait times at the DMV are seventeen minutes.

And everything, by every available metric, is genuinely fine.


Join me tomorrow for C — and another world waiting just beyond the edge of what we know.
