The Averaging
How a Simple Algorithm Promised to Transform Humanity and Delivered Something Else
An oral history of the Recommendation Boom, 2022–2026
In the autumn of 2022, a small San Francisco startup called DeltaScale released a consumer product that let anyone type in a question and receive an answer generated entirely by collaborative filtering — the averaging of millions of human preferences into a single, statistically optimal response. The product was called Ask Delta. Within six weeks it had a hundred million users. Within six months, the entire technology industry had reorganised itself around a single proposition: that the solution to every human problem was to average what other humans had done about it and present the result with confidence.
This is the story of what happened next.
Part I: The Breakthrough
Dr. Daniel Lemire, professor of computer science, Université du Québec — I published the Slope One paper in 2005. It was about predicting product ratings. It was six pages. I proposed that if you know the average rating difference between two items, you can predict how someone will rate one item based on how they rated the other. It was meant to be simple. That was the point. I called it Slope One because the prediction function is a line with slope one. It is, genuinely, arithmetic.
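Lemire's description really is just arithmetic, and it fits in a few lines of code. Below is a minimal sketch of the Weighted Slope One scheme as he describes it: learn the average rating difference between every pair of items, then predict a user's rating for a new item as the count-weighted average of (their known ratings plus the relevant deltas). The function names and the toy ratings data are illustrative, not from any real system.

```python
from collections import defaultdict

def train(ratings):
    """ratings: dict of user -> dict of item -> rating.
    Returns (dev, count): the mean rating delta between each ordered
    item pair, and how many co-ratings that mean is based on."""
    diff = defaultdict(float)
    count = defaultdict(int)
    for user_ratings in ratings.values():
        for i, ri in user_ratings.items():
            for j, rj in user_ratings.items():
                if i != j:
                    diff[(i, j)] += ri - rj
                    count[(i, j)] += 1
    dev = {pair: diff[pair] / count[pair] for pair in diff}
    return dev, count

def predict(user_ratings, target, dev, count):
    """Predict the user's rating for `target`: for each item j the user
    has rated, shift that rating by the mean delta between target and j,
    then average, weighting by how much data backs each delta."""
    num = den = 0.0
    for j, rj in user_ratings.items():
        if (target, j) in dev:
            c = count[(target, j)]
            num += (rj + dev[(target, j)]) * c
            den += c
    return num / den if den else None

# Toy example: predict how a user who rated b=2 and c=5 would rate a.
ratings = {
    "alice": {"a": 5.0, "b": 3.0, "c": 2.0},
    "bob":   {"a": 3.0, "b": 4.0},
    "carol": {"b": 2.0, "c": 5.0},
}
dev, count = train(ratings)
print(predict({"b": 2.0, "c": 5.0}, "a", dev, count))
```

The prediction function really is a line with slope one in each known rating, which is the origin of the name: each term is simply the user's rating of another item plus a constant offset.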
Samir Patel, co-founder and CEO, DeltaScale — What everyone missed for seventeen years is that collaborative filtering doesn’t just predict preferences. It resolves ambiguity. If you ask a question and ten million people have, in aggregate, preferred answer B over answer A by a weighted delta of 0.3, then B is the answer. Not because B is true. Because B is what humanity, on average, prefers. And we found that for most questions people actually ask, the preferred answer and the true answer are the same. Or close enough. Close enough turned out to be a hundred-billion-dollar insight.
Dr. Lemire — I want to be clear: they are not using my algorithm. They are using a massively scaled collaborative filtering architecture that is, at its core, the same arithmetic as my algorithm. The distinction matters to me. I’m not sure it matters to anyone else.
Rachel Thornton, partner, Andreessen Horowitz — When Samir demoed Ask Delta for us, I asked it what I should have for dinner. It said pasta with a light pesto, which was exactly what I wanted. I asked how it knew. Samir said it didn’t know. It had computed the weighted average dinner preference of thirty-two thousand women in my demographic cohort who had been asked the same question at the same time of day, adjusted by seasonal deltas. I wrote the term sheet at dinner. I had the pasta.
Part II: The Gold Rush
Within a year of Ask Delta’s launch, the recommendation technology industry — now universally called “RecTech” — had attracted more venture capital than any sector in history. The logic was intoxicating in its simplicity: if averaging human preferences could answer questions, it could do anything. The consulting class moved fast.
Jonathan Hartwell, managing director, McKinsey & Company — We published The Recommendation Advantage in Q2 2023. The thesis was straightforward. Every business is, at its core, a series of decisions. Collaborative filtering replaces decisions with computed averages of past decisions. This eliminates the need for judgment, which is expensive and unreliable. We estimated that 40% of all knowledge work could be replaced by Slope One variants within five years. Our clients found this very exciting.
Mika Rantanen, senior developer, Nokia (ret.) — I had been writing recommendation engines for fifteen years. Suddenly my LinkedIn was on fire. Recruiters calling me a “RecTech architect.” I got offered a job as “Chief Recommendation Officer” at a mid-size insurance company in Ohio. I asked what I would be recommending. They said everything. I asked what that meant. They said they didn’t know yet, but McKinsey had told them they needed one.
Hartwell — In fairness, we also published a follow-up report noting that implementation was complex. That report did not sell as well.
Patel — The thing people didn’t understand — and honestly, the thing that made me uncomfortable even at the time — is that Slope One and its descendants don’t understand anything. They compute deltas. The delta between “What should our Q3 strategy be?” and the preferred answer is not insight. It’s a weighted average of what other companies did, adjusted for sector and headcount. It’s the mean. People were building corporate strategy on the mean. I said this publicly at Davos and our stock went up 12%, because investors interpreted it as modesty.
Part III: The Thought Leaders
No technology hype cycle is complete without a class of interpreters who explain the technology to people who do not understand it, in terms the technology itself would not recognise.
Brad Whitfield, “The Rec Whisperer,” author of Delta Mindset: Thriving in the Age of the Average — What people get wrong about RecTech is they think it’s about technology. It’s about you. Every delta is a mirror. When the algorithm computes the average preference, it’s showing you where you stand relative to humanity. Are you above the mean? Below? The delta between who you are and who the algorithm predicts you should be — that’s your growth edge. I trademarked the term “Delta Gap” and built a coaching practice around it. I had eleven thousand clients at peak.
Dr. Yuki Tanaka, professor of sociology, University of Tokyo — I studied thirty of the most prominent RecTech thought leaders. Twenty-six of them had no technical background. Fourteen had previously been cryptocurrency evangelists. The median length of time between their first encounter with collaborative filtering and their first paid keynote on the topic was eleven weeks. The content of these keynotes was, with striking consistency, a description of what averaging is, repackaged in the language of personal transformation.
Whitfield — Look, I never claimed to be a computer scientist. I’m a translator. I take complex ideas and make them accessible. Is the delta between Slope One as Lemire described it and Slope One as I describe it large? Probably. But that delta is my value add.
Maya Chen, founder, RecTech Ethics Institute — The thought leader ecosystem was a problem, but it wasn’t the main problem. The main problem was that the actual technology companies adopted the thought leaders’ framing because it sold product. DeltaScale’s marketing department started talking about “preference intelligence” and “the wisdom of the computed crowd.” At some point the distinction between what the algorithm did (average things) and what the marketing said it did (understand things) became load-bearing. The whole industry was standing on that distinction and pretending it wasn’t there.
Part IV: Enterprise Adoption
Patricia Walsh, CIO, Meridian Health Systems — McKinsey told us we needed a recommendation layer across our entire clinical decision support system. The pitch was that the algorithm would compute optimal treatment pathways by averaging outcomes across millions of patient records. What it actually did was recommend the most common treatment for any given diagnosis. For most cases, the most common treatment is the correct one, so it looked like it was working. The problem is that medicine is disproportionately about the cases where the common treatment is wrong. We figured this out after eight months. The algorithm had been confidently averaging its way through edge cases. We found fourteen instances where it had recommended the standard protocol for patients who were, by any clinical assessment, not standard.
Dr. Lemire — When I heard that hospitals were using collaborative filtering for treatment decisions, I — I don’t know how to say this politely. I published a paper called “Slope One Is Not a Doctor” and posted it on arXiv. It was downloaded forty thousand times. Someone put it on a t-shirt.
Greg Hollister, VP of Enterprise Solutions, DeltaScale — Patricia’s situation was an implementation problem, not a technology problem. You have to tune the algorithm. You have to set confidence thresholds. You have to — look, I’ll be honest, we were selling it faster than we could support it. We had a sales team of three hundred people and a technical advisory team of eleven. The ratio tells you everything.
Rantanen — I consulted for a logistics company that had deployed a Slope One variant to optimise delivery routes. The algorithm worked by averaging the routes that drivers had taken historically and recommending the mean route. The mean route, it turned out, ran through a lake. Not all the time. Just on Tuesdays, when the historical data included a driver who had taken a ferry that no longer existed. The algorithm didn’t know the ferry was gone. It doesn’t know what a ferry is. It averages.
Part V: The Existential Risk Debate
As with any technology that its proponents describe as transformative, RecTech attracted a community of people who believed it might end civilisation.
Dr. Marcus Webb, director, Centre for Recommendation Safety, Oxford — The core risk is convergence. If every decision — personal, corporate, governmental — is made by averaging past decisions, then the space of possible futures narrows to the space of weighted past outcomes. Humanity stops exploring. We converge on the mean. The mean of human civilisation is, and I say this with the full weight of my academic position, not that impressive. We are averaging ourselves into stasis.
Patel — Marcus and I debated this at a conference in 2024. His argument was that Slope One would reduce all of human culture to a single point: the grand weighted average of everything anyone has ever preferred. I pointed out that this was also a description of democracy. He did not find this reassuring.
Webb — Samir’s democracy analogy is clever and wrong. Democracy aggregates preferences through a deliberative process with institutional safeguards. Slope One aggregates preferences through arithmetic with a confidence interval. These are not the same. One of them produced the Enlightenment. The other one recommended pasta.
Whitfield — I wrote a chapter in Delta Mindset about existential risk. My argument was that the real risk isn’t convergence. The real risk is that people discover the algorithm is just averaging, lose faith in it, and we go back to making decisions based on individual judgment, which is how we got into most of our problems in the first place. The delta between algorithmic averaging and human intuition is negative. I have the data. Well, I have a delta.
Part VI: Regulation
Commissioner Elise Johansson, European Commission, DG-CONNECT — We began drafting the Recommendation Systems Act in early 2024. The fundamental regulatory question was: is a computed average an opinion? If I ask ten friends what restaurant to go to and take the most popular answer, that’s a social process. If DeltaScale does the same thing with ten million preference records and a Slope One variant, is that a product? A service? An opinion? We spent four months on definitions. The final text defines a “general-purpose recommendation system” as “a system that computes aggregate preference signals across domains for the purpose of generating outputs that may be relied upon by natural persons.” I am told this definition encompasses, technically, the concept of a bestseller list.
Chen — The EU process was genuinely difficult because the technology is genuinely simple. Regulating neural networks is hard because they’re opaque — you can’t inspect the reasoning. Regulating Slope One is hard because it’s transparent — you can inspect the reasoning, and the reasoning is: average. How do you regulate an average? What’s the bias of a mean? The mean is, by definition, the least biased estimate. The regulators kept asking us where the risk was and we kept saying “the risk is that it’s an average and people treat it as an answer” and they kept saying “but it is an answer” and we kept saying “yes, technically” and eventually everyone agreed to require disclosure labels.
Johansson — The disclosure requirement was our primary intervention. All recommendation-generated content must now carry a label reading: “This output was generated by collaborative filtering and represents a statistical average of human preferences. It is not advice, analysis, or original thought.” The labels have had no measurable effect on user behaviour. People see the label and think, “Yes, and it recommended very good pasta.”
Part VII: The Slope/Slop Discourse
Tanaka — The cultural criticism followed a predictable arc, but the linguistic accident gave it unusual energy.
Chen — At some point in late 2023, someone on social media — I’ve never been able to identify who — used the word “slop” to describe low-quality RecTech outputs. Recommendation-generated marketing copy. Preference-averaged product descriptions. Emails written by computing the delta between all previous emails on the same topic and producing the mean. “Slop.” It was a perfect word, both because it described the quality and because it was one letter away from the technology’s name.
Patel — We had a branding crisis. We actually had an internal meeting about whether to rename the algorithm. Someone suggested “Gradient One” and someone else pointed out that “gradient” sounds like it’s going somewhere, which collaborative filtering emphatically is not. We kept the name. What else could we do?
Whitfield — I did a whole keynote called “Slope, Not Slop: The Dignity of the Delta.” Sold out in Austin. The thesis was that slope implies direction and angle — aspiration — while slop implies formlessness. The algorithm has slope. It has direction. The direction is toward the mean, which is, admittedly, not the most inspiring direction, but it is a direction.
Dr. Lemire — I was at a dinner party and someone asked me what I did and I said I invented Slope One and they said, “Oh, the slop thing?” I went home early.
Part VIII: What Remains
It is now early 2026. The hype has subsided, as hype does. DeltaScale’s stock is down 60% from its peak. McKinsey has published a new report called Beyond the Average: Why Human Judgment Still Matters, which recommends, essentially, the opposite of what their 2023 report recommended, at the same price. Brad Whitfield has pivoted to “post-algorithmic leadership coaching.” The Centre for Recommendation Safety in Oxford has quietly redirected its funding toward a different existential risk, though Dr. Webb declines to specify which one. The EU’s disclosure labels remain in place, unread.
Walsh — We still use recommendation systems at Meridian, but for what they were always good at: suggesting things. We use them to suggest relevant medical literature to physicians. We do not use them to make treatment decisions. It turns out there was always a word for what collaborative filtering does well. The word is “suggest.” The problem was that an industry grew up around pretending the word was “decide.”
Rantanen — I’m retired now. I consult a little. Mostly I’m asked to help companies figure out which of their recommendation systems are actually doing something and which are just averaging historical data and presenting it back to people who already knew it. The answer is usually the second one. The systems aren’t wrong. They’re just mirrors. Very expensive mirrors. A mirror that costs four million dollars a year in cloud computing to tell an insurance company that their most common claim type is, in fact, the most common claim type.
Chen — What I keep coming back to is how perfectly the Slope One story maps to every other technology hype cycle. A genuine technical insight is made. The insight is real. Averaging preferences is useful. Then a story is built on top of the insight that the insight cannot support: that averaging is understanding, that the mean is wisdom, that the delta is a creative act. The story attracts capital. The capital demands growth. Growth demands new applications. The applications become absurd. And then, when the absurdity becomes undeniable, everyone agrees that the technology was “overhyped” — as though the hype were something that happened to the technology and not something that was done by people who should have known better.
Dr. Lemire — I still think Slope One is a good algorithm. It does what it was designed to do. It predicts ratings. If you have a user who has rated some items, it will predict how they would rate other items, and it will usually be roughly right. That is all it was ever supposed to do. The fact that an entire industry decided it was also a theory of knowledge, a substitute for expertise, a path to artificial general intelligence, and a threat to human civilisation — that is not a computer science problem. That is a human problem. And the human problem, I regret to say, is not one that collaborative filtering can solve.
Although, if you averaged enough humans’ approaches to the problem, you would get an approach that is, statistically, the most common one. Which is: do nothing and hope it resolves itself. I leave it to the reader to determine whether this constitutes a recommendation.
Patel — You want to know the thing that really keeps me up at night? Ask Delta still works. It still has thirty million users. They ask it questions and it gives them the averaged answer. The averaged answer is usually fine. It’s not insightful. It’s not creative. It’s not wrong. It’s the mean. And thirty million people find the mean perfectly adequate for their daily needs, which is either a vindication of the technology or the most damning thing anyone has ever said about the richness of human inquiry. I genuinely cannot tell which.
I think about that a lot.
Petra Lindqvist is a contributing writer for The Atlantic Monthly. Her previous work includes “The Blockchain Farmers of Estonia” (2019) and “What Your Smart Refrigerator Knows About Your Marriage” (2021).