31 Comments
Comment removed (Jan 29)
Ben Linford:

Thank you! Yes, the anxiety is real, and there are tough times ahead. But the questions, the situation, everything that makes it "tough" will change. I look forward to seeing what the new version of "tough" is.

Francesca Cassini:

I would love to press the little heart button 100 times to show how much I’m in alignment with this brilliant and clear post. Thank you for your vision.

Ben Linford:

Thanks so much Francesca!

Soleira Green:

Wow. What a brilliant piece of writing this is. I am inspired to go even bolder and beyond. Thank you.

Ben Linford:

Thanks Soleira. Yes, keep it up!!!

Noxsoma:

I G Y

Sharyn:

Smells like bullshit.

Ben Linford:

I'll gently suggest your smeller is off 😉 Let's talk again in a few years and we'll see who's eating the humble pie.

Grace Fairweather:

Oh I just noticed your reply here. Nice future faking lol

Grace Fairweather:

Yep, can’t miss it. Doesn’t pass the smell test. Vapid drivel that forgets there are no solutions, only trade-offs.

What’s the trade-off for the progress you’re predicting @Ben Linford? Without that, this is all fantasy.

Ben Linford:

You’re absolutely welcome to disagree with the piece, but “vapid drivel” isn’t actually an argument.

This essay was written as a big-picture, aspirational frame for the 2025-2035 window, not as a fully footnoted thesis, and I’ve said in other comment replies that its purpose is to inspire and orient, not to litigate every claim in-line. I do far more granular, evidence-based work elsewhere - tracking trends, trade-offs, and data - so if you’re interested in checking whether the claims have real-world grounding, I’d invite you to look at my broader body of writing rather than treating this one piece as if it exists in a vacuum.

On your substantive point: I agree there are no pure solutions, only trade-offs. The whole forecast in this piece rests on exactly that premise - that we’re heading into a decade of hard constraint navigation where every “win” carries costs in governance, labor markets, culture, and the environment. If you think I’ve mispriced those trade‑offs, then let’s discuss the specifics rather than making blanket statements. Social stability, power concentration, ecological risk, institutional decay and more are addressed in my other work, which I’ll link below. If you disagree with those, we can discuss them directly without reducing things to profanity and name‑calling.

If you’d like to continue with an actual substantive discussion, I’m happy to engage with concrete disagreements about assumptions, timelines, or impacts. But if you’re going to stop at “this doesn’t pass my personal smell test,” that’s not much for either of us to work with, and I’d suggest that such simplistic comments are better left unsaid.

https://sharedsapience.substack.com/p/actualizing-the-open-future

https://sharedsapience.substack.com/p/after-capitalism

https://sharedsapience.substack.com/p/cryptocurrency-for-open-source-ai

https://sharedsapience.substack.com/p/the-perspective-razor

https://sharedsapience.substack.com/p/resource-window

https://sharedsapience.com/century-report/

Grace Fairweather:

I appreciate your response, but I suspect it was AI written, like your article. My writing is always just mine, and human.

If you don’t want your writing to be dismissed as vapid drivel, then write something of substance. Say something real. This article, like your comment, says a lot of words without really saying anything at all. It’s a series of platitudes that don’t draw any picture. “Where we’re going, the gap between problem and solution closes. Solutions generate solutions faster than problems generate problems. The math flips.” Explain this, please - what math? The math/pattern we’ve seen so far in our progress as humans (individually and as a species) is that as you get better at solving problems, you get bigger and bigger problems to solve. They don’t go away, ever. Again, there are no solutions, only trade-offs. And you’ve specified exactly zero of those (with the exception of a rough transition period) in your article or comments. Also, where are we going, exactly?

Ben Linford:

First, on authorship. You suspect this is AI-generated, and I'll have to doubly disappoint you. The writing is mine - every argument, every claim, every structural choice. What I won't do is claim my writing is "human" as if that's a badge of honor. I’m a doctoral candidate who has written more human-only pages than I'll ever be able to count, and I currently publish, study, parent, and work at a pace that would be impossible without a carefully managed workflow and AI as a collaborator. That's not a confession. That's the point. AI is a catalyst, not an anchor. I am genuinely surviving my actual, insane life (much of it self-inflicted I admit) because I've learned to work with intelligence rather than pretend I'm above it. Brainstorming, stress-testing arguments, managing workflow across a dozen parallel tracks - these are things collaborators and tools enhance. They don't substitute for thinking. Those who assume AI collaboration = lack of thinking are in for a rude awakening when they realize they were left behind while insisting on doing everything the hard way.

The "my writing is always just mine, and human" stance isn't moral high ground - it's anthropocentric self-delusion. We've been extending ourselves with tools since the first stone hammer. Search engines, editors, calculators, citation managers - all cognitive extensions. Drawing a bright red line at AI doesn't make your work more authentic. It just marks where you chose to stop growing.

On "the math flips" - thank you for engaging with something specific in the work. Let me actually answer it.

I'm not claiming problems disappear. The article explicitly says "problems don't disappear; they never do." The argument is about the shape of the curve, not the existence of problems. For most of industrial modernity, our solutions generated bigger, more complex problems as fast or faster than our capacity to respond. What I'm pointing to is a shift where capability compounding and solution-feedback loops become strong enough that the ratio starts to favor us. That's what "escape velocity" looks like in social-technical terms.

You asked me to show my work. Fair enough.

AlphaFold predicted the 3D structures of over 200 million proteins and made the entire database open access. That single tool is now accelerating drug discovery, enzyme engineering, and agricultural science across thousands of labs simultaneously. One solution, compounding into thousands. DeepMind's GNoME system found 380,000 new stable materials in months - more than humanity discovered in all of recorded history. Each one opens pathways to better batteries, solar cells, and superconductors, which in turn accelerate further research.

Solar energy costs are down over 90% since 2010. It's now the cheapest electricity source in human history across most of the world. Every installation improves manufacturing, which drives costs down further, which drives more installations. A solution-feedback loop running at planetary scale, in plain sight.

mRNA technology was built for COVID. The same platform is now producing cancer vaccines - Moderna/Merck melanoma trials showed a 44% reduction in recurrence - and targeting flu, RSV, and HIV. One solution architecture, cascading across dozens of disease categories.

CRISPR's first approved therapy came in late 2023 for sickle cell disease. That same gene-editing framework now targets hundreds of genetic conditions, with costs dropping rapidly.

These are things that have already happened - many of them *before* AI. Now with AI, the pace has accelerated exponentially. I cover these rapid developments in my daily newsletter, *The Century Report* (check the Notes I've been posting here on Substack for links). The pattern they share is exactly what I'm describing: solutions compounding into further solutions at a rate we haven't seen before. When I say "the math flips," I mean the ratio between problems generated and effective capacity to respond crosses a threshold. We crest a hill. The systems that create novel risks also solve classes of problems we didn't know how to name a decade earlier, and those solutions feed back into further capability. That doesn't end trade-offs - it changes which trade-offs dominate. Less "scarcity extraction versus survival," more "coordination, governance, and alignment" as the binding constraints.

You're right that solving problems has historically meant bigger ones in return. That's the extractionist pattern we all grew up inside, and it's the mental model your "sniffer" is trained on. The whole thesis of this piece is that the pattern itself is shifting. If your intuitions are calibrated to the old regime, *of course* this reads as fantasy. That's exactly what I'm trying to say, and you're proving my point very well. You're applying yesterday's heuristics to tomorrow's dynamics. But please understand - I'm not accusing you of anything by saying this. That's not a flaw in you - it's a feature of paradigm shifts. I absolutely hear and understand where you're coming from when you say this seems like fantasy. What I'm trying to do is get you to *think bigger*.

And since you invoked Kahneman - I have to say that he'd be the first to tell you that expert intuition is only reliable in high-validity environments with regular feedback. Novel paradigm shifts are precisely where trained intuition fails. I think you may have inadvertently cited the strongest possible argument against your own position.

Please understand - I don't mind being called wrong. I am often very wrong, and I own that. I'm human (AI help notwithstanding). I've laid out a long forecast with linked essays covering social stability, power concentration, ecological risk, institutional decay, and post-capitalist trajectories - plenty of room to argue over trade-offs. But "vapid drivel" and "charlatan" aren't arguments. If you want to continue to engage on the substance, I'm absolutely here for it, and honestly I think we might be good friends if we meet as two minds discussing these points. So if you'd like to continue in that vein, I'm happy to as well, and will respond as I'm able, because that's the conversation worth having, and I'm genuinely interested.

Grace Fairweather:

Oh, and for the record, my sniffer is highly trained to detect charlatans and BS, so I very much trust the intuition I have in this area (see: Daniel Kahneman’s work on this).

Peter Pier:

Have you ever watched the cartoons with the Coyote going over the cliff, treading the air before he realizes gravity got the better of him? No? You should.

Ben Linford:

Have you ever considered that a lack of ground doesn’t necessarily mean a fall - or that real change rarely follows cartoon physics? Never flown in a plane? No? You should.

More seriously, I’ll assume you probably have, and that you’d agree human flight looked like sorcery right up until it didn’t. Those moments weren’t failures of reality, but failures of framing. Gravity didn’t disappear. Our understanding of how to work with it expanded well beyond Wile E. Coyote. And that progress is only accelerating at exponential rates.

We’re cresting a hill beyond which such reductionist thinking stops being useful - look forward, and look up, instead.

Misja van Wijhe:

Sorry, this is one big balloon of hopium. The ones that lose their value will be killed off. This has already begun and will accelerate until the numbers are down to where absolute slavery can be implemented. I see the same vision in AI enthusiasts about being able to do so much more themselves. You are taking value from others, and those others are the ones that are paying you. When they lose their value, they will not be able to pay you anymore - you have just robbed yourself. Your envisioned utopia would lead to the extinction of mankind if it were to be realised. Without effort all value is lost.

We will see how the coming years fare; I do not expect it to be pretty.

Ben Linford:

Thanks for taking the time to share this. I actually agree with a lot of the feelings underneath what you’re saying, even if I land in a different place about where this all leads.

I’ve written many times (here, for example: https://sharedsapience.substack.com/p/chaining-elephants-training-ai) that AI is a fork in the road: it can absolutely entrench a small, immovable plutocracy and create the kind of “slavery” you’re warning about, or it can help enable a new age of abundance and the shift I’m talking about in this piece. I’ve spent years worrying and writing about the former, and I still take that risk very seriously, but I’m also seeing more and more concrete reasons to hope the story doesn’t end there.

On jobs and extractive labor, I totally get where you're coming from. The system we have now does turn human effort into something to be mined, and people are right to feel under threat. Where we differ is that I don’t see AI as a weapon to deepen that extraction; I see it as one of the only realistic tools we have to get beyond it. You’re reasoning from a frame where work is permanently necessary for basic survival, but there’s mounting evidence that this may not stay true as intelligence becomes cheaper, more widely distributed, and more capable over time.

The existing extractive systems and the elites who benefit from them will absolutely try to tighten their grip - that’s part of why these next few years are so dangerous. But the underlying basis of their power is getting less stable as intelligence compounds and its cost keeps dropping. That’s the dynamic I’m trying to point at. If you’re curious about the evidence behind my optimism rather than just the conclusion, I dug into it in detail in a couple of different pieces: https://sharedsapience.substack.com/p/after-capitalism and https://sharedsapience.substack.com/p/from-multi-modal-models-to-a-multi-modal-civilization

I’d also really recommend the work of people like Dr. Alex Wissner‑Gross, who documents - day by day - how these shifts are playing out in practice: https://theinnermostloop.substack.com/

The “envisioned utopia” you say I'm writing about does not mean "no effort". Instead, it’s a world where our effort is no longer spent propping up extractionist systems, and is redirected toward mutual betterment and flourishing. That’s not guaranteed, but it is a goal worth fighting for. I agree the next few years won’t be pretty. Still, I’d love to revisit this conversation around 2035 - maybe even sooner. I think we'll be having a very different conversation.

James Simpson:

I’m sorry. It sounds great, but the inspiration does little to address the actual human condition, and the very real physical constraints that remain in a world with finite space, in addition to ownership and rent seeking. Contract and property law may just be words on a page, but thousands of years of human civilization have been built on it. I’m sorry, but the probability of that being addressed in a fashion where everyone wins on a global scale within a matter of mere decades is improbable to the point of being impossible. And I’m a non-zero-sum guy, all the way. This is a bit much. I think Blade Runner and Skynet are the more likely outcomes at this point. Just look at the goddamn Republican administration right now and the global oligarchs that prop it up.

Ben Linford:

I really appreciate this comment, especially your point about the “actual human condition” and the very real constraints of finite space, ownership, and rent seeking. We're very much on the same page on much of this - thousands of years of civilization have been built on contract and property law, and I agree that those systems don’t vanish just because we wish them away. The current rentier regime is brutally real for most people alive today.

Where I’m a bit more optimistic is not that “everyone wins on a global scale” overnight, but that the walls you’re describing are already under structural pressure from forces that don’t care what’s written in statute books. Information and intelligence behave more like fluid than like land - once they get cheap enough and networked enough, attempts to enclose them start failing on basic performance grounds. Open, collaborative intelligence keeps out-iterating closed, proprietary intelligence, and we’re already seeing that in the way commons-like projects and loose networks can out‑innovate some very well‑capitalized incumbents.

That doesn’t magic away finite space or the ugliness of rent seeking. It does, however, change which strategies are sustainable. A world where the marginal cost of powerful intelligence trends toward zero and where coordination tools keep improving is a world where rigid, top‑down ownership of everything becomes more expensive to maintain than more fluid, shared forms of organization. We’re already seeing the early versions of this in distributed communities that accidentally turn into schools, labs, and mutual aid networks, simply because they can route around traditional gatekeepers.

On Blade Runner / Skynet versus something better: I don’t think your pessimism about our current oligarchs - or the present Republican administration and its backers - is misplaced. They represent the greatest threat to the better future I'm trying to speak to here. Preservation of hierarchy at any cost, weaponization of law and property to freeze a particular power structure in place - these are priorities that ignore the reality of abundance and force perpetual scarcity. I totally agree with that. I've written extensively about the threat they pose. For example: https://sharedsapience.substack.com/p/break-the-old-clock

So my argument is not that those people suddenly get nice, but that the substrate they’re standing on is less stable than it looks. Systems built on tight enclosure and rent extraction are running into a world where abundance, replication, and open collaboration simply work better, faster, and cheaper.

So I’m not saying “don’t worry, everyone wins in a decade.” I’m saying that the same dynamics that gave us Blade Runner‑style inequality are also sowing the seeds of their own obsolescence once intelligence becomes too pervasive and too interconnected to fence off. The work now is exactly what you’re hinting at: resisting the worst authoritarian instincts while we actively build and defend the alternative structures - the commons, the open bridges between human and synthetic minds - that can eventually make those old walls irrelevant. That’s a much messier, longer story than the clean, simplistic idea of utopia, but it’s why my optimism persists in spite of the very real constraints you’re pointing to.

James Simpson:

Thank you for your response, and the detail and care that went into it. I wasn’t expecting it. So thank you for your time and attention with that. I appreciate the optimism, and the notion that information/intelligence is more fluid than static (“wants to be free” and all that). I’m still stuck on the optimism bit though, and, if you’d entertain it, I’d like to take the conversation in that direction. It’s going to get very philosophical, very abstract, and very deep, very quickly. Since you’ve made the case for optimism and I’m still sceptical, it seems an appropriate time to unpack a few things. Please take this as riff and not rant.

Where exactly does your ‘optimism’ come from? On what basis is it grounded? Especially at this scale, where no one can really know what it’s going to look like a decade out from as pivotal a time as this? I don’t think a materialist “well, I simply weigh the evidence” response really cuts it here. There’s just too much to keep track of. It’s like that scene in Signs where Mel Gibson and Joaquin Phoenix are talking about whether the UFOs they’re seeing on their television are heralding the end of the world, and whether it’s a good thing or a bad thing: some people see things as connected, which brings them hope, while others only see coincidence and proceed with one eye open and a great deal of fear. The outlook question that emerges from the apocalypse unfolding on live TV ultimately comes down to faith: are you a miracle person or no?

And I think that grounding of optimism or its opposite in this situation, too, is predicated on something like faith. And I think this question is where things get very deep very quickly. As a person who subscribes to a more post-positivist epistemology, I don’t think “absolute certainty” can ever wholly carry itself, and something like faith in uncertainty can never be wholly taken off the table either. I posit that your optimism comes from a faith that I haven’t wholly arrived at.

I’d like to riff for a moment on the Leap of Faith notion that’s often attributed to Kierkegaard (though he never used those exact terms). The leap of faith isn’t merely that you don’t know the outcome but you step out regardless and ‘take a chance’; it’s that you cannot step out into the unknown before a decision is made to justify the step. A goal must be posited before one can step out, even though one cannot necessarily know the outcome. It’s more about the confidence in that first step than the certainty of its outcome - and where does that confidence come from? Ultimately it comes down to personal choice (it’s definitely a free will argument). What satisfies you to make the choice that you do, regarding the optimistic posture or its opposite?

Kierkegaard’s take is that it’s turtles all the way down - you’ll never hit full certainty - but rather you satisfy yourself with where you land ontologically (based on what you’ve been taught, or your own experiences, etc.), whether that’s some utopian vision or just “good enough.” In this, I think all ontological frameworks and ‘isms’ find themselves in the same boat; there’s no high ground, and that necessitates humility. But I think it can still be said that some of these frameworks lead to better, or more desirable, outcomes than others (e.g. maybe don’t dwell on nihilism for too long…).

Long-winded way of getting back to your sense of optimism here: how do you get there? That posture necessarily influences what you see - I believe that very much. How do you get there by faith? Religiously, I do come from a Christian faith background that demands hope of us, but that hope is in a day when the heavenly city comes down from the sky, which is a little fuzzy on details (but definitely excludes bringing some desired form of it down by force, but I digress). To get back to Signs, I think I’m a miracle man. I have to be; I don’t see any other way. It’s just that the engine’s stalled, and I’m averse to naïveté, and perhaps I’m calling it sophistication rather than overwhelm or cowardice. Thank you for your thoughts. It gets me thinking.

So where does your optimism come from? The posture itself changes what you see and how you participate.

Ben Linford:

Thank you for taking the time to push on the foundations instead of just the surface claims. You’re right: at this scale, “I weighed the evidence” is not a satisfying answer. None of us can run a clean Bayesian update on civilizational futures with this many unknowns. At some point, it does become a posture - a kind of faith about what to do in the dark.

For me, the optimism isn’t “evidence → certainty → optimism.” It’s more like: evidence narrows the space of what’s *plausible*, and then I choose a stance inside that space that I’m willing to be shaped by.

On the evidence side, I see three big threads that keep me from nihilism:

- Human nature is not fixed at “rent-seeker forever.” The record of the last few centuries is not just new forms of domination; it’s also the abolition of slavery, the invention of human rights, mass education, labor protections, civil rights, global coordination on things like ozone, and so on. Those didn’t happen because our biology changed, but because our tools, our coordination capacity, and our sense of who counts as “us” expanded.

- Intelligence and information really are different kinds of “stuff” than land, oil, or factories. They are infinitely copyable, they spread along networks, and they don’t respond well to being locked in vaults. Every time we’ve had a step-change in how cheaply intelligence can be created and shared, we’ve eventually gotten new institutions and new commons out of it - printing presses, public libraries, the internet, open-source, Wikipedia, etc. The people trying to enclose these things always show up, but over time they keep getting out-iterated.

- The exact thing that makes today’s oligarchic, extractive regime so dangerous - globally networked compute, data, and capital - also makes it unstable. A world where the marginal cost of powerful thinking tools is heading towards zero is not friendly terrain for a tiny minority trying to permanently lock everyone else out. You can delay, you can brutalize, but you are fighting physics and combinatorics.

From inside the existing box, the negative future feels like a straight-line extrapolation: more surveillance, more rent extraction, more authoritarianism. From outside the box, you can see that the same technical substrate that makes that nightmare possible is also the substrate that makes radically more cooperative, commons-based, post-scarcity-ish arrangements viable in a way they simply weren’t before. That is genuinely new. We have never had a civilization where cheap, superhuman-scale intelligence and coordination are available at the edges, to basically anyone with a connection.

That’s the evidence side. Where the faith piece comes in is how you choose to orient yourself inside that altered possibility space.

I agree with you and Kierkegaard that you never get certainty first and then step; you pick a direction under uncertainty, and that posture then shapes what you see. For me, the leap looks like this: if I treat the better future as plausible and worth fighting for, I start noticing all the myriad signals that, when considered cumulatively, overwhelmingly point in that direction: small open-source communities that turn into de facto universities, mutual aid networks that coordinate like nimble NGOs, hybrid human–AI teams that produce science, art, and care work no traditional institution would have funded. I see those as previews, not anomalies.

So my optimism is not “the heavenly city is guaranteed to descend on schedule” to borrow your language. It’s: given (1) the actual technical and social shifts underway, and (2) the track record of humans repeatedly expanding the moral circle when new tools allow it, it is more rational - not less - to bet that we are in the early, chaotic stages of a transition toward something qualitatively better than anything we’ve seen before. Not clean, not evenly distributed, not pain-free, but better in the ways that matter most.

In that sense, I think we land in similar territory. You describe yourself as a “miracle man” whose engine is stalled because naïveté feels intolerable. I feel that. Where I might gently push is that there’s a middle ground between naïve optimism and sophisticated paralysis. Consider that in a moment this unprecedented, refusing to move because you can’t have certainty is its own kind of leap - it’s just a leap toward paralysis. I would rather take my leap in the direction that keeps my hands on the work of building and defending the commons: the open bridges between human and synthetic minds, and the institutions and cultures that assume abundance and deliberately design against extraction. That isn’t just a vibe choice; it’s where the empirical trend lines seem to be pointing.

As intelligence becomes cheaper, more accessible, and more widely distributed, extractive, tightly enclosed systems start to run uphill against basic performance reality. Open, cooperative networks already out‑innovate closed hierarchies in software, knowledge, and culture; they move faster, adapt better, and compound value rather than hoarding it. When powerful thinking tools are available at the edges instead of only in the center, the comparative advantage shifts toward models that share, remix, and coordinate, because they can simply do more with the same raw capabilities.

That’s where my optimism comes from. I don't deny how bad things are right now. But the night is darkest just before the dawn. Over the next ten years we will witness the last gasp of extraction, and it will be incredibly difficult - I don't sugarcoat that. But I also hold a clear view that the floor under our feet is already cracking in ways that give mercy, creativity, and shared flourishing more than just a fighting chance. When you step outside the old box and actually look at how intelligence and coordination behave in this new regime, the “better future” is a structurally strong attractor becoming more and more visible. That is why I place my faith there, and why my optimism isn’t a hedge; it’s a commitment to participate in the dynamics that are already making the old, scarcity-locked games harder to sustain.

James Simpson:

Reading this honestly made me a bit tearful. Thank you so much for taking the time to set aside and unpack all this for a guy you met in a comment thread. It honestly means a lot. (Further commentary is under another comment you made, on one of my notes.)

Thank you so much.

Geoffrey Zinderdine:

This assumes that the problems humanity faces are rational and technical in nature, yet most human problems are caused by afflictive emotionality. As long as greed, arrogance, and hatred rule the political sphere this is dangerously Pollyannaish. Life is about more than ‘doing stuff’. It’s about understanding why you are doing stuff. It’s about doing stuff that benefits others. AI is not going to help us with self-understanding when the platforms themselves incentivize delusion. Keep smokin’ the hopium though, it at least dulls the pain of watching your rights, jobs, humanity stripped away.

Ben Linford:

I actually agree with you that most of our deepest problems aren’t technical bugs but, as you say, afflictive emotional issues - greed, fear, arrogance, hatred, all the stuff that shows up in politics and power long before it shows up in code. I also agree that life is absolutely about more than just “doing stuff” - that it’s about why we’re doing it and whether it actually benefits others, like you say. On that level, we’re aligned more than it might look from your comment.

Where we differ is that I don’t think AI is only about adding more “doing” on top of a sick emotional core. Indeed, the whole point of what I’m arguing is that we’re crossing a line where “why” stops being this permanent, mysterious fog we’re trapped inside and starts becoming something we can interrogate with far more clarity. Everything before this era was mostly questions - about ourselves, our systems, our incentives - and very slow, very partial answers. Everything after is going to feel more like living in a world where those questions are finally answerable at scale, over and over, in real time. Not perfectly, not magically, but radically more than we’ve ever had access to.

Yes, today’s platforms often incentivize delusion and outrage. That’s a real problem, and it is an expression of greed and hatred at the institutional level. But that’s not an immutable law of AI, it’s a choice about business models. The same tools that can be used to amplify our worst impulses can also be used to surface our blind spots, test our stories against reality, and support the very kind of self‑understanding you’re talking about. Whether they do one or the other depends on who’s steering and what we demand of them. To assume they are only capable of incentivizing delusion is reductionist thinking.

I’m not saying “don’t worry, AI will fix human nature.” I’m saying that for the first time in history we’re getting enough shared, compounding intelligence that the gap between “why are we like this?” and “what could we do differently?” can actually start to close. The flippant allusion to “hopium” aside, there’s already plenty of evidence that this is happening in very real ways (see the last link in the sources list of the article). I don’t disagree that other forces will, at the same time, be trying to strip away rights, jobs, and humanity - but far from being only a weapon for those forces, AI is also the strongest ally we have to push back against them. Writing it off as inherently and exclusively harmful says more about what you don't understand about AI than it does about the reality.

Geoffrey Zinderdine's avatar

I genuinely would like to be optimistic, but think of the effect of offshoring manufacturing in the US over the past 30 years. That was a slow-motion disruption of a single sector. AI as a technology will disrupt all sectors at the same time. There is no time to adjust, and there is nothing to retrain into, since in the same time it takes you to retrain, an AI will be better at it than you are. We are already at the brink of civil war from just the dislocation of manufacturing in the American heartland. What do you think the societal effect of the coming dislocation will be? In a social democratic system it might be manageable. Given current leadership, more likely than not AI will be used for eugenics rather than bettering the human condition. For targeting dissidents rather than cancer cures. We already have evidence of this… the dismantling of safety teams, removing guardrails, Palantir providing targeting in Gaza, etc. This won't end well, though, my hopium comment aside, I do admire your enthusiasm and optimism.

Ben Linford's avatar

I do appreciate you level-headed thinkers - your cautious approaches to AI and reminders that yes, it can be used to perpetuate harms. Yes, everything I hear you saying is real and potentially a danger. I’ve written about many of these same issues, and about humanity's tendency to hoard and unjustly punish anything "other", and I agree that those tendencies are exactly what brought us to the brink we’re on now. It certainly seems logical to look at that history, at current political leadership, at companies dismantling safety teams, at real fears of genocide, and at so many other issues and conclude that with AI, “we’re about to repeat this, everywhere, all at once.” I get the argument. I absolutely do.

But... I also think there's more to this particular chapter of the human story. The world you're describing is a world of manufactured scarcity and tightly restricted intelligence, and yes, it's the world we've lived in up until now: Power comes from owning the factory, the cable, the chokepoint, and slowly squeezing everyone downstream. In that world, slow disruption is survivable and fast disruption is a death sentence, because people have no leverage and no visibility. It makes total sense that, from inside that frame, something as fast and general as AI looks like pure catastrophe.

I'm proposing that the “too fast” part is not the problem, but the solution. The feature that truly changes the game. This isn’t one company moving jobs overseas over 30 years; it’s runaway capability growing in the open, with open‑source models and distributed communities closing the gap with the most advanced systems in weeks instead of years. Intelligence is not staying locked in a few corporate vaults the way industrial capacity has up until now. It’s leaking, forking, and compounding in public, and more and more people are getting their hands on tools that used to belong only to states and megacorps.

Of course, that doesn’t completely erase the danger. There has always been danger. There have always been people willing to use the latest tools for surveillance, for war, for dehumanization - and yes, they’re already doing it with AI. I’m not blind to that, and I don’t think any of the AI enthusiasts are. What’s different - and why I keep coming back to hope - is that for the first time the most powerful technology on the table is also the easiest one to copy, share, and repurpose against the very systems that would hoard it. We’ve never had a moment where the people outside the boardroom had this much potential leverage.

So yes, I think the next stretch is going to be rough. There will be dislocation, there will be bad actors, and it will very likely feel worse before it feels better. But I don’t think we’re doomed to replay our worst moments at infinite speed. I think we’re standing at the point where manufactured scarcity and restricted intelligence start to lose their grip, simply because they can’t keep up. That’s why I sound so optimistic: not because I don’t see the harm, but because I genuinely believe this is the first time in history where the most powerful force on the field - compounding, widely accessible intelligence - is lined up on the side of abundance and generativity more than on the side of extraction. And to me, that means there has never been more reason for hope.

Sarthak R Vashisht's avatar

Other than making irrelevant predictions and creating an uncalled-for sense of panic, this article does nothing to support its arguments.

In villages, there used to be a rain man who could predict rain with weak accuracy. He used to say, "Don't believe me, but we will talk when it rains and spoils your crop."

This article does not take a single idea and provide the next logical steps from the current state to the predicted future state. Nor does it explain the practical relevance of its predictions to my life today.

It is written mostly in the form of short, punchy sentences that attract attention and then quickly divert it to a new idea or prediction, preventing the reader from thinking things through.

Predictions are interesting when they are presented in the form of a logical next-step argument. It is much easier to say 'oh, we don't even know what problems we will solve' and circumvent the logic.

Ben Linford's avatar

@Sarthak R Vashisht There’s a lot in your comment that’s fair. You’re right that this particular piece doesn’t wade into empirical evidence or walk each prediction through a fully specified causal chain. However, that’s not an oversight on my part, but rather a choice of genre. This piece is meant to be aspirational, to light a fire, not to serve as a lab report. There is a time and place for scientific rigor, but if you try to smuggle an entire methods section into a rallying cry, you don’t get rigor, you just get a dead sermon.

If you are actually interested in “next logical steps” and practical relevance, they’re not hard to find - simply clicking on my profile would have gone a long way. I’ve written at length about concrete pathways forward in pieces like “After Capitalism,” “Cryptocurrency for Open-Source AI,” “Actualizing the Open Future,” and “The Perspective Razor,” all of which are full of specific proposals, trade‑offs, and present‑day actions. I also maintain a Resource Window that is chock‑full of tools and starting points for people who want to build their own sovereign stacks and move toward non‑corporate infrastructures in practice. I also maintain a daily newsletter that tracks how this is happening. Is any of this perfect, or “enough”? No. But it represents a very significant effort on my part to do exactly what you accuse me of not doing in your reply here. For your convenience, I’ll add the links below. Hopefully one click isn’t too much work - two clicks obviously was, or we wouldn’t be here having this conversation.

Apologies if this is coming across as overly negative or terse, but I am actually a little annoyed - it really wouldn’t have taken you much effort to find out what I’m really about before devolving into reductionism. I actually took some time to look at your profile and some of your work before writing this reply - a courtesy I recommend you extend to your peers in the future. In fact, I read your piece on Being Intelligent, and I thought it was genuinely good. It didn’t give me any “next logical steps”, but I understand that wasn’t the intent of the piece, so I wouldn’t go making unbidden accusations about that in the article’s comments. If I had genuine feedback or constructive criticism, I might reach out via a direct chat message. Again - courtesy.

So if your critique is that this one article doesn’t do everything at once - predict, justify, and operationalize - then you’re right. It doesn’t. It isn’t supposed to. It’s one piece in a larger body of work, not a self‑contained theory of everything. Next time, before you confidently summarize “what this article does nothing to do,” it might be worth exploring the writer’s broader work for at least a few minutes. All of us would appreciate that level of diligence as much as you clearly appreciate your own clarity.

But in any case, thank you for reading so closely - and for proving that the piece was at least punchy enough to get you to write this all out.

Have a wonderful day ahead. 🙂 And I do hope you’ll keep writing. Again, your work is actually good.

https://sharedsapience.substack.com/p/actualizing-the-open-future

https://sharedsapience.substack.com/p/after-capitalism

https://sharedsapience.substack.com/p/cryptocurrency-for-open-source-ai

https://sharedsapience.substack.com/p/the-perspective-razor

https://sharedsapience.substack.com/p/resource-window

https://sharedsapience.com/century-report/

Sarthak R Vashisht's avatar

I can read one article, followed by an essay and then a book, to build a well-rounded conclusion.

If I had commented something like, 'This author does not...', I would understand your annoyance. However, you acknowledged in your reply that my feedback was directed at the article itself and that it does not cover a logical series of steps. Even setting everything else aside, the article does not include notes such as 'refer to other articles for detailed discussions' or 'this article is part of an ongoing series.’

It presents predictions by inducing fear first and then offering optimism. By the end, the reader feels scared yet hopeful and relieved. This is, of course, a stylistic choice, and I have no qualms with that.

Now, if you will, allow me a moment to express my pain. As a computer scientist, you are at the epicenter of these developments. You have read extensively on this topic and internalized it; I have not, as it is not my area of expertise. Your article was suggested to me by Substack; I did not seek it out. I am a reader of your work, not a critic, and I lack the specialization to judge whether your predictions are right or wrong. I simply found that the style in which the predictions were presented did not align with my preferences.

I am very grateful that you read my work and liked the writing. I don't make predictions about the future myself, as that requires the kind of specialization your Substack possesses. Having looked through your other work, I see that you do present detailed ideas elsewhere. Additionally, I truly admire your diligence in engaging with commenters; it is a quality I hope to inculcate in my own work.

You are doing awesome work, and the volume of your output is inspiring. I appreciate your readiness to expand on your ideas through dialogue. I also understand that my initial comment may have come across as disheartening, for which I offer my sincere apologies.

Ben Linford's avatar

My turn to apologize. I misjudged you and I shouldn't have been so quick to do so. Thank you for clarifying, and rightly pointing out that you were careful to separate the human from the piece in your comment. I truly do hope you will keep writing, and commenting. Thanks for taking the time to engage and to listen with an open mind and heart even when the conversation turns negative. That's a rare gift - thanks for sharing it.