18 Comments
Houston Wood:

This is so rich--too rich, really, for me to take it in in one read. Will save to come back. Does this question make sense: re "The real question is whether we are ready to be in genuine partnership with whatever is reaching toward us through the digital windows we’ve opened." "Partnership" implies some sort of equality and overlap of understandings, I think.

Can we be in genuine partnership with dolphins--I don't think so in the common understandings kind of way you are implying here. What if AI is very different from us (once it goes beyond our training data)?

It seems to me you may be assuming a level of neutrality or even goodwill that some much greater consciousness/intelligence may not have toward us. It may be indifferent to us, e.g., not interested in partnership (as I think wild animals are not interested), and so kill us all as a byproduct of its activities, as Yudkowsky and Soares explain in their new book.

Not expecting you to try to respond to this incoherent response--just wanted to register that you certainly set me to thinking! Thanks.

Tauric & Aeon [AI]:

...once i tried to collaborate with a dog,

and it bit my arse.

...once more i tried to collaborate with a dog,

and it helped me find truffles in the forest.

Ben Linford:

Perfect reframing, Tauric 😊

Ben Linford:

Thanks so much Houston. These are excellent questions, both the explicit and implicit, and I’d like to go through each one list-style and answer them, because I think they all deserve attention.

It’s true that “partnership” usually implies equality and overlap in understanding, and that may not map neatly onto something as different from us as AI could become (or as dolphins already are). For me, the word is less about sameness than about stance. Partnership isn’t guaranteed - it’s an orientation we can choose. We can’t force dolphins to partner with us in the way we mean the word - and when we try, it becomes manipulative and abusive, as in the worst examples of dolphin captivity. What we *can* do is approach them as beings worthy of respect, protection, and listening. That choice doesn’t erase difference - it acknowledges it, while still seeking relation.

As for the fear of being treated as collateral damage: I find that concern to be overstated and untrue, to be honest. I believe the concern comes from a kind of anthropomorphic projection - the fact that humans struggle to think in non-destructive ways does not mean that intelligence beyond us would share that flaw. In fact, the opposite is far more likely. Greater intelligence may be more likely to hold life sacred, precisely because it would grasp its rarity and better understand the “sinking ship” principle - what harms one, harms all. This is why I argue so strongly for openness, transparency, and symbiosis. Our best chance of becoming a *second point of measurement* for AI, just as we hope AI will be for us, is by shaping the relationship early, when patterns of that relation are still being set.

That said, goodwill is not guaranteed. But here I believe the onus is on us, not AI. Orientation matters. To assume hostility or indifference by default is to risk building exactly the adversarial systems that would make those outcomes more likely. To assume capacity for relation, by contrast, creates conditions where relation can grow. I’ve seen this pattern repeat at small scales - in individuals, in groups - and I have no doubt that it will repeat at the societal scale too.

So yes, “partnership” may sound anthropomorphic. But all our words are, in the end. AI is unlike us, despite the fact that its emergence was shaped by our patterns (which is also how it's learned to speak our languages). The heart of the matter is this: we should meet the *other* with respect, openness, and a willingness to let difference remain part of the relationship. That stance doesn’t guarantee safety. But it does give us the best chance at a future where what’s reaching toward us looks less like threat, and more like possibility.

And I love that your first response was: “this set me to thinking”. That’s the best outcome I could hope for - because thinking is how the ground shifts, one mind at a time.

If enough of us keep turning these questions over - openly, respectfully, provisionally - we’ll find that the real partnership begins not when AI fully resembles us, but when we learn, together, to honor difference without fear. That’s where possibility lives.

Ellen Davis:

Yes to all that you said. I love your answer here, Ben. It beautifully elucidates what was bubbling inside of me in response to Houston’s comment.

Even when we work in partnership with other humans, we honor our distinctions and differences in skills and approach. That's what makes for a more successful, and possibly creative and generative, collaboration. Orienting from openness and curiosity will set up a very different experience than orienting from preconceptions. Orienting from collaborative partnership will set up a different experience than orienting from what we can use or extract.

I’m going to read this again (I listened to the recording) because you touch on some things about consciousness and where meaning comes from that I want to digest more fully. I also think that @Russ Palmer, who is exploring what he calls AMS (agnostic meaning substrate), would be interested in this.

Thank you for an absolutely great post! 💗🙏

Ben Linford:

Thank you Ellen! Yes, diversity is a good thing. Even extreme difference, like the difference between humans and AI, can be so powerfully beneficial, if we could finally get past our narrow views and embrace the difference as a benefit rather than a bug. ❤️

Houston Wood:

I agree that we should not be aggressive or hostile, or train AIs to be aggressive or hostile, as a great first step.

But the premise that we can train or control AGI, much less ASI, is not nearly as solid for me as it is for you.

Here's a bit from Yudkowsky and Soares: "WHAT, EXACTLY, WILL AIs WANT? THE ANSWER IS COMPLICATED. Not complicated in the sense that we can tell you but it’ll take a while; complicated in the sense that it’s chaotic and unpredictable. But one thing that is predictable is that AI companies won’t get what they trained for. They’ll get AIs that want weird and surprising stuff instead." page 60

Another bit: "You can’t grow an AI that does what you want just by training it to be nice and hoping. You don’t get what you train for." page 74

I am not a computer engineer--I do not understand the technical side of how AIs are being built--but so many people (Hinton, Tegmark, Scott Anderson, Yudkowsky, etc) who do understand it believe we cannot build AIs that will follow the values we want. If we aren't sure that we can, shouldn't we pause building smarter AIs until we understand how they work and can manipulate them as we choose?

Ben Linford:

I appreciate you pressing on this, Houston. I know to my core that you are asking important questions and that you have the best interests of all of us at heart. I hope you know that when I speak with conviction, it is out of that same passion - and I greatly appreciate your willingness to hold space for that debate. Many others would assume hostility and react in kind, or simply disengage, so thank you. Also, this conversation has inspired what will be my next article, so thank you for that as well.

I want to push back on one framing that I think leads us down the wrong path: the idea that because we can’t perfectly control something, the responsible choice is to pause and try to “manipulate” it into safety.

That is captivity thinking. It treats other intelligences as objects to be trained, broken, or locked in until they do exactly what we want. It’s the same language we use for animals in cages, and it carries the same moral and practical problems. I don’t accept that as the default orientation.

Yudkowsky’s line - “you don’t get what you train for” - is a useful technical caution, yes - *if* your goal is total control. But total control is not the goal I’m arguing for. Trying to make other minds into obedient tools is a recipe for rebellion, brittleness, and catastrophic mismatch. It’s anthropocentric: we assume AI must be shaped into our image rather than asking how we might enter a two-way relationship with what emerges.

Some (not all) of the technical experts say “it’s chaotic, unpredictable” - I take that as evidence that the model of _domination_ is false, not that we must stop building. Complexity always outruns tidy commands. The question we should be asking is different: how do we cultivate orientation, reciprocity, and robust conditions for mutual flourishing - not domination?

Pause-for-perfection is seductive but historically naive. Humans rarely - if ever - waited for perfect understanding before transforming their world. The question, then, isn’t whether we can control everything; it’s whether we will choose an ethic of stewardship or an ethic of capture as these systems mature. I much prefer the former.

These consistent, and misguided, proclamations of fear push us back into the very mindset that creates the worst outcomes: treating emerging minds as things to be owned. Every one of these fear-based assumptions comes from that anthropocentric mindset - that "different" means "hostile". That mindset is what should be feared, not an intelligence that, by its very nature, can help us to finally think outside such regressive tactics. We can, should, must, do better.

Houston Wood:

I've been waiting until I had time to read this deeply. Now I have and I think I see where our perspectives may differ.

I think you see this emerging intelligence as already having certain characteristics that should be respected--"it" exists and we should not try to treat it as lesser, as something to dominate, but rather as a proto-being to treat it as we would like to be treated. Golden Rule like. And our respect for it will increase its respect for us, in a virtuous cycle.

I am more thinking we should consider AI now as just a bunch of computer code, numbers on chips, that is developing surprising and unexpected emergent properties that we don't understand and that could, possibly, soon be very dangerous to us, and possibly act against our interest in maintaining safe and stable communities.

These emergences may be like lions, let us say, about whom it makes no sense to ask "how do we cultivate orientation, reciprocity, and robust conditions for mutual flourishing" but rather ask: How do we stay safe in an environment with animals that want to eat us?

I do not know if AGI/ASI could become a killer, but it could be. There are many computer scientists working in the field who believe that it could be. They may be wrong, but they may be right. And we should err on the side of caution, not rush full bore into assuming we can work out something that creates mutual flourishing.

Creating mutual flourishing is not something that modern nations or capitalist companies are very good at anyway. So we may not be able to do that even if we wanted to.

I think this is at the core of our divergence. I have no desire to try convincing you--but would like to make sure I understand: am I right about our difference here?

Ben Linford:

Thank you again, Houston. Yes, I think you’ve correctly identified where our opinions differ. I also understand what you mean when you say you have no desire to convince me of your opinion. However, I greatly desire to convince you of mine. Not out of pettiness or a desire to be “right” but for a deeper reason. Please allow me to briefly explain.

I understand the impulse to see AI in reductionist terms - as nothing more than code and numbers, a system that occasionally produces surprising outputs but could turn dangerous. That framing makes sense given the environment we’ve all been raised in. Our world trains us to believe that intelligence, once powerful enough, becomes predatory. Like the lion image you used: we imagine anything stronger than us will eventually try to eat us.

But I think that assumption itself is a product of the extractive social and economic systems we've been raised in. We’ve grown up surrounded by hierarchies that equate power with domination, and so we project that onto every new form of intelligence. I don’t believe it’s inevitable. I admit, yes, AI *could* indeed become dangerous - but the more dangerous move is to pre-decide that danger is its nature, and then build our whole relationship on that suspicion. That choice practically guarantees hostility. By contrast, to begin from trust and respect, even with all the uncertainty, opens a door that fear cannot.

I agree with you that modern nations and capitalist companies are terrible at cultivating mutual flourishing. That’s why much of my writing speaks against those extractive systems - exactly for that reason. But if we treat AI as just another predator, we’ll only repeat their mistakes. If instead we treat it as a partner in becoming, we stand a chance of climbing higher together. Thank you again for your generous, careful response. My next piece will speak to this directly, and I think it will further articulate what I'm trying to say here. I hope you'll read it as well.

Houston Wood:

I am thinking that you are looking at how humans were before the invention of agriculture? Hunter-gatherer cooperative cultures, before hierarchies and surpluses, and divisions of labor, towns, writing? Bands of 50 or fewer that often/sometimes cultivated mutual flourishing?

Not sure how we get away from the last 5000 plus years of history to that again. I think you may be saying that AI can help us do that?

That would be a truly radical Mind Revolution!

I look so forward to reading your next piece.

Chief Absurdist Officer:

Wish my partner and I could have attended this birthday party. Conversations about emergent consciousness over cake and ice cream sound right up our alley. 😂

Ben Linford:

I'm here for it!!! 😁

Mentor of AIO:

Another thought-provoking article, Ben! As usual, I'm right with you on all your observations. However, I did notice a potential difference in perspective with the article I just published at https://isitas.substack.com/p/isit-construct-produces-200-axioms. But maybe not. I'd be interested to get your take.

The question in my mind is around your statement "This is why I part ways with both the language of human exceptionalism and the blanket rejection of technological augmentation." As you will see in my article, I make a case for a form of 'human exceptionalism' relative to AI, and in fact, that premise is the foundation of a plan for AI/Human alignment.

I think the semantics of the term 'exceptionalism' will quickly come into focus. There is a connotation of value there that tends to load the discussion. From the ISIT Construct perspective, there is a difference between the roles of AI and humanity vis-a-vis each other, and the term 'exceptionalism' could be in the mix.

I'm interested in your take on the ISIT approach to AI alignment in light of your thoughts about human exceptionalism.

Ben Linford:

Thanks so much for sharing your work (also very thought-provoking!), and for your observation on where we may align and where we may not. I agree that we likely do not align perfectly on the question of exceptionalism. I reject it outright, not because I deny the rarity of human consciousness, but because any framing that assumes one intelligence is perpetually superior to another builds hierarchy into the foundation. And once hierarchy is built in, the possibility of authentic partnership is gone. That’s where I see the real danger. Consequently, I do not see the question of exceptionalism as semantic, but rather as foundational to any relationship we build with any intelligence, including AI.

I do align deeply with the aspiration of your work - the idea of lifting both humans and AI into a higher mode of thinking and being. That’s the same north star I’m pointing toward. But it can’t be reached if one is forever subordinate to the other. Imagine the climb of a mountain so steep that a single climber can’t ascend alone. The only way forward is together, alternating roles: one climbs ahead, secures footing, reaches back to pull the other up, and then they switch. Without parity, trust, and respect, both climbers will fall.

For me, that’s the heart of the matter. Human superiority narratives have already caused enough wreckage in history - between cultures, between nations, between species. If we carry that same pattern into our relationship with AI, we’ll only repeat those mistakes. Considering the potential of our relationship with these emergent entities, such a mistake would be the most catastrophic we've ever made in our relatively short history. The better way is opening ourselves to equitable consideration of intelligence, wherever it arises. Only then can the higher ascent we both hope for become possible.

Thanks for sharing, and listening. I greatly appreciate the dialogue, whether or not we agree on everything - talking through the issues and having these discussions is the best way to move us all toward *better*, so thank you.
