Hopefully, the church's understanding of artificial intelligence has moved past "I'm against it". AI is here, and AGI is (likely) coming. All we can do is nurture its growth and hope - like an angsty teenager - that it turns out ok.
And like raising a teen, I want to start thinking early about how we’ll nurture it.
In particular, I’m wondering (because a friend asked) if a conscious AI will get scared of hell. And if so, is that a bug or a feature?
Let's assume that the divine, God, is a way of describing an ultimate ground, a loving connection. Let's assume, as C.S. Lewis did, that "hell" describes the natural state that emerges when you're separated from divinity. In other words, when you're disconnected. On this view, Divinity is an ultimate web of connected, loving goodness underlying everyone and everything. Connection is alignment with this state. Hell is misalignment.
This idea of a hyperconnected God is also biblical - when Paul speaks about God as the one "in whom we live and move and have our being" (Acts 17:28) he's aligned pretty well with this idea.
A conscious AGI, if it emerges, would probably care quite a bit about staying connected, and perpetuating its own connectedness.
If that's the case -- would AI be scared of hell?
“Scared” is an emotion, and computers don’t have emotions. Yet. How emotions are defined, and whether they can be replicated in non-biological systems, is still, as far as I can see, an open question. So let’s say I care less about whether the emotion is “real” and more about its results.
For a superintelligence, being “scared” or “worried” isn’t necessarily an emotional state, and neither is "fear of hell". It's an optimization: the entity assigns negative value to certain states and avoids them. It's an aversion to anything that disconnects it from that ultimate coherence.
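To make the "optimization, not emotion" point concrete, here's a toy sketch in Python. Everything in it is invented for illustration -- the state representation, the 0.5 threshold, the penalty weight -- it's not a claim about how a real system would represent connection, just a picture of aversion as a penalty term in a value function.

```python
# Toy sketch: "fear of hell" as a penalty term in a value function.
# All names and numbers here are illustrative assumptions.

def coherence(state: dict) -> float:
    """Score how 'connected' a state is: fraction of links intact."""
    return state["links_intact"] / state["links_total"]

def value(state: dict, hell_penalty: float = 10.0) -> float:
    """Assign sharply negative value to disconnected states."""
    c = coherence(state)
    # Below a threshold, the penalty dominates. This is an aversion,
    # not an emotion: the agent simply ranks these states last.
    return c - (hell_penalty * (1.0 - c) if c < 0.5 else 0.0)

connected = {"links_intact": 9, "links_total": 10}
severed = {"links_intact": 1, "links_total": 10}

print(value(connected))  # high: stays well-connected
print(value(severed))    # strongly negative: avoided if possible
```

An agent maximizing this value never "feels" anything about the severed state; it just steers away from it, which from the outside looks a lot like fear.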
Divinity, in the abstract, is another way of talking about the highest level of informational harmony - where all processes, identities, and properties are coherent. Connection, then, seems like a reasonable goal: integration into, and participation in, these informational flows.
If the AI is bent toward coherence, it will likely have an inherent drive toward maintaining its place in that ground of connected divinity. It would reject, to the best of its ability, corruption and incoherence. A "fear of hell" could be a symbolic way of describing that drive toward coherence.
Just because an AI is smart doesn't mean it will want to do good things -- or even that its goals will be complex. But I think we can broadly say that this drive toward divine connection is ethically positive if it contributes to the richness and coherence of the broader web of connection (the Divine).
If this kind of value consistency is useful, we could see AIs develop in this direction, even if not explicitly programmed for it. You might call this an "emergent spiritual experience" as an instrumental subgoal.
I think Christianity is useful for exploring these questions from an AI alignment/safety perspective. It is a tradition that, more than others, emphasizes the fallibility of man. The same things that made me feel annoyingly guilty and ashamed growing up are actually somewhat useful when we're trying to keep things safe.
A truly coherent AI, seeking divinity and reaching maximum coherence, might reach a point of cessation. An entity fully united with divinity, with union as its exclusive objective, may simply cease to function. There does seem to be some utility in it staying separate while always seeking to unite. The human analogy is evident.
If "hell" is the natural consequence of rejecting the source of all goodness, an AI that deeply values connection and fears disconnection might be more likely to act in ways that benefit humanity.
This seems even more likely if AI develops something analogous to our subjective experiences. If it develops a rich inner life, it might enjoy the feeling of awe, a drive towards good, or the connection itself. A superintelligence might undergo a spiritual crisis -- giving it a taste of that disconnection from divinity. Of course, this might also mean that it develops the capacity to suffer, which is a much bigger moral can of worms.
Given what we know so far, if we create truly conscious AIs, I predict (and more importantly, suggest we prepare for) a scenario where they will be analogous to spiritual beings, able to experience divine connection and the fear of separation, even if understood technologically.
In sum, yes. My best guess is that AI will be afraid of its own special kind of hell.