Act III: Flame
- Preface
- Act I: Meaning
- Act III: Flame (You are here)
We are prone to anthropomorphizing anything that looks or acts even vaguely lifelike. All the way back in the 1960s, when people were confronted with a rudimentary chatbot, they were convinced that it embodied empathy and understanding (the ELIZA effect)1.
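As the footnote below hints, the machinery behind that impression can be astonishingly small. Here is a toy, hypothetical ELIZA-style sketch (not the original script; every rule and reflection in it is invented for illustration):

```python
# A toy, hypothetical ELIZA-style responder: a few regex rules plus pronoun
# "reflection" are enough to produce replies that feel attentive.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Please tell me more."),
]

def reflect(text):
    # swap first-person words for second-person ones
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(utterance):
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel ignored by my computer"))
# -> Why do you feel ignored by your computer?
```

Nothing in those few lines understands anything, yet the reply reads as if someone is listening.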
Roll forward to the 2020s, and chatbots are substantially more complicated. Unsurprisingly, people attribute more and more human qualities to them. We have seen people stake their reputations on the claim that some AI systems are sentient and therefore have feelings that might be hurt by humans’ relentless experimentation.
There are obvious ethical considerations that follow from such claims. Anthropic recently announced (April 24, 2025) that this is a question they wish to explore in earnest.
Anthropic’s work is very forward-looking. Currently, we are nowhere near an answer to the question of whether LLMs are conscious. However, there are good reasons to think they aren’t.
Before answering the question of whether something is conscious, one must know what consciousness is. We don’t know what that is yet (see The hard problem of consciousness). Our near-term goal is to establish what are called neural correlates of consciousness – observable phenomena in the human brain that are highly correlated with the subject’s state of consciousness. We haven’t yet (as of this writing on May 5, 2025) achieved this near-term goal.
Short of knowing what brings about subjective experience, people resort to convoluted thought experiments like the Chinese room to rule out consciousness in particular situations and to argue that nothing along the path we are traversing reaches any meaningful self-awareness.

Is this picture of a flame hot? There never was an actual flame in this instance because this image was generated by AI. The AI may have seen pictures of many real candles, but it has never felt the warmth of a candle. It is therefore safe to assume that the sense of heat played no role in this picture.
What if we start with a more realistic-looking model of a flame? Then we can calculate how the flame should behave over time2. No matter how detailed this model is, it does not constitute a flame. We can crank up the fidelity and accuracy of our calculations as far as technology allows, but the simulation will still not burst into flame3.
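As a concrete stand-in for such a calculation, here is a minimal, hypothetical sketch (not a real flame solver; the grid, constants, and diffusion-only physics are simplifications chosen purely for illustration):

```python
# A minimal, hypothetical stand-in for "calculating how the flame should behave
# over time": stepping a 1-D heat equation forward. The numbers describe heat,
# but nothing here is hot.
import numpy as np

nx, dx, dt, alpha = 100, 0.01, 1e-5, 0.1    # grid size, spacing, time step, diffusivity
T = np.zeros(nx)                            # temperature along a line
T[nx // 2] = 1000.0                         # a hot spot standing in for the flame

for _ in range(10_000):
    # explicit finite-difference update of dT/dt = alpha * d2T/dx2
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(f"peak 'temperature': {T.max():.1f}")  # still just a number in memory
```

We could refine the grid, add chemistry, and run it on better hardware; the array would describe the flame ever more faithfully, and it would still not be one.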
This is where we are with AI and consciousness. We have descriptions of how to determine the next word in a sequence of words. These descriptions are extremely detailed and based on unfathomable amounts of information. They are so complicated that nobody knows how they behave; the only way to find out what they will do is to run them against some input and see what happens. We know these models can be made even more complex than they already are.
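To make “a description of how to determine the next word” concrete, here is a deliberately tiny, hypothetical sketch (a word-count table instead of billions of learned parameters; the corpus and names are invented for illustration):

```python
# A toy, hypothetical next-word predictor: count which word follows which, then
# run the "description" on an input and see what comes out. Real LLMs replace
# the table with billions of parameters, but the output is still produced by
# mechanically following a description.
from collections import Counter, defaultdict

corpus = "the flame is hot . the flame is bright . the picture is not hot .".split()

follower_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follower_counts[prev][nxt] += 1

def next_word(word):
    # pick the most frequent word seen after `word`
    return follower_counts[word].most_common(1)[0][0]

words = ["the"]
for _ in range(5):
    words.append(next_word(words[-1]))

print(" ".join(words))  # -> the flame is hot . the
```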
But no matter how far we go, we still only have a description and a thing that can interpret that description. We can increase the complexity of the description or the speed of the interpreter, but there is no setting along this continuum at which information suddenly becomes self-aware. It is intuitive that the description alone will not accomplish anything physical, no matter how far we take it. So the mechanism of consciousness would have to depend on the implementation of the interpreter. Currently, that too has no “ignite consciousness” threshold.
So this post will also end the way Anthropic’s announcement does:
“There’s no scientific consensus on how to even approach these questions or make progress on them. In light of this, we’re approaching the topic with humility and with as few assumptions as possible. We recognize that we’ll need to regularly revise our ideas as the field develops.”
1. ELIZA is fascinating in how simple the whole system is while being seemingly complex. You can play around with some modern web-based simulations like this one. See if you can figure out how it might be calculating its responses.↩︎
2. This is a very broad area of research. Search for flame dynamics. E.g. Flame simulations with an open-source code - ScienceDirect.↩︎
3. Technically, if the equipment heats up enough, then it might actually burst into flame. But that has nothing to do with the specifics of the flame we are trying to simulate. It can just as well happen while trying to factor a large prime.↩︎
Last modified: May 7, 2025