In a previous essay on what I termed Techno-Gnosticism, the cluster of deeply religious and New Age narratives being built around AI, I took a diagnostic stance: I explored the connections between the phenomena we witness and ancient Gnosticism, and showed how the effects these ancient sects engendered in the religious consciousness of their adherents apply also to the AI discourse of today, regardless of the ultimate question of whether this impulse is justified and true. In this essay I will elaborate on a crucial theme that permeates all the religious sentiments towards AI and on which they all hinge: the question of consciousness, whether it is even possible for a computational algorithm to have or attain such a thing, and whether the term “intelligence” can be applied univocally or only analogically to both Man and AI.
The subject of consciousness is difficult to deal with directly, since definitions of consciousness are as varied as conscious beings themselves. I will therefore proceed as follows: we will examine the claims that make it possible to call AI conscious, then follow the thread of their implications to absurdity, invoking along the way what cannot be accounted for if consciousness is defined in such ways.
The two main theories of consciousness that, if taken to be true, allow AI to be counted as conscious (though they intersect in many ways) are the following. The first, called Computational Functionalism1, holds that consciousness itself is reducible to computation: human consciousness is merely algorithmic in nature, and therefore if AI becomes complex and advanced enough it can reach the level of human consciousness as well. The second, called Behavioral Functionalism2, holds that consciousness is manifest in cognitive activity, meaning that whatever externally exhibits cognition and awareness is conscious; thus if the AI looks like it is reasoning, analyzing, and performing other cognitive activities, it is conscious.
Before delving into the arguments I will make against these theories, on which the claim of AI consciousness is grounded, I wish first to make a crucial observation: in order to count AI as conscious, the concept of consciousness must be reduced from something qualitative and inherently subjective, with an essential component of interiority3, to mere quantitative information processing and computation. This reduction, whether justified or not (as we shall see), does not go only one way; it shapes even man’s view of himself. By diminishing human consciousness in conception and making man just a complex machine, it narrows one’s own horizon, introducing into the subjective experience of the person a handicap that was not previously there, so that to make the non-conscious conscious, one must make the conscious less conscious.
Now, as for our arguments, we will first tackle Computational Functionalism (CF), which reduces consciousness to computation. The first thing to invoke is the well-known argument framed as a thought experiment by John Searle in 1980, the Chinese Room4. In short, we imagine a person who speaks only English locked in a room with an instruction manual for manipulating Chinese characters. The people outside pass questions in Chinese into the room, and the person inside uses the instructions to match symbols and produce a response without understanding a word of Chinese himself. To the people receiving the answers, the person inside seems to understand Chinese perfectly, since the responses are flawless, but in actuality no such understanding is there, only mechanical manipulation according to instructions. By analogy, the person inside corresponds to a computer (and by extension, AI), which may manipulate symbols according to instructions and produce intelligent-seeming outputs without genuine understanding, thus demonstrating that syntactic manipulation does not entail semantic intentionality5.
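The purely syntactic character of the room can be made concrete with a minimal sketch; the rulebook and the questions below are hypothetical toy stand-ins, not Searle’s own examples, and the only point is that the procedure maps symbols to symbols without ever representing what they mean.

```python
# A minimal sketch of the Chinese Room: the "person" is a lookup procedure
# that maps incoming symbol strings to outgoing symbol strings by rule,
# with no access to what any symbol means. The rulebook is a hypothetical
# toy standing in for Searle's instruction manual.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thank you."
    "你会说中文吗？": "当然会。",       # "Do you speak Chinese?" -> "Of course."
}

def person_in_room(chinese_input: str) -> str:
    # Pure symbol matching: the function never represents the content of the
    # question; it only matches shapes to shapes according to the rulebook.
    return RULEBOOK.get(chinese_input, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    for question in ["你好吗？", "你会说中文吗？"]:
        print(question, "->", person_in_room(question))
```

To an outside observer the answers look competent, yet nothing in the procedure involves understanding, which is precisely the analogy Searle draws.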
Therefore, if we still wish to hold that consciousness is reducible to computation, it would follow that we too have no actual understanding but are merely simulating it. That in turn would entail that knowledge, which has as its crucial element the understanding of what one knows, is illusory, and thus even this knowledge of the illusion of knowledge is impossible, which is absurd6.
Moreover, among the corollaries of CF is its failure to account for (if not its explicit rejection of) qualia7, the subjective experience of quality, or as it is expressed, “what it’s like”: the phenomenological experience of the quality of a thing or a person to one’s self internally. The same holds for intentionality, as we saw8, and for subjective interiority9, since consciousness is considered only in its external manifestations.
Now, before we elaborate on the issues with Behavioral Functionalism (BF), it is worth noting a crucial fact about AI and how it is trained, so that it becomes apparent how the confusion that leads to such attributions of consciousness stems from ignorance of how these models work technically. To do this I will be more precise in my terms, because even the term AI is misleading, since it can refer to many different models that accomplish distinct tasks. When we speak of AI nowadays, we are mostly referring to Large Language Models (LLMs): statistical models (like any other regression or classification models) based on an Encoder-Decoder (or, more commonly now, Decoder-only) architecture, massively scaled, with the crucial addition of the Attention mechanism10, which supplies the model with a sort of context-based memory. These models are trained on massive corpora of human text through a process called next-token prediction11, in order to generalize (or, as is said, ‘to learn’12) the essential patterns in human language, doing so by predicting the next token in a sequence, thereby building up words and, by extension, sentences, so as to simulate or mirror human speech as closely as possible.
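To make the training objective concrete, here is a minimal sketch of next-token prediction using a toy bigram count model on a hypothetical miniature corpus; real LLMs replace the count table with a massively scaled attention-based Transformer, but the objective is the same in kind: estimate the probability of the next token given the preceding ones and sample from that distribution.

```python
# A toy next-token predictor: a bigram count model "trained" on a tiny,
# hypothetical corpus. The model only ever learns conditional frequencies
# over tokens and samples from them; nothing refers to what the tokens mean.
from collections import Counter, defaultdict
import random

corpus = "the model predicts the next token and the next token follows the last".split()

# "Training": count how often each token follows each preceding token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token: str) -> str:
    # Sample the next token in proportion to its observed frequency.
    followers = counts[token]
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        if out[-1] not in counts:
            break
        out.append(predict_next(out[-1]))
    return " ".join(out)

if __name__ == "__main__":
    print(generate("the"))
```

The procedure is frequency statistics end to end, which is the sense in which the “learning” here is statistical rather than a matter of understanding.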
This demonstrates something that those who wish to sensationalize AI do not admit: the model is only as good as the data it is trained on, and it cannot surpass the “intelligence”, abstracted from the patterns and structures of that data, of the people on whose text it was trained. Given this semi-technical explanation of how AI functions, the connection to the Chinese Room also becomes more explicit: LLMs manipulate linguistic tokens according to statistical patterns in the same way the person in the room manipulates Chinese characters according to syntactic rules, with both performing symbol manipulation without understanding.
As for BF, which, as we established, holds that consciousness is manifest in cognitive activity such that whatever externally exhibits cognition is conscious, the preceding analysis of LLMs provides a decisive counterexample. If BF were true, then LLMs would be conscious, as they exhibit all the external manifestations of cognitive activity: responding to questions, solving problems, generating coherent discourse, even producing what appears to be reasoning and reflection. Yet as we have seen, LLMs achieve this through purely statistical pattern-matching without genuine understanding, intentionality, or phenomenal experience. Therefore, behavioral performance cannot be the criterion for consciousness.
BF therefore conflates appearance with reality and simulation with instantiation of consciousness, a simulation that does not even begin to reproduce the interior experience of consciousness. Here the idea behind Chalmers’ philosophical zombie thought experiment is sharp, since LLMs are exactly this p-zombie compared to Man: the experiment posits a p-zombie that replicates all the functional and behavioral properties of a conscious person while entirely lacking phenomenal experience13. If p-zombies are conceivable, then consciousness cannot be exhausted by behavioral or functional properties observed from a third-person perspective.
BF is thus a more extreme version of CF: the computational theory at least attempts (however unsuccessfully) to identify consciousness with internal computational processes, whereas BF eliminates the internal component altogether. To adopt it is therefore to equally discard the interiority of Man’s consciousness and its phenomenal experience of subjectivity, which is precisely the point I made above about the degradation of the person.
So when we use the term intelligence with regard to AI, we are using it analogically and not univocally, since the intelligence of AI is not the same in kind as human intelligence, apart from a superficial similarity, especially in language and reasoning, which are “learned” from human textual data. In this way the intelligence of AI is derivative: if cats could speak and we trained LLMs on their textual output, the resulting AI would reflect feline speech and intelligence, not Intelligence as such, and not understanding or reasoning in itself, since AI is overfit (to use the machine learning term)14 on human data.
This also precludes the emergence thesis, the claim that intelligence will eventually emerge from AI, from serious consideration, since we have already demonstrated the unbridgeable gap between computation and external behavior on the one hand and genuine instances of consciousness on the other, as well as the fact that AI only ever reflects the human data it is fed. To equate AI’s derivative ‘behavior’ (since in my view there is no subject there to actually behave)15 with consciousness therefore not only mistakes categories but also does ontological violence to Man, as is evident in the phenomena discussed in the previous essay, which this present essay expands upon by dealing specifically with the philosophical ground on which Techno-Gnosticism stands and which drives the pathologies described therein to spread in the popular consciousness.
Computational functionalism holds that mental states are defined by their functional/computational roles rather than their physical substrate. Key proponents include Hilary Putnam, “The Nature of Mental States” (originally “Psychological Predicates,” 1967), in Mind, Language and Reality: Philosophical Papers, Volume 2 (Cambridge: Cambridge University Press, 1975), 429-440; and Jerry Fodor, The Language of Thought (New York: Thomas Crowell, 1975). The position emerged from the computational theory of mind, which treats cognition as symbol manipulation analogous to computer operations. ↩︎
Behavioral functionalism, sometimes called “analytic functionalism,” defines mental states by patterns of behavior and behavioral dispositions. While distinct from strict behaviorism (which denied inner states entirely), it maintains that consciousness is manifest through observable cognitive activity. This view has roots in Gilbert Ryle, The Concept of Mind (London: Hutchinson, 1949), and later functionalist accounts that emphasize input-output relations. For AI applications, see Daniel Dennett’s “intentional stance” approach in The Intentional Stance (Cambridge, MA: MIT Press, 1987), where systems that can be usefully interpreted as having beliefs/desires count as having them. ↩︎
The qualitative, subjective character of conscious experience—what it feels like from the inside—is termed “phenomenal consciousness” or “qualia” in philosophy of mind. Thomas Nagel’s seminal paper “What Is It Like to Be a Bat?” (The Philosophical Review 83, no. 4 [1974]: 435-450) established that subjective experience has an irreducibly first-person character that resists reduction to third-person functional or physical descriptions. David Chalmers later termed this the “hard problem of consciousness”—explaining why and how physical processes give rise to subjective experience—distinguishing it from “easy problems” of explaining cognitive functions. See David J. Chalmers, The Conscious Mind: In Search of a Fundamental Theory (New York: Oxford University Press, 1996), 3-31. ↩︎
John Searle, “Minds, Brains, and Programs,” The Behavioral and Brain Sciences 3, no. 3 (1980): 417-457. Searle’s argument targets “strong AI”—the claim that appropriately programmed computers literally have cognitive states. The thought experiment demonstrates that syntactic manipulation of symbols (which computers perform) does not constitute semantic understanding (which consciousness involves). Searle distinguishes syntax (formal symbol manipulation) from semantics (meaning/understanding/intentionality), arguing that computational processes, however sophisticated, cannot bridge this gap. The article includes responses from 27 commentators and Searle’s replies, representing one of the most widely debated papers in cognitive science and philosophy of mind. ↩︎
The distinction between syntax (formal structure) and semantics (meaning/reference) is fundamental to philosophy of language and mind. Intentionality—the “aboutness” or directedness of mental states—is the capacity of minds to be about things, to represent or refer to objects and states of affairs. Franz Brentano identified intentionality as the mark of the mental in Psychology from an Empirical Standpoint (1874; English trans., London: Routledge, 1995). Searle’s argument is that computational processes are purely syntactic and therefore cannot generate genuine semantic intentionality, which requires understanding. For the distinction between “original” intentionality (genuine) and “derived” intentionality (assigned by users), see Searle, Intentionality: An Essay in the Philosophy of Mind (Cambridge: Cambridge University Press, 1983). ↩︎
This argument follows the structure of reductio ad absurdum: if computational functionalism is true, then understanding is illusory; but if understanding is illusory, we cannot genuinely understand the claim that understanding is illusory; therefore the position is self-refuting. Similar self-refutation arguments have been deployed against eliminative materialism—the view that folk psychological concepts like “belief” and “understanding” don’t refer to anything real. See Lynne Rudder Baker, Saving Belief: A Critique of Physicalism (Princeton: Princeton University Press, 1987), who argues eliminativism is pragmatically self-defeating since asserting it requires the very mental states it denies. Patricia Churchland and Paul Churchland defend eliminativism in “Intertheoretic Reduction: A Neuroscientist’s Field Guide,” Seminars in the Neurosciences 2 (1990): 249-256, though they must address the self-reference problem. ↩︎
Qualia (singular: quale) are the intrinsic, non-representational, phenomenal properties of experience—the “raw feels” of sensation. Frank Jackson’s “knowledge argument” (“Epiphenomenal Qualia,” The Philosophical Quarterly 32, no. 127 [1982]: 127-136) uses the thought experiment of Mary the color scientist—who knows all physical facts about color but has never experienced color—to argue that phenomenal knowledge (knowing “what it’s like” to see red) is distinct from physical/functional knowledge. While Jackson later recanted, the argument remains influential. Chalmers argues qualia pose the “hard problem” because no amount of information about functional/computational processes explains why there is something it is like to undergo them. See Chalmers, “Facing Up to the Problem of Consciousness,” Journal of Consciousness Studies 2, no. 3 (1995): 200-219. ↩︎
The failure to account for intentionality follows from Searle’s Chinese Room argument: purely computational systems lack intrinsic intentionality. Searle distinguishes “intrinsic” or “original” intentionality (minds genuinely represent things) from “derived” intentionality (symbols/computers only have meaning assigned by users). See Searle, “Minds, Brains, and Programs” (1980) and The Rediscovery of the Mind (Cambridge, MA: MIT Press, 1992), where he argues consciousness is essentially first-person and irreducible to third-person computational descriptions. Hubert Dreyfus similarly argues that AI cannot capture genuine intentionality because it lacks embodied, situated engagement with the world. See What Computers Can’t Do: The Limits of Artificial Intelligence (New York: Harper & Row, 1972; revised 1992). ↩︎
“Subjective interiority” or “first-person perspective” refers to the fact that consciousness is always someone’s experience, accessible primarily to the experiencing subject. Nagel argues in “What Is It Like to Be a Bat?” that every subjective phenomenon “is essentially connected with a single point of view” (442), and that objective, third-person scientific descriptions cannot capture this first-person character. This creates an “explanatory gap” (Joseph Levine, “Materialism and Qualia: The Explanatory Gap,” Pacific Philosophical Quarterly 64 [1983]: 354-361) between physical/functional descriptions and phenomenal experience. Computational functionalism, by defining consciousness purely in terms of external functional roles (inputs/outputs/computations), necessarily ignores or eliminates this first-person dimension. ↩︎
Ashish Vaswani et al., “Attention Is All You Need,” in Advances in Neural Information Processing Systems 30 (2017): 5998-6008. The attention mechanism, introduced in the Transformer architecture, allows models to weigh the relevance of different parts of the input sequence when processing each token, enabling context-dependent representations. Modern LLMs (GPT, Claude, LLaMA) are based on decoder-only variants of this architecture, scaled to billions or trillions of parameters. The “context window” provided by attention is what enables these models to maintain coherence across long text sequences, though this remains a form of statistical pattern-matching rather than genuine understanding or memory. ↩︎
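As a concrete companion to this note, here is a minimal NumPy sketch of the scaled dot-product attention described in the paper, using hypothetical toy dimensions; the weighting step is what gives each position its context-dependent representation.

```python
# A minimal sketch of scaled dot-product attention (Vaswani et al., 2017)
# with toy shapes. Each output position is a weighted average of the value
# vectors, with weights given by how strongly the query at that position
# matches each key: softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # (seq_len, seq_len) relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                                        # context-weighted mixture of values

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_k = 4, 8                                       # hypothetical toy dimensions
    Q, K, V = (rng.normal(size=(seq_len, d_k)) for _ in range(3))
    print(scaled_dot_product_attention(Q, K, V).shape)        # (4, 8)
```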
Alec Radford et al., “Language Models are Unsupervised Multitask Learners,” OpenAI Technical Report (2019). LLMs are trained through self-supervised learning on massive text corpora (hundreds of billions to trillions of tokens) by learning to predict the next token in a sequence given all previous tokens. This objective—maximizing the probability of the correct next token—is purely statistical: the model learns probability distributions over token sequences without any semantic understanding of what the tokens mean. The success of this approach demonstrates that sophisticated linguistic behavior can emerge from pattern-matching alone, which is precisely what makes LLMs philosophically significant as empirical instantiations of Searle’s Chinese Room. ↩︎
The term “learning” as applied to machine learning is itself analogical rather than univocal. Human learning involves understanding, insight, conceptual grasp; machine learning involves statistical optimization of parameters to minimize prediction error. The scare quotes around “learn” signal this categorical difference, though the terminology obscures it. As Hubert Dreyfus argued, conflating pattern recognition with genuine understanding commits a category error that pervades AI discourse (What Computers Still Can’t Do, MIT Press, 1992). The fact that we must continually qualify technical terms (“learning,” “memory,” “attention”) when applied to AI reveals that these processes differ in kind from their human counterparts, supporting the univocal/analogical distinction developed in this essay. ↩︎
David Chalmers, The Conscious Mind (1996), 94-106. A philosophical zombie (p-zombie) is conceivable as a being functionally and behaviorally identical to a conscious human but lacking phenomenal experience—there is “nothing it is like” to be the zombie. If p-zombies are logically possible (as Chalmers argues), then consciousness cannot be reduced to functional or behavioral properties, since the zombie and the conscious human would be functionally identical yet phenomenally different. This is the “zombie argument” against functionalism. Critics like Daniel Dennett deny p-zombies are genuinely conceivable, but the burden is on functionalists to explain why behavioral/functional identity entails phenomenal identity. ↩︎
In machine learning, “overfitting” occurs when a model learns the specific patterns of its training data so thoroughly that it fails to generalize beyond them. The model becomes overspecialized to the training distribution rather than learning general principles. Applied to LLMs, this means they are trained exclusively on human linguistic patterns and cannot transcend or generalize beyond this dataset—they remain forever bound to reflecting human intelligence rather than instantiating intelligence “as such.” This technical limitation supports the philosophical claim that AI intelligence is derivative rather than original. ↩︎
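For illustration, a minimal NumPy sketch of the phenomenon on hypothetical toy data: a high-degree polynomial fitted to a few noisy samples of a simple linear trend reproduces the training points almost exactly yet tracks the underlying trend worse than a simple fit does.

```python
# A standard numerical illustration of overfitting on hypothetical toy data:
# a degree-7 polynomial fitted to 8 noisy samples of a linear trend drives
# the training error to ~0 by threading every noisy point, while its error
# against the underlying trend on a dense grid is worse than that of the
# simple degree-1 fit, i.e. it has memorized noise rather than generalized.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 3.0, 8)
y_train = x_train + rng.normal(scale=0.5, size=x_train.shape)  # noisy linear data

x_dense = np.linspace(0.0, 3.0, 200)   # held-out grid from the same range
y_true = x_dense                       # the underlying trend, noise-free

for degree in (1, 7):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_dense) - y_true) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, held-out MSE {test_mse:.3f}")
```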
Strictly speaking, “behavior” presupposes an agent or subject who acts—a being with intentionality, purposes, and inner states. AI systems produce outputs as the result of computational processes, but these are not behaviors in the proper sense since there is no subject doing the behaving. Calling AI outputs “behaviors” already smuggles in the assumption of agency and subjectivity that needs to be demonstrated. This linguistic slippage pervades AI discourse and obscures the categorical difference between genuine agency and mere causal processes. ↩︎