Dissecting The Knowledge Argument and Qualia


April 2021


In this paper, I will avoid the intractable philosophical rabbit hole of answering whether qualia exist and instead focus on exploring the origin of one's thoughts of qualia. In doing so, I will examine the "Ahh! That is what the color red looks like!" reaction Mary has upon experiencing color for the first time and explore the ramifications that reaction holds for the physical reductionist's account of subjective experience. I will then adopt a simplified cognitive model to simulate Mary's mind and, ultimately, her first-time reaction to color. By dissecting this simulated model, we open a different route for arguing against the Knowledge Argument's validity, as we discover that the argument is largely reductionist itself.

A fundamental puzzle raised in philosophy is how to reconcile subjective experience with a physically reductionist world. For instance, how does a color-blind person distinguish between distinct colors? A simplified answer is to have the person memorize which colors are associated with which objects, ultimately reaching the correct answer. Nevertheless, one's experience of colors tends to differ from another's, and this differing aspect of experience is where the complexity comes into play. Philosophers label these subjective qualities, which one cannot describe or measure (e.g., the feeling of an itch or of boredom), "qualia." This diagnosis of qualia is what ultimately leads us to the Mary's Room thought experiment.

Mary's Room depicts Mary, a color-blind expert in the electromagnetic spectrum, optics, and the science of color vision, who is exceptionally knowledgeable about the physical facts pertaining to color vision. As an expert, Mary is fluent in the complex physical processes of color interpretation and transmission; her understanding of color stems from her understanding of the chemistry of those processes. However, after being color-blind her whole life, Mary eventually undergoes eye surgery that heals her retinas, allowing her to properly view color for the first time and leading her to react to a red apple with, "Oh! So that's what red actually looks like."

Mary's reaction to the red apple poses a threat to any physical reductionist account of subjective experience. If the quale of seeing red could be reduced to a collection of basic facts about the physical world, which is the core physical reductionist position, Mary would have learned those facts earlier and would not have learned anything new. Nevertheless, she does appear to learn something new when she sees red for the first time. The reductionists' failure to defend their physicalist stance in Mary's Room is what prompts us to dissect the problem through a rudimentary model that exhibits the same phenomenon. To replicate Mary's reaction to color, our model must possess attributes analogous to those of Mary's mind (in their simplest form).

Our simple model of the mind, which we will name Simple Mind (SM), will possess three attributes that qualify it as analogous to Mary's mind. The first two attributes are learning and direct experience, since these directly relate to Mary's experience of learning color in theory and in actuality. The third attribute is what I will call an active binary classifier. Its role is to classify portions of the mind as either conscious or unconscious, which is vital for making the model function properly.

To emphasize the importance of this last attribute, I will elaborate on its two-sided functionality. On one side, even mental events we are actively aware of can slip into inexpressibility. When one hums a familiar tune, one is actively aware of the sound waves one projects and converts into melodies. However, when one passively hums an unknown tune, the processing travels an unconscious path, resulting in an inexpressible state; more formally, one experiences qualia. Thus, even conscious mental events get partly processed subconsciously. On the other side, an ordinary activity of the digestive system, such as a bowel movement, involves an immense amount of unconscious processing that is inexpressible. Nevertheless, we become conscious of this process only when the system fails, as in constipation.

With the Simple Mind model laid out, we can apply this framework to a complex network that we will call AI Gidon's mind (AGM), which controls AI Gidon's (AG) actions. AGM is a mathematical representation of an artificial neural network, consisting of nodes that receive inputs from and send outputs to each other. Each node renders an idea (e.g., an interaction, recollection, or sensation) at a varying strength (active or inactive), all joined together within an overarching network. For instance, when AG gets thirsty, a domino effect of nodes in AGM activates other interacting nodes connected to the process of consuming a drink:


Node A = Detection of fluid imbalance (dehydrated)

Node B = Activation of crave for fluids (thirsty)

Node C = Activation of visualization of a drink [which activates],

Node D = Execution of a plan for procurement of drink

etc.


The above systematic layout of nodes, which can be viewed as steps, represents a particular structure in AGM used to guide specific actions (e.g., drinking a cup of Blue Milk). These relationships of ideas, manifested as nodes, elucidate the complex structure within AGM.
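The domino effect of Nodes A through D can be sketched as a tiny Python model in which each node, once activated, fires its downstream links. This is only an illustrative sketch of the cascade; the class, method, and label names are my own assumptions, not part of the paper's formalism.

```python
# Minimal sketch of AGM's activation cascade (illustrative assumptions).

class Node:
    def __init__(self, label):
        self.label = label
        self.links = []      # downstream nodes this one activates
        self.active = False

    def link_to(self, other):
        self.links.append(other)

    def activate(self):
        """Fire this node, then cascade down its links (the domino effect)."""
        if self.active:
            return           # already fired; avoids re-triggering loops
        self.active = True
        for node in self.links:
            node.activate()

# The thirst chain from the list above
a = Node("detect fluid imbalance")
b = Node("crave fluids")
c = Node("visualize a drink")
d = Node("plan drink procurement")
a.link_to(b)
b.link_to(c)
c.link_to(d)

a.activate()                 # Node A fires, and B, C, and D follow
```

Activating Node A alone suffices to drive the whole chain, which is the sense in which the layout of nodes "guides" a specific action.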

To complicate things further, since two attributes of the SM/AGM model involve learning and forming experience, the structure through which an idea is acted on (e.g., drinking Blue Milk) requires further modification and advancement. For instance, deciding to designate Blue Milk as a kind of drink is a type of match-making that can be learned. The fundamental complexity within the structure therefore rests in its ability to dynamically rearrange itself in response to newly emerged links. Thus, a node's response to a newly emerged link is contingent on the link's strength (somewhat like a coping mechanism).

As seen above, AGM forms a new shape every time a new connection is added to a specific structure at a given strength. This flexible reconfiguration can be visualized through the example of a single Colo Claw Fish tossed onto a table. As the Colo Claw Fish lands on the table, its body settles into a random shape, and that shape gets reconfigured every time the throw is repeated. Here, throwing the Colo Claw Fish into the air is synonymous with new links emerging within the structure, and the fish settling into a new shape represents the existing links' response.

Now that we have tackled the reconfiguration of the structure in response to additionally activated nodes, let us consider a more complex scenario:


P. 1 Luke Skywalker’s (LS) father was a Jedi killed by Darth Vader (DV).

P. 2 LS’s father’s full identity is unknown to him.

P. 3 DV is LS’s enemy.


C. LS kills DV.


In the above example, the portion worth dwelling on is the conclusion. After LS kills DV, he discovers that DV is, in fact, his father (fittingly, "vader" is Dutch for "father"). Because of this plot twist, the information gets processed entirely differently in AGM. Previously, we observed the model register each piece of information as a separate node within the overall structure of links and respond accordingly, as seen below:


Node A = LS’s father is a Jedi

Node B = LS’s father was killed by DV

Node C = LS’s father’s full identity is unknown

And so on.


The "so on" would ultimately lead AGM, in this first model, to label and link one node as DV-as-enemy and a separate node as DV-as-father, creating two separate nodes with given strengths within the network's structure. Discovering that DV is LS's father may have strengthened the one node and consequently weakened or severed the DV-as-enemy node, but this first model fails to reconfigure the information into a single node, as seen below:


DV = enemy

DV = father

→ DV = enemy + father


In this complex scenario, we witness a change in the reconfiguration pattern: instead of AGM's two separate nodes reconfiguring independently, they unite into one single node; specifically, one DV node containing two elements (i.e., states/statuses). This advances our mind model.
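The merge just described can be sketched in a few lines of Python. The dictionary representation of a node (a referent plus a set of elements) and the function name are illustrative assumptions on my part, not the paper's notation.

```python
# Sketch of the DV merge: nodes sharing a referent collapse into one node
# that holds both statuses (illustrative assumptions).

def merge(network, label):
    """Unite all nodes sharing a referent into a single node."""
    matching = [n for n in network if n["referent"] == label]
    merged = {
        "referent": label,
        "elements": set().union(*(n["elements"] for n in matching)),
    }
    rest = [n for n in network if n["referent"] != label]
    return rest + [merged]

network = [
    {"referent": "DV", "elements": {"enemy"}},   # DV = enemy
    {"referent": "DV", "elements": {"father"}},  # DV = father
]
network = merge(network, "DV")                   # -> DV = enemy + father
```

After the merge, the network contains a single DV node whose element set is {"enemy", "father"}, mirroring the schema above.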

This advancement is important to grasp, as it reveals two new truths about our AGM model. The first truth is that the unification of links produces further change in the mind. In the complex scenario, AG's mind is tasked with deciphering all kinds of new theories (i.e., the father is unknown, DV killed the father, DV is the father); thus, the united link creates further sub-links subsequent to the merge. The second truth is that the effect on AG and the effect on AGM are not one and the same: AG is not fully aware of the change that occurs in AGM. This lack of awareness leads us back to the third attribute of AGM: awareness or unawareness of portions of AGM.

To visualize this, let us adopt AG as an agent in an example concerning the advancement of taste through flavors. AG is an expert in the sensory system responsible for the perception of taste (flavor) and knows everything about it. His expertise in this field means a large portion of his mind is dedicated to processing and registering flavors. Thus, any node that penetrates this structure directly affects AG's ability to express his sense of taste (e.g., Node 109, representing flavor 109, penetrates the structure of taste).

Within AGM's structure of taste, there is an associated sub-category that I will coin "the conscious sub-category" (CS), which is connected to the portion associated with flavors. The CS is attached to the formed structure and should not be viewed separately. The sole distinction is that AG is fully aware of the activation of nodes within the CS, enabling him to properly register different flavors (e.g., sweet, bitter, sour) as such, while he is unaware of the activation of nodes elsewhere in the structure.

When AG is dehydrated, the process that leads him to desire a drink is, for the most part, carried out subconsciously (a person is not thinking about the volume of their blood decreasing and changing their blood pressure). However, the portion that deals with complex, reportable objects associated with the given state (i.e., dehydration) belongs to the CS, as seen below in the deciphering of the Blue Milk flavor:


Identification of Blue Milk → sky, blue ice-cream, milk, blueberry (examples of associated objects) → Blue Milk


Through the above example, we recognize that when the steps of taste perception are activated, a large portion of the structure's work is carried out by the first node, "Identification of Blue Milk." Interestingly, this node is not even part of the CS. Despite this, it directly affects the CS links: as all the links merge closer, the connection between the CS and the remaining links strengthens. Since most of AG's processing of taste occurs subconsciously, AG can express and communicate only those features of his taste perceptions that the unconscious portion has influenced, and only through analogies or relevantly linked examples (e.g., "it tastes like blueberries and milk"). Even then, the accuracy of the description is not guaranteed.
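As a rough sketch, the CS distinction can be modeled as a flag on each node, with direct reporting possible only for conscious nodes and unconscious nodes leaking out only through their linked analogies. The dictionary layout and names below are illustrative assumptions, not the paper's notation.

```python
# Sketch of the conscious sub-category (CS) as a per-node flag
# (illustrative assumptions).

taste = {
    "identify Blue Milk": {
        "conscious": False,     # outside the CS
        "associations": ["blue ice-cream", "milk", "blueberry"],
    },
    "sweet": {"conscious": True, "associations": []},   # inside the CS
}

def report(structure, node):
    """What AG can say when a given node activates."""
    entry = structure[node]
    if entry["conscious"]:
        return node                          # directly reportable
    # Unconscious: only analogies leak through; accuracy not guaranteed.
    return "tastes like " + " and ".join(entry["associations"])
```

Calling `report` on "sweet" yields the flavor itself, while calling it on "identify Blue Milk" yields only an analogy built from associated objects, matching how AG can say "it tastes like blueberries and milk" without reporting the unconscious node directly.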

With the full picture above, we have now properly identified the node that is not part of the CS but has a direct effect on the conscious structure. With that discovery, we can identify where qualia exist in the AGM model. But to be sure, we will place AG in a state similar to Mary's in Mary's Room: we will grant him conscious knowledge of a particular structure (i.e., the sense of taste) but exclude the activation of the actual unconscious node altogether, then observe his reaction once it gets activated.

Before we place AG into Mary's Room, there is one more attribute to explain: a node that detects learning (DL). Understanding this node is important because it advances our AGM model with realistic capabilities. As we have witnessed, AG's mind contains three attributes. One of them, the conscious/unconscious classifier, has been advanced to the point where it places conscious nodes or connections into a hierarchy of relevancy when deciding whether to communicate them to others (e.g., a newly discovered flavor or a hummed song). Presumably, the decision to communicate with a similarly built agent is predicated on the outcome being beneficial to that agent. Given this incentive, it seems safe to say that an unrelated agent (e.g., AG 2) can be affected by an idea that AG expresses.

The main problem with placing nodes into a hierarchy of relevancy is the apparent lack of grounding for the hierarchy in the first place; DL supplies that grounding. In our LS and DV example, when LS discovers the truth behind DV's identity, AGM places DV into one single node containing two separate sub-links (enemy and father). Simultaneously, DL is activated and identifies the first connection that possesses the collected information (i.e., DV as an enemy and DV as a father). In a way, DL can be viewed as an investigator tasked with collecting and analyzing information. Through this identification, DL strengthens the specific recollection with a new recollection consisting of the epiphany of discovering the first traced node of the event. This epiphany places that recollection first in line for the node to access upon retrospection (e.g., AG conveying his father's status to others).
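DL's investigator role can be sketched as a watcher that records each merge and stamps the merged node with an "epiphany" record, so the discovery is first in line upon later recall. The class, field names, and ordering rule below are my own illustrative assumptions.

```python
# Sketch of the detect-learning (DL) node (illustrative assumptions).

class DetectLearning:
    def __init__(self):
        self.log = []    # ordered record of detected learning events

    def on_merge(self, node):
        """Record the connection that united multiple elements."""
        if len(node["elements"]) > 1:
            event = {
                "node": node["referent"],
                "elements": set(node["elements"]),
                "order": len(self.log),   # earlier discoveries rank first
            }
            self.log.append(event)
            node["epiphany"] = event      # first in line upon retrospection

dl = DetectLearning()
dv = {"referent": "DV", "elements": {"enemy", "father"}}
dl.on_merge(dv)      # the epiphany: DV is both enemy and father
```

The stamped record is what lets the recollection surface first when the node is revisited, which is the grounding the hierarchy of relevancy needs.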

Now that we have built a model analogous to Mary's mind and covered all of its multifaceted aspects, we can place AG into a situation comparable to Mary's Room, as follows:


AG is an expert in the sensory system responsible for the perception of taste; however, he has never been exposed to taste personally, so the structure has never been properly activated. What AG does have is a grasp of the relationships between certain links, as his CS effectively classifies certain foods with their corresponding flavors. He also holds genuine knowledge of the structural makeup (i.e., the identification of Blue Milk), and thus accurately distinguishes between distinct items (e.g., blue ice-cream, milk, blueberry).


With our adequately equipped tools, we can now predict through the above example what occurred with Mary. Here, AG's first actual exposure to the perception of taste causes a large portion of the nodes in AGM's structure, representing objects, to form new connections within the overarching structure. The strength of these connections determines the strength of the CS connection, resulting in an alteration of the CS. Subsequently, DL is tasked with tracing the modification's root, leading it to AG's rendered knowledge of taste within the CS.

At this stage, we are adequately equipped to identify the problem. Since DL operates strictly within the CS, it fails to identify the newly formed bonds amongst the unconscious nodes (e.g., blue ice-cream, milk, blueberry). This roadblock manifests itself in AG's conception of taste: he recognizes that he has learned or experienced something, yet he fails to sufficiently express the "it" [the thing learned about], as he has no access to the contents of its information. Thus, he finds himself in a state of inexpressibility.
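AG's predicament can be sketched in miniature: DL detects that something changed but, confined to the CS, returns no content for the unconscious node. The dictionary and function below are illustrative assumptions, not the paper's formalism.

```python
# Sketch of AG's post-tasting state (illustrative assumptions).

after_tasting = {
    "identify Blue Milk": {"conscious": False, "changed": True},  # outside CS
    "knowledge of taste": {"conscious": True, "changed": True},   # inside CS
}

def trace(structure):
    """DL's search: did learning occur, and which contents are accessible?"""
    learned = any(e["changed"] for e in structure.values())
    contents = [n for n, e in structure.items()
                if e["changed"] and e["conscious"]]
    return learned, contents

learned, contents = trace(after_tasting)
# AG registers *that* he learned something, yet the unconscious node's
# content ("identify Blue Milk") never appears in what he can express.
```

The gap between `learned` being true and the unconscious node being absent from `contents` is the state of inexpressibility described above.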

This inexpressibility, of course, does not pose a risk to his conception of the mind, as he possesses a conscious image depicting the structural change in response to taste. Nevertheless, it remains unavoidable, since the image alone cannot trigger and reconfigure the structural nodes' processing (i.e., the identification of Blue Milk), and AGM will struggle to process something inexpressible. Thus, the inexpressibility cannot be averted regardless of how much theoretical knowledge one possesses. Interestingly, this conclusion is comparable to what we witnessed early on with the bowel-movement example, which involves an immense amount of unconscious processing: that process does not largely influence the CS, because it does not largely involve it, which means DL is not triggered.

We can recognize the shortcomings of reductionism, specifically its lack of an explanation of subjective experience. However, this should not be an obstacle to a reductionist explanation of Mary's reaction in Mary's Room. As we have witnessed in this paper, we successfully replicated a rudimentary model analogous to Mary's mind using strictly reductionist machinery, leading it to a reaction similar to hers. Thus, we can affirm that the Mary's Room thought experiment cannot be grounds for arguing against reductionism, as it itself satisfies the reductionist criteria.

Bibliography:


Jackson, Frank. “Epiphenomenal Qualia.” The Philosophical Quarterly (1950-), vol. 32, no. 127, 1982, pp. 127–136. JSTOR, www.jstor.org/stable/2960077. Accessed 22 Apr. 2021.


"Qualia." Stanford Encyclopedia of Philosophy, 2021, https://plato.stanford.edu/entries/qualia/.


Shoemaker, Sydney. “On David Chalmers's the Conscious Mind.” Philosophy and Phenomenological Research, vol. 59, no. 2, 1999, pp. 439–444. JSTOR, www.jstor.org/stable/2653681. Accessed 22 Apr. 2021.


"Pinker, How the Mind Works, Excerpt." web.stanford.edu, 2021, https://web.stanford.edu/~hakuta/www/archives/syllabi/Courses/Ed232(Learning)/pinker_s01.htm.


"Skywalker Family." Wookieepedia, 2021, https://starwars.fandom.com/wiki/Skywalker_family.


Piccirillo, Ryan A. "The Mind in the Brain, the Brain in a Robot: Strong AI in an Artificial Neural Network Brain Replica Housed in an Autonomous, Sensory Endowed Robot." Inquiries Journal, 2021, http://www.inquiriesjournal.com/articles/294/the-mind-in-the-brain-the-brain-in-a-robot-strong-ai-in-an-artificial-neural-network-brain-replica-housed-in-an-autonomous-sensory-endowed-robot.