Friday, April 13, 2018

Why the Epistemology of Conscious Perception Needs a Theory of Consciousness

On a certain type of classical "foundationalist" view in epistemology, knowledge of your sensory experience grounds knowledge of the outside world: Your knowledge that you're seeing a tree, for example, is based on or derived from your knowledge that you're having sensory experiences of greens and browns in a certain configuration in a certain part of your visual field. In earlier work, I've argued that this can't be right because our knowledge of external things (like trees) is much more certain and secure than our knowledge of our sensory experiences.

Today I want to suggest that foundationalist or anti-foundationalist claims are difficult to evaluate without at least an implicit background theory of consciousness. Consider for example these three simple models of the relation between sensory experience, knowledge of sensory experience, and knowledge of external objects. The arrows below are intended to be simultaneously causal and epistemic, with the items on the left both causing and epistemically grounding the items on the right. (I've added small arrows to reflect that there are always also other causal processes that contribute to each phase.)

[Figure: Models A, B, and C]

Model A is a type of classical foundationalist picture. In Model B, knowledge of external objects arises early in cognitive processing and informs our sensory experiences. In Model C, sensory experience and knowledge of external objects arise in parallel.

Of course these models are far too simple! Possibly, the process looks more like this:

[Figure: a tangled "spaghetti" network of looping interconnections]

How do we know which of the three models is closest to correct? This is, I think, very difficult to assess without a general theory of consciousness. We know that there's sensory experience, and we know that there's knowledge of sensory experience, and we know that there's knowledge of external objects, and that all of these things happen at around the same time in our minds; but what exactly is the causal relation among them? Which happens first, which second, which third, and to what extent do they rely on each other? These fine-grained questions about temporal ordering and causal influence are, I think, difficult to discern from introspection and thought experiments.

Even if we allow that knowledge of external things informs our sense experience of those things, that can easily be incorporated in a version of the classical foundationalist model A, by allowing that the process is iterative: At time 1, input causes experience, which causes knowledge of experience, which causes knowledge of external things; then again at time 2; then again at time 3.... The outputs of earlier iterations could then be among the small-arrow inputs of later iterations, explaining whatever influence knowledge of outward things has on sensory experiences within a foundationalist picture.
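The loop structure of this iterative reading can be made concrete with a toy sketch. This is purely an illustration of the feedback architecture, not a claim about how cognition actually works; the function and labels are invented for the example:

```python
# Toy illustration of an iterative version of Model A.
# Each pass runs: input -> experience -> knowledge of experience
# -> knowledge of external things. The outputs of earlier passes
# are fed back in as extra ("small-arrow") inputs to later passes.

def iterate_model_a(sensory_input, passes=3):
    prior_knowledge = []  # accumulated outputs of earlier passes
    for t in range(passes):
        # Experience is shaped by current input plus earlier knowledge.
        experience = ("experience", sensory_input, tuple(prior_knowledge))
        knowledge_of_experience = ("knows", experience)
        knowledge_of_world = ("knows-external", knowledge_of_experience)
        # Feedback: this pass's output becomes a later pass's input.
        prior_knowledge.append(knowledge_of_world)
    return prior_knowledge

states = iterate_model_a("greens and browns, tree-shaped")
print(len(states))  # prints 3: one knowledge state per pass
```

The point of the sketch is just that the strict left-to-right ordering within each pass is compatible with top-down influence across passes, which is how the foundationalist can accommodate knowledge shaping experience.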

On some theories, consciousness arises relatively early in sensory processing -- for example, in theories where sensory experiences are conscious by virtue of their information's being available for processing by downstream cognitive systems (even if that availability isn't much taken advantage of). On other theories, sensory consciousness arises much later in cognition, only after substantial downstream processing (as in some versions of Global Workspace theory and Higher-Order theories). Although the relationship needn't be strict, it's easy to see how views according to which consciousness arises relatively early fit more naturally with foundationalist models than views according to which consciousness arises much later.

The following magnificent work of art depicts me viewing a tree:

[Figure: the author viewing a tree]

Light from the sun reflects off the tree, into my eye, back to primary visual cortex, then forward into associative cortex where it mixes with associative processes and other sensory processes. In my thought bubble you see my conscious experience of the tree. The question is, where in this process does this experience arise?

Here are three possibilities:

[Figure: three candidate points in the processing stream, from early to late, at which conscious experience might arise]

Until we know which of these approaches is closest to the truth, it's hard to see how we could be in a good position to settle questions about foundationalism or anti-foundationalism in the epistemology of conscious perception.

(Yes, I know I've ignored embodied cognition in this post. Of course, throwing that into the mix makes matters even more complicated!)

5 comments:

Unknown said...

Great post! I’m interested in seeing this line developed further. A preliminary question, just to get my bearings: where do you see Kant’s views falling in your schema?

Two minor comments, first on “consciousness”. It seems like these questions about the “order of cognitive operations” can be asked in terms of “easy problems” while being an eliminativist/denialist about qualia and the “hard problem” of consciousness. So I’m not sure you’ve shown a demand for a theory of *consciousness*. You’ve just shown a demand for a general theory of how the mind works, something presumably an eliminativist can give.

A second minor comment is on the order of operations. Your post assumes that the relevant ordering issue pertains to timing, that some processes happen before or after others. Timing is important, but it probably isn’t the only kind of ordering concern. Cognitive networks will generally be organized into complex, looping configurations where some components selectively reinforce or constrain others. Building a general theory of mind requires understanding how these interdependent components interact with each other. Not just sensory perception and experiential knowledge, but also (for instance) perception of motion and motor reaction times, as when catching a ball. I’ve certainly had the experience of knowing that I would (or wouldn’t) catch a ball given my perception of the ball flying towards (or away from) me. So the full embodied action of catching the ball ought to involve passing through the state of experiential knowledge.

Now a major comment. Considerations along the lines you raise in your post tend to convince me that something like Integrated Information Theory will turn out to be the correct theory of mind. Tononi’s presentation of IIT isn’t great, but if you strip away a lot of the fuss the basic idea is to look at the organized arrangement of components in a complex system, and find the simplest configuration from which the target emergent behavior can arise. This seems to be the kind of thing you are demanding from a “theory of consciousness”, and (conveniently enough) I also think this is basically what network-theoretic approaches to neuroscience already do in practice. From what I understand, the Default Mode Network in the brain is interesting precisely because it is an integrated network of cognitive components when the brain is at rest, and that integration breaks down as soon as the brain does anything cognitively demanding that requires that the individual components focus on different specialized tasks.

I’m not arguing that the DMN is the neural correlate of consciousness, just that it serves as a model for how a network-theoretic explanation of cognitive functions might work. It seems to me that any “theory of consciousness” that will satisfy your concerns will describe some integrated, dynamical network of cognitive components, such that the characteristic features of the mental phenomena in question can be identified directly with that dynamical pattern of networked neural activity. This is the core claim of IIT, and as far as I can tell it is one of the few live theories in the philosophy literature defending such a claim. The perspective does seem more common in the neuroscience literature, but much less so in philosophy. For what it’s worth, I also think Kant was pointing to something like this by talking about the “transcendental unity of apperception”, so the idea isn’t new to philosophy either. But the perspective never seems to come up in these discussions, no matter how well they might address these concerns.

Eric Schwitzgebel said...

Thanks for the detailed and interesting reply, Daniel!

On your minor comments: I agree with both. The question can be construed as about "access consciousness" or "easy problems" as easily as (maybe even more easily than) it can be construed as about phenomenal consciousness. And also, broadly speaking, I tend to favor complex looping configurations over neatly ordered boxes (hence my crazy spaghetti figure); so if you and I are right about that, these little arrow relations will be only simplifications. That said, I think such simplifications can be useful to a first approximation, e.g., retina to LGN to V1 is a major mostly feedforward pathway, and then there are further projections forward from V1.

On Kant: I have to confess that this is an aspect of Kant that I have a lot of trouble making sense of. I feel like I have a bit of a handle on the Transcendental Aesthetic, but the Transcendental Deduction and the unity of apperception I find very murky.

On the major comment: I like your characterization of IIT better than Tononi's! Can we please ditch the Exclusion Postulate while we're at it?
http://schwitzsplinters.blogspot.com/2014/07/tononis-exclusion-postulate-would-make.html

Although I am broadly skeptical about our ability to discern the correct general theory of consciousness (partly for reasons articulated in "The Crazyist Metaphysics of Mind" and Chapter 6 of Perplexities of Consciousness), my greatest sympathies favor some sort of organizational pattern model, either with or without some requirement of self-monitoring (the "with" and "without" versions are quite different and split into substantially different subtypes).

Anonymous said...

complicates or simplifies?
http://ensoseminars.com

Callan S. said...

With the question of where the experience arises, I would describe it like this: In the picture I would instead have a platform that is doing the initial taking in of stimulus from light. Those circling arrows would be swirls that happen above the platform - each swirl picks up stimulus from the platform's processing.

The circling arrow lines, to me, show why this conscious experience thing is so hard to grasp - the circling arrow lines are like a dog chasing its tail. Not flattering, but roll with me. What we take as conscious experience is actually more a fragmentary memory of our prior state - but we confuse the memory with direct sensory input, because it is a memory made very soon after the sensory state actually occurred. If you've played video games where you can see a ghost recording of your previous play as you play, it might help to visualise the situation: seeing a prior, memorised state of yourself. For example, in a car game, you could be driving behind a memory ghost of your car.

Probably A is closest, IMO, but it's not knowledge of a tree, it's a memory of seeing a thing called a tree. Take the car game example and imagine instead of the car being recorded from a prior play, it's actually a recording from 0.3s behind where you are now. So if you were to slow down the car, you would see this ghostly car in front of you...just for a moment (0.3s to be precise) before it slows down and disappears.

Now instead of the ghost being in a position, it's a memory of a sense - what is often called consciousness is seeing the memory of seeing a thing called a tree, like that ghost car just up ahead, by slowing down thought and thinking about it. But soon enough slowing down just makes it merge again with current perception. And thinking about this is its own memory as well...thus a series of circling arrow lines, chasing the memory of their own tail.

The key issue, I think, is confusing memories for immediate sensory input. Confusing these is why remembering our prior states seems not just a memory, but a thing in itself. Consciousness. The ghost car becomes a soul.

Callan S. said...

Oddly enough I also ran across David Mitchell saying much the same thing