Ready-to-hand

Dean Eckles on people, technology & inference


Do what the virtuous person would do?

In the film The Descendants, George Clooney’s character Matt King wrestles — sometimes comically — with new and old choices involving his family and Hawaii. In one case, King decides he wants to meet a rival, both just to meet him and to give him some news; that is, he (at least explicitly) has generally good reason to meet him. Perhaps he even ought to meet him. But when he actually does meet him, he cannot just do these things: he also argues with his rival, etc. King’s unplanned behaviors end up causing his rival considerable trouble.1

This struck me as related to some challenges in formulating what one should do — that is, in the “practical reasoning” side of ethics.

One way of getting practical advice out of virtue ethics is to say that one should do what the virtuous person would do in this situation. On its face, this seems right. But there are also some apparent counterexamples. Consider a short-tempered tennis player who has just lost a match.2 In this situation, the virtuous person would walk over to his opponent, shake his hand, and say something like “Good match.” But if this player does that, he is likely to become enraged and even assault his victorious opponent. So it seems better for him to walk off the court without attempting any of this — even though this is clearly rude.

The simple advice to do what the virtuous person would do in the present situation is, then, either not right or not so simple. It might be right, but not so simple to implement, if part of “the present situation” is one’s own psychological weaknesses. Aspects of the agent’s psychology — including character flaws — seem to license bad behavior and to remove reasons for taking the “best” actions.

King and other characters in The Descendants face this problem, both in the example above and at some other points in the movie. He begins a course of action (at least in part) because this is what the virtuous person would do. But then he is unable to really follow through because he lacks the necessary virtues.3 We might take this as a reminder of the ethical value of being humble — of accounting for our faults — when reasoning about what we ought to do.4 It is also a reminder of how frustrating this can be, especially when one can imagine (and might actually be capable of) following through on doing what the virtuous person would do.

One way to cope with these weaknesses is to leverage other aspects of one’s situation. We can make public commitments to do the virtuous thing. We can change our environment, sometimes by binding our future selves, like Ulysses, so that we cannot act on our vices once we’ve begun our (hopefully) virtuous course of action. Perhaps new mobile technologies will be a substantial help here — helping us intervene in our own lives in this way.

  1. Perhaps deserved trouble. But this certainly didn’t play a stated role in the reasoning justifying King’s decision to meet him.
  2. This example is first used by Gary Watson (“Free Agency”, 1975) and put to this use by Michael Smith in his “Internalism” (1995). Smith introduces it as a clear problem for the “example” model of how what a virtuous person would do matters for what we should each do.
  3. Another reading of some of these events in The Descendants is that these characters actually want to do the “bad behaviors”, and they (perhaps unconsciously) use their good intentions to justify the course of action that leads to the bad behavior.
  4. Of course, the other side of such humility is being short on self-efficacy.

Being a lobster and using a hammer: “homuncular flexibility” and distal attribution

Jaron Lanier (2006) calls the ability of humans to learn to control virtual bodies that are quite different from our own “homuncular flexibility”. This is, for him, a dangerous idea. The idea is that the familiar mapping of the body represented in the cortical homunculus is only one option – we can flexibly act (and perceive) using quite other mappings, e.g., to virtual bodies. Your body can be tracked, and these movements can be used to control a lobster in virtual reality – while you experience (via head-mounted display, haptic feedback, etc.) the virtual space from the perspective of the lobster under your control.
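
To make the remapping concrete, here is a minimal, hypothetical sketch (in Python) of the kind of mapping involved: tracked human joint angles are routed, via an arbitrary weighted blend, onto a virtual body with a different anatomy. The joint names, limb names, and weights are invented for illustration and are not from Lanier (2006); the point is only that the mapping need not respect the familiar homunculus.

# A minimal, hypothetical sketch of remapping a tracked human body onto a
# virtual body with a different anatomy (a "lobster" with more limbs than
# the controller has joints). Joint names, limb names, and weights are
# invented for illustration only.

from typing import Dict

# Tracked human pose: joint name -> angle in degrees (toy values).
human_pose: Dict[str, float] = {
    "left_elbow": 40.0,
    "right_elbow": 65.0,
    "left_knee": 10.0,
    "right_knee": 25.0,
}

# One arbitrary mapping among many possible ones: each virtual limb is a
# weighted blend of human joints, so extra lobster limbs are driven by
# combinations of the joints the controller actually has.
lobster_mapping = {
    "claw_left":   {"left_elbow": 1.0},
    "claw_right":  {"right_elbow": 1.0},
    "leg_1_left":  {"left_knee": 0.7, "left_elbow": 0.3},
    "leg_2_left":  {"left_knee": 0.3, "left_elbow": 0.7},
    "leg_1_right": {"right_knee": 0.7, "right_elbow": 0.3},
    "leg_2_right": {"right_knee": 0.3, "right_elbow": 0.7},
}

def drive_lobster(pose: Dict[str, float]) -> Dict[str, float]:
    """Map a tracked human pose onto the virtual lobster's limb angles."""
    return {
        limb: sum(weight * pose[joint] for joint, weight in blend.items())
        for limb, blend in lobster_mapping.items()
    }

print(drive_lobster(human_pose))

The particular blend is unimportant; what matters is that, with practice, a controller can learn to act (and perceive) through such a non-anatomical mapping.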

This name and description make this sound quite like science fiction. In this post, I assimilate homuncular flexibility to the much more general phenomenon of distal attribution (Loomis, 1992; White, 1970). When I have a perceptual experience, I can just as well attribute that experience – and take it as being directed at or about – more proximal or distal phenomena. For example, I can attribute it to my sensory surface, or I can attribute it to a flower in the distance. White (1970) proposed that more distal attribution occurs when the afference (perception) is lawfully related to efference (action) on the proximal side of that distal entity. That is, if my action and perception are lawfully related on “my side” of that entity in the causal tree, then I will make attributions to that entity. Loomis (1992) adds the requirement that this lawful relationship be successfully modeled. This is close, but not quite right: I can make distal attributions even in the absence of an actual lawful relationship that I successfully model, so my (perhaps inaccurate) modeling of a (perhaps non-existent) lawful relationship will do just fine.

Just as I attribute a sensory experience to a flower and not the air between me and the flower, so the blind man or the skilled hammer-user can attribute a sensory experience to the ground or the nail, rather than the handle of the cane or hammer. On consideration, I think we can see that these phenomena are very much what Lanier is talking about. When I learn to operate (and, though not treated by Lanier, 2006, to sense with) my lobster-body, it is because I have modeled an efference–afference relationship, yielding a kind of transparency. This is a quite familiar sort of experience. It might still be a quite dangerous or exciting idea, but its examples are ubiquitous, not restricted to virtual reality labs.

Lanier paraphrases biologist Jim Boyer as counting this capability as a kind of evolutionary artifact – a spandrel in the jargon of evolutionary theory. But I think a much better just-so evolutionary story can be given: it is this capability – to make distal attributions to the limits of the efference–afference relationships we successfully model – that makes us able to use tools so effectively. At an even more basic and general level, it is this capability that makes it possible for us to communicate meaningfully: our utterances have their meaning in the context of triangulating with other people such that the content of what we are saying is related to the common cause of both of our perceptual experiences (Davidson, 1984).

References

Davidson, D. (1984). Inquiries into Truth and Interpretation. Oxford: Clarendon Press.

Lanier, J. (2006). Homuncular flexibility. Edge.

Loomis, J. M. (1992). Distal attribution and presence. Presence: Teleoperators and Virtual Environments, 1(1), 113-119.

White, B. W. (1970). Perceptual findings with the vision-substitution system. IEEE Transactions on Man-Machine Systems, 11(1), 54-58.

Unconscious processing, self-knowledge, and explanation

This post revisits some thoughts, an earlier version of which I’ve shared here. In articles over the past few years, John Bargh and his colleagues claim that cognitive psychology has operated with a narrow definition of unconscious processing that has led investigators to describe it as “dumb” and “limited”. Bargh prefers a definition of unconscious processing more popular in social psychology – a definition that allows him to claim a much broader, more pervasive, and “smarter” role for unconscious processing in our everyday lives. In particular, I summarize the two definitions used in Bargh’s argument (Bargh & Morsella 2008, p. 1) as follows:

Unconscious processing (cog) is the processing of stimuli of which one is unaware.

Unconscious processing (soc) is processing of which one is unaware, whether or not one is aware of the stimuli.

A helpful characterization of unconscious processing (soc) is the question: “To what extent are people aware of and able to report on the true causes of their behavior?” (Nisbett & Wilson 1977). We can read this project as addressing first-person authority about causal trees that link external events to observable behavior.

What does it mean for the processing of a stimulus to be below conscious awareness? In particular, we can wonder: what is it that one is aware of when one is aware of a mental process of one’s own? While determining whether unconscious processing (cog) is going on requires specifying a stimulus to which the question is relative, unconscious processing (soc) requires specifying a process to which the question is relative. There may well be troubles with specifying the stimulus, but there seem to be bigger questions about specifying the process.

There are many interesting and complex ways to identify a process for consideration or study. Perhaps the simplest kind of variation to consider is just differences of detail. First, consider the difference between knowing some general law about mental processing and knowing that one is in fact engaging in processing that meets the conditions of application for the law.

Second, consider the difference between knowing that one is processing some stimulus and that a long list of various things have a causal role (cf. the generic observation that causal chains are hard to come by, but causal trees are all around us) and knowing the specific causal role each has and the truth of various counterfactuals for situations in which those causes were absent.

Third, consider the difference between knowing that some kind of processing is going on that will accomplish an end (something like knowing the normative functional or teleological specification of the process, cf. Millikan 1990 on rule-following and biology) and knowing the details of the implementation of that process in the brain (do you know the threshold for firing of that neuron?). We can observe that an extensionally identical process can always be considered under different descriptions; and any process that one is aware of can be decomposed into a description of extensionally identical sub-processes of which one is unaware.

A bit trickier are variations in descriptions of processes that do not have law-like relationships between each other. For example, there are good arguments for why folk psychological descriptions of processes (e.g. I saw that A, so I believed that B, and, because I desired that C, I told him that D) are not reducible to descriptions of the person’s processes in physical or biological terms.1

We are still left with the question: What does it mean to be unaware of the imminent consequences of processing a stimulus?

References

Anscombe, G. (1969). Intention. Oxford: Blackwell Publishers.

Bargh, J. A., & Morsella, E. (2008). The unconscious mind. Perspectives on Psychological Science, 3(1), 73-79.

Davidson, D. (1963). Actions, Reasons, and Causes. Journal of Philosophy, 60(23), 685-700.

Millikan, R. G. (1990). Truth Rules, Hoverflies, and the Kripke-Wittgenstein Paradox. Philosophical Review, 99(3), 323-53.

Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231-259.

Putnam, H. (1975). The Meaning of ‘Meaning’. In K. Gunderson (Ed.), Language, Mind and Knowledge. Minneapolis: University of Minnesota Press.

  1. There are likely more examples of this than commonly thought, but the one I am thinking of is the most famous: the weak supervenience of mental (intentional) states on physical states without there being psychophysical laws linking the two (Davidson 1963, Anscombe 1969, Putnam 1975).

Definitions of unconscious processing in cognitive and social psychology

John Bargh, Professor of Psychology at Yale, and his ACME (Automaticity in Cognition, Motivation, and Emotion) Lab are doing very exciting work. I had read some articles by Bargh some time ago (e.g. Bargh & McKenna 2004) and encountered his work in the context of debates about how objects can automatically activate attitudes that apply to them. But it wasn’t until recently (following a discussion with James Breckenridge) that I began to really engage with the larger body of research Bargh and his collaborators have produced — and with the interesting reflections and arguments found in the reviews of this and related work that they have written.

I expect I’ll be writing more about this work, but in this and some follow-up posts I want to just say a little bit about the general character of the research and, more specifically, how this work engages with and employs definitions of ‘unconscious’ and ‘unconscious processing’.

Bargh & Morsella (2008, in press; page numbers are to this version) highlight how cognitive psychology and social psychology have operated with different definitions and different emphases in investigating what they call “unconscious”. For cognitive psychology, “subliminal information processing – […] extracting meaning from stimuli of which one is not consciously aware” – has been paradigmatic of the unconscious (p. 1). That is, its study of unconscious processing is the study of the processing of stimuli of which one is unaware. On the other hand, for mainstream social psychology research, including work with priming, “the traditional focus has been on mental processes of which the individual is unaware, not on stimuli of which one is unaware” (Ibid.).

This is a striking difference that, as Bargh & Morsella illustrate, has consequences for how “dumb” or “smart” and “limited” or “pervasive” unconscious processing is. If unconscious processing is limited to processing of subliminal stimuli, then it doesn’t have much to go on. But the social psychology definition — the liberal, process-awareness definition — allows us to call a lot more things unconscious processing.

I recognize shortcomings with the cognitive psychology definition — the narrow, stimulus-awareness definition. And Bargh and Morsella’s statement of the process-awareness definition does enable them to say some striking things (e.g. about automatic activation of motivations).

But I also wonder whether this redefined term can bear much theoretical weight. Specifically, I have two concerns:

  1. this definition makes what is unconscious depend on each person’s knowledge of the causes of their actions — and this can get tricky in unintuitive and highly individual ways;
  2. this definition seems to count on having good identity conditions for the kinds of objects to which ‘unconscious’ is supposed to apply (e.g. events, processes), but identity conditions (which are often hard to come by in general) are tricky for this domain in particular.

These are familiar problems in philosophy of mind, and they deserve consideration when designing theoretically useful definitions of unconscious processing. I aim to take up each of these issues in more detail in another post.

Bargh, J. A., & Morsella, E. (2008). The unconscious mind. Perspectives on Psychological Science, 3(1), 73-79.

Bargh, J. A., & McKenna, K. Y. A. (2004). The Internet and social life. Annual Review of Psychology, 55, 573-590.
