Ready-to-hand

Dean Eckles on people, technology & inference


Being a lobster and using a hammer: “homuncular flexibility” and distal attribution

Jaron Lanier (2006) calls the ability of humans to learn to control virtual bodies that are quite different from our own "homuncular flexibility". This is, for him, a dangerous idea. The idea is that the familiar mapping of the body represented in the cortical homunculus is only one option: we can flexibly act (and perceive) using quite other mappings, e.g., to virtual bodies. Your body can be tracked, and these movements used to control a lobster in virtual reality, while you experience (via head-mounted display, haptic feedback, etc.) the virtual space from the perspective of the lobster under your control.

This name and description make the phenomenon sound quite like science fiction. In this post, I assimilate homuncular flexibility to the much more general phenomenon of distal attribution (Loomis, 1992; White, 1970). When I have a perceptual experience, I can attribute that experience – take it as being directed at or about – more proximal or more distal phenomena. For example, I can attribute it to my sensory surface, or I can attribute it to a flower in the distance. White (1970) proposed that more distal attribution occurs when the afference (perception) is lawfully related to efference (action) on the proximal side of that distal entity. That is, if my action and perception are lawfully related on "my side" of that entity in the causal tree, then I will make attributions to that entity. Loomis (1992) adds the requirement that this lawful relationship be successfully modeled. This is close, but not quite right: since I can make distal attributions even in the absence of an actual lawful relationship that I successfully model, my (perhaps inaccurate) modeling of a (perhaps non-existent) lawful relationship will do just fine.
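
To make this amendment concrete, here is a toy sketch in Python (my own illustration, not a model from White or Loomis; the chain of "lawful links" and the error tolerance are invented for the example). An agent attributes its afference as far out along a causal chain as its own model of the efference–afference relationship keeps predicting well; note that it is the fit of the agent's model, not the actual lawfulness, that does the work.

```python
import random

def afference(efference, depth, noise=0.0):
    """Sensation returned through `depth` links of a causal chain.
    Each link applies a simple lawful transformation; `noise` breaks
    the lawfulness with an unmodeled disturbance."""
    signal = efference
    for _ in range(depth):
        signal = 2 * signal + 1              # a lawful link
    return signal + random.gauss(0, noise)

def agent_model(efference, depth):
    """The agent's (possibly inaccurate) model of the same chain."""
    signal = efference
    for _ in range(depth):
        signal = 2 * signal + 1
    return signal

def model_fit(depth, trials=100, noise=0.0):
    """Mean prediction error of the agent's model at this depth."""
    errors = []
    for _ in range(trials):
        e = random.uniform(-1, 1)
        errors.append(abs(agent_model(e, depth) - afference(e, depth, noise)))
    return sum(errors) / trials

# Attribution extends to the most distal depth whose relationship the
# agent successfully models (mean error under an arbitrary tolerance).
for depth in range(1, 5):
    fit = model_fit(depth, noise=0.5 * (depth >= 3))  # lawfulness breaks at depth 3
    print(depth, "modeled (attribute here)" if fit < 0.1 else "not modeled")
```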

Just as I attribute a sensory experience to a flower and not the air between me and the flower, so the blind man or the skilled hammer-user can attribute a sensory experience to the ground or the nail, rather than to the handle of the cane or hammer. On consideration, I think we can see that these phenomena are very much what Lanier is talking about. When I learn to operate (and, though not treated by Lanier, 2006, to sense through) my lobster-body, it is because I have modeled an efference–afference relationship, yielding a kind of transparency. This is a quite familiar sort of experience. It might still be a quite dangerous or exciting idea, but its examples are ubiquitous, not restricted to virtual reality labs.

Lanier paraphrases biologist Jim Boyer as counting this capability as a kind of evolutionary artifact – a spandrel in the jargon of evolutionary theory. But I think a much better just-so evolutionary story can be given: it is this capability – to make distal attributions to the limits of the efference–afference relationships we successfully model – that makes us able to use tools so effectively. At an even more basic and general level, it is this capability that makes it possible for us to communicate meaningfully: our utterances have their meaning in the context of triangulating with other people such that the content of what we are saying is related to the common cause of both of our perceptual experiences (Davidson, 1984).

References

Davidson, D. (1984). Inquiries into Truth and Interpretation. Oxford: Clarendon Press.

Lanier, J. (2006). Homuncular flexibility. Edge.

Loomis, J. M. (1992). Distal attribution and presence. Presence: Teleoperators and Virtual Environments, 1(1), 113-119.

White, B. W. (1970). Perceptual findings with the vision-substitution system. IEEE Transactions on Man-Machine Systems, 11(1), 54-58.

Situational variation, attribution, and human-computer relationships

Mobile phones are gateways to our most important and enduring relationships with other people. But, like other communication technologies, the mobile phone is psychologically not only a medium: we also form enduring relationships with the devices themselves and their associated software and services (Sundar 2004). While different from relationships with other people, these human–technology relationships are also importantly social relationships. People exhibit a host of automatic, social responses to interactive technologies, applying familiar social rules, categories, and norms that are otherwise used in interacting with people (Reeves and Nass 1996; Nass and Moon 2000).

These human–technology relationships develop and endure over time and through radical changes in the situation. In particular, mobile phones are near-constant companions. They take on the roles of both medium for communication with other people and independent interaction partner, across dynamic physical, social, and cultural environments and tasks. The global phenomenon of mobile phone use highlights both that relationships with people and technologies are influenced by variable context and that these devices are, in some ways, a constant amidst these everyday changes.

Situational variation and attribution

Situational variation is important for how people understand and interact with mobile technology. This variation is an input to the processes by which people disentangle the internal (personal or device) and external (situational) causes of a social entity's behavior (Fiedler et al. 1999; Forsterling 1992; Kelley 1967), so this situational variation contributes to the traits and states attributed to human and technological entities. Furthermore, situational variation influences the relationship and interaction in other ways. For example, we have recently carried out an experiment providing evidence that this situational variation itself (rather than the characteristics of the situations) influences memory, creativity, and self-disclosure to a mobile service; in particular, people disclose more in places where they have previously disclosed to the service than in new places (Sukumaran et al. 2009).
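
As a concrete gloss on how covariation can disentangle these causes, here is a minimal Python sketch (my own simplification of the covariation logic associated with Kelley 1967; the function name, boolean coding, and output labels are illustrative assumptions, not anyone's published procedure):

```python
def covariation_attribution(consensus: bool, distinctiveness: bool,
                            consistency: bool) -> str:
    """Classify the cause of "the actor responds this way to the entity."

    consensus:       do other actors respond the same way to this entity?
    distinctiveness: does this actor respond this way only to this entity?
    consistency:     does this actor respond this way to it over time?
    """
    if not consistency:
        return "circumstances (unstable, situational cause)"
    if consensus and distinctiveness:
        return "external (the entity or situation)"
    if not consensus and not distinctiveness:
        return "internal (the person or device)"
    return "mixed (some combination of person, entity, circumstances)"

# Only my phone, only in this one app, every time: blame the device.
print(covariation_attribution(consensus=False, distinctiveness=False,
                              consistency=True))   # -> internal
# Everyone's phone fails on this network, every time: blame the situation.
print(covariation_attribution(consensus=True, distinctiveness=True,
                              consistency=True))   # -> external
```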

Not only does the situation vary, but mobile technologies are increasingly responsive to the environments they share with their human interactants. A system's systematic and purposive responsiveness to the environment means that explaining its behavior involves more than distinguishing internal and external causes: people explain behavior by attributing reasons to the entity, which may, trivially, refer to either internal or external causes. For example, contrast "Jack bought the house because it was secluded" (external) with "Jack bought the house because he wanted privacy" (internal) (Ross 1977, p. 176). Much research in the social cognition and attribution theory traditions of psychology has failed to address this richness of people's everyday explanations of others' behavior (Malle 2004; McClure 2002), but contemporary, interdisciplinary work is elaborating on theories and methods from philosophy and developmental psychology to this end (e.g., the contributions to Malle et al. 2001).

These two developments — the increasing role of situational variation in human-technology relationships and a new appreciation of the richness of everyday explanations of behavior — are important to consider together in designing new research in human-computer interaction, psychology, and communication. Here are three suggestions about directions to pursue in light of this:

Design systems that provide constancy and support through radical situational changes in both the social and physical environment. For example, we have created a system that uses the voices of participants in an upcoming event as audio primes during transition periods (Sohn et al. 2009). This can help ease the transition from a long corporate meeting to a chat with fellow parents at a child’s soccer game.

Design experimental manipulations and measures based on features of folk psychology —  the implicit theory or capabilities by which we attribute, e.g., beliefs, thoughts, and desires (propositional attitudes) to others (Dennett 1987) — identified by philosophers. For example, attributions of propositional attitudes (e.g., beliefs) to an entity have the linguistic feature that one cannot substitute different terms that refer to the same object while maintaining the truth or appropriateness of the statement. This opacity in attributions of propositional attitudes is the subject of a large literature (e.g., following Quine 1953), but it has not been used as a lens for much empirical work, aside from some work in developmental psychology (e.g., Apperly and Robinson 2003). Human-computer interaction research should use this opacity (and other underused features of folk psychology) in studies of how people think about systems; see the sketch following these suggestions.

Connect work on mental models of systems (e.g., Kempton 1986; Norman 1988) to theories of social cognition and folk psychology. I think we can expect much larger overlap in the processes involved than the current research literature reflects: people use folk psychology to understand, predict, and explain technological systems — not just other people.
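
Following up on the second suggestion, here is a toy Python sketch (mine, not drawn from the cited literature; the Hesperus/Phosphorus example is the stock philosophical one) of the contrast between transparent and opaque contexts: an extensional predicate survives substitution of co-referring names, while a belief attribution, stored under a description, does not.

```python
# Two names, one referent.
referent = {"Hesperus": "Venus", "Phosphorus": "Venus"}

# An extensional (transparent) context: truth depends only on the referent,
# so co-referring names are interchangeable.
bright_things = {"Venus"}
def is_bright(name: str) -> bool:
    return referent[name] in bright_things

# An intensional (opaque) context: beliefs are stored under descriptions,
# not referents, so substitution can change the truth of the attribution.
believes_bright = {"Hesperus"}          # the agent has only heard of "Hesperus"
def believed_bright(name: str) -> bool:
    return name in believes_bright

assert is_bright("Hesperus") == is_bright("Phosphorus")              # substitution safe
assert believed_bright("Hesperus") != believed_bright("Phosphorus")  # opacity
```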

References

Apperly, I. A., & Robinson, E. J. (2003). When can children handle referential opacity? Evidence for systematic variation in 5- and 6-year-old children’s reasoning about beliefs and belief reports. Journal of Experimental Child Psychology, 85(4), 297-311. doi: 10.1016/S0022-0965(03)00099-7.

Dennett, D. C. (1987). The Intentional Stance. MIT Press.

Fiedler, K., Walther, E., & Nickel, S. (1999). Covariation-based attribution: On the ability to assess multiple covariates of an effect. Personality and Social Psychology Bulletin, 25(5), 609.

Försterling, F. (1992). The Kelley model as an analysis of variance analogy: How far can it be taken? Journal of Experimental Social Psychology, 28(5), 475-490. doi: 10.1016/0022-1031(92)90042-I.

Kelley, H. H. (1967). Attribution theory in social psychology. In Nebraska Symposium on Motivation (Vol. 15).

Malle, B. F. (2004). How the Mind Explains Behavior: Folk Explanations, Meaning, and Social Interaction. Bradford Books.

Malle, B. F., Moses, L. J., & Baldwin, D. A. (2001). Intentions and Intentionality: Foundations of Social Cognition. MIT Press.

McClure, J. (2002). Goal-Based Explanations of Actions and Outcomes. In W. Stroebe & M. Hewstone (Eds.), European Review of Social Psychology (pp. 201-235). John Wiley & Sons. Retrieved from http://dx.doi.org/10.1002/0470013478.ch7.

Nass, C., & Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues, 56(1), 81-103.

Norman, D. A. (1988). The Psychology of Everyday Things. New York: Basic Books.

Quine, W. V. O. (1953). From a Logical Point of View: Nine Logico-Philosophical Essays. Harvard University Press.

Reeves, B., & Nass, C. (1996). The media equation: how people treat computers, television, and new media like real people and places. Cambridge University Press.

Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology (Vol. 10, pp. 174-221). New York: Academic Press.

Sohn, T., Takayama, L., Eckles, D., & Ballagas, R. (2009). Auditory Priming for Upcoming Events. Forthcoming in CHI ’09 extended abstracts on Human factors in computing systems. Boston, Massachusetts, United States: ACM Press.

Sukumaran, A., Ophir, E., Eckles, D., & Nass, C. I. (2009). Variable Environments in Mobile Interaction Aid Creativity but Impair Learning and Self-disclosure. To be presented at the Association for Psychological Science Convention, San Francisco, California.

Sundar, S. S. (2004). Loyalty to computer terminals: is it anthropomorphism or consistency? Behaviour & Information Technology, 23(2), 107-118.

Unconscious processing, self-knowledge, and explanation

This post revisits some thoughts, an earlier version of which I've shared here. In articles over the past few years, John Bargh and his colleagues have claimed that cognitive psychology operates with a narrow definition of unconscious processing, one that has led investigators to describe it as "dumb" and "limited". Bargh prefers a definition of unconscious processing more popular in social psychology – a definition that allows him to claim a much broader, more pervasive, and "smarter" role for unconscious processing in our everyday lives. I summarize the two definitions at work in Bargh's argument (Bargh & Morsella 2008, p. 1) as the following:

Unconscious processing_cog is the processing of stimuli of which one is unaware.

Unconscious processing_soc is processing of which one is unaware, whether or not one is aware of the stimuli.

A helpful characterization of unconscious processing_soc is the question: "To what extent are people aware of and able to report on the true causes of their behavior?" (Nisbett & Wilson 1977). We can read this project as addressing first-person authority about causal trees that link external events to observable behavior.

What does it mean for the processing of a stimulus to be below conscious awareness? In particular, we can wonder: what is it that one is aware of when one is aware of a mental process of one's own? While determining whether unconscious processing_cog is going on requires specifying a stimulus to which the question is relative, unconscious processing_soc requires specifying a process to which the question is relative. There may well be troubles with specifying the stimulus, but there seem to be bigger questions about specifying the process.

There are many interesting and complex ways to identify a process for consideration or study. Perhaps the simplest kind of variation to consider is just differences of detail. First, consider the difference between knowing some general law about mental processing and knowing that one is in fact engaging in processing that meets the conditions of application for that law.

Second, consider the difference between knowing that one is processing some stimulus and that various things have a causal role (cf. the generic observation that causal chains are hard to come by, but causal trees are all around us) and knowing the specific causal role each has and the truth of various counterfactuals about situations in which those causes were absent.

Third, consider the difference between knowing that some kind of processing is going on that will accomplish an end (something like knowing the normative functional or teleological specification of the process, cf. Millikan 1990 on rule-following and biology) and the details of the implementation of that process in the brain (do you know the threshold for firing on that neuron?). We can observe that an extensionally identical process can always be considered under different descriptions; and any process that one is aware of can be decomposed into a description of extensionally identical sub-processes, of which one is unaware.
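
A small code analogy for this point about descriptions (my own, and only an analogy): the two functions below compute an extensionally identical mapping, yet under entirely different decompositions, so familiarity with one description implies nothing about the sub-processes named in the other.

```python
def add_arithmetic(a: int, b: int) -> int:
    """The description most of us are 'aware of' when we add."""
    return a + b

def add_bitwise(a: int, b: int) -> int:
    """An extensionally identical process under a different decomposition:
    ripple carries, of which the everyday adder is entirely unaware.
    (Restricted to non-negative ints; Python's unbounded negative ints
    would make the carry loop forever.)"""
    while b != 0:
        carry = a & b    # positions where both bits are set
        a = a ^ b        # sum without carries
        b = carry << 1   # carries shifted into place
    return a

# Same extension, different descriptions:
assert all(add_arithmetic(x, y) == add_bitwise(x, y)
           for x in range(50) for y in range(50))
```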

A bit trickier are variations in descriptions of processes that do not have law-like relationships between each other. For example, there are good arguments for why folk psychological descriptions of processes (e.g. I saw that A, so I believed that B, and, because I desired that C, I told him that D) are not reducible to descriptions of processes in physical or biological terms about the person.[1]

We are still left with the question: What does it mean to be unaware of the imminent consequences of processing a stimulus?

References

Anscombe, G. (1969). Intention. Oxford: Blackwell Publishers.

Bargh, J. A., & Morsella, E. (2008). The unconscious mind. Perspectives on Psychological Science, 3(1), 73-79.

Davidson, D. (1963). Actions, Reasons, and Causes. Journal of Philosophy, 60(23), 685-700.

Millikan, R. G. (1990). Truth Rules, Hoverflies, and the Kripke-Wittgenstein Paradox. Philosophical Review, 99(3), 323-53.

Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231-259.

Putnam, H. (1975). The Meaning of ‘Meaning’. In K. Gunderson (Ed.), Language, Mind and Knowledge. Minneapolis: University of Minnesota Press.

  1. There are likely more examples of this than commonly thought, but the one I am thinking of is the most famous: the weak supervenience of mental (intentional) states on physical states without there being psychophysical laws linking the two (Davidson 1963, Anscombe 1969, Putnam 1975).

Naming this blog “ready-to-hand”: Heidegger, Husserl, folk psychology, and HCI

The name of this blog, Ready-to-hand, is a translation of Heidegger’s term zuhanden, though interpreting Heidegger’s philosophy is not specifically a major interest of mine nor a focus here. Much has been made of the significance of phenomenology, most often Heidegger, for human-computer interaction (HCI) and interaction design (e.g., Winograd & Flores 1985, Dourish 2001). And I am generally pretty sympathetic to phenomenology as one inspiration for HCI research. I want to just note a bit about the term zuhanden and my choice of it in a larger context — of phenomenology, HCI, and a current research interest of mine: cues for assuming the intentional stance toward systems (more on this below).

The Lifeworld and ready-to-hand

Heidegger was a student of Edmund Husserl, and Heidegger's Being and Time was to be dedicated to Husserl.[1] There is really no question of the huge influence of Husserl on Heidegger.

My major introduction to both Husserl and Heidegger was from Prof. Dagfinn Føllesdal. Føllesdal (1979) details the relationship between their philosophies. He argues for the value of seeing much of Heidegger’s philosophy “as a translation of Husserl’s”:

The key to this puzzle, and also, I think, the key to understanding what goes on in Heidegger’s philosophy, is that Heidegger’s philosophy is basically isomorphic to that of Husserl. Where Husserl speaks of the ego, Heidegger speaks of Dasein, where Husserl speaks of the noema, Heidegger speaks of the structure of Dasein’s Being-in-the-world and so on. Husserl also observed this. Several places in his copy of Being and Time Husserl wrote in the margin that Heidegger was just translating Husserl’s phenomenology into another terminology. Thus, for example, on page 13 Husserl wrote: “Heidegger transposes or transforms the constitutive phenomenological clarification of all realms of entities and universals, the total region World into the anthropological. The problematic is translation, to the ego corresponds Dasein etc. Thereby everything becomes deep-soundingly unclear, and philosophically it loses its value.” Similarly, on page 62, Husserl remarks: “What is said here is my own theory, but without a deeper justification.” (p. 369, my emphasis)

Heidegger and his terms have certainly been more popular and in wider use since then.

Føllesdal also highlights where the two philosophers diverge.[2] In particular, Heidegger gives a central role to the body and action in constituting the world. While in his publications Husserl stuck to a focus on how perception constitutes the Lifeworld, Heidegger uses many examples from action.[3] Our action in the world, including our skillfulness in action, constitutes for us the objects we interact with.

Heidegger contrasts two modes of being (in addition to our own mode — being-in-the-world): present-at-hand and ready-to-hand (or, alternatively, the occurrent and the available (Dreyfus 1990)). The former is the mode of being of an object considered as a physical thing present to us — occurrent — and Heidegger argues it constitutes the narrow focus of previous philosophical explorations of being. The latter is the stuff of every skilled action — available for action: the object becomes equipment, which can often be transparent in action, such that it becomes an extension of our body.

J.J. Gibson expresses this view in his proposal of an ecological psychology (in which perception and action are closely linked):

When in use, a tool is a sort of extension of the hand, almost an attachment to it or a part of the user’s own body, and thus is no longer a part of the environment of the user. […] This capacity to attach something to the body suggests that the boundary between the animal and the environment is not fixed at the surface of the skin but can shift. More generally it suggests that the absolute duality of “objective” and “subjective” is false. When we consider the affordances of things, we escape this philosophical dichotomy. (1979, p. 41)

While there may be troubles ahead for this view, I think the passage captures well something we all can understand: when we use scissors, we feel the paper cutting; and when a blind person uses a cane to feel in front of them, they can directly perceive the layout of the surface in front of them.

Transparency, abstraction, opacity, intentionality

Research and design in HCI has sought at times to achieve this transparency, sometimes by drawing on our rich knowledge of and skill with the ordinary physical and social world. Metaphor in HCI (e.g., the desktop metaphor) can be seen as one widespread attempt at this (cf. Blackwell 2006). This kind of transparency does not throw abstraction out of the picture. Rather the two go hand-in-hand: the specific physical properties of the present-at-hand are abstracted away, with quickly perceived affordances for action in their place.

But other kinds of abstraction are in play in HCI as well. Interactive technologies can function as social actors and agents – with particular cues eliciting social responses that are normally applied to other people (Nass and Moon 2000; Fogg 2002). One kind of social response, not yet as widely considered in the HCI literature, is assuming the intentional stance — explanation in terms of beliefs, desires, hopes, fears, etc. — towards the system. This is a powerful, flexible, and easy predictive and explanatory strategy, often also called folk psychology (Dennett 1987), which may be a tacit theory or a means of simulating other minds. We can explain other people based on what they believe and desire.

But we can also do the same for other things. To use one of Dennett’s classic examples, we can do the same for a thermostat: why did it turn the heat on? It wanted to keep the house at some level of warmth, it believed that it was becoming colder than desired, and it believed that it could make it warmer by turning on the heat. While in the case of the thermostat, this strategy doesn’t hide much complexity (we could explain it with other strategies without much trouble), it can be hugely useful when the system in question is complex or otherwise opaque to other kinds of description (e.g., it is a black box).
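
For concreteness, here is a minimal sketch in Python (my own gloss on Dennett's thermostat example; the class and attribute names are illustrative assumptions) of the device described from the intentional stance, where "beliefs" and "desires" serve as a compact predictive interface to a trivial mechanism:

```python
class Thermostat:
    """A thermostat read through the intentional stance."""

    def __init__(self, desired_temp: float):
        self.desired_temp = desired_temp   # its "desire": a warm house
        self.believed_temp = None          # its "belief" about the room

    def sense(self, reading: float) -> None:
        # Belief formed from the sensor reading.
        self.believed_temp = reading

    def act(self) -> str:
        # Intentional-stance gloss: it wants warmth, believes the room is
        # colder than desired, and believes heating will fix that.
        if self.believed_temp < self.desired_temp:
            return "heat on"
        return "heat off"

t = Thermostat(desired_temp=20.0)
t.sense(17.5)
print(t.act())  # "heat on" -- predictable from its 'beliefs' and 'desires'
```

The point, as in the post, is not that the mechanism needs this description (a comparison of two numbers hides nothing), but that the same predictive interface keeps working when the mechanism is complex or opaque.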

We might think, then, that perceived complexity and opacity should both be cues for adopting the intentional stance. But if the previous research on social responses to computers (not to mention the broader literature on heuristics and mindlessness) has taught us anything, it is that made objects such as computers can evoke unexpected responses through other, simpler cues. Some big remaining questions that I hope to take up in future posts and research:

  • What are these cues, both features of the system and situational factors?
  • How can designers influence people to interpret and explain systems using folk psychology?
  • What are the advantages and disadvantages of evoking the intentional stance in users?
  • How should we measure the use of the intentional stance?
  • How is assuming the intentional stance towards a thing different from (or the same as) its having being-in-the-world as its mode of being?

References

Blackwell, A. F. (2006). The reification of metaphor as a design tool. ACM Trans. Comput.-Hum. Interact., 13(4), 490-530.
Dennett, D. C. (1987). The Intentional Stance. MIT Press.
Dourish, P. (2001). Where the Action Is: The Foundations of Embodied Interaction. MIT Press.
Dreyfus, H. L. (1990). Being-in-the-world: A Commentary on Heidegger’s Being and Time, Division I. MIT Press.
Fogg, B.J. (2002). Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann.
Føllesdal, D. (1979). Husserl and Heidegger on the role of actions in the constitution of the world. In E. Saarinen, R. Hilpinen, I. Niiniluoto and M. Provence Hintikka, eds., Essays in Honour of Jaakko Hintikka, Dordrecht, Holland: Reidel, 365-378.
Nass, C., and Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues, 56(1), 81-103.
Winograd, T. and Flores, F. (1985). Understanding Computers and Cognition: A New Foundation for Design. Ablex Publishing Corp.
  1. But Husserl was Jewish, and Heidegger was himself a member of the Nazi party, so this did not happen in the first printing.
  2. Dreyfus (1990) is an alternative view that takes the divergence as quite radical; he sees Føllesdal as hugely underestimating the originality of Heidegger's thought. Instead Dreyfus characterizes Husserl as formulating so clearly the Cartesian worldview that Heidegger recognized its failings and was thus able to radically and successfully critique it.
  3. It is worth noting that Husserl actually wrote about this as well, but in manuscripts, which Heidegger read years before writing Being and Time.

Riskful decisions and riskful thinking: Donald Davidson and Cliff Nass

Two personal-professional narratives that I've been somewhat familiar with for a while have recently highlighted for me the significance of riskful decisions and thinking in academia. I think the stories are interesting on their own, but they also raise some questions and concerns about the functioning of scholarly inquiry.

The first is about the American philosopher Donald Davidson, whose work has long been of great interest to me (and was the topic of my undergraduate Honors thesis). The second is about Cliff Nass (Clifford Nass), Professor of Communication at Stanford, an advisor and collaborator. The major published source I draw on for each of these narratives is an interview: for Davidson’s story, it is an interview by Ernest Lepore (2004), a critic and expositor of Davidson’s philosophy; for Cliff Nass, it is an interview by Tamara Adlin (2007). After sharing these stories, I’ll discuss some similarities and briefly discuss risk-taking in decisions and thinking.

Donald Davidson is considered one of the most important and influential philosophers of the past 60 years, and he is my personal favorite. Davidson is often described as a highly systematic philosopher — uncharacteristically so for 20th century philosophy, in that his contributions to several areas of philosophy (philosophy of language, mind, and action, semantics, and epistemology) are deeply connected in their method and the proposed theories. He is the paradigmatic programmatic philosopher of the 20th century.

Despite this, Davidson's philosophical program did not emerge until relatively late in his career. The same is true of his publications in general. Only after accepting a tenure-track position at Stanford in 1951 (which was then still up-and-coming in philosophy, though quickly) did he begin to publish (nothing was even in the "pipeline" before this). This began under the wing of the younger Patrick Suppes, with whom Davidson co-authored a book (1957) on decision theory. His first philosophical article appeared in 1963 (one he came to author alone only through an unexpected death). As Davidson puts it in an interview with Ernest Lepore, "I was very inhibited so far as publication was concerned" and was worried "that the minute I actually published something, everyone was going to jump on me" (Lepore 2004).

Then Davidson published "Actions, Reasons and Causes" (1963), twelve years after joining the Stanford faculty. It argues against the late-Wittgensteinian dogma that reasons are not also causes. Only with this paper did a publication by Davidson draw significant attention from the community (beginning with a presentation of the paper at a meeting of the American Philosophical Association). The paper has been hugely influential and by itself identified Davidson as an important thinker in the field, though he was surprised that the initial reception was not the onslaught he had feared: "I didn't realize that if you publish, as far as I can tell, no one was going to pay any attention." Many responses, both positive and critical, did eventually come, and Davidson went on to publish many highly influential papers, reaching the height of his immense scholarly influence in the 1970s and 1980s.

Clifford Nass is a widely known researcher in the psychology of human-computer interaction (HCI). With Byron Reeves, he wrote The Media Equation (1996), which presents research carried out at Stanford University on how people respond in mediated interactions (e.g., with computers and televisions) by overextending social rules normally applied to other people. This hints at the (here simplified) straight, bold line of Nass's research program: take a finding from social psychology, replace the second human with a computer, and see if you get the same results. This exact strategy has since been modified and expanded upon, but the general consistency of Nass's program over many years is striking for HCI: unlike in psychology, for example, in HCI many investigators seek low-hanging fruit and quickly move on to new projects.

Nass likes to refer to his "accidental PhD", as he hadn't intended to get a PhD in sociology. After working for a year at Intel, he was planning to matriculate in an electrical engineering PhD program, but an unexpected death postponed that. "[J]ust to bide my time and to have some flexibility, I ended up doing a sociology degree," says Nass. He did his dissertation on the role of pre-processing jobs in labor, taking an approach that was radical in its elimination of a role for people and that connected with contemporary research by social-science outsiders doing "sociocybernetics". With such a dissertation topic (and the dissertation itself unfinished), finding a job did not seem easy at the outset: "It's a nutty topic. I was going to be in trouble getting jobs. I had published stuff and was doing work and all that, but my dissertation was so weird" (Adlin 2007).

There was, however, a bit of luck, well taken advantage of by Nass: the Stanford Communication Department was under construction and looking to hire some folks doing weird work. So when Nass interviewed, impressing both that department and the Sociology Department, he got the job, despite knowing nothing about Communication as a discipline and never having been to a conference in the field. After beginning at Stanford, Nass was seeking a research program, as clearly there was something wrong with his previous work, at least when it came to getting it accepted for academic publication: "I was having a terrible time getting my work accepted. In fact, to this day I've still never published anything off my dissertation, 20-odd years later. Because again, no field could figure out who owned the material. I got reviews like, 'This work is offensive.'"

But Nass couldn't settle on any normal research program. He wanted to examine how people might treat computers socially. Getting funding for this work wouldn't be easy, but he got a grant that the grant administrator described as the one, out of the 35 given, that they chose to award to the "weirdest project that was proposed". It wasn't all easy from there, of course. For example, it took some time to design and carry out successful experiments in this program — and even longer to get the results published. But this risk-taking in awarding the grant helped enable the work to continue.

Cliff Nass is very clear about the role riskful decisions, in admissions, hiring, and funding, played in his success:

I was very lucky. I fear that those times are gone. I really do fear to a tremendous degree that the risk-taking these people were willing to do for me, to give me an opportunity, are gone. I try to remember that. […]

I benefited from the willingness of people to say, “We’re just going to roll the dice here.”

Of course, it isn’t just Cliff who got lucky; in a big sense we all did. His work has been an important influence in HCI and has contributed to our stores of both generalizable knowledge and new lenses for approaching how we get on in the world.

What does it mean for academic research, and science generally, if this choice and ability to take these risks evaporates? There is incredible competition for academic positions now, more so in some fields than others. And the best tool for getting a job is a long list of publications accepted in important, mainstream journals in the field. There is a lot written about the competition for academic jobs and the criteria for wading through applicants, which sometimes favor the safe option. There are case studies of families of disciplines; for example, a study of the biosciences argues that market forces are failing to create sufficient job prospects for young investigators (Freeman et al. 2001).

I won't review them all here. Instead I suggest an article for general readers from The New York Times about state and regional colleges' use of non-tenure-track positions, which has an impact on the institutions' bottom line and flexibility (Finder 2007). This is part of a wider trend in how tenure is used that also affects the academic freedom and resources that scholars have to pursue new research (Richardson 1999).

Enabling riskful thinking

Hans Ulrich Gumbrecht argues that "riskful thinking" is central to the value of the humanities and arts in academia (Sanford 2000). He defines riskful thinking as investigation that can't be expected to produce results interpretable as easy answers, but that instead is likely to produce or highlight complex and confusing phenomena and problems. But I think this is more broadly true. Riskful thinking is critical to interdisciplinary and pre-paradigmatic sciences, and to disciplines long doing normal science but in need of a shake-up. These are situations where compelling phenomena can become paradigmatic cases for study and powerful vocabularies can allow formulating new problems and theories.

What threatens riskful thinking, and how can we enable it? What is so great about riskful thinking anyway, and what makes some riskful thinking so successful, while much of it is likely to fail? At Nokia Research Center in Palo Alto, our lab head John Shen champions the importance of risk-taking in industry research, but he also argues that risk-taking is often misunderstood and that only certain kinds of risk-taking are worth cultivating in industry research.

Finally, a list of Davidson–Nass similarities, just for fun:

  • Both were hired to tenure track positions at Stanford, where they first did and published highly influential work
  • Both are easily and widely seen as highly programmatic, having defined a clear research program that challenged currently popular approaches and beliefs in their fields
  • Both had great difficulty finding early, publishable success with their research programs, even after ceasing their early work (Davidson: Plato, empirical decision theory; Nass: information processing models of the labor force)
  • Both had other draws and distractions (Davidson: business school, teaching plane identification in WWII; Nass: being a professional magician, working at Intel)
  • Both produced dissertations viewed by others in the discipline as odd (Davidson: Quine "was a little mystified by my writing on this. He never talked to me about it."; Nass: "my PhD thesis was so bizarre")

References

Adlin, T. (2007). An interview with Cliff Nass. UX Pioneers. http://www.adlininc.com/uxpioneers/new_pioneers/interview_cliff_nass.html
Davidson, D. (1963). Actions, Reasons, and Causes. Journal of Philosophy, 60(23), 685-700.
Davidson, D., & Suppes, P. (1957). Decision Making: An Experimental Approach. Stanford University Press.

Finder, A. (2007, November 20). Decline of the Tenure Track Raises Concerns. The New York Times.

Freeman, R., Weinstein, E., Marincola, E., Rosenbaum, J., & Solomon, F. (2001). Careers: Competition and Careers in Biosciences. Science, 294(5550), 2293-2294.

Lepore, E. (2004). Interview with Donald Davidson. In Problems of Rationality, Oxford University Press, 2004, pp. 231-266.

Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors. In Proc. of CHI 1994. ACM Press.

Reeves, B., & Nass, C. (1996). The media equation: how people treat computers, television, and new media like real people and places. Cambridge University Press.

Richardson, J. T. (1999). Tenure in the New Millennium. National Forum, 79(1), 19-23.
Sanford, J. (2000, November 17). ‘Elementary pleasures’ and ‘riskful thinking’ matter to Gumbrecht. Stanford Report.
