Ready-to-hand

Dean Eckles on people, technology & inference


Social and cultural costs of media multitasking

Today I’m attending the Media Multitasking workshop at Stanford. I’m going to just blog as I go, so these posts may be a bit rougher than usual.[1]

The workshop began with a short keynote from Patricia Greenfield, a psychology professor at UCLA, about the costs and benefits of media multitasking. Greenfield’s presentation struck me as representing an essentially conservative, even alarmist, perspective on media multitasking.

Exemplifying this perspective was Greenfield’s claim that media multitasking (by children) is disrupting family rituals and privileging peer interaction over interaction with family. Greenfield mixed in some examples of how having a personal mobile phone allows teens to interact with peers without their parents being in the loop (e.g., aware of who their children’s interaction partners are). These examples don’t strike me as particularly central to understanding media multitasking; instead, they highlight the pervasive alarmism about new media and remind me that the extreme control “helicopter parents” exert over their children’s physical co-presence with others is also a change from “how things used to be.”

Face-to-face vs. mediated

The connection between these worries — about mobile phones and the allegedly decreasing control parents have over their children’s social interaction — and media multitasking is that mediated communication is being privileged over face-to-face interaction. Greenfield proposed that face-to-face interaction suffers from media use and media multitasking, and that this is worrisome because we have evolved for face-to-face interaction. She commented that face-to-face interaction enables empathy. The implicit contrast is with mediated interaction, but I’m not sure it is so obvious that mediated communication doesn’t enable empathy, including empathizing with targets one would otherwise never encounter face-to-face and sustaining a shared perspective with close but distant others (e.g., parents and their college-student children).

Family reunion

Greenfield cited a study of 30 homes in which children and a non-working parent only greeted the other parent returning home from work about one third of the time (Ochs et al., 2006), arguing — as I understood it — that this is symptomatic of a deprioritization of face-to-face interaction.

As another participant pointed out, this could also be a case (if not in these particular cases, then likely in others) of family members not feeling apart during the working day. That is, we can ask: are the children and the non-working parent communicating with the working parent during the workday? In fact, Ochs et al. (2006, pp. 403–404) present an example of such a reunion (between husband and wife in this case) in which the participants had been in contact by mobile phone, and the conversation picks up where it left off (with the addition of some new information available from being present in the home).

Next

I’m looking forward to the rest of the workshop. I think one clear theme of the workshop is going to be differing emphases on the costs and benefits of different types of media multitasking. I expect Greenfield’s “doom and gloom” will continue to be contrasted with other perspectives, some of which already came up.

References

Ochs, E., Graesch, A. P., Mittmann, A., Bradbury, T., & Repetti, R. (2006). Video ethnography and ethnoarchaeological tracking. In The Work and Family Handbook: Multi-Disciplinary Perspectives, Methods, and Approaches (pp. 387–409).

[1] Which also means I’m multitasking, in some senses, through the whole conference.

Being a lobster and using a hammer: “homuncular flexibility” and distal attribution

Jaron Lanier (2006) calls the human ability to learn to control virtual bodies quite different from our own “homuncular flexibility.” This is, for him, a dangerous idea. The idea is that the familiar mapping of the body represented in the cortical homunculus is only one option: we can flexibly act (and perceive) using quite different mappings, e.g., to virtual bodies. Your body can be tracked, and these movements used to control a lobster in virtual reality, while you experience (via head-mounted display, haptic feedback, etc.) the virtual space from the perspective of the lobster under your control.
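To make “an alternative mapping” concrete, here is a minimal sketch, purely my own illustration rather than anything from Lanier’s work or an actual VR system; all names and dimensions are made up. Tracked human joint angles are remapped linearly onto an avatar with a different number of degrees of freedom:

```python
import numpy as np

# A made-up example of a non-homuncular mapping: the avatar (a "lobster")
# has more degrees of freedom than the tracked human body, so each avatar
# joint is driven by a weighted blend of several human joints.
rng = np.random.default_rng(seed=0)

N_HUMAN_JOINTS = 20  # hypothetical number of tracked joint angles
N_AVATAR_DOFS = 34   # hypothetical lobster: more controllable parts

# A square identity matrix would reproduce the familiar one-to-one
# homuncular mapping; any other matrix "rewires" the body.
mapping = rng.normal(scale=0.5, size=(N_AVATAR_DOFS, N_HUMAN_JOINTS))

def drive_avatar(human_pose: np.ndarray) -> np.ndarray:
    """Map one frame of tracked human joint angles to avatar joint angles."""
    return mapping @ human_pose

# One frame of (simulated) motion-capture data:
frame = rng.normal(size=N_HUMAN_JOINTS)
avatar_pose = drive_avatar(frame)
print(avatar_pose.shape)  # (34,): one angle per lobster joint
```

On this picture, the learning Lanier describes is the user coming to model such a mapping well enough that acting through it becomes transparent.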

This name and description make the phenomenon sound quite like science fiction. In this post, I assimilate homuncular flexibility to the much more general phenomenon of distal attribution (Loomis, 1992; White, 1970). When I have a perceptual experience, I can attribute that experience (take it as being directed at or about) more proximal or more distal phenomena. For example, I can attribute it to my sensory surface, or I can attribute it to a flower in the distance. White (1970) proposed that more distal attribution occurs when the afference (perception) is lawfully related to the efference (action) on the proximal side of that distal entity. That is, if my action and perception are lawfully related on “my side” of that entity in the causal tree, then I will make attributions to that entity. Loomis (1992) adds the requirement that this lawful relationship be successfully modeled. This is close, but not quite right: if I can make distal attributions even in the absence of an actual lawful relationship that I successfully model, then my (perhaps inaccurate) modeling of a (perhaps non-existent) lawful relationship will do just fine.

Just as I attribute a sensory experience to a flower and not to the air between me and the flower, so the blind man or the skilled hammer-user can attribute a sensory experience to the ground or the nail, rather than to the handle of the cane or hammer. On consideration, I think we can see that these phenomena are very much what Lanier is talking about. When I learn to operate (and, though Lanier (2006) does not treat this, to sense through) my lobster-body, it is because I have modeled an efference–afference relationship, yielding a kind of transparency. This is a quite familiar sort of experience. It may still be a dangerous or exciting idea, but its examples are ubiquitous, not restricted to virtual reality labs.

Lanier paraphrases biologist Jim Boyer as counting this capability as a kind of evolutionary artifact – a spandrel in the jargon of evolutionary theory. But I think a much better just-so evolutionary story can be given: it is this capability – to make distal attributions to the limits of the efference–afference relationships we successfully model – that makes us able to use tools so effectively. At an even more basic and general level, it is this capability that makes it possible for us to communicate meaningfully: our utterances have their meaning in the context of triangulating with other people such that the content of what we are saying is related to the common cause of both of our perceptual experiences (Davidson, 1984).

References

Davidson, D. (1984). Inquiries into Truth and Interpretation. Oxford: Clarendon Press.

Lanier, J. (2006). Homuncular flexibility. Edge.

Loomis, J. M. (1992). Distal attribution and presence. Presence: Teleoperators and Virtual Environments, 1(1), 113-119.

White, B. W. (1970). Perceptual findings with the vision-substitution system. IEEE Transactions on Man-Machine Systems, 11(1), 54-58.

Advanced Soldier Sensor Information System and Technology

Yes, that spells ASSIST.

Check out this call for proposals from DARPA (also see Wired News). This research program is designed to create and evaluate systems that use sensors to capture soldiers’ experiences in the field, allowing for review and analysis of that data at a spatial and temporal distance, as well as augmenting soldiers’ abilities while still in the field.

I found it interesting to consider differences in requirements between this program and others that would apply some similar technologies and involve similar interactions — but for other purposes. For example, two such uses are (1) everyday life recording for social sharing and memory and (2) rich data collection as part of ethnographic observation and participation.

When doing some observation myself, I strung my cameraphone around my neck and used Waymarkr to automatically capture a photo every minute or so. Check out the results from my visit to a flea market in San Francisco.

[Photos: two ways to wear a cameraphone, from Waymarkr.] Incidentally, Waymarkr uses the cell-tower-based location API created for ZoneTag, a project I worked on at Yahoo! Research Berkeley.
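For a rough sense of what this kind of interval-based capture involves, here is a minimal sketch; capture_photo and upload are hypothetical stand-ins, not Waymarkr’s or ZoneTag’s actual APIs.

```python
import time
from datetime import datetime, timezone

CAPTURE_INTERVAL_S = 60  # roughly "a photo every minute or so"

def capture_photo() -> bytes:
    """Hypothetical stand-in for the phone's camera API."""
    return b""  # placeholder image data

def upload(image: bytes, taken_at: str) -> None:
    """Hypothetical stand-in for posting to a lifelogging service."""
    print(f"uploaded {len(image)} bytes captured at {taken_at}")

def run_lifelogger() -> None:
    # Capture and upload on a fixed interval, timestamping each photo
    # so the stream can be ordered and reviewed later.
    while True:
        image = capture_photo()
        taken_at = datetime.now(timezone.utc).isoformat()
        upload(image, taken_at)
        time.sleep(CAPTURE_INTERVAL_S)

# run_lifelogger()  # runs until interrupted
```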

Also, for a use more like (1) in a fashion context, see Blogging in Motion. This project (for Yahoo! Hack Day) created an “auto-blogging purse” that captures photos (again using ZoneTag) whenever the wearer moves around (sensed using GPS).
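The trigger logic for such a purse might look something like the following sketch (my guess at the design, not the project’s actual code): take a photo whenever the current GPS fix is far enough from the fix at the last capture.

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two (lat, lon) fixes, in meters."""
    r = 6_371_000.0  # mean Earth radius
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

class MovementTrigger:
    """Fires when the wearer has moved past a threshold since the last shot."""

    def __init__(self, threshold_m: float = 50.0):  # threshold is a guess
        self.threshold_m = threshold_m
        self.last_fix = None  # (lat, lon) at the most recent capture

    def should_capture(self, lat: float, lon: float) -> bool:
        # Fire on the first fix, then whenever movement exceeds the threshold.
        if (self.last_fix is None
                or haversine_m(*self.last_fix, lat, lon) >= self.threshold_m):
            self.last_fix = (lat, lon)
            return True
        return False

# Feed it a stream of GPS fixes; capture on True.
trigger = MovementTrigger()
for lat, lon in [(37.7749, -122.4194), (37.7750, -122.4195), (37.7760, -122.4210)]:
    if trigger.should_capture(lat, lon):
        print(f"capture at {lat}, {lon}")
```

A time-based fallback could be layered on top, but the distance threshold alone already gives the “photos when you move around” behavior described.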
