Ready-to-hand

Dean Eckles on people, technology & inference


Academia vs. industry: Harvard CS vs. Google edition

Matt Welsh, a professor in the Harvard CS department, has decided to leave Harvard and stay on at Google, where he has been working during his post-tenure leave. Welsh is obviously leaving a sweet job. In fact, it was not long ago that he was writing about how difficult it is to get tenure at Harvard.

So why is he leaving? Well, CS folks doing research in large distributed systems are in a tricky place, since the really big systems are all in industry. And instead of legions of experienced engineers to help build and study these systems, in academia they have a bunch of lazy grad students! One might think, then, that this kind of (tenured) professor-to-industry move is limited to people creating and studying large deployments of computer systems.

There is a broader pull, I think. For researchers studying many central topics in the social sciences (e.g., social influence), there is a big draw to industry, since it is corporations that are collecting the broad and deep data sets describing human behavior. To some extent, this is also a case of industry being appealing to people studying large deployments of computer systems, but it applies even to those who don’t care much about the “computer” part. In a further parallel to the case of CS systems researchers, in industry they have talented database and machine learning experts ready to help, rather than social science grad students who are (like the faculty) too often afraid of math.

Economic imperialism and causal inference

And I, for one, welcome our new economist overlords…

Readers not in academic social science may take the title of this post as indicating I’m writing about the use of economic might to imperialist ends.1 Rather, economic imperialism is a practice of economists (and acolytes) in which they invade research territories that traditionally “belong” to other social scientific disciplines.2 See this comic for one way you can react to this.3

Economists bring their theoretical, statistical, and research-funding resources to bear on problems that might not be considered economics. For example, freakonomists like Levitt study sumo wrestlers and the effects of the legalization of abortion on crime. But, hey, if the Commerce Clause means that Congress can legislate everything, then, for the same reasons, economists can — no, must — study everything.

I am not an economist by training, but I have recently had reason to read quite a bit in econometrics. Overall, I’m impressed.4 Economists have recently taken causal inference — learning about cause and effect relationships, often from observational data — quite seriously. In the eyes of some, this has precipitated a “credibility revolution” in economics. Certainly, papers in economics and (especially) econometrics journals consider threats to the validity of causal inference at length.

On the other hand, causal inference in the rest of the social sciences is simultaneously over-inhibited and under-inhibited. As Judea Pearl observes in his book Causality, a lack of clarity about statistical models (which social scientists often don’t understand) and about causality has bred confusion about the distinction between statistical and causal issues (i.e., between estimation methods and identification).5

So, on the one hand, many psychologists stick to experiments. Randomized experiments are, generally, the gold standard for investigating cause–effect relationships, so this can and often does go well. However, social psychologists have recently been obsessed with using “mediation analysis” to investigate the mechanisms by which causes they can manipulate produce effects of interest. Investigators often manipulate some factors experimentally and then measure one or more variables they believe fully or partially mediate the effect of those factors on their outcome. Then, under the standard Baron & Kenny approach, psychologists fit a few regression models, including regressing the outcome on both the experimentally manipulated variables and the merely measured (mediating) variables. The assumptions required for this analysis to identify any effects of interest are rarely satisfied (e.g., that effects on individuals are homogeneous).6 So psychologists are often over-inhibited (experiments only, please!) and under-inhibited (mediation analysis).
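To make the procedure concrete, here is a minimal sketch of the standard Baron & Kenny regressions on simulated data; the variable names, effect sizes, and use of statsmodels are my own choices for illustration, not anything from a particular study.

```python
# A minimal sketch of the Baron & Kenny mediation regressions on simulated
# data; treatment is randomized, but the mediator is merely measured.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
treatment = rng.integers(0, 2, n)                  # experimentally manipulated
mediator = 0.5 * treatment + rng.normal(size=n)    # measured, not manipulated
outcome = 0.3 * treatment + 0.4 * mediator + rng.normal(size=n)

X_t = sm.add_constant(treatment)
X_tm = sm.add_constant(np.column_stack([treatment, mediator]))

total = sm.OLS(outcome, X_t).fit()      # outcome ~ treatment
a_path = sm.OLS(mediator, X_t).fit()    # mediator ~ treatment
direct = sm.OLS(outcome, X_tm).fit()    # outcome ~ treatment + mediator

print(total.params, a_path.params, direct.params)
```

Mediation is then inferred from the pattern of coefficients (e.g., the drop in the treatment coefficient once the mediator is included). But because the mediator is not randomized, reading the third regression causally requires strong assumptions, such as no unmeasured confounding of the mediator–outcome relationship and homogeneous effects across individuals.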

Likewise, in more observational studies (in psychology, sociology, education, etc.), investigators are sometimes wary of making explicit causal claims. So instead of carefully stating the causal assumptions that would justify different causal conclusions, readers are left with phrases like “suggests” and “is consistent with” followed by causal claims. Authors then recommend that further research be conducted to better support these causal conclusions. With these kinds of recommendations left waiting, no wonder economists find the territory ripe for the taking: they can just show up with their econometric tools and get to work on hard-won questions that “rightly belong to others”.

  1. Well, if economists have better funding sources, this might apply in some sense.
  2. For arguments in favor of economic imperialism, see Lazear, E. P. (1999). Economic imperialism. NBER Working Paper No. 7300.
  3. Or see this comic for imperialism by physicists.
  4. At least judging by the contemporary literature I’ve been reading: instrumental variables (IVs), encouragement designs, endogenous interactions, matching estimators. But it is true that in some of these areas econometrics has been able to fruitfully borrow from work on potential outcomes in statistics and epidemiology.
  5. Econometricians have made similar observations.
  6. For a bit on this topic, see the discussion and links to papers here.

Homophily and peer influence are messy business

Some social scientists have recently been getting themselves into trouble (and limelight) claiming that they have evidence of direct and indirect “contagion” (peer influence effects) in obesity, happiness, loneliness, etc. Statisticians and methodologists — and even science journalists — have pointed out their troubles. In observational data, peer influence effects are confounded with those of homophily and common external causes. That is, people are similar to other people in their social neighborhood because ties are more likely to form between similar people, and because many external events that could cause the outcome are localized in networks (e.g., a fast food restaurant opens down the street).
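As a toy illustration of this confounding (my own simulation, not data from any of these studies), consider a network where ties form only between people with similar values of a latent trait that also drives the outcome:

```python
# Homophily alone makes friends' outcomes correlated, with no peer influence.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
trait = rng.normal(size=n)   # latent trait driving both ties and the outcome

# Candidate pairs; keep only ties between similar people (homophily).
i, j = rng.integers(0, n, size=(2, 20000))
keep = np.abs(trait[i] - trait[j]) < 0.3
edges = np.column_stack([i[keep], j[keep]])

# Outcome depends on the trait only; nothing flows along the edges.
outcome = trait + rng.normal(scale=0.5, size=n)

# Yet connected people's outcomes are substantially correlated.
print(np.corrcoef(outcome[edges[:, 0]], outcome[edges[:, 1]])[0, 1])
```

An analyst who cannot observe the trait could easily mistake this correlation among friends for contagion.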

Econometricians1 have worked out the conditions necessary for peer influence effects to be identifiable.2 Very few studies have plausibly satisfied these requirements. But even if an investigator meets these requirements, it is worth remembering that homophily and peer influence are still tricky to think about, let alone produce credible quantitative estimates of.

As Andrew Gelman notes, homophily can depend on network structure and information cascades (a kind of peer influence effect) to enable the homophilous relationships to form. Likewise, the success or failure of influence in a relationship can affect that relationship. For example, once I convert you to my way of thinking about, say, climate change, we’ll be better friends. To me, it seems like some of the downstream consequences of our similarity should then be attributed to peer influence. If I get fat and then you do too, it could be peer influence in many ways: maybe I convinced you that owning a propane grill is more environmentally friendly, and then we both ended up grilling a lot more red meat. That sounds like peer influence to me, but it’s not that my getting fat caused you to.

Part of the problem here is looking only at peer influence effects in a single behavior or outcome at once. I look forward to the “clear thinking and adequate data” (Manski) that will allow us to better understand these processes in the future. Until then: scientists, please at least be modest in your claims and radical policy recommendations. This is messy business.

  1. They do statistics but speak a different language than big “S” statisticians — kind of like machine learning folks.
  2. For example, see Manski, C. F. (2000). Economic analysis of social interactions. Journal of Economic Perspectives, 14(3), 115–136. Economists call peer influence effects endogenous interactions and contextual interactions.

Aardvark’s use of Wizard of Oz prototyping to design their social interfaces

The Wall Street Journal’s Venture Capital Dispatch reports on how Aardvark, the social question asking and answering service recently acquired by Google, used a Wizard of Oz prototype to learn about how their service concept would work without building all the tech before knowing if it was any good.

Aardvark employees would get the questions from beta test users and route them to users who were online and would have the answer to the question. This was done to test out the concept before the company spent the time and money to build it, said Damon Horowitz, co-founder of Aardvark, who spoke at Startup Lessons Learned, a conference in San Francisco on Friday.

“If people like this in super crappy form, then this is worth building, because they’ll like it even more,” Horowitz said of their initial idea.

At the same time it was testing a “fake” product powered by humans, the company started building the automated product to replace humans. While it used humans “behind the curtain,” it gained the benefit of learning from all the questions, including how to route the questions and the entire process with users.

This is a really good idea, as I’ve argued before on this blog and in a chapter for developers of mobile health interventions. What better way to (a) learn about how people will use and experience your service and (b) get training data for your machine learning system than to have humans-in-the-loop run the service?
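Here is a toy sketch of that pattern (my illustration, not Aardvark’s actual system): a human operator routes each question, and every routing decision is logged as a labeled example for the eventual automated router.

```python
# Wizard-of-Oz routing: a human picks the answerer, and the system logs the
# decision as training data for a later machine-learned router.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class WizardOfOzRouter:
    training_log: List[Tuple[str, str]] = field(default_factory=list)

    def route(self, question: str) -> str:
        # A human operator "behind the curtain" chooses who should answer.
        answerer = input(f"Who should answer {question!r}? ")
        # The human judgment becomes a labeled (question, answerer) example.
        self.training_log.append((question, answerer))
        return answerer
```

The point is simply that the expensive part (the routing model) can be faked with people first, while the logs accumulate exactly the training data that the automated version will need.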

My friend Chris Streeter wondered whether this was all done by Aardvark employees or whether workers on Amazon Mechanical Turk may have also been involved, especially in identifying the expertise of the early users of the service so that the employees could route the questions to the right place. I think this highlights how different parts of a service can draw on human and non-human intelligence in a variety of ways — via a micro-labor market, using skilled employees who will gain hands-on experience with customers, etc.

I also wonder what UIs the humans-in-the-loop used to accomplish this. It’d be great to get a peek. I’d expect that these were certainly rough around the edges, as was the Aardvark customer-facing UI.

Aardvark does a good job of being a quite sociable agent (e.g., when using it via instant messaging) that also gets out of the way of the human–human interaction between question askers and answerers. I wonder how the language used by humans to coordinate and hand off questions may have played into creating a positive para-social interaction with vark.

“Discovering Supertaskers”: Challenges in identifying individual differences from behavior

Some new research from the University of Utah suggests that a small fraction of the population consists of “supertaskers” whose performance is not reduced by multitasking, such as when completing tasks on a mobile phone while driving.

“Supertaskers did a phenomenal job of performing several different tasks at once,” Watson says. “We’d all like to think we could do the same, but the odds are overwhelmingly against it.” (Wired News & Science News)

The researchers, Watson and Strayer, argue that they have good evidence for the existence of this individual variation. One can find many media reports of this “discovery” of “supertaskers” (e.g., Psychology Today). I do not think this conclusion is well justified.

First, let’s consider the methods used in this research. 100 college students each completed driving tasks and an auditory task on a mobile phone — separately and in combination — over a single 1.5 hour session. The auditory task is designed to measure differences in executive attention by requiring participants to hold past items in memory while completing math tasks. The researchers identified “supertaskers” as those participants who met the following “stringent” requirements: they were both (a) in the top 25% of participants in single-task performance and (b) not different in their dual-task performance on at least three of the four measures by more than the standard error. Since two of the four measures are associated with each of the two tasks (driving: brake reaction time, following distance; mobile phone task: memory performance, math performance), this requires that “supertaskers” do as well on both measures of either the driving or the mobile phone task and on one measure of the other task.
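Here is a rough sketch of how I read that classification rule; the data frame, column names, standard errors, and the direction of “better” for each measure are my assumptions, not the authors’ code.

```python
# Hypothetical reconstruction of the "supertasker" classification rule.
import pandas as pd

# Which direction counts as better performance is an assumption here.
better_is_lower = {"brake_rt": True, "follow_dist": True,
                   "memory": False, "math": False}

def flag_supertaskers(df: pd.DataFrame, se: dict) -> pd.Series:
    """df has columns '<measure>_single' and '<measure>_dual';
    se maps each measure name to its standard error."""
    in_top_quartile = pd.Series(True, index=df.index)
    for m, lower in better_is_lower.items():
        s = df[f"{m}_single"]
        # (a) top 25% of single-task performance
        in_top_quartile &= (s <= s.quantile(0.25)) if lower else (s >= s.quantile(0.75))
    # (b) dual-task performance within one standard error of single-task
    #     performance on at least three of the four measures
    within_se = sum((df[f"{m}_dual"] - df[f"{m}_single"]).abs() <= se[m]
                    for m in better_is_lower)
    return in_top_quartile & (within_se >= 3)
```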

There may be many issues with the validity of the inference in this work. I want to focus on one in particular: the inference from the observation of differences between participants’ performance in a single 1.5 hour session to the conclusion that there are stable, “trait” differences among participants, such that some are “supertaskers”. This conclusion is simply not justified. To illustrate this, let’s consider how the methods of this study differ from those usually (and reasonably) used by psychologists to reach such conclusions.

Psychologists often study individual differences using the following approach. First, identify some plausible trait of individuals. Second, construct a questionnaire or other (perhaps behavioral) test that measures that trait. Third, demonstrate that this test has high reliability — that is, that the differences between people are much larger than the differences between the same person taking the test at different times. Fourth, then use this test to measure the trait and see if it predicts differences in some experiment. A key point here is that in order to conclude that the test measures a stable individual difference (i.e., a trait) researchers need to establish high test-retest reliability; otherwise, the test might just be measuring differences in temporary mood.
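As a minimal illustration of what test-retest reliability buys you (simulated data, not from this study), the correlation between two sessions is high only when stable trait variance dominates occasion-to-occasion noise:

```python
# Test-retest reliability as the correlation between two measurement occasions.
import numpy as np

rng = np.random.default_rng(2)
n = 200
trait = rng.normal(size=n)                         # stable individual difference
session1 = trait + rng.normal(scale=0.5, size=n)   # trait + occasion noise
session2 = trait + rng.normal(scale=0.5, size=n)

print(np.corrcoef(session1, session2)[0, 1])  # high only if trait dominates noise
```

Without something like this, a single session cannot distinguish a trait from a good day.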

Returning to Watson and Strayer’s research, it is easy to see the problem: we have no idea whether the variation observed should be attributed to stable individual differences (i.e., being a “supertasker”) or to unstable differences. That is, if we brought those same “supertasker” participants back into the lab for another session, would they still exhibit the same lack of performance difference between the single- and dual-task conditions? This research gives us no reason to expect that they would.

Watson and Strayer do some additional analysis with the aim of ruling out their observations being a fluke. One might think this addresses my criticism, but it does not. They

performed a Monte Carlo simulation in which randomly selected single-dual task pairs of variables from the existing data set were obtained for each of the 4 dependent measures and then subjected to the same algorithm that was used to classify the supertaskers.

That is, they broke apart the single-task and dual-task data for each participant and created new simulated participants by randomly sampling pairs of single- and dual-task data. They found that this analysis would yield only 1/15th as many “supertaskers” as observed. This is a good analysis to do. However, it just demonstrates that being labeled a “supertasker” is likely caused by the single- and dual-task data being generated by the same person in the same session. This still leaves it quite open (and more plausible to me) that participants were in varying states for the session and that this explains their (temporary) “supertasking”. It also allows that this greater frequency of “supertaskers” is due to participants who do well in whatever task they are given first being more likely to do well in subsequent tasks.
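Here is my reconstruction of that kind of shuffling check (reusing the hypothetical flag_supertaskers sketch from above, not the authors’ code): break the within-person pairing of single- and dual-task scores and see how often the classifier fires by chance.

```python
# Monte Carlo check: re-pair dual-task scores across participants, per measure,
# and count how many "supertaskers" the same rule produces by chance.
import numpy as np

def monte_carlo_supertasker_rate(df, se, n_sims=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    rates = []
    for _ in range(n_sims):
        shuffled = df.copy()
        for m in better_is_lower:
            shuffled[f"{m}_dual"] = rng.permutation(df[f"{m}_dual"].to_numpy())
        rates.append(flag_supertaskers(shuffled, se).mean())
    return float(np.mean(rates))
```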

My aim in this post is to suggest some challenges that this kind of approach has to face. Part of my interest in this is that I’m quite sympathetic to identifying stable, observed differences in behavior and then “working backwards” to characterizing the traits that explain these downstream differences. This is exactly the approach that Maurits Kaptein and I are taking in our work on persuasion profiling: we observe how individuals respond to the use of different influence strategies and use this to (a) construct a “persuasion profile” for that individual and (b) characterize how much variation in the effects of these strategies there is in the population.

However, a critical step in this process is ruling out the alternative explanation that the observed differences are primarily due to differences in, e.g., mood, rather than stable individual differences. One way to do this is to observe the behavior in multiple sessions and multiple contexts. Another way to rule out this alternative explanation is if you observe a complex pattern of behavioral differences that previous work suggests could not be the result of temporary, unstable differences — or at least is more easily explained by previous theories about the relevant traits. That is, I’m enthusiastic about identifying stable, observed differences in behavior, but I don’t want to see researchers abandon the careful methods that have been used in the past to make the case for a new individual difference.

Watson, Strayer, and colleagues have apparently begun doing work that could be used to show the stability of the observed differences. The discussion section of their paper refers to some additional unpublished research in which they invited their “supertaskers” from this study and another study back into the lab and had them do some similar tasks measuring executive attention (but not driving) while in an fMRI machine. They report greater “coherence” between performance in this second study and the previous one for “supertaskers” than for control participants, and better performance for “supertaskers” on dual N-back tasks. But this falls short of showing high test-retest reliability.

Since little is said about this work, I hesitate to conclude anything from it or criticize it. I’ve contacted the authors with the hope of learning more. My current sense is that Watson and Strayer’s entire case for “supertaskers” hinges on research of this kind.

References

Watson, J. M., & Strayer, D. L. (2010). Supertaskers: Profiles in extraordinary multi-tasking ability. Psychonomic Bulletin & Review. Forthcoming. Retrieved from http://www.psych.utah.edu/lab/appliedcognition/publications/supertaskers.pdf
