Ready-to-hand

Dean Eckles on people, technology & inference


Political arithmetic: The Joy of Stats

The Joy of Stats with Hans Rosling is quite engaging — and worth watching. I really enjoyed the historical threads running through the piece. I think he’s right to emphasize how data collection by states — to understand and control their populations — is at the origin of statistics. With increasing data collection today, this is a powerful and necessary reminder of the range of ends to which data analysis can be put.

Like others, I found the scenes with Rosling standing behind a bubble plot marred by the distracting lights and windows in the background. And the ending, with its analysis of “what it means to be human,” was a bit much for me. But these are small complaints about a compelling program.

Ideas behind their time: formal causal inference?

Alex Tabarrok at Marginal Revolution blogs about how some ideas seem notably behind their time:

We are all familiar with ideas said to be ahead of their time, Babbage’s analytical engine and da Vinci’s helicopter are classic examples. We are also familiar with ideas “of their time,” ideas that were “in the air” and thus were often simultaneously discovered such as the telephone, calculus, evolution, and color photography. What is less commented on is the third possibility, ideas that could have been discovered much earlier but which were not, ideas behind their time.

In comparing ideas behind and ahead of their times, it’s worth considering the processes that identify them as such.

In the case of ideas ahead of their time, we rely on records and other evidence of their genesis (e.g., accounts of the use of flamethrowers at sea by the Byzantines). Later users and re-discoverers of these ideas are then in a position to marvel at their early genesis. In assessing whether some idea qualifies as ahead of its time, this early genesis, followed by a lack of use or underuse, followed in turn by extensive use and development, together serve as evidence of “ahead of its time” status.

On the other hand, identifying ideas behind their time seems to require a different sort of evidence. Tabarrok uses the standard of whether their fruits could have been produced long before (“A lot of the papers in say experimental social psychology published today could have been written a thousand years ago so psychology is behind its time”). We need evidence that people in a previous time had all the intellectual resources to generate the idea and see its use. Perhaps this makes identifying ideas behind their time harder, or more contentious.

Y(X = x) and P(Y | do(x))

Perhaps formal causal inference — and some kind of corresponding new notation, such as Pearl’s do(x) operator or potential outcomes — is an idea behind its time.1 Judea Pearl’s account of the history of structural equation modeling seems to suggest just this: exactly what the early developers of path models (Wright, Haavelmo, Simon) needed was new notation that would have allowed them to distinguish what they were doing (making causal claims with their models) from what others were already doing (making statistical claims).2

In fact, in his recent talk at Stanford, Pearl suggested just this: that if, say, the equality operator = had been replaced with some kind of assignment operator (say, :=), formal causal inference might have developed much earlier. We might be a lot further along in social science and applied evaluation of interventions if this had happened.
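
To make the notational point concrete, here is a minimal sketch in Python (my own toy model, not Pearl's example; the coefficients and the confounder U are invented for illustration). Each structural equation is an assignment, and intervening means replacing the assignment to x rather than filtering on its observed value:

```python
import random

# A toy structural causal model, written so each line reads as an
# assignment (:=) rather than an algebraic equality. U is an
# unobserved common cause of X and Y; all coefficients are invented.
def sample(do_x=None):
    u = random.gauss(0, 1)                                     # U := noise
    x = do_x if do_x is not None else u + random.gauss(0, 1)   # X := U + noise, unless we intervene
    y = 2 * x + 3 * u + random.gauss(0, 1)                     # Y := 2X + 3U + noise
    return x, y

n = 200_000
observational = [sample() for _ in range(n)]
interventional = [sample(do_x=1.0) for _ in range(n)]

# Conditioning: E[Y | X near 1] mixes the causal effect with confounding via U.
near_one = [y for x, y in observational if 0.9 < x < 1.1]
print("E[Y | X = 1]     =", round(sum(near_one) / len(near_one), 2))          # about 3.5
# Intervening: E[Y | do(X = 1)] reflects only the mechanism Y := 2X + 3U.
print("E[Y | do(X = 1)] =", round(sum(y for _, y in interventional) / n, 2))  # about 2.0
```

The assignments are the point: intervening replaces the line that sets x, while conditioning merely filters samples, which is why the two answers differ under confounding.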

This example raises some questions about the criterion for ideas behind their time that “people in a previous time had all the intellectual resources to generate and see the use of the idea” (above). Pearl is a computer scientist by training and credits this background with his approach to causality as a problem of getting the formal language right — or moving between multiple formal languages. So we may owe this recent development to comfort with creating and evaluating the qualities of formal languages for practical purposes — a comfort found among computer scientists. Of course, e.g., philosophers and logicians also have been long comfortable with generating new formalisms. I think of Frege here.

So I’m not sure whether formal causal inference is an idea behind its time (or, if so, how far behind). But I’m glad we have it now.

  1. There is a “lively” debate about the relative value of these formalisms. For many of the dense causal models applicable to the social sciences (everything is potentially a confounder), potential outcomes seem like a good fit. But they can become awkward as the causal models get complex, with many exclusion restrictions (i.e., missing edges).
  2. See chapter 5 of Pearl, J. (2009). Causality: Models, Reasoning and Inference. 2nd ed. Cambridge University Press.

Economic imperialism and causal inference

And I, for one, welcome our new economist overlords…

Readers not in academic social science may take the title of this post as indicating I’m writing about the use of economic might to imperialist ends.1 Rather, economic imperialism is a practice of economists (and acolytes) in which they invade research territories that traditionally “belong” to other social scientific disciplines.2 See this comic for one way you can react to this.3

Economists bring their theoretical, statistical, and research-funding resources to bear on problems that might not be considered economics. For example, freakonomists like Levitt study sumo wrestlers and the effects of the legalization of abortion on crime. But, hey, if the Commerce Clause means that Congress can legislate everything, then, for the same reasons, economists can — no, must — study everything.

I am not an economist by training, but I have recently had reason to read quite a bit in econometrics. Overall, I’m impressed.4 Economists have recently taken causal inference — learning about cause and effect relationships, often from observational data — quite seriously. In the eyes of some, this has precipitated a “credibility revolution” in economics. Certainly, papers in economics and (especially) econometrics journals consider threats to the validity of causal inference at length.

On the other hand, causal inference in the rest of the social sciences is simultaneously over-inhibited and under-inhibited. As Judea Pearl observes in his book Causality, lack of clarity about statistical models (which social scientists often don’t fully understand) and about causality has bred confusion between statistical and causal issues, that is, between estimation methods and identification.5

So, on the one hand, many psychologists stick to experiments. Randomized experiments are, generally, the gold standard for investigating cause–effect relationships, so this can and often does go well. However, social psychologists have recently been obsessed with using “mediation analysis” to investigate the mechanisms by which causes they can manipulate produce effects of interest. Investigators manipulate some factors experimentally and then measure one or more variables they believe fully or partially mediate the effect of those factors on their outcome. Then, under the standard Baron & Kenny approach, psychologists fit a few regression models, including regressing the outcome on both the experimentally manipulated variables and the merely measured (mediating) variables. The assumptions required for this analysis to identify the effects of interest are rarely satisfied (e.g., that effects on individuals are homogeneous).6 So psychologists are often over-inhibited (experiments only, please!) and under-inhibited (mediation analysis).
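
To see how this goes wrong, here is a minimal simulation sketch (my own invented data-generating process, not any published study): treatment is perfectly randomized, its true direct effect on the outcome is zero, and an unobserved trait confounds the mediator and the outcome. The Baron & Kenny-style regression still misattributes the effects:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented data-generating process: treatment T is randomized, but an
# unobserved trait U affects both the measured mediator M and the
# outcome Y. The true direct effect of T on Y is exactly zero.
t = rng.binomial(1, 0.5, n)            # randomized treatment
u = rng.normal(size=n)                 # unobserved confounder of M and Y
m = t + u + rng.normal(size=n)         # mediator (true effect of T on M: 1)
y = m + u + rng.normal(size=n)         # outcome (true direct effect of T: 0)

def ols(y, *regressors):
    """Least-squares coefficients, with an intercept in the first column."""
    X = np.column_stack((np.ones(len(y)),) + regressors)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# The Baron & Kenny step: regress Y on T and M together. Because U is
# unobserved, M is endogenous and both coefficients come out biased.
_, b_t, b_m = ols(y, t, m)
print(f"estimated direct effect of T (truth 0): {b_t:+.2f}")  # about -0.50
print(f"estimated effect of M (truth 1):        {b_m:+.2f}")  # about +1.50
```

Nothing here is specific to this parameterization; any unobserved confounder of the mediator and the outcome produces this kind of bias, no matter how large the sample.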

Likewise, in more observational studies (in psychology, sociology, education, etc.), investigators are sometimes wary of making explicit causal claims. So instead of carefully stating the causal assumptions that would justify causal conclusions, they leave readers with phrases like “suggests” and “is consistent with” followed by causal claims. Authors then recommend that further research be conducted to better support these causal conclusions. With these kinds of recommendations awaiting, no wonder economists find the territory ripe for the taking: they can just show up with econometric tools and get to work on hard-won questions that “rightly belong to others”.

  1. Well, if economists have better funding sources, this might apply in some sense.
  2. For arguments in favor of economic imperialism, see Lazear, E.P. (1999). Economic imperialism. NBER Working Paper No. 7300.
  3. Or see this comic for imperialism by physicists.
  4. At least judging by the contemporary literature in the areas I’ve been reading on: IVs, encouragement designs, endogenous interactions, matching estimators. But it is true that in some of these areas econometrics has been able to fruitfully borrow from work on potential outcomes in statistics and epidemiology.
  5. Econometricians have made similar observations.
  6. For a bit on this topic, see the discussion and links to papers here.

Homophily and peer influence are messy business

Some social scientists have recently been getting themselves into trouble (and limelight) claiming that they have evidence of direct and indirect “contagion” (peer influence effects) in obesity, happiness, loneliness, etc. Statisticians and methodologists — and even science journalists — have pointed out their troubles. In observational data, peer influence effects are confounded with those of homophily and common external causes. That is, people are similar to other people in their social neighborhood because ties are more likely to form between similar people, and many external events that could cause the outcome are localized in networks (e.g., a fast food restaurant opens down the street).
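
To make the confounding concrete, here is a toy simulation (my own invented setup, not drawn from any of the contested studies): a latent trait drives both tie formation and the outcome, and there is no peer influence at all, yet friends’ outcomes still correlate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Invented setup: a latent trait (say, health consciousness) drives both
# tie formation and the outcome. There is no peer influence whatsoever.
trait = rng.normal(size=n)

# Homophily: each person befriends the most similar of five random
# candidates (self-pairings are possible but vanishingly rare here).
candidates = rng.integers(0, n, size=(n, 5))
closest = np.abs(trait[candidates] - trait[:, None]).argmin(axis=1)
friend = candidates[np.arange(n), closest]

# The outcome depends only on one's own trait plus noise.
outcome = trait + rng.normal(size=n)

# Friends' outcomes are correlated despite zero contagion.
print("corr(own outcome, friend's outcome) =",
      round(np.corrcoef(outcome, outcome[friend])[0, 1], 2))  # clearly positive
```

Re-run it with friends chosen uniformly at random (e.g., friend = rng.integers(0, n, size=n)) and the printed correlation drops to about zero; the correlation above required no influence at all.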

Econometricians1 have worked out the conditions necessary for peer influence effects to be identifiable.2 Very few studies have plausibly satisfied these requirements. But even if an investigator meets them, it is worth remembering that homophily and peer influence are still tricky to think about, let alone to produce credible quantitative estimates of.

As Andrew Gelman notes, homophily can depend on network structure and information cascades (a kind of peer influence effect) for the homophilous relationships to form. Likewise, the success or failure of influence in a relationship can affect that relationship. For example, once I convert you to my way of thinking about, say, climate change, we’ll be better friends. To me, it seems like some of the downstream consequences of our similarity should be attributed to peer influence. If I get fat and then you do, it could be peer influence in many ways: maybe I convinced you that owning a propane grill is more environmentally friendly, and then we both ended up grilling a lot more red meat. That sounds like peer influence to me, but it is not my getting fat that caused you to.

Part of the problem here is looking only at peer influence effects in a single behavior or outcome at once. I look forward to the “clear thinking and adequate data” (Manski) that will allow us to better understand these processes in the future. Until then: scientists, please at least be modest in your claims and radical policy recommendations. This is messy business.

  1. They do statistics but speak a different language than big “S” statisticians, kind of like machine learning folks.
  2. For example, see Manski, C. F. (2000). Economic analysis of social interactions. Journal of Economic Perspectives, 14(3):115–136. Economists call peer influence effects endogenous interactions and contextual interactions.