Ready-to-hand

Dean Eckles on people, technology & inference

Applying social psychology

Some reflections on how “quantitative” social psychology is and how this matters for its application to design and decision-making — especially in industries touched by the Internet.

In many ways, contemporary social psychology is dogmatically quantitative. Investigators run experiments, measure quantitative outcomes (even coding free responses to make them amenable to analysis), and use statistics to characterize the collected data. On the other hand, social psychology’s processes of stating and integrating its conclusions remain largely qualitative. Many hypotheses in social psychology state only that some factor affects a process or outcome in one direction (i.e., “call” either beta > 0 or beta < 0). Reviews of research in social psychology often start with a simple effect and then note how many other variables moderate this effect. This all fits with the dominance of null-hypothesis significance testing (NHST) in much of psychology: rather than producing point estimates or confidence intervals for causal effects, it is enough to see how likely the observed data are given that there is no effect.1 Of course, there have been many efforts to change this. Many journals require reporting effect sizes. This is a good thing, but these effect sizes are rarely predicted by social-psychological theory. Rather, they are reported to aid judgments of whether a finding is not only statistically significant but substantively or practically significant; the theory predicts only the direction of the effect.

Not only is this process of reporting and combining results largely qualitative, but it requires substantial inference from the particular settings of the conducted experiments to the setting at hand. This actually helps to make sense of the practices described above: many social psychology experiments are conducted in conditions and with populations so different from those in which people would like to apply the resulting theories that expecting consistency of effect sizes is implausible.2 This is not to say that these studies cannot tell us a good deal about how people will behave in many circumstances. It’s just that figuring out what they predict, and whether those predictions are reliable, is a very messy, qualitative process. Thus, when it comes to making decisions about a policy, intervention, or service based on social-psychological research, the process is largely qualitative. Decision-makers can ask: which effects are in play? What is their direction? And, with interventions and measurement that very likely differ from the present case, how large were the effects?3
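
To make the contrast concrete, here is a minimal sketch in Python with simulated data; the “true” effect of 0.3, the sample size, and the noise level are all invented for illustration. It shows how the same experiment can be summarized NHST-style, as a directional call plus a p-value, or estimation-style, as a point estimate with a confidence interval.

```python
# Minimal sketch: NHST-style vs. estimation-style summaries of one experiment.
# All numbers (true effect, n, noise) are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
control = rng.normal(loc=0.0, scale=1.0, size=n)
treatment = rng.normal(loc=0.3, scale=1.0, size=n)  # true effect = 0.3

# NHST-style summary: a directional "call" plus a p-value against beta = 0.
t_stat, p_value = stats.ttest_ind(treatment, control)
direction = "beta > 0" if t_stat > 0 else "beta < 0"
print(f"NHST: t = {t_stat:.2f}, p = {p_value:.3f} (call: {direction})")

# Estimation-style summary: how large is the effect, with what uncertainty?
diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / n + control.var(ddof=1) / n)
print(f"estimate = {diff:.2f}, 95% CI = [{diff - 1.96*se:.2f}, {diff + 1.96*se:.2f}]")
```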

Sometimes this is the best that social science can provide. And such answers can be quite useful in design. The results of psychology experiments can often be very effective when used generatively. For example, designers can use taxonomies of persuasive strategies to dream up some ways of producing desired behavior change.

Nonetheless, I think all this can be contrasted with some alternative practices that are both more quantitative and require less of this uneasy generalization. First, social scientists can give much more attention to point estimates of parameters. While not without its (other) flaws, the economics literature on financial returns to education has aimed to provide, criticize, and refine estimates of just how much wages increase (on average) with more education.4
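
As a toy illustration of what attention to point estimates looks like, here is a sketch of a wage regression on simulated data. The 8% per-year “return” to schooling is invented, not an estimate from that literature; the point is that the output is a magnitude, not just a sign.

```python
# Sketch of estimating a magnitude, not just a sign: a toy wage regression.
# The 8% per-year "return" to schooling is invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
schooling = rng.integers(8, 21, size=n).astype(float)  # years of education
log_wage = 1.5 + 0.08 * schooling + rng.normal(0.0, 0.5, size=n)

# OLS of log(wage) on schooling: the slope estimates the average
# proportional wage increase per additional year of education.
X = np.column_stack([np.ones(n), schooling])
coef, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
print(f"estimated return per year of schooling: {coef[1]:.3f}")
```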

Second, researchers can avoid the messiest kinds of generalization altogether. Within the Internet industry, product optimization experiments are ubiquitous. Google, Yahoo, Facebook, Microsoft, and many others are running hundreds to thousands of simultaneous experiments with parts of their services. This greatly simplifies generalization: the exact intervention under consideration has just been tried with a random sample from the very population it will be applied to. If someone wants to tweak the intervention, they can simply test the tweaked version before launching. This process still involves human judgment about how to react to these results.5 An even more extreme alternative is when machine learning is used to fine-tune, e.g., recommendations without direct involvement (or understanding) by humans.
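
For concreteness, here is a minimal sketch of the readout from such a “bake-off” experiment, with made-up visitor and conversion counts: a variant is compared against control on a random sample from the population of interest, and the lift is reported with a confidence interval.

```python
# Minimal sketch of a product "bake-off" readout: control (A) vs. variant (B).
# The visitor and conversion counts below are made up for illustration.
import numpy as np

visitors_a, conversions_a = 50_000, 2_450   # control
visitors_b, conversions_b = 50_000, 2_610   # tweaked variant

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b
lift = p_b - p_a

# Normal-approximation 95% CI for the difference in conversion rates.
se = np.sqrt(p_a * (1 - p_a) / visitors_a + p_b * (1 - p_b) / visitors_b)
print(f"lift = {lift:.4f}, 95% CI = [{lift - 1.96*se:.4f}, {lift + 1.96*se:.4f}]")
```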

So am I saying that social psychology — at least as an enterprise that is useful to designers and decision-makers — is going to be replaced by simple “bake-off” experiments and machine learning? Not quite. Unlike product managers at Google, many decision-makers don’t have the ability to cheaply test a proposed intervention on their population of interest.6 Even at Google, there are often too many candidate changes (or new products) to build them all: one has to decide among an overabundance of options before the most directly applicable data can be available. This is consistent with my note above that social-psychological findings can make excellent inspiration during idea generation and early evaluation.

  1. To parrot Andrew Gelman: in social phenomena, everything affects everything else. There are no betas that are exactly zero.
  2. It’s also often implausible that even the direction of the effect is preserved.
  3. Major figures in social psychology, such as Lee Ross, have worked on trying to better anticipate the effects of social interventions from theory. It isn’t easy.
  4. The diversity of the manipulations used by social psychologists ostensibly studying the same thing can make this more difficult.
  5. Generalization is not avoided entirely. In particular, decision-makers often have to consider what would happen if an intervention tested with 1% of the population is launched for the whole population. There are all kinds of issues relating to peer influence, network effects, congestion, etc., here that don’t allow for simple extrapolation from the treatment effects identified by the experiment. Nonetheless, these challenges obviously apply to most research that aims to predict the effects of causes.
  6. However, Internet services play a more and more central role in many parts of our lives, so this approach needn’t be limited to the Internet industry itself.

6 thoughts on “Applying social psychology”

  1. I couldn’t agree more with you, Dean. For a long time I’ve thought that the excessive faith in the generalizability of randomized laboratory experiments is one of social psychology’s greatest weaknesses. In general, I wish there were more formal theory that ties well-known effects together in psychology. Maybe a few economists need to jump ship to get a movement started.

  2. Glad to hear it. Not only is there the problem of WEIRD subjects (Western, Educated, Industrialized, Rich, and Democratic — also often young), but there is an overabundance of relatively informal and underdeveloped theory. Perhaps there is too much distracting low-hanging fruit.

    I actually have some additional doubts about whether the level of analysis assumed by social psychology will end up being fruitful. So maybe formalizing would only more quickly illustrate that it’s cognitive neuroscience or econ/sociology or bust.

  3. I’m not a statistician, and don’t even play one on TV, but am increasingly interested in the use and misuse of statistics, especially in science … and apparently I’m not alone.

    The Wall Street Journal ran a recent article by The Numbers Guy, Carl Bialik, “A statistical test, significance, gets its closeup,” which included a survey of some significant perspectives on statistical significance.

    And, while slightly off-topic, I’m reminded of an inspiring quote by Bill Lipscomb that I heard in an NPR remembrance last weekend, “Nobel Prize-Winning Chemist Dies At 91” (though I greatly prefer what I infer was the original title, based on the URL: “Chemist’s death overshadowed by eccentric life”):

    “It’s not a disgrace in science to publish something that’s wrong. What is bad is to publish something that’s not very interesting.”

  4. Here I’m just interested in how NHST keeps the focus on theories that say beta > 0 or beta < 0, rather than something more. Theories that only make this binary call aren’t very bold — and can’t reasonably claim to be explaining the phenomena.

    But I do think there are also a bunch of more statistical problems that result from the current practice of NHST. I like Andrew Gelman on this (on his blog and in this paper on small effects). He’s speaking about this at Stanford next week. Another good place to look is work by Ioannidis and colleagues, such as his paper “Why Most Published Research Findings Are False”.

  5. Generalization to the entire population may be greatly simplified for web-based companies, but translation of experimental results to different implementations and contexts remains difficult. Basing a new feature on rock-solid social-psychological findings is no guarantee of success, simply because those findings might not extend to your particular case, or your implementation may be flawed.

    In addition, deciding among an overabundance of options is critical not only when cheap testing of proposed interventions is impossible, but also when cheap testing is possible and there are simply too many options to try out. When the time to build alternatives is much shorter than the time to test them, idea generation and deciding where to start are at least as important as when building is prohibitively slow.

    Throwing stuff at the wall to see what sticks is generally considered a not-so-effective strategy. Having solid social-psychological principles to build upon not only helps to set some direction, but also helps to relate results from different experiments and to generate possible follow-ups.

    So I think we agree, although I would broaden your stated scope. Social psychology, whether quantitative or qualitative, is indeed important for design and decision-making, not just in industries touched by the Internet but in all industries that interact with humans. 🙂

  6. Part of what I’m commenting on here is the absence of “rock solid social-psychological findings” that support sufficiently detailed predictions about the effects of potential interventions. I think much of this is a consequence of the inherent complexity of social phenomena. But it is also partially because so much of social psychology only worries about beta > 0 vs. beta < 0 and mainly works with estimates of beta from unrepresentative subjects and stimuli. Put another way: there is an overabundance of vague, qualitative theories and principles, and not enough credible quantitative models. And agreed that we agree on much of this. See footnote 5 for an example of how much of this is hard even in the Internet industry.
