Reprioritizing human intelligence tasks for low latency and high throughput on Mechanical Turk
Amazon Mechanical Turk is a platform and market for human intelligence tasks (HITs), which are submitted by requesters and completed by workers (or “turkers”). Each HIT is associated with a payment, often a few cents. This post covers some basics of Mechanical Turk and shows that its lack of designed-in support for dynamic reprioritization is problematic for some uses. I also mention some other factors that influence latency and throughput.
With mTurk one can create a HIT that asks someone to rate some search results for a query, evaluate the credibility of a Wikipedia article, draw a sheep facing left, enter names for a provided color, annotate a photo of a person with pose information, or create a storyboard illustrating a new product idea. So Mechanical Turk can be used in many ways: for basic research, for building a training set for machine learning, or for actually powering a (perhaps prototype) service through a kind of Wizard-of-Oz approach. Additionally, I’ve used mTurk to code images captured by participants in a lab experiment (more on this in another post or article).
When creating HITs, a requester can specify a QuestionForm (QF) (e.g., via the command line tools or an SDK) that Amazon then presents to the worker. A QF can include images, free-text answers, multiple choice questions, etc., and one can also embed Flash or Java objects in it. The easiest way of creating HITs is to use a QF without a Java or Flash application of one’s own, especially for HITs that the basic question form handles well. The other option is to create an ExternalQuestion (EQ), which is hosted on one’s own server and displayed in an iFrame. This provides greater freedom, but it requires additional development, and you must host the page yourself (though you can do so through Amazon’s S3). QF HITs (without embeds) also offer a familiar interface to workers (though it is possible to create a more efficient, custom interface by, e.g., making all the targets larger). So when possible, it is often preferable to use a QF rather than an EQ.
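To make the two options concrete, here is a minimal sketch of creating one HIT of each kind with the boto SDK’s mturk module. This is an illustration under assumptions, not a recipe: the keys, URLs, question text, reward, and durations are all placeholders, and the sandbox host is used so nothing real is spent.

```python
from datetime import timedelta

from boto.mturk.connection import MTurkConnection
from boto.mturk.question import (AnswerSpecification, ExternalQuestion,
                                 FreeTextAnswer, Question, QuestionContent,
                                 QuestionForm)

# Sandbox host for experimenting; swap in the production host when ready.
mtc = MTurkConnection(aws_access_key_id='YOUR_KEY',
                      aws_secret_access_key='YOUR_SECRET',
                      host='mechanicalturk.sandbox.amazonaws.com')

# A QF HIT: the full content is fixed here, at creation time.
content = QuestionContent()
content.append_field('Title', 'Name this color')
content.append_field('Text', 'Type a name for the color #336699.')
question = Question(identifier='color-name',
                    content=content,
                    answer_spec=AnswerSpecification(FreeTextAnswer()))
form = QuestionForm()
form.append(question)
mtc.create_hit(question=form,
               title='Name a color',
               description='Type a short name for a color swatch.',
               keywords='color, naming',
               reward=0.02,
               duration=timedelta(minutes=10))

# An EQ HIT: Amazon stores only a URL; the page served from that URL
# (shown in an iFrame) decides what the worker actually sees.
eq = ExternalQuestion(external_url='https://example.com/mturk/task',
                      frame_height=600)
mtc.create_hit(question=eq,
               title='Rate search results',
               description='Rate the relevance of search results.',
               keywords='search, relevance',
               reward=0.02,
               duration=timedelta(minutes=10))
```

Note the asymmetry: the QF HIT’s content is frozen when create_hit is called, while the EQ HIT defers content to whatever the page at the external URL serves when the worker arrives. That difference is what matters for reprioritization below.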
For some uses of mTurk to power a service, it can be important to minimize latency for specific HITs[1], including prioritizing particular new HITs over previously created HITs. For example, if a HIT has not been completed within some period after its creation, completing it may still be important, but when it is completed may matter much less. This happens easily when the value of completing a HIT drops off sharply after some point in time.
This should be done while maintaining high throughput; that is, you don’t want to reduce the rate at which your HITs are completed. When there are more HITs of the same type, workers can check a box to immediately start the next HIT of that type when they submit the current one (see screenshot). Workers will often complete many HITs of the same type in a row. So throughput can drop substantially if workers run out of HITs of your type at any point: they may switch to another HIT type, and even if they come back when your HITs appear again, there will be a delay. As we’ll see, these two requirements don’t seem to be well met by the platform, or at least by certain uses of it.
Mechanical Turk does not provide a mechanism for prioritizing HITs of the same type, so short of deleting all but the high-priority HITs of that type, there is no way to ensure that a particular HIT gets done before the rest. And deleting the other HITs would hurt throughput and increase latency for any new high-priority HITs added in the near future (since workers won’t simply start on these as they finish their previous HITs).
EQ HITs allow one to avoid this problem. Unlike with QF HITs (without Flash or Java embeds), one does not have to specify the full content of the HIT in advance. When a worker accepts an EQ HIT, you can dynamically serve whatever task you want, depending on changing priorities. But this means you can’t take advantage of, e.g., the simplicity of creating QF HITs and managing the data they produce. So though there are ways of coping, adding dynamic reprioritization to Mechanical Turk would be a boon for time-sensitive uses.
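Concretely, the requester’s EQ server can keep its own priority queue and decide what to serve only at the moment a worker’s iFrame loads. Below is a minimal, stdlib-only sketch of such a queue with a reprioritization operation; the task structure, function names, and escalation rule are all made up for illustration (mTurk itself never sees any of this).

```python
import heapq
import itertools
import time

# Requester-side priority queue of pending tasks. The EQ page handler
# calls pop_task() when a worker loads the iFrame, so the most urgent
# task at that moment is the one served, regardless of creation order.

_counter = itertools.count()   # tie-breaker so heapq never compares dicts
_queue = []                    # entries: (priority, seq, task)

def add_task(task, priority=0):
    """Queue a task; lower priority numbers are served first."""
    heapq.heappush(_queue, (priority, next(_counter), task))

def reprioritize(predicate, new_priority):
    """Rebuild the heap, moving matching tasks to a new priority,
    e.g., escalating tasks that have gone stale."""
    global _queue
    _queue = [(new_priority if predicate(t) else p, s, t)
              for (p, s, t) in _queue]
    heapq.heapify(_queue)

def pop_task():
    """Called by the ExternalQuestion handler when a worker arrives."""
    if _queue:
        return heapq.heappop(_queue)[2]
    return None

# Example: escalate any task older than 60 seconds to top priority.
add_task({'id': 'q1', 'created': time.time()}, priority=5)
add_task({'id': 'q2', 'created': time.time()}, priority=5)
reprioritize(lambda t: time.time() - t['created'] > 60, new_priority=0)
```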
There are, of course, other factors that influence latency and throughput on mTurk when (EQ) HITs are reprioritized. Here are a few:
- HIT and sub-task duration. How long does it take workers to complete a HIT, which may be composed of multiple sub-tasks? A worker cannot be assigned a new HIT until they complete (or return) the previous one. This can be somewhat avoided by creating longer HITs that are subdivided into dynamically selected sub-tasks (see the sketch after this list), which is possible with an EQ HIT or with an embedded Flash or Java application in a QF HIT. But the sub-task duration is always a limiting factor, unless one is willing to abort the current sub-task and replace it while it is still in progress (with an EQ, Flash, or Java).
- Available workers. How many workers are logged into mTurk and completing tasks? How many are currently switching HIT types? This can vary with the time of day.
- Appeal of your HITs. How much do workers like your HITs — are they fun? How much do you pay for how much you ask? How many of their completed assignments do you approve?
- Reliability. How accurate or precise must your results be? How many workers do you need to complete a HIT before you have reliable results? Do other workers need to complete meta-HITs before the data can be used?
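As an example of the sub-task subdivision mentioned in the first item above, here is a hedged sketch of a server-side handler for a long EQ HIT: the HIT’s page requests the next sub-task after each submission, and the server picks it at that moment, so priorities set after the HIT was accepted still take effect. The names and the per-HIT budget are hypothetical; pop_task would be a queue function like the one sketched earlier.

```python
# Serve dynamically selected sub-tasks inside one long EQ HIT. The
# worker stays in a single HIT (keeping throughput high), while each
# sub-task is chosen at request time (keeping latency low).

SUBTASKS_PER_HIT = 10           # illustrative per-HIT budget

sessions = {}                   # assignment_id -> sub-tasks served so far

def next_subtask(assignment_id, pop_task):
    """Return the next sub-task for this assignment, or tell the
    page to submit the HIT when the budget or queue is exhausted."""
    served = sessions.get(assignment_id, 0)
    if served >= SUBTASKS_PER_HIT:
        return {'action': 'submit'}
    task = pop_task()
    if task is None:
        return {'action': 'submit'}   # queue empty; submit early
    sessions[assignment_id] = served + 1
    return {'action': 'show', 'task': task}
```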
[1] I use the term HIT somewhat loosely in this article. There are at least three uses, each differing in their identity conditions. (1) There are HITs considered as human intelligence tasks, and thus divided as we divide tasks; this means that a HIT in another sense can be composed of multiple HITs in this sense (tasks or sub-tasks). (2) There are HITs in Amazon’s technical sense: a HIT is something that has the same HIT ID and therefore the same specification. In QF HITs without embeds, this means all instances (assignments) of a HIT have the same content; but in EQ HITs this is not necessarily true, since the content can be determined when assigned. (3) Finally, there is what Amazon calls assignments: specific instances of a HIT, each completed only once.
Expert users: agreement in focus from two threads of human-computer interaction research
Much of current human-computer interaction (HCI) research focuses on novice users in “walk-up and use” scenarios. I can think of three major causes for this:
- A general shift from examining non-discretionary use to discretionary use
- How much easier it is to find (and not have to train) study participants unfamiliar with a system than experts (especially when the system is only a prototype)
- The push from practitioners in this direction, especially with the advent of the Web, where new users just show up at your site, often via deep links
This focus sometimes comes in for criticism, especially when #2 is taken as a main cause of the choice.
On the other hand, some research threads in HCI continue to focus on expert use. As I’ve been reading a lot of research on both human performance modeling and situated & embodied approaches to HCI, it has been interesting to note that both place (comparatively) much more focus on the performance and experience of expert and skilled use.
Grudin’s “Three Faces of Human-Computer Interaction” does a good job of explaining the human performance modeling (HPM) side of this. HPM owes a lot to human factors historically, and while The Psychology of Human-Computer Interaction successfully brought engineering-oriented cognitive psychology to the field, it was human factors, said Stuart Card, “that we were trying to improve” (Grudin 2005, p. 7). And the focus of human factors, which arose from efforts to maximize productivity in industrial settings like factories, has been non-discretionary use. Fundamentally, it is hard for HPM to exist without a focus on expert use, because many of the performance differences between interaction techniques (and thus research contributions through new techniques) can only be identified, and are only important, in use by experts or at least trained users. Grudin notes:
A leading modeler discouraged publication of a 1984 study of a repetitive task that showed people preferred a pleasant but slower interaction technique—a result significant for discretionary use, but not for modeling aimed at maximizing performance.
Situated action and embodied interaction approaches to HCI, which Harrison, Tatar, and Sengers (2007) have called the “third paradigm” of HCI, are a bit of a different story. While HPM research, like a good amount of traditional cognitive science generally, contributes to science and design by assimilating people to information processors with actuators, situated and embodied interaction research borrows a fundamental concern of ethnomethodology, focusing on how people actively make behaviors intelligible by assimilating them to social and rational action.
There are at least three ways this motivates the study of skilled and expert users:
- Along with this research topic comes a methodological concern for studying behavior in context with the people who really do it. For example, to study publishing systems and technology, the existing practices of the people actually working in such settings are of critical importance.
- These approaches emphasize the skills we all have and the value of drawing on them for design. For example, Dourish (2001) emphasizes the skills with which we all navigate the physical and social world as a resource for design. This is not unrelated to the first way.
- These approaches, like and through their relationships to the participatory design movement, have a political, social, and ethical interest in empowering those who will be impacted by technology, especially when otherwise its design (and the decision to adopt it) would be out of their control. Non-discretionary use in institutions is the paradigmatic situation prompting this concern.
I don’t have a broad conclusion to make. Rather, I just find it notable and interesting that these two very different threads of HCI research stand out from much other work as similar in this regard. Some of my current research is connecting these two threads, so expect more on their relationship.
References
Dourish, P. (2001). Where the Action Is: The Foundations of Embodied Interaction. MIT Press.
Grudin, J. (2005). Three Faces of Human-Computer Interaction. IEEE Annals of the History of Computing, 27(4), 46-62.
Harrison, S., Tatar, D., and Sengers, P. (2007). The Three Paradigms of HCI. Extended Abstracts of CHI 2007.