How are psychographic personalization and persuasion profiling different from more familiar forms of personalization and recommendation systems? A big difference is that they focus on selecting the “how” or the “means” of inducing you to an action — rather than selecting the “what” or the “ends”. Given the recent interest in this kind of personalization, I wanted to highlight some excerpts from something Maurits Kaptein and I wrote in 2010.1
This post excerpts our 2010 article, a version of which was published as:
Kaptein, M., & Eckles, D. (2010). Selecting effective means to any end: Futures and ethics of persuasion profiling. In International Conference on Persuasive Technology (pp. 82-93). Springer Lecture Notes in Computer Science.
For more on this topic, see these papers.
We distinguish between those adaptive persuasive technologies that adapt the particular ends they try to bring about and those that adapt their means to some end.
First, there are systems that use models of individual users to select particular ends that are instantiations of more general target behaviors. If the more general target behavior is book buying, then such a system may select which specific books to present.
Second, adaptive persuasive technologies that change their means adapt the persuasive strategy that is used — independent of the end goal. One could offer the same book and for some people show the message that the book is recommended by experts, while for others emphasize that the book is almost out of stock. Both messages may be true, but the effect of each differs between users.
Example 2. Ends adaptation in recommender systems
Pandora is a popular music service that tries to engage music listeners and persuade them into spending more time on the site and, ultimately, subscribing. For both goals it is beneficial for Pandora if users enjoy the music presented to them, which requires matching the music offering to individual, potentially latent music preferences. In doing so, Pandora adaptively selects the end — the actual song that is listened to and that could be purchased — rather than the means — the reasons presented for the selection of one specific song.
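To make the distinction concrete, here is a minimal sketch of ends adaptation versus means adaptation: the first selects which item to show, the second selects how to frame a fixed item. All user names, items, and scores are invented for illustration.

```python
# Illustrative sketch only: users, items, and scores are invented.
user_item_scores = {"alice": {"book_a": 0.9, "book_b": 0.4}}           # predicted liking
user_strategy_scores = {"alice": {"authority": 0.2, "scarcity": 0.6}}  # predicted lift

def ends_adaptation(user):
    """Select WHICH item to present (the end), as Pandora selects songs."""
    scores = user_item_scores[user]
    return max(scores, key=scores.get)

def means_adaptation(user, item):
    """Select HOW to frame the same item (the means): an influence strategy."""
    scores = user_strategy_scores[user]
    strategy = max(scores, key=scores.get)
    messages = {
        "authority": f"Experts recommend {item}.",
        "scarcity": f"Only a few copies of {item} left!",
    }
    return messages[strategy]

item = ends_adaptation("alice")            # the end: which book to show
message = means_adaptation("alice", item)  # the means: scarcity framing for this user
```

Both adaptations can of course run together; the point is that they use different knowledge about the user.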
The distinction between end-adaptive persuasive technologies and means-adaptive persuasive technologies is important to discuss since adaptation in the latter case could be domain independent. In end adaptation, we can expect that little of the knowledge of the user that is gained by the system can be used in other domains (e.g. book preferences are likely minimally related to optimally specifying goals in a mobile exercise coach). Means adaptation is potentially quite the opposite. If an agent expects that a person is more responsive to authority claims than to other influence strategies in one domain, it may well be that authority claims are also more effective for that user than other strategies in a different domain. While we focus on novel means-adaptive systems, it is actually quite common for human influence agents to adaptively select their means.
Influence Strategies and Implementations
Means-adaptive systems select different means by which to bring about some attitude or behavior change. The distinction between adapting means and ends is an abstract and heuristic one, so it will be helpful to describe one particular way to think about means in persuasive technologies. One way to individuate means of attitude and behavior change is to identify distinct influence strategies, each of which can have many implementations. Investigators studying persuasion and compliance-gaining have varied in how they individuate influence strategies: Cialdini elaborates on six strategies at length, Fogg describes 40 strategies under a more general definition of persuasion, and others have listed over 100.
Despite this variation in their individuation, influence strategies are a useful level of analysis that helps to group and distinguish specific influence tactics. In the context of means adaptation, human and computer persuaders can select influence strategies they expect to be more effective than other influence strategies. In particular, the effectiveness of a strategy can vary with attitude and behavior change goals. Different influence strategies are most effective in different stages of the attitude-to-behavior continuum. These range from use of heuristics in the attitude stage to use of conditioning once a behavioral change has been established and needs to be maintained. Fogg further illustrates this complexity and the importance of considering variation in target behaviors by presenting a two-dimensional matrix of 35 classes of behavior change that vary by (1) the schedule of change (e.g., one time, on cue) and (2) the type of change (e.g., perform new behavior vs. familiar behavior). So even for persuasive technologies that do not adapt to individuals, selecting an influence strategy — the means — is important. We additionally contend that influence strategies are also a useful way to represent individual differences — differences which may be large enough that strategies that are effective on average have negative effects for some people.
Example 4. Backfiring of influence strategies
John just subscribed to a digital workout coaching service. This system measures his activity using an accelerometer and provides John feedback through a Web site. This feedback is accompanied by recommendations from a general practitioner to modify his workout regime. John has all through his life been known as authority averse and dislikes the top-down recommendation style used. After three weeks using the service, John’s exercise levels have decreased.
When systems represent individual differences as variation in responses to influence strategies — and adapt to these differences — they are engaging in persuasion profiling. Persuasion profiles are thus collections of the expected effects of different influence strategies for a specific individual. Hence, an individual’s persuasion profile indicates which influence strategies — one way of individuating means of attitude and behavior change — are expected to be most effective.
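As a rough illustration (not the implementation from our paper), a persuasion profile can be sketched as a running estimate of each strategy's effect for one person, updated from observed responses; the strategy names and outcome values below are invented:

```python
from dataclasses import dataclass, field

@dataclass
class PersuasionProfile:
    """One person's profile: per-strategy mean observed effect. Illustrative only."""
    effects: dict = field(default_factory=dict)   # strategy -> mean observed effect
    counts: dict = field(default_factory=dict)    # strategy -> number of observations

    def update(self, strategy: str, outcome: float) -> None:
        """Incorporate one observed response to an implementation of `strategy`."""
        n = self.counts.get(strategy, 0)
        mean = self.effects.get(strategy, 0.0)
        self.counts[strategy] = n + 1
        self.effects[strategy] = mean + (outcome - mean) / (n + 1)  # running mean

    def best_strategy(self) -> str:
        """The strategy currently expected to be most effective for this person."""
        return max(self.effects, key=self.effects.get)

profile = PersuasionProfile()
for outcome in [0.1, 0.3]:
    profile.update("authority", outcome)   # weak responses to authority appeals
profile.update("scarcity", 0.7)            # strong response to a scarcity appeal
profile.best_strategy()                    # -> "scarcity"
```

A deployed system would of course need uncertainty estimates and regularization rather than raw means, but the representation is the point here.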
Persuasion profiles can be based on demographic, personality, and behavioral data. Relying primarily on behavioral data has recently become a realistic option for interactive technologies, since vast amounts of data about individuals’ behavior in response to attempts at persuasion are currently collected. These data describe how people have responded to presentations of certain products (e.g. e-commerce) or have complied with requests by persuasive technologies (e.g. the DirectLife Activity Monitor).
Existing systems record responses to particular messages — implementations of one or more influence strategies — to aid profiling. For example, Rapleaf uses responses by a users’ friends to particular advertisements to select the message to present to that user . If influence attempts are identified as being implementations of particular strategies, then such systems can “borrow strength” in predicting responses to other implementations of the same strategy or related strategies. Many of these scenarios also involve the collection of personally identifiable information, so persuasion profiles can be associated with individuals across different sessions and services.
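A minimal sketch of this borrowing of strength, assuming a simple fixed-weight shrinkage toward the strategy-level mean (a real system would estimate the pooling from data, e.g. with a hierarchical model; all names and numbers are invented):

```python
# Observed responses, keyed by (strategy, implementation). Invented data.
observed = {
    ("authority", "expert_quote"): [0.6, 0.5, 0.7],
    ("authority", "doctor_badge"): [0.4],
}

def strategy_mean(strategy):
    """Mean response pooled over all implementations of one strategy."""
    vals = [r for (s, _), rs in observed.items() if s == strategy for r in rs]
    return sum(vals) / len(vals)

def predict(strategy, implementation, weight=0.5):
    """Shrink an implementation's own mean toward the strategy-level mean.

    `weight` is an assumed fixed pooling factor, purely for illustration.
    """
    rs = observed.get((strategy, implementation), [])
    if not rs:
        return strategy_mean(strategy)     # unseen implementation: pure borrowing
    own = sum(rs) / len(rs)
    return weight * own + (1 - weight) * strategy_mean(strategy)

predict("authority", "uniform_photo")      # never shown, yet predictable: 0.55
```

The same mechanism is what lets responses gathered in one service inform predictions in another, once influence attempts are tagged by strategy.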
Consequences of Means Adaptation
In the remainder of this paper we will focus on the implications of the usage of persuasion profiles in means-adaptive persuasive systems. There are two properties of these systems which make this discussion important:
1. End-independence: Contrary to profiles used by end-adaptive persuasive systems, the knowledge gained about people in means-adaptive systems can be used independently of the end goal. Hence, persuasion profiles can be used independent of context and can be exchanged between systems.
2. Undisclosed: While the adaptation in end-adaptive persuasive systems is often most effective when disclosed to the user, this is not necessarily the case in means-adaptive persuasive systems powered by persuasion profiles. Selecting a different influence strategy is likely less salient than changing a target behavior and thus will often not be noticed by users.
Although the previous examples and the discussion of adaptive persuasive systems have already hinted at these two notions, we feel it is important to examine each in more detail.
Means-adaptive persuasive technologies are distinctive in their end-independence: a persuasion profile created in one context can be applied to bringing about other ends in that same context or to behavior or attitude change in a quite different context. This feature of persuasion profiling is best illustrated by contrast with end adaptation.
Any adaptation that selects the particular end (or goal) of a persuasive attempt is inherently context-specific. Though there may be associations between individual differences across contexts (e.g., between book preferences and political attitudes), these associations are themselves specific to pairs of contexts. On the other hand, persuasion profiles are designed and expected to be independent of particular ends and contexts. For example, we propose that a person’s tendency to comply more with appeals by experts than with those by friends is present both when looking at compliance with a medical regime and when looking at purchase decisions.
It is important to clarify exactly what is required for end-independence to obtain. If we say that a persuasion profile is end-independent, then this does not imply that the effectiveness of influence strategies is constant across all contexts. Consistent with the results reviewed in section 3, we acknowledge that influence strategy effectiveness depends on, e.g., the type of behavior change. That is, we expect that the most effective influence strategy for a system to employ, even given the user’s persuasion profile, would depend on both context and target behavior. Instead, end-independence requires that the difference between the average effect of a strategy for the population and the effect of that strategy for a specific individual is relatively consistent across contexts and ends.
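In code, the condition can be illustrated with invented numbers: the average effect of a strategy differs by context, but one person's deviation from that average is stable.

```python
# Invented per-person and population-average strategy effects.
effects = {  # (context, strategy) -> {"pop_mean": ..., person: ...}
    ("shopping", "authority"): {"pop_mean": 0.30, "john": 0.05},
    ("exercise", "authority"): {"pop_mean": 0.45, "john": 0.20},
}

def person_deviation(context, strategy, person):
    """How far this person's response departs from the population average."""
    cell = effects[(context, strategy)]
    return cell[person] - cell["pop_mean"]

# The average effect of authority differs by context (0.30 vs. 0.45), but
# John's deviation is the same (-0.25) in both: the profile transfers.
d_shopping = person_deviation("shopping", "authority", "john")
d_exercise = person_deviation("exercise", "authority", "john")
```

This is the person–strategy interaction of the footnote below: end-independence asks that this deviation, not the raw effect, be stable across contexts.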
Implications of end-independence.
From end-independence, it follows that persuasion profiles could potentially be created by, and shared with, a number of systems that use and modify these profiles. For example, the profile constructed from observing a user’s online shopping behavior can be of use in increasing compliance with energy-saving goals. Behavioral measures in both of these contexts can contribute to refining the existing profile.2
Not only could persuasion profiles be used across contexts within a single organization, but there is the option of exchanging the persuasion profiles between corporations, governments, other institutions, and individuals. A market for persuasion profiles could develop, as currently exists for other data about consumers. Even if a system that implements persuasion profiling does so ethically, once constructed the profiles can be used for ends not anticipated by their designers.
Persuasion profiles are another kind of information about individuals collected by corporations, and one to which individuals may not have effective access. This raises issues of data ownership. Do individuals have access to their complete persuasion profiles or other indicators of the contents of the profiles? Are individuals compensated for this valuable information? If an individual wants to use Amazon’s persuasion profile to jump-start a mobile exercise coach’s adaptation, there may or may not be technical and/or legal mechanisms to obtain and transfer this profile.
Means-adaptive persuasive systems are able, and likely, not to disclose that they are adapting to individuals. This can be contrasted with end adaptation, in which it is often advantageous for the agent to disclose the adaptation, which is also potentially easy to detect. For example, when Amazon recommends books for an individual it makes clear that these are personalized recommendations — thus benefiting from effects of apparent personalization and enabling presenting reasons why these books were recommended. In contrast, with means adaptation, not only may the results of the adaptation be less visible to users (e.g. emphasizing either “Pulitzer Prize winning” or “International bestseller”), but disclosure of the adaptation may reduce the target attitude or behavior change.
It is hypothesized that the effectiveness of social influence strategies is, at least partly, caused by automatic processes. According to dual-process models, under low elaboration the message variables manipulated in the selection of influence strategies lead to compliance without much thought. These dual-process models distinguish between central (or systematic) processing, which is characterized by elaboration on and consideration of the merits of presented arguments, and peripheral (or heuristic) processing, which is characterized by responses to cues associated with, but peripheral to the central arguments of, the advocacy through the application of simple, cognitively “cheap”, but fallible rules. Disclosure of means adaptation may increase elaboration on the implementations of the selected influence strategies, decreasing their effectiveness if they operate primarily via heuristic processing. More generally, disclosure of means adaptation is a disclosure of persuasive intent, which can increase elaboration and resistance to persuasion.
Implications of non-disclosure. The fact that persuasion profiles can be obtained and used without disclosing this to users is potentially a cause for concern. Potential reductions in effectiveness upon disclosure give designers an incentive to avoid disclosing means adaptation.
Non-disclosure of means adaptation may have additional implications when combined with value being placed on the construction of an accurate persuasion profile. This requires some explanation. A simple system engaged in persuasion profiling could select influence strategies and implementations based on which is estimated to have the largest effect in the present case; the model would thus be engaged in passive learning. However, we anticipate that systems will take a more complex approach, employing active learning techniques [e.g., 6]. In active learning, the actions selected by the system (e.g., the selection of the influence strategy and its implementation) are chosen not only based on the value of any resulting attitude or behavior change but also on the value of predicted improvements to the model resulting from observing the individual’s response. Increased precision, generality, or comprehensiveness of a persuasion profile may be valued (a) because the profile will be more effective in the present context or (b) because a more precise profile would be more effective in another context or more valuable in a market for persuasion profiles.
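The contrast between passive and active selection can be sketched as follows; the exploration bonus and its weight are illustrative assumptions, not the method of any particular system:

```python
import math

# Invented estimates: strategy -> (mean estimated effect, number of observations).
estimates = {
    "authority": (0.40, 50),   # well-measured, currently best
    "scarcity": (0.35, 2),     # barely measured, highly uncertain
}

def passive_choice():
    """Passive learning: always pick the strategy with the best point estimate."""
    return max(estimates, key=lambda s: estimates[s][0])

def active_choice(info_value=0.1):
    """Active learning sketch: also value what an observation would teach us.

    The bonus 1/sqrt(n) stands in for the estimate's uncertainty; `info_value`
    is an assumed weight on improving the profile.
    """
    def score(s):
        mean, n = estimates[s]
        return mean + info_value * (1 / math.sqrt(n))
    return max(estimates, key=score)

passive_choice()   # "authority": maximizes expected effect now
active_choice()    # "scarcity": worth trying to sharpen the profile
```

The divergence between the two choices is exactly the concern raised below: the active system sometimes acts non-optimally for its apparent goal in order to improve the profile.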
These latter cases involve systems taking actions that are estimated to be non-optimal for their apparent goals. For example, a mobile exercise coach could present a message that is not estimated to be the most effective in increasing overall activity level in order to build a more precise, general, or comprehensive persuasion profile. Users of such a system might reasonably expect that it is designed to be effective in coaching them, but it is in fact also selecting actions for other reasons, e.g., selling precise, general, and comprehensive persuasion profiles is part of the company’s business plan. That is, if a system is designed to value constructing a persuasion profile, its behavior may differ substantially from its anticipated core behavior.
 Aarts, E.H.L., Markopoulos, P., Ruyter, B.E.R.: The persuasiveness of ambient intelligence. In: Petkovic, M., Jonker, W. (eds.) Security, Privacy and Trust in Modern Data Management. Springer, Heidelberg (2007)
 Baker, S.: Learning, and profiting, from online friendships. BusinessWeek 9(22) (May 2009)
 Berdichevsky, D., Neuenschwander, E.: Toward an ethics of persuasive technology. Commun. ACM 42(5), 51–58 (1999)
 Cacioppo, J.T., Petty, R.E., Kao, C.F., Rodriguez, R.: Central and peripheral routes to persuasion: An individual difference perspective. Journal of Personality and Social Psychology 51(5), 1032–1043 (1986)
 Cialdini, R.: Influence: Science and Practice. Allyn & Bacon, Boston (2001)
 Cohn, D.A., Ghahramani, Z., Jordan, M.I.: Active learning with statistical models. Journal of Artificial Intelligence Research 4, 129–145 (1996)
 Eckles, D.: Redefining persuasion for a mobile world. In: Fogg, B.J., Eckles, D. (eds.) Mobile Persuasion: 20 Perspectives on the Future of Behavior Change. Stanford Captology Media, Stanford (2007)
 Fogg, B.J.: Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann, San Francisco (2002)
 Fogg, B.J.: Protecting consumers in the next tech-ade, U.S. Federal Trade Commission hearing (November 2006), http://www.ftc.gov/bcp/workshops/techade/pdfs/transcript_061107.pdf
 Fogg, B.J.: The behavior grid: 35 ways behavior can change. In: Proc. of Persuasive Technology 2009, p. 42. ACM, New York (2009)
 Kaptein, M., Aarts, E.H.L., Ruyter, B.E.R., Markopoulos, P.: Persuasion in ambient intelligence. Journal of Ambient Intelligence and Humanized Computing 1, 43–56 (2009)
 Lacroix, J., Saini, P., Goris, A.: Understanding user cognitions to guide the tailoring of persuasive technology-based physical activity interventions. In: Proc. of Persuasive Technology 2009, vol. 350, p. 9. ACM, New York (2009)
 Petty, R.E., Wegener, D.T.: The elaboration likelihood model: Current status and controversies. In: Chaiken, S., Trope, Y. (eds.) Dual-process theories in social psychology, pp. 41–72. Guilford Press, New York (1999)
 Prabhaker, P.R.: Who owns the online consumer? Journal of Consumer Marketing 17, 158–171 (2000)
 Rawls, J.: The independence of moral theory. In: Proceedings and Addresses of the American Philosophical Association, vol. 48, pp. 5–22 (1974)
 Rhoads, K.: How many influence, persuasion, compliance tactics & strategies are there? (2007), http://www.workingpsychology.com/numbertactics.html
 Schafer, J.B., Konstan, J.A., Riedl, J.: E-commerce recommendation applications. Data Mining and Knowledge Discovery 5(1/2), 115–153 (2001)
- We were of course influenced by B.J. Fogg’s previous use of the term ‘persuasion profiling’, including in his comments to the Federal Trade Commission in 2006. [↩]
- This point can also be made in the language of interaction effects in analysis of variance: Persuasion profiles are estimates of person–strategy interaction effects. Thus, the end-independence of persuasion profiles requires not that the two-way strategy–context interaction effect is small, but that the three-way person–strategy–context interaction is small. [↩]
I am not going to discuss the ethics and legal issues in this particular case. Instead, I just want to draw attention to how this issue reveals the importance of technical knowledge in thinking about privacy issues.
A common response from people working in the Internet industry is that Soghoian is a non-techie who has suddenly “discovered” referrer headers. For example, Danny Sullivan writes “former FTC employee discovers browsers sends referrer strings, turns it into google conspiracy”. (Of course, Soghoian is actually technically savvy, as reading the complaint to the FTC makes clear.)
What’s going on here? Folks with technical knowledge perceive search query disclosure as the status quo (though I bet most don’t often think about the consequences of clicking on a link after a sensitive search).
But how would most Internet users be aware of this? Certainly not through Google’s statements, or through warnings from Web browsers. One of the few ways I think users might realize this is happening is through query-highlighting — on forums, mailing list archives, and spammy pages. So a super-rational user who cares to think about how that works might guess something like this is going on. But I doubt most users would actively work out the mechanisms involved. Furthermore, their observations likely radically underdetermine the mechanism anyway, since it is quite reasonable that a Web browser could do this kind of highlighting directly, especially for formulaic sites, like forums. Even casual use of Web analytics software (such as Google Analytics) may not make it clear that this per-user information is being provided, since aggregated data could reasonably be used to present summaries of top search queries leading to a Web site.1
This should be a reminder of why empirical studies of privacy attitudes and behaviors are useful: we techie folks often have severe blind spots. I don’t think this is just a matter of differences in expectations; it likely involves differences in preferences as well. Over time, these expectations change our sense of the status quo, from which we calibrate our preferences and intentions.
Google has worked to ensure that referrer headers continue to include search query information, even as it adopts techniques that would otherwise keep that information out of the URL standardly included there.2 A difference in beliefs about the status quo puts these actions by Google in a different context. For us techies, this is just maintaining the status quo (which may seem more desirable, since we know it’s the industry-wide standard). For others, it might seem more like Google putting advertisers and Web site owners above its promises to its users about their sensitive data.
Maurits Kaptein and I have recently been thinking a lot about persuasion profiling — estimating and adapting to individual differences in responses to influence strategies based on past behavior and other information. With help from students, we’ve been running experiments and building statistical models that implement persuasion profiling.
My thinking on persuasion profiling is very much in BJ Fogg’s footsteps, since he has been talking about persuasion profiling in courses, lab meetings, and personal discussions since 2004 or earlier.
Just yesterday, I came across this transcript of BJ’s presentation for an FTC hearing in 2006. I was struck by how much it anticipates some of what Maurits and I have written recently (more on this later). I’m sure I watched the draft video of the presentation back then and it’s influenced me, even if I forgot some of the details.
Here is the relevant excerpt from BJ’s comments for the FTC:
Persuasion profiling means that each one of us has a different set of persuasion strategies that affect us. Just like we like different types of food or are vulnerable to giving in to different types of food on a diet, we are vulnerable to different types of persuasion strategies.
On the food example, I love old-fashioned popcorn, and if I go to a party and somebody has old-fashioned popcorn, I will probably break down and eat it. On the persuasion side of things, I know I’m vulnerable to trying new things, to challenges and to anything that gets measured. If that’s proposed to me, I’m going to be vulnerable and I’m going to give it a shot.
Whenever we go to a Web site and use an interactive system, it is likely they will be capturing what persuasion strategies work on us and will be using those when we use the service again. The mapping out of what makes me tick, what motivates me can also be bought or sold, just like a credit report.
So imagine I’m going in to buy a new car and the person selling me the car downloads my credit report but also buys my persuasion profile. I may or may not know about this. Imagine if persuasion profiles are available on political campaigns so that when I visit a Web site, the system knows it is B.J. Fogg, and it changes [its] approach based on my vulnerabilities when it comes to persuasion.
Persuasive technology will touch our lives anywhere that we access digital products or services, in the car, in our living room, on the Web, through our mobile phones and so on. Persuasive technology will be all around us, and unlike other media types, where you have 30-second commercial or a magazine ad, you have genres you can understand, when it comes to computer-based persuasion, it is so flexible that it won’t have genre boundaries. It will come to us in the ordinary course of our lives, as we are working on a Web site, as we are editing a document, as we are driving a car. There won’t be clear markers about when you are being persuaded and when you are not.
This last paragraph is about the “genrelessness” of many persuasive technologies. This isn’t directly on the topic of persuasion profiling, but I see it as critically relevant. Persuasion profiling is likely to be most effective when invisible and undisclosed to users. From this and the lack of genre-based flags for persuasive technology it follows that we will frequently be “persuasion profiled” without knowing it.
Increasing valuable annotation behaviors was a practical end of a good deal of work at Yahoo! Research Berkeley. ZoneTag is a mobile application and service that suggests tags when users choose to upload a photo (to Flickr) based on their past tags, the relevant tags of others, and events and places nearby. Through social influence and removing barriers, these suggestions influence users to expand and consistently use their tagging vocabulary (Ahern et al. 2006).
Context-aware suggestion techniques such as those used in ZoneTag can increase tagging, but what about users’ motivations for considering tagging in the first place? And how can these motivations for annotation be considered in designing services that involve annotation? In this post, I consider existing work on motivations for tagging, and I use tagging on Facebook as an example of how multiple motivations can combine to increase desired annotation behaviors.
Using photo-elicitation interviews with ZoneTag users who tag, Ames & Naaman (2007) present a two factor taxonomy of motivations for tagging. First, they categorize tagging motivations by function: is the motivating function of the tagging organizational or communicative? Organizational functions include supporting search, presenting photos by event, etc., while communicative functions include when tags provide information about the photos, their content, or are otherwise part of a communication (e.g., telling a joke). Second, they categorize tagging motivations by intended audience (or sociality): are the tags intended for my future self, people known to me (friends, family, coworkers, online contacts), or the general public?
On Flickr the function dimension generally maps onto the distinction between functionality that enables and is prior to arriving at the given photo or photos (organization) and functionality applicable once one is viewing a photo (communication). For example, I can find a photo (by me or someone else) by searching for a person’s name, and then use other tags applied to that photo to jog my memory of what event the photo was taken at.
Some Flickr users subscribe to RSS feeds for public photos tagged with their name, making for a communication function of tagging — particularly tagging of people in media — that is prior to “arriving” at a specific media object. These are generally techie power users, but this can matter for others. Some less techie participants in our studies reported noticing that their friends did this — so they became aware of tagging those friends’ names as a communicative act that would result in the friends finding the tagged photos.
This kind of function of tagging people is executed more generally — and for more than just techie power users — by Facebook. In tagging of photos, videos, and blog posts, tagging a person notifies them they have been tagged, and can add that they have been tagged to their friends’ News Feeds. This function has received a lot of attention from a privacy perspective (and it should). But I think it hints at the promise of making annotation behavior fulfill more of these functions simultaneously. When specifying content can also be used to specify recipients, annotation becomes an important trigger for communication.
See some interesting comments (from Twitter) about tagging on Facebook:
- noticing people tagging to gain eyeballs
- exhorting others not to tag bad photos (and thanks)
- collapsing time by tagging photos from long ago
- tagging by parents
Ames, M., & Naaman, M. (2007). Why we tag: motivations for annotation in mobile and online media. In Proceedings of CHI 2007 (pp. 971-980). San Jose, California, USA: ACM.
Ahern, S., Davis, M., Eckles, D., King, S., Naaman, M., Nair, R., et al. (2006). Zonetag: Designing context-aware mobile media capture to increase participation. Pervasive Image Capture and Sharing: New Social Practices and Implications for Technology Workshop. In Adjunct Proc. Ubicomp 2006.
Every person who logs into Facebook is met with the same interface but with personalized content. This interface is News Feed, which lists “news stories” generated by users’ Facebook friends. These news stories include the breaking news that Andrew was just tagged in a photo, that Neema declared he is a fan of a particular corporation, that Ellen joined a group expressing support for a charity, and that Alan says, “currently enjoying an iced coffee… anyone want to see a movie tonight?”
News Feed is an example of a particular design pattern that has recently become quite common – the activity stream. An activity stream aggregates actions of a set of individuals – such as a person’s egocentric social network – and displays the recent and/or interesting ones.
In a more fine-grained analysis of a particular (and now changed) interface element for setting one’s Facebook status message, I’ve previously examined how activity streams bias our beliefs about the frequency of others’ participation on social network services (SNSs). It works like this:
- We use availability to mind as a heuristic for estimating probability and frequency (Kahneman & Tversky, 1973). So if it is easier to think of a possibility, we judge it to be more likely or frequent. This heuristic is often helpful, but it also leads to bias due to, e.g., recent experience or search strategy (compare thinking of words starting with ‘r’ versus words with ‘r’ as the third letter).
- Activity streams show a recent subset of the activity available (think for now of a simple activity stream, like that on one’s Twitter home page).
- Activity streams show activity that is more likely to be interesting and is more likely to have comments on it.
Through the availability heuristic (and other mechanisms) this leads one to estimate that (1) people in one’s egocentric network are generating activity on Facebook more frequently than they actually are and (2) stories with particular characteristics (e.g., comments on them) are more (or less) common in one’s egocentric network than they actually are.
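A toy computation (with invented interestingness scores) shows the bias: the mean interestingness of what the stream surfaces exceeds the mean over all actual activity, so estimates formed from the stream are systematically high.

```python
# Invented "interestingness" scores for all of one day's posts in a network.
all_posts = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.8, 0.9]

# The stream surfaces only the top few items (a crude stand-in for ranking
# by predicted interest and engagement).
stream = sorted(all_posts, reverse=True)[:3]

true_mean = sum(all_posts) / len(all_posts)   # mean over all actual activity
seen_mean = sum(stream) / len(stream)         # mean over what the viewer sees

# A viewer who generalizes from the stream overestimates how interesting
# (and how frequent) such high-engagement posts are in their network.
biased_upward = seen_mean > true_mean
```

Any selection rule that favors interesting or commented-on items will produce this gap; the availability heuristic then turns the gap into biased beliefs.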
When thinking about this in the larger picture, one can see this as a kind of cultivation effect of algorithmic selection processes in interpersonal media. According to cultivation theory (see Williams, 2006, for an application to MMORPGs), our long-term exposure to media leads us to see the real world through the lens of the media world; this exposure gradually results in beliefs about the world based on the systematic distortions of the media world (Gerbner et al., 1980). For example, heavy television viewing predicts giving more “television world” answers to questions — overestimating the frequency of men working in law enforcement and the probability of experiencing violent acts. A critical difference here is that with activity streams, similar cultivation can occur with regard to our local social and cultural neighborhood.
Aims of personalization
Automated personalization has traditionally focused on optimizing for relevance: keep users looking, get them clicking for more information, and get them participating around this relevant content. But the considerations here highlight another goal of personalization: strategic influence on attitudes that matter for participation. These goals can be in tension. For example, should the system present…
The most interesting and relevant photos to a user?
Showing photographs from a user’s network that have many views and comments may result in showing photos that are very interesting to the user. However, seeing these photos can lead to inaccurate beliefs about how common different kinds of photos are (for example, overestimating the frequency of high-quality, artistic photos and underestimating the frequency of “poor-quality” cameraphone photos). This can discourage participation through perceptions of the norms for the network or the community.
On the other hand, seeing photos with so many comments or views may lead to overestimating how many comments one is likely to get on one’s own photo; this can result in disappointment following participation.
Activity from a user’s closest friends?
Assume that activity from close friends is more likely to be relevant and interesting. It might even be more likely to prompt participation, particularly in the form of comments and replies. But it can also bias judgments of likely audience: all those people I don’t know so well are already harder to bring to mind, and if they don’t appear much in the activity stream for my network, I’m less likely to consider them when creating my content. This could lead to greater self-disclosure, bad privacy experiences, poor identity management, and an eventual reduction in participation.
Gerbner, G., Gross, L., Morgan, M., & Signorielli, N. (1980). The “Mainstreaming” of America: Violence Profile No. 11. Journal of Communication, 30(3), 10-29.
Kahneman, D., & Tversky, A. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207-232.
Williams, D. (2006). Virtual Cultivation: Online Worlds, Offline Perceptions. Journal of Communication, 56, 69-87.