Wharton Conference on User-Generated Content Part II

Last week I blogged about a few research projects presented at the Wharton conference on user-generated content. In this second part of the conference summary series, I’d like to discuss one other interesting presentation that is not directly related to user-generated content per se, but that I think can be of tremendous interest to online advertisers. Then, to wrap up the series, I will list a few research questions raised by industry participants at the conference. This should be particularly interesting to researchers who are wondering what is on practitioners’ minds. By the way, the conference has created a page with links to all the presentation slides.

Wharton Business School Building
http://www.flickr.com/photos/teofilo/ | CC BY 2.0

How can advertisers target ads to consumers without sacrificing their privacy?

The recent controversy surrounding Facebook’s privacy setting changes shows us that privacy issues are still very much on people’s minds these days, especially with a large amount of very personal data now available through online social networks. To advertisers, the increasing amount of social and personal information represents a great opportunity to offer very targeted ads to consumers. But as we get closer to consumers’ personal domains of interests and friend networks, advertisers are also treading dangerous waters when it comes to consumer privacy. This is why I find New York University Professor Foster Provost’s research to be particularly interesting: it allows targeted advertising toward consumers while still protecting their privacy, or in the researchers’ terms, “privacy-friendly” targeted advertising.

The basic idea is quite simple, although the actual implementation can become more complex and mathematical.  The underlying premise of the approach is that consumers who are more similar to each other are more likely to buy the same brands and share similar consumption habits. This is why social network information can be very powerful, because we are likely to buy the same things as our friends or at least have a good deal of influence on each other.  The problem with using such explicit social network information is the privacy issue. To circumvent this problem, Professor Provost’s approach uses anonymized browsing data instead.  It builds on two key sets of information: (1) a set of consumers who are considered brand actors; and (2) browsing data for these brand actors and other consumers whose brand affinity is not yet behaviorally demonstrated.

For the first set, one can use criteria such as having visited a brand’s website or fan page on Facebook to identify consumers who are brand actors. Notice that advertisers do not need to know who these consumers actually are in terms of names or demographics, only that they are entities who have demonstrated a certain desired behavior. With this information, the brand proximity/affinity of other consumers can be calculated by analyzing how closely the content (brand and non-brand related) visited by those consumers resembles that of the brand actors. Potential consumers can then be ranked on this similarity to identify the ones with the closest brand proximity. Professor Provost’s research shows that consumers picked in this fashion contain a much higher concentration of potential brand actors than random selection, and that these consumers are much more likely to be linked to known brand actors. A paper from this research project is available from Professor Provost’s website.
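To make the ranking step a bit more concrete, here is a minimal sketch in Python of how one might score anonymous consumers against a set of brand actors. The toy browsing logs, the cookie-style IDs, and the choice of cosine similarity as the proximity measure are my own illustrative assumptions; the paper itself proposes several different measures and of course operates at a vastly larger scale.

```python
# Toy illustration of ranking anonymous consumers by "brand proximity".
# The browsing logs, IDs, and the use of cosine similarity are invented for
# illustration; the actual research uses several proximity measures and
# works at a much larger scale.
from collections import Counter
from math import sqrt

# Anonymized browsing logs: cookie-style ID -> content pages visited.
browsing_logs = {
    "user_001": ["running-tips", "marathon-training", "brand-x-shoes", "shoe-reviews"],
    "user_002": ["celebrity-gossip", "tv-recaps", "shoe-reviews"],
    "user_003": ["running-tips", "shoe-reviews", "trail-maps"],
    "user_004": ["stock-quotes", "tv-recaps"],
}

# Brand actors: anonymous IDs that showed the desired behavior
# (e.g., visited the brand's site); no names or demographics are needed.
brand_actors = {"user_001"}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two visit-count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Aggregate the brand actors' browsing into a single profile vector.
actor_profile = Counter()
for uid in brand_actors:
    actor_profile.update(browsing_logs[uid])

# Score every other consumer by similarity to that profile, then rank.
candidates = {uid: Counter(pages) for uid, pages in browsing_logs.items()
              if uid not in brand_actors}
scores = {uid: cosine(visits, actor_profile) for uid, visits in candidates.items()}
for uid in sorted(scores, key=scores.get, reverse=True):
    print(uid, round(scores[uid], 3))
```

The ranked output is simply a list of anonymous IDs, which is exactly why the approach stays privacy-friendly: the advertiser never needs to know who the people behind the IDs are.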

To me, the beauty of this research is two-fold. First, because the only data needed are browsing logs with no personally identifiable information attached, it allows advertisers to selectively target consumers without having to worry about privacy issues. Second, because the approach is defined in a sufficiently general fashion, it allows for much tweaking and customization. For instance, various brand proximity measures can be used (this research itself suggests five measures), and different measures can be combined to gauge brand affinity more accurately. Moreover, the criteria used to spot brand actors can be customized to an advertiser’s needs (e.g., a visit to an awareness page vs. a conversion page, depending on the goal of the campaign). Such flexibility makes the approach applicable to a wide variety of situations.

What do practitioners want to know?

The conference organized a few industry panels in which practitioners talked about their own experiences and their unanswered questions. Among these industry participants, Mr. Gary Spangler, E-Marketing Manager at DuPont, spoke the most systematically about a set of research questions that need to be addressed from a practitioner’s standpoint. Many of these questions were echoed by other industry participants. I list them here for the benefit of academics in search of practically relevant research questions.

  1. There are more and more ways to reach/touch consumers. Is there a way to analyze the value of each electronic touch (e.g., email, social network, etc.)?
  2. When lead time is relatively long (e.g., 1 year or more in the case of B2B marketing), how does one measure the ROI of online marketing investment?  (We all know that ROI has always been an issue, but a longer lead time apparently poses an even greater challenge.)
  3. How can a company use information from web queries (similar to the browsing information used in Professor Provost’s research described above) to identify potential sales leads?
  4. When potential leads abound and the resources available to respond to them are limited, can we develop a lead scoring system so that a company can properly separate more important from less important leads? (A toy sketch of one such approach appears after this list.)
  5. Different online marketing approaches use different types of content as input. For example, a company’s website and its social network presence most likely require different content.  How can one measure the value of each content type to different segments and different industries?
  6. How can the ROI of social media efforts be demonstrated, so that marketers can argue the value of social media participation to upper-level managers?
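Question 4 in particular maps onto a familiar modeling exercise. As a purely illustrative sketch (the feature names and toy data below are invented, and a real B2B setting would require far richer inputs), a basic lead-scoring model could rank leads by their predicted conversion probability:

```python
# Toy lead-scoring sketch: rank leads by predicted conversion probability.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical leads: [num_site_visits, downloaded_whitepaper, employees_log10]
X_hist = np.array([
    [1, 0, 1.0], [8, 1, 3.2], [3, 0, 2.1], [12, 1, 4.0],
    [2, 0, 1.5], [6, 1, 2.8], [1, 0, 3.5], [9, 1, 3.9],
])
y_hist = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # 1 = lead eventually converted

model = LogisticRegression().fit(X_hist, y_hist)

# Score new leads and work them in descending order of predicted probability.
X_new = np.array([[5, 1, 2.5], [2, 0, 1.2], [10, 0, 3.8]])
scores = model.predict_proba(X_new)[:, 1]
for rank, idx in enumerate(np.argsort(-scores), start=1):
    print(f"lead {idx}: priority {rank}, score {scores[idx]:.2f}")
```

The point is not the particular model but the workflow: train on leads whose outcomes are known, then spend scarce sales resources on the new leads with the highest predicted value.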

In our constant quest as academics for new knowledge, questions such as these are very useful in guiding our research efforts toward being more relevant and applicable to practice. So here is a call to practitioners out there: supply us with more of these, and tell us the questions on your minds. Please feel free to leave a comment here. The overarching goal of my blog is to make Ping! a meeting point for practitioners and academic researchers.

This is going to be my last post before Christmas, so happy holidays to all my readers. I wish everyone a warm, safe, and love-filled holiday!

Wharton Conference on User-Generated Content Part I

In between the wedding and my race against the clock to get as much research done as possible before my research leave ends in January, the year 2009 has quietly slipped away and the holiday season is already upon us. First of all, happy holidays! As a gift to my readers, I want to bring you some exciting new research insights from the conference The Emergence and Impact of User-Generated Content (UGC), which I attended in Philadelphia last week. The conference was co-hosted by the Wharton Interactive Media Institute and the Marketing Science Institute, and featured top-notch researchers and practitioners who work in the field of social media and UGC.

A major question addressed by quite a few presentations at the conference was the impact of user-generated content. So in Part I of this two-part conference report series, I would like to highlight three presentations that I found particularly interesting with regard to this topic.

Philadelphia

Does consumer chatter about a product affect stock return?

The answer is yes, according to the research presented by Professor Gerard Tellis from the University of Southern California. In their research, Professor Tellis and his doctoral student Seshadri Tirunillai looked at six diverse product categories with rich consumer reviews: data storage, footwear, toys, personal computers, cellphones, and PDAs/smartphones. They gathered consumer reviews in these product categories from three sources: Amazon.com, Epinions.com, and Yahoo! Shopping. The reviews were then analyzed for the overall rating, review volume, and valence (positive or negative) associated with each product. Using a mathematical approach called vector autoregression, the researchers tied these review characteristics to each company’s stock return and volatility. They found that consumer reviews lead stock performance by a few weeks (meaning that consumer reviews can help predict stock performance a few weeks ahead). Specifically, the volume of reviews (after controlling for their valence) has a positive effect on stock return. The overall rating (e.g., 3.5 out of 5) did not have any significant impact on stock performance. But the number of negative reviews and the average percentage of negative expressions in the reviews lower stock return and increase stock volatility. In contrast, positive reviews did not have a significant impact.
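For readers curious about the modeling machinery, below is a rough sketch of how one could relate weekly review metrics to stock returns with a vector autoregression in Python (using statsmodels). The input file and column names are hypothetical, and this is not the authors’ actual specification, which is considerably more careful about controls, estimation windows, and firm-level detail.

```python
# Rough sketch: relate weekly review metrics to stock returns with a VAR.
# The CSV file and column names are hypothetical; the published model uses a
# more careful specification with additional controls.
import pandas as pd
from statsmodels.tsa.api import VAR

# One row per week: review_volume, pct_negative, stock_return.
df = pd.read_csv("weekly_metrics.csv", parse_dates=["week"], index_col="week")

model = VAR(df[["review_volume", "pct_negative", "stock_return"]])
results = model.fit(maxlags=8, ic="aic")   # let AIC choose the lag length

# Does chatter "lead" returns? Granger-causality test of the review metrics
# with respect to stock_return.
gc = results.test_causality("stock_return", ["review_volume", "pct_negative"], kind="f")
print(gc.summary())

# Impulse responses: how a shock to negative chatter plays out over 6 weeks.
irf = results.irf(6)
irf.plot(impulse="pct_negative", response="stock_return")
```

The appeal of a VAR here is that every variable is allowed to influence every other variable over time, so the lead-lag relationship between chatter and returns falls out of the estimates rather than being assumed up front.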

Lessons for marketers:

  • Monitoring consumer opinions in social media is justifiable not only from a marketing perspective; it makes financial sense as well. Research such as this can help make the case to financial managers for why a company should invest in such monitoring activities.
  • Although positive reviews may make one feel warm and fuzzy, it’s much more important to pay attention to negative reviews.  In general, negative information is much more diagnostic in conveying market sentiment.

Lessons for investors:

  • Consumer reviews may seem far removed from the complex mathematical modeling that goes into stock picking and performance prediction. But this research suggests that there is value for investors in monitoring this social space.
  • The researchers further recommended a few investment approaches. For example, as a short-term strategy, buy a stock when its product reviews enter the top 20% and sell when they drop out of the top 20%. The recommended holding period for this strategy is 6 weeks. (A toy sketch of this rule follows below.)
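Purely to illustrate the mechanics of that short-term rule (this is not the researchers’ backtest, and the weekly scores below are invented), a pandas sketch might look like this:

```python
# Toy illustration of the "top 20% of review sentiment" entry/exit rule.
# The weekly scores are made up; this is not the researchers' backtest.
import pandas as pd

# Rows = weeks, columns = tickers, values = some weekly review score.
scores = pd.DataFrame(
    {"AAA": [3.1, 3.4, 4.6, 4.7], "BBB": [4.5, 4.4, 3.0, 2.9],
     "CCC": [2.0, 2.1, 2.2, 2.3], "DDD": [3.9, 4.0, 4.1, 4.2],
     "EEE": [1.5, 1.4, 1.6, 1.5]},
    index=pd.date_range("2009-11-02", periods=4, freq="W"),
)

cutoff = scores.quantile(0.8, axis=1)      # weekly cross-sectional top-20% threshold
in_top = scores.ge(cutoff, axis=0)         # True while a stock sits in the top 20%

entries = in_top & ~in_top.shift(1, fill_value=False)   # buy signal: just entered
exits = ~in_top & in_top.shift(1, fill_value=False)     # sell signal: just dropped out
# (The presenters suggested holding roughly 6 weeks under this short-term rule.)

print(entries)
print(exits)
```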

Do bloggers affect product sales?

Bloggers like me would probably all like to know that we are making a real impact with the time and effort we’ve put into our blogs. Some companies also invest heavily in the blogosphere and want to know whether that’s a wise thing to do. The research presented by Professor Sriram Venkataraman from Emory University found that blogger influence is geographically specific and depends on the demographics of a market. Using movie industry data, this research finds that a movie’s first-day national sales are not associated with blog variables. However, at the DMA (designated market area) level, strong geographic influence emerges. Not surprisingly, markets with a larger proportion of young people are more likely to be affected by blogs and, at the same time, more likely to discount the influence of company-sponsored advertising. Markets with a higher proportion of female consumers tend to be more forgiving of negative blog posts: these consumers could read quite negative posts about a movie but still feel and act positively toward the movie.

Lessons for marketers:

  • Consumer blogs can be a worthwhile tool to integrate into a company’s marketing strategy.
  • Selectively using these tools based on each market’s demographics may be more effective than a blanket strategy.

What about user contribution in new product development?

This research first struck me as using a very clever data source to address an important question. Partially based on Professor Matthew O’Hern’s doctoral dissertation, this project uses the well-known open source community SourceForge.net to examine whether user collaboration and contribution truly lead to better and faster product development. The answer is mixed. O’Hern and colleagues classified user contributions on SourceForge into three categories: (1) user reports: reports of bugs and issues found in a piece of software; (2) user requests: requests for new functionality or modifications to be added to future software releases; and (3) user revisions: user-submitted solutions (i.e., code) for fixing certain problems or adding new functionality to a software release. They found that:

  • User reports of problems increase release activities, indicating a positive impact on software development.
  • At the same time, such problem reports alert other users to issues with the software and reduce the download volume for a software release.
  • User requests have the most negative impact, both reducing download volume and release activities.
  • Most surprising to me, users submitting their own solutions did not have any significant impact on release activities. The only impact was an increase in download volume for a given month.

Lessons for marketers:

  • Wiki-type efforts by users may not always be beneficial to a company’s new product development. When not properly managed, they can actually prolong the development process and reduce speed-to-market.
  • Caveat: SourceForge is a community of mostly volunteers who do not have a strong commercial interest. Therefore, the proper utilization and integration of user revisions may be limited by the lack of manpower and resources. I would not be surprised if user-submitted revisions had a more positive impact in a more closely managed environment.

* * * * * * *

Plenty of information to digest for a while.  So I’m gonna stop here for Part I of the series.  What do you think of these research insights?  I’d love to hear back from you.  If you find any of these projects particularly interesting and would like more information, I encourage you to contact the presenter.  Whenever possible, I tried to provide a link to the presenter’s homepage so that you can find his/her contact information.

In Part II of this series, I will discuss another project on a privacy-friendly targeted advertising approach based on anonymized browsing data. I will also share with you a few high-priority topics related to social media and Internet marketing that were identified by practitioners at the conference. So stay tuned!

Word-of-Mouth or Traditional Marketing?

Some people may disagree with what I am about to say here: online social networks bring people closer to each other. At least that is the personal impact that they have had on me.  But what does this mean for marketing?  One answer is that word-of-mouth between consumers is carrying more weight in how we choose and consume products. Whether we love or hate a product, now it is so easy to make it known to the public that we are essentially affecting the opinions of other consumers (from total strangers to close friends) every day.

Managers are often hesitant to invest in encouraging word-of-mouth, however, as its effects are notoriously difficult to measure. This is because word-of-mouth behavior is often unobserved, and it is difficult to tease out the concurrent impact of traditional marketing. These are exactly the problems that a recent article by Michael Trusov and colleagues in the Journal of Marketing tried to tackle. Entitled “Effects of Word-of-Mouth Versus Traditional Marketing: Findings from an Internet Social Networking Site,” this article offers a clear answer on the relative effectiveness of word-of-mouth vs. traditional PR and marketing.

Word of Mouth

What did they look at?
The impact of word-of-mouth, event marketing, and media appearances on sign-ups for an undisclosed online social network.

Some intuitive findings:
More new sign-ups resulted in more word-of-mouth; event marketing led to more media appearances, and vice versa. Word-of-mouth, however, was not affected by previous event marketing or media appearances, suggesting that consumers form their opinions and take action relatively independently.

Some not-so-intuitive and very important findings:
The 3-day elasticity of sign-ups with respect to word-of-mouth was 0.17. In layman’s terms, this means that doubling the amount of word-of-mouth increases sign-ups by about 17%. The corresponding impact from event marketing and media appearances, in contrast, was only 1.7% and 2.2%. The gap becomes even bigger for long-term effects. In the long run, the effect of word-of-mouth is 20 times that of event marketing and 30 times that of media appearances. While doubling event marketing or media exposure led to respective long-run increases of only 2.6% and 1.7% in sign-ups, doubling word-of-mouth increases sign-ups by a full 53%. Financially, an outbound word-of-mouth referral translates into a 75-cent-per-year increase in advertising revenue.
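As a back-of-the-envelope check on how those elasticities map to the quoted percentages (my own linear reading of the numbers in this write-up, not a calculation from the article itself):

```python
# Linear reading of the reported elasticities: a 100% increase (doubling)
# in an activity raises sign-ups by roughly elasticity x 100%.
# The 0.53 long-run figure is implied by the 53% quoted above.
elasticities = {
    "word-of-mouth (3-day)": 0.17,
    "word-of-mouth (long-run)": 0.53,
}
for channel, e in elasticities.items():
    print(f"doubling {channel}: ~{e:.0%} more sign-ups")
```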

What does this mean for marketing practice?
Word-of-mouth is a powerful tool for customer acquisition. With the help of the more powerful tracking tools provided by social networks and websites, it is possible for managers to measure the return on word-of-mouth activities. The mathematical approach used in this article (vector autoregressive modeling) further helps tease out the impact of other marketing and PR activities so that the true effect of word-of-mouth can be accurately measured. Together, this should reduce the hesitation to incorporate word-of-mouth into a company’s overall marketing strategy. The findings from this article also provide strong motivation to better utilize the word-of-mouth channel of communication.

Cautions
Readers should be cautioned against taking the results of the above research too literally. Two things in particular should be taken into consideration. First, the data came from an online social network. Customers on such websites are usually highly motivated to invite their friends, and those invited by their friends are also very likely to sign up. If we were to change the context to, say, online banking, both the level and the impact of referrals are likely to be lower. Second, the word-of-mouth activities studied in this article are all organic referrals initiated by consumers themselves. If the word-of-mouth had been stimulated by the company (say, with financial incentives), the referrals might not have seemed as genuine to other consumers and therefore might not have created as strong an effect as reported in this study. Although these are real limitations, the findings from this study are still quite powerful indicators of the word-of-mouth effect. It is a tool managers should not ignore.

Reference
Michael Trusov, Randolph E. Bucklin, and Koen Pauwels (2009), “Effects of Word-of-Mouth Versus Traditional Marketing: Findings from an Internet Social Networking Site,” Journal of Marketing, Vol. 73 (September), pp. 90-102.