Archives for the month of: January, 2012

The ACM CHI 2012 programme is up. Although 1,214 papers were rejected from the conference, many interesting ones seem to have made it through. As usual (for me), I’ve glanced through all the titles and abstracts as a first round of filtering for future reading. While doing so, I categorised the titles into various groups. Here they are below: what did I miss?

  • Crowdsourcing [148, 202, 117]
    • Communitysourcing: Engaging Local Crowds to Perform Expert Work Via Physical Kiosks
    • LemonAid: Selection-Based Crowdsourced Contextual Help for Web Applications
    • A Quantitative Explanation of Governance in an Online Peer-Production Community
    • Evaluating Compliance-Without-Pressure Techniques for Increasing Participation in Online Communities
    • Social Desirability Bias and Self-Reports of Motivation: A Cross-Cultural Study of Amazon Mechanical Turk in the US and India
    • Your opinion counts! Leveraging social comments for analyzing aesthetic perception of photographs
    • Human Computation Tasks with Global Constraints
    • Strategies for Crowdsourcing Social Data Analysis
    • Direct Answers for Search Queries in the Long Tail
  • Online Communities, Social Networks and Media [146, 203, 139]
    • Profanity Use in Online Communities
    • Panel on failures in social media
    • Designing Social Translucence Over Social Networks
    • Perceptions of Facebook’s Value as an Information Source
    • ReGroup: Interactive Machine Learning for On-Demand Group Creation in Social Networks
  • Twitter [148, 146, 166]
    • “I can’t get no sleep”: Discussing #insomnia on Twitter
    • #EpicPlay: Selecting Video Highlights for Sporting Events using Twitter
    • Twitter and the Development of an Audience: Those Who Stay on Topic Thrive!
    • A Longitudinal Study of Facebook, LinkedIn, & Twitter Use
    • Breaking News on Twitter
    • The Twitter Mute Button: a Web Filtering Challenge
    • Nokia Internet Pulse: A Long Term Deployment and Iteration of a Twitter Visualization
  • Mobiles, Sensing, Cities [176, 165]
    • Drawing the city: Differing perceptions of the urban environment
    • Augmenting Spatial Skills with Mobile Devices
  • Recommender Systems & Personalization [176, 182, 139]
    • Characterizing Local Interests and Local Knowledge
    • Mobile Service Distribution From the End-User Perspective – The Survey Study on Recommendation Practices
    • AccessRank: Predicting What Users Will Do Next
    • Effects of Behavior Monitoring and Perceived System Benefit in Online Recommender Systems
    • Design and Evaluation of a Command Recommendation System for Software Applications
    • To Switch or Not To Switch: Understanding Social Influence in Online Choices
    • Tell Me More? The Effects of Mental Model Soundness on Personalizing an Intelligent Agent
  • Changing Behaviour [130]
    • A Transformational Product to Improve Self-Control Strength: the Chocolate Machine

Recommender systems currently work under a very specific mantra: they offer you stuff that you may like by doing computations on data that reflects what you like (e.g., star ratings), who you are (e.g., based on your social connectivity), or how you have behaved in the past (clicks, queries). One of the attention-grabbing “problems” that has emerged from the way these algorithms are applied (for example, on social networks) is that personalisation will ultimately lead to echo chambers, where you only consume the same content as those who are exactly like you; the algorithm imprisons you inside an inescapable “bubble.” While it seems that we are finally moving beyond the “conventional wisdom” of the existence of these echo chambers (see the Facebook study directly here), one point remains: recommender systems still do not (by design) help you to change, achieve goals, or actively work on becoming who you want to be.
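
To make the “computations on data that reflects what you like” part concrete, here is a minimal sketch of the classic ratings-based approach (item-to-item collaborative filtering). The toy ratings, names, and weighting are purely illustrative, not any particular system’s implementation:

```python
from collections import defaultdict
from math import sqrt

# Toy star ratings: user -> {item: rating}. Purely illustrative data.
ratings = {
    "alice": {"jazz_album": 5, "rock_album": 2, "pop_album": 4},
    "bob":   {"jazz_album": 4, "rock_album": 1, "folk_album": 5},
    "carol": {"rock_album": 5, "pop_album": 2},
}

def item_vectors(ratings):
    """Re-index the data as item -> {user: rating}."""
    items = defaultdict(dict)
    for user, prefs in ratings.items():
        for item, r in prefs.items():
            items[item][user] = r
    return items

def cosine(a, b):
    """Cosine similarity between two sparse rating vectors."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[u] * b[u] for u in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm

def recommend(user, ratings, top_n=3):
    """Score unseen items by their similarity to items the user already rated highly."""
    items = item_vectors(ratings)
    seen = ratings[user]
    scores = defaultdict(float)
    for liked, r in seen.items():
        for other in items:
            if other in seen:
                continue
            scores[other] += r * cosine(items[liked], items[other])
    return sorted(scores.items(), key=lambda x: -x[1])[:top_n]

print(recommend("carol", ratings))
```

Notice that the scores can only ever point you toward items that resemble what you (and people like you) already rated highly: nothing in the loop knows anything about where you want to go next.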

However, there is plenty of work on “nudging” people with technological means in order to change their behaviour. Most of it relies on turning invisible data into some kind of feedback and adding in a bit of fun: from turning stairs into a piano (to nudge you off that escalator) and reflective tables (that light up based on how much each person is talking), to community-level awareness of electricity consumption and mobile apps like UbiGreen and UbiFit. The magic of it all seems to be that, once people become aware of what they are doing, and the feedback is placed in an appropriate context (e.g., positive/negative emotional feedback in the form of smileys?), behaviour starts to change.
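
As a toy illustration of that “invisible data into feedback” loop, here is a hypothetical sketch that maps a measured quantity (a household’s daily electricity use) onto the kind of smiley-style emotional feedback mentioned above; the thresholds and numbers are made up:

```python
# Hypothetical sketch: turn an invisible measurement (daily electricity use in kWh)
# into a simple emotional cue relative to the household's own baseline.
def feedback(todays_kwh, baseline_kwh):
    ratio = todays_kwh / baseline_kwh
    if ratio < 0.9:
        return ":)"   # noticeably below your usual consumption
    if ratio <= 1.1:
        return ":|"   # roughly your usual consumption
    return ":("       # noticeably above your usual consumption

print(feedback(todays_kwh=8.2, baseline_kwh=10.0))  # -> ":)"
```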

Why don’t recommender systems build in aspects of nudging and feedback in order to help their users achieve their own goals? Even if it’s something as banal as wanting to learn more about a particular genre of music, to become healthier, or to explore restaurants. There are, instead, humorous examples of the opposite in action: take the “TiVo thinks I’m gay” example, where people ended up frustrated by their recommendations because they didn’t feel that they reflected who they are.

I think that these two (historically separate) fields of research have a lot to say to each other. Persuasive-style research seems to lack means of long-term engagement with people: how can in situ feedback and displays be turned into long-term interfaces that prompt people to change their behaviour, rather than slip back into their old routine once the novelty of the nudge has worn off? Recommender systems have been very successful in this area; they are routinely used online to increase engagement and customer retention. How can you tailor the way you nudge to who you are nudging? What works to nudge one person toward more sustainable travel habits may not work for someone else: recommender systems are built around automatically learning exactly this per-person tailoring. On the other hand, how can recommender systems become more than just “content filters”? Would a personalised nudge work?

Background

I’ve recently started a new post at Cambridge on a project (with the fancy acronym UBHave) called Ubiquitous and Social Computing for Positive Behaviour Change, and have therefore been spending time thinking about how the research I’ve done in the past may be applicable here as well. Also, tonight I went to a very interesting talk by Yvonne Rogers about her work on behaviour change: I owe all the nudging examples linked above to her great talk.

One of the key ingredients of many successful online companies has been rapid iteration and improvement of their services via A/B testing. In essence, you split your users into two (or more) groups, serve each group a variant of your service (e.g., different algorithms or user interfaces), and then sit back and measure how each group behaves. Once you are operating at web scale, the sheer number of visitors and the potential for rich data collection can really inform those companies about how they are performing and which ideas work better than others. Size matters the most: (we hope that) once we are dealing with a large enough sample, random assignment will even out all the other confounding factors that may play a role in what we are trying to measure. In other words, the web turns the world into a living laboratory.
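
For concreteness, here is a minimal sketch of that recipe: deterministically split users into two groups, serve each group a variant, and compare a simple outcome metric. The assignment rule, the metric, and the simulated click rates below are illustrative assumptions, not anyone’s production system:

```python
import hashlib
import random
from statistics import mean

def assign_group(user_id, experiment="new_ranking_algorithm"):
    """Deterministically split users into two groups ('A' or 'B')."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Simulate logging a per-user outcome (e.g., clicked a recommendation or not).
random.seed(0)
logs = []
for user_id in range(10_000):
    group = assign_group(user_id)
    # Pretend variant B genuinely performs a little better.
    click_rate = 0.10 if group == "A" else 0.12
    logs.append((group, 1 if random.random() < click_rate else 0))

# Compare group sizes and average outcome per group.
for group in ("A", "B"):
    outcomes = [clicked for g, clicked in logs if g == group]
    print(group, len(outcomes), round(mean(outcomes), 4))
```

Hashing the user ID keeps each person in the same group on every visit, so any difference in the metric can be attributed to the variant rather than to who happened to show up on a given day.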

Unfortunately, this cool technique is not readily available in the physical world. Imagine, for example, trying to evaluate a policy that aims to reduce the number of children being killed by cars. How do you split your users (and ensure randomisation)? How do you cope with the fact that people already behave differently in different geographic areas? (And how do you come to terms with the ethical questions?) The sad conclusion seems to be that, when we intervene in the physical world and observe a change, it remains difficult to speak of anything more than a correlation between what we did and the behaviour we (hope to have) caused.

Luckily, alternative approaches exist. The equivalent of an A/B test, when you can’t randomise your sample, is a quasi-experiment. In a recent paper, we adopted this perspective in order to examine the impact of changing the user-access policy of London’s shared “Boris bikes.” By splitting our data around the time of the change, carefully cleaning it, and making sure we maintained a large temporal and spatial scale of data, we examined how sensor readings from bicycle stations can be used to observe how the policy change propagated across time and across the city. Interestingly, the data showed that the change in policy played out differently at different stations: some stations that people travelled to in the morning and left from in the evening flipped their pattern completely.
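
As a rough sketch of the kind of before/after comparison this involves (the reading format, field names, thresholds, and dates below are hypothetical; the paper’s actual analysis is considerably more careful about cleaning and scale):

```python
from datetime import datetime
from statistics import mean

POLICY_CHANGE = datetime(2010, 12, 3)  # placeholder date for the policy change

def morning_evening_profile(readings):
    """Average station occupancy during the morning vs. evening peaks."""
    morning = [r["occupancy"] for r in readings if 7 <= r["time"].hour < 10]
    evening = [r["occupancy"] for r in readings if 17 <= r["time"].hour < 20]
    return mean(morning), mean(evening)

def compare_station(readings):
    """Profile a single station before and after the policy change."""
    before = [r for r in readings if r["time"] < POLICY_CHANGE]
    after = [r for r in readings if r["time"] >= POLICY_CHANGE]
    return morning_evening_profile(before), morning_evening_profile(after)

# Made-up readings for one station whose morning/evening pattern flips:
sample = [
    {"time": datetime(2010, 11, 1, 8), "occupancy": 0.9},
    {"time": datetime(2010, 11, 1, 18), "occupancy": 0.2},
    {"time": datetime(2011, 1, 10, 8), "occupancy": 0.3},
    {"time": datetime(2011, 1, 10, 18), "occupancy": 0.8},
]
print(compare_station(sample))  # ((0.9, 0.2), (0.3, 0.8))
```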

The rest of the details are in the paper (reference below). However, since this is the year that politicians are taking on coding, maybe they should also take a few hints from web companies and start running their own A/B tests.

N. Lathia, S. Ahmed, L. Capra. Measuring the Impact of Opening the London Shared Bicycle Scheme to Casual Users. To appear in Transportation Research Part C (Elsevier), accepted December 2011.
