1. Google+ has lost its early momentum. Is it the new Chrome, or the new Wave?

    Posted August 4, 2011 in social media  |  1 Comment so far

    Remember Google Chrome? It was a browser that Google launched in 2008. They said it’d be as well-known as Firefox and Safari and Internet Explorer and Konqueror. And it had a logo that looked like a Pokéball.

    Google Chrome logo mashup

    Image courtesy of labnol.org

    Ring a bell? Yes? Of course, I knew you’d remember the Pokéball. So what happened to Chrome? At first everyone was really enthusiastic about it but then they got bored and usage dropped off. People who look at browser statistics started saying that Chrome was a failure within a few months of launch:

    Usage of Chrome peaked soon after its launch at about 3.1% of the browser market, after which users pretty much lost interest and went back to their usual browsers, bringing Chrome’s market share down to a steady 1.5%…

    And I was describing Chrome as a Google mis-fire in December 2008:

    Like around 3% of the internet I installed and started using Chrome when it came out. However, I’m not among the 0.83% of the internet who are still using it…

    So given that the writing was so obviously on the wall for Chrome, it’s not surprising that hardly anyone remembers it nowadays, right? Right?

    …OK, time to drop this strained rhetorical device. The point, in case you haven’t guessed, is that a lot of people – me included – called time on Chrome when its brief honeymoon period ended. A couple of years later and these doubters – yes, me included – were proven wrong. In fact, Chrome’s just overtaken Firefox as the UK’s second most popular browser. An early stumble doesn’t always mean impending doom.

    More recently, another new Google product has come off the starting blocks only to falter in its first few strides: Google+. Despite initial enthusiasm the buzz is dying down and traffic has dropped off from its early weeks. People are talking about “giving up” on it.

    Can Google+ take heart from what happened to Chrome? Or is it doomed? Let’s look at a couple of arguments either way.

    “Google+ will rule over us all and bring light to the darkest corners of the Earth”

    Let’s compare Google+ to Twitter. To begin, how many of you had even heard of Twitter in May 2006 when it was as old as Google+ is now? I hadn’t, and I’m a committed geek. It took Twitter ages to get even recognisably close to its current levels of popularity.

    Remember spring 2009, and how there was so much confusion about what Twitter was for? That was three years into Twitter’s lifespan. Google+ has only been around for five weeks, and already has 25 million users. Judged by Twitter’s standards, that growth rate is positively stratospheric.

    The same applies to Facebook – it didn’t get to half a billion users in its first five weeks, did it? So who cares about a minor dip in traffic? Google+ is destined for greatness.

    “Google+ is doomed! Escape before it sinks beneath the waves or you’ll be doomed too”

    Let’s go back to the comparison between G+ and Chrome. So Chrome had an early stumble but then recovered? Fair enough. But there are differences between G+ and Chrome – big differences.

    Imagine you’re a Chrome user and you love it. You uninstalled IE. You uninstalled Firefox. Hell, you even uninstalled Minesweeper – Chrome is that good. Then you find out that no-one else in the world uses Chrome, no-one apart from you. Do you care?

    No, not at all. Your immediate experience of using Chrome is unaffected by whether others use it or not. But Google+, as a social product, is far more exposed to network effects – if no-one you know uses Google+, it’s next to useless. If everyone you know uses it, it’s useful even if it’s a shockingly poor product (cf. Myspace). So the sophomore dip in traffic is meaningful for G+ in a way that it wasn’t for Chrome. When a social product like Google+ loses its users, it loses everything.

    So what’ll happen to G+?

    My gut instinct isn’t all that positive. I like it – there’s something a bit “old-school-internet” about my own personal experience of G+, probably because of the specific people I’ve been connecting to there. But I’ve been involved in launching and running quite a few “online communities” (remember them?) in my time, and I notice some telltale signs among the people I follow. Not enough posts. Too many people speaking to an empty room, casting links off into the void that spark no discussion, no debate.

    Healthy online communities need some tension, some arguments, some passion, some disagreements. Maybe that’s what Google+ needs so that it feels less like a lab and more like a space for life and all its anger and mess. So let’s post some flamebait and check back in six months to see how it’s getting on.


  2. A hedge fund based on Twitter may not be as stupid as it sounds

    Posted May 24, 2011 in comment  |  No Comments so far

    Using online analytics and social media trends to predict real-world events is nothing new. Twitter’s been used to predict box-office sales (story link, detailed paper) and Google search data has been telling us about future flu epidemics for a while now.

    Even I got in on the act, demonstrating back in 2009 that Google Insights could anticipate changes in UK unemployment figures.

    Financial difficulties searches versus unemployment, until April 2009

    UK unemployment rate charted against search volumes for 24 related keywords, from January 2004 to April 2009 Sources: Office for National Statistics, Google Insights

    Maybe I should have followed through with that idea, because there’s now a hedge fund that bases its investment decisions on data from Twitter. It’s called Derwent Capital Markets, it opened for business last week, and if its managers end up making a mint there might well be a new bandwagon in town.

    So how do you run a hedge fund based on tweets? From what I understand of Derwent’s methodology, their algorithms measure the “calmness” of the Twittersphere – presumably based on sentiment analysis, which I’m a bit skeptical about. This is used to estimate the volatility of the Dow Jones Industrial Average index, with a three-day time lag.
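    Derwent haven’t published their model, of course, so any illustration is guesswork. But a toy version of the idea might look like the sketch below – a crude word-list standing in for real sentiment analysis, a per-day “calmness” score, and the three-day lag used to turn that score into a volatility signal. The word lists, scoring and threshold here are all invented for illustration; nothing about them comes from Derwent.

      # Toy word lists - a stand-in for real sentiment analysis,
      # which would be far more sophisticated (and far messier)
      CALM_WORDS = {"calm", "steady", "fine", "relaxed", "good"}
      ANXIOUS_WORDS = {"crash", "panic", "fear", "worried", "chaos"}

      def calmness(tweets):
          """Score a day's tweets: +1 per calm word, -1 per anxious word,
          averaged over the number of tweets."""
          score = 0
          for tweet in tweets:
              words = tweet.lower().split()
              score += sum(w in CALM_WORDS for w in words)
              score -= sum(w in ANXIOUS_WORDS for w in words)
          return score / max(len(tweets), 1)

      def volatility_signals(daily_calmness, lag=3):
          """Map each day's calmness to a signal `lag` days later.
          A real fund would fit a statistical model here; this just flags
          negative-calmness days as likely to precede volatility."""
          return [(day + lag, "volatile" if score < 0 else "calm")
                  for day, score in enumerate(daily_calmness)]

      # Hypothetical example: three days of tweets
      days = [
          ["Markets look steady today", "Feeling good about tech"],
          ["Total panic out there", "Fear of a crash everywhere"],
          ["Calm morning", "All fine here"],
      ]
      scores = [calmness(d) for d in days]
      print(scores)                      # [1.0, -1.5, 1.0]
      print(volatility_signals(scores))  # [(3, 'calm'), (4, 'volatile'), (5, 'calm')]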

    The methodology as described leaves a lot of unanswered questions. Does a non-calm day of Twitter conversations always correspond to a drop in the DJIA, or just volatility? Are they trying to predict metrics like trade volume and so on as well as broader day-to-day movements in the overall index? And are they ranking Twitter users based on credibility, or are spam bots equal to financial journalists, economists, and prominent investors?

    Obviously algorithmic hedge funds aren’t about to disclose their inner workings so questions like this will have to remain unanswered for now. But what of the other, larger, question – isn’t the whole idea just, well, a bit… silly?

    I can see why people might react this way, and even I feel a bit skeptical about something that describes itself as a “social media-based hedge fund” yet apparently pulls data only from Twitter, when there are lots of other sources that could be tapped. But it would be wrong to dismiss the basic concept.

    Our everyday activities – web searches, page views, purchases, things we say on open social networks – leave a trail of data behind, which we tend to see as ephemeral or throwaway. We severely underestimate the value of this data but Google doesn’t, Facebook doesn’t, and we shouldn’t either. This data becomes even more valuable when aggregated across entire countries, continents, or the planet as a whole. In fact, it could be argued that the predictive potential of aggregated global real-time data has yet to be fully imagined, let alone realised.

    The biggest problem with this resource is that we don’t really know how to exploit it yet. Things like Google Flu Trends or this Twitter-based hedge fund may be crude and experimental, and will definitely look even more so in five years’ time. Along the way there will be hype, bandwagonism, maybe even a stock market bubble, resulting from the application of real-time data to real-world problems.

    But we need to make a start somewhere, and as silly as a Twitter-based hedge fund might sound, it’s as good a place to begin as any.


  3. Felix Salmon on the problems with Twitter’s transience

    Posted December 31, 2010 in comment, social media  |  No Comments so far

    I’m posting this from my phone, so apologies in advance for any typos. But I wanted to share this article from Felix Salmon on how the Wired/Wikileaks discussions of the last few days have highlighted a problem with Twitter’s new role in online debates:
    http://blogs.reuters.com/felix-salmon/2010/12/30/the-evanescence-of-twitter-debates/

    As commentators use their blogs for increasingly journalistic content, the conversational aspect of blogging moves on to Twitter. This leads to two problems.

    First, these conversations become very hard to join mid-stream. If you weren’t following from the beginning, you’ll have a hard time catching up. This is especially true of conversations that involve more than two people, as the “in reply to” functionality is no help. A comment thread on a blog or forum, on the other hand, can be read from the beginning even if you’re coming late to the party, and its linear structure makes it easy to catch up.

    The second problem is that Twitter loses these discussions after a couple of months, so they’re not available for future reference. This ephemerality is part of Twitter’s appeal for users, but from an archiving point of view it’s definitely a weakness. It’s good to be able to look back on how topics were discussed in their time, but Twitter currently doesn’t let us do that.

    Maybe Twitter will evolve to address these problems over time. If it doesn’t, however, there could be an opportunity for third party products that do.


  4. Towards a truly social TV experience (part 1)

    Posted November 24, 2010 in media  |  2 Comments so far

    When the concept of on-demand television was still new and exciting, it was tempting to think it might lead to the demise of the mass synchronous experience that was broadcast TV. After all, what value could broadcast TV deliver that on-demand services like the iPlayer couldn’t? And was that value really worth the inconvenience and inflexibility it imposed on the viewer, who had to be in a set place at a set time to view the programme? Apart from sport and news, would anyone really care about the transmission times of programmes once on-demand TV had taken off?

    By now we know that, yes, people do still care about the transmission times of TV programmes, and the synchronous viewing experience of broadcast TV can have a value that justifies the burdens it places on the viewer. But this isn’t because on-demand hasn’t taken off. On-demand services have transformed the way we view television, but the broadcast TV experience has a new lease of life too.

    The internet, unsurprisingly, is the driving force behind both on-demand’s success and the renaissance in broadcast viewing. But two intertwined yet distinct “strands” of the internet are at work here.

    With on-demand, it’s the internet’s infrastructure – content delivery networks, consumer ISPs, the computers and set-top boxes found in the homes of viewers. The nuts and bolts of the internet’s growth have enabled on-demand services and the design of products like the iPlayer.

    But with broadcast TV, it’s not so much the technological or infrastructural “strand” of the internet as its social layer – social use of the internet among the wider public has grown hugely in the last five years. At the same time social interactions have accelerated, becoming more synchronous and less like the newsgroup / messageboard model of old. We post fewer words, more frequently, and the result is a far more conversational mode of online interaction.

    This has introduced a new dimension to the experience of watching broadcast TV. Viewers might not be physically connected to one another, as they were in the heyday of TV with the whole family gathered in the living room. But they’re connected to hundreds, thousands, maybe millions of others, watching the same show as they are. Some of these people are friends and others are strangers, but all are in reach – all are potential contributors to a conversation about the programme. Even the viewer sat alone in their living room can feel connected and involved as they watch, in a way that they couldn’t before.

    So the internet has brought about an alternative to broadcast TV while giving it a new lease of life at the same time. And it’s not just geeks that are engaged in this new way of TV viewing – if you need proof of this, a cursory glance at the #xfactor hashtag on Twitter should do it. The public has raced ahead of the technology here, using whatever gadgets come to hand to keep up with the conversations. No tool or “product” designed for social TV viewing is particularly prominent – it’s something that the public just does, in its own way.

    Is this going to change? Will technology catch up with the public – will new services specifically designed for social TV viewing come along, will they work, and will they bridge the gap between the on-demand and broadcast experiences? I’ll explore these questions in more detail in part 2 of this post.


  5. Google Buzz: a serious new fixture in the social web?

    Posted February 12, 2010 in social media  |  No Comments so far

    Not everyone is all that impressed by Google Buzz so far, but I am. Yes, questions are being raised about privacy – but such questions are a given in any modern discussion about social technology. And some have been quick to point out limitations in terms of interface (“I quickly found the Buzz user interface… visually uninviting”) and features (“Google Buzz: The Missing Features”) – but imperfection is inevitable when a service is only two days old.

    For what it’s worth, there are things about Buzz I’d like to change. Conversations shouldn’t be treated so much like emails, for example, with “read” and “unread” states – this brings “inbox anxiety” into the equation, something Twitter was wise to discard. And users could benefit from more fine-grained control over privacy settings.

    Inbox anxiety with Buzz

    Inbox anxiety with Google Buzz - I'm not looking forward to having hundreds of unread "Buzzes"

    But I’m happy to put these thoughts to one side: at the moment I’m more interested in the response it’s provoked among my own contacts, many of whom are tech-savvy but not really social web junkies. So far, it’s making me think that Buzz has an appeal for people who are active online but always disliked Twitter and had never heard of Friendfeed.

    Buzz has definitely been a conversation-starter in a way that Wave wasn’t. In the first few hours, many posts were as you’d expect – “what is this for?”, “can anyone see this post?”, that sort of thing. Today is day two for Buzz, however, and the conversations have started to move away from these meta topics. In fact they’re slowly starting to resemble the sorts of conversations these people have in real life.

    This is very different from Wave, which prompted a few discussions of the “what’s this all about?” variety before being largely abandoned even by early adopter types like myself. Obviously this might happen with Buzz as well – as I said above, today is only day two – but the acceptance trajectory so far seems very different. For example, the risk of being flooded with too much Buzz data seems much greater than that of Buzz falling into disuse.

    In many ways I’m tempted to think that Wave has been a kind of public beta for Buzz. MG Siegler at TechCrunch is thinking along similar lines in this post, If Google Wave Is The Future, Google Buzz Is The Present. Buzz certainly explains why Wave had no Gmail integration, something I wondered about at the time.

    Once again, it’s early days with Buzz. But my own anecdotal experiences so far make me suspect that – despite the contrary opinions of various mavens and competitors – it’s going to be a fixture in the social media landscape for some time to come.


  6. How to post your Last.fm loved tracks to Twitter

    Posted December 8, 2009 in How-to  |  5 Comments so far

    I remember when Twitter was still quite new. Back then, a lot of people were still trying to think of uses for it and one thing that was fairly common was to plug it into your Last.fm account.

    In retrospect I can see why that was seen as a good idea. Twitter was supposed to be about broadcasting minor ephemeral details, and the music you were currently listening to definitely fell into that category. But there was a downside. People listen to a lot of music and, with a Twitter post for each track played, that added up to a lot of useless information on Twitter. Thankfully, the practice of scrobbling directly to Twitter soon faded out.

    Today there are some more useful and less irritating ways of posting information from Last.fm (or, indeed, its open source alternative Libre.fm) to your Twitter account. One of them, Tweekly.fm, produces an automated weekly tweet of your top three artists. Another one, which I’m going to explain here, involves posting tracks that you “love” on Last.fm to your Twitter account.

    Here’s how it works:

    1. If you don’t have a Last.fm account, create one here
    2. Get the URL of your “Loved tracks” RSS feed. This is easy: just change “USERNAME” in the URL below to your Last.fm username.

      http://ws.audioscrobbler.com/2.0/user/USERNAME/lovedtracks.rss

    3. Test the URL by opening it in a browser. You should see something that looks a bit like this:
      Last.fm RSS feed browser output
    4. If it works, go to Twitterfeed.com and create an account if necessary
    5. Once logged in to Twitterfeed, click on the “Create new feed” button to the top-right of the screen
    6. In “Step 1: Send Feed To”, select Twitter. Click on the large “Authenticate Twitter” button and enter your Twitter account details. You’ll then be directed back to Twitterfeed.com
    7. In “Step 2: Name feed & source URL”, enter a name for the feed – this can be anything you like. In the “RSS Feed URL” field, paste the URL of your RSS feed
      Twitterfeed screenshot 1
    8. Click on the “test feed” button to make sure the feed is valid
    9. Click “Advanced settings”. A bunch of new options will appear underneath. Here’s a screenshot with the things you need to check circled in red:
      Twitterfeed's advanced settings

    10. In “Post content”, select “Title Only”. This will ensure that the posts to your Twitter account only contain the artist, title and shortened URL to the track you loved
    11. Make sure “Post link” is checked and a URL shortening service is selected
    12. You might also want to enter some text in the “Post Prefix” or “Post Suffix” fields, otherwise your tweets might be slightly baffling
    13. You’re done – just click “Create feed” and that’s it set up.

    Now whenever you “love” a track on Last.fm, your Twitter account will post a link to it. This makes Last.fm’s “love” feature a bit more useful when it comes to recommending music to other people – especially people who don’t use Last.fm. And as long as you don’t love everything you listen to you won’t be clogging up your Twitter feed.
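    Twitterfeed handles all of the plumbing above, but if you’d like to sanity-check your feed (or script something yourself), the loved-tracks RSS is easy to read programmatically. Here’s a minimal sketch in Python – it assumes you’ve installed the feedparser library, and the username is obviously a placeholder for your own:

      import feedparser  # install with: pip install feedparser

      USERNAME = "your_lastfm_username"  # placeholder - replace with yours
      FEED_URL = "http://ws.audioscrobbler.com/2.0/user/%s/lovedtracks.rss" % USERNAME

      # Fetch and parse the "Loved tracks" feed
      feed = feedparser.parse(FEED_URL)

      # Each entry is one loved track; the title is usually "Artist - Track"
      for entry in feed.entries[:10]:
          print("%s -> %s" % (entry.title, entry.link))

    If the loop prints your recently loved tracks, the feed URL from step 2 is correct and Twitterfeed should have no trouble with it.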


  7. Using Google Spreadsheets to extract Twitter data

    Posted November 20, 2009 in How-to, twitter  |  28 Comments so far

    Update (5th December 2017): Several years ago, Twitter changed its API in a way that completely broke the process I describe below. I don’t know how you’d do the same thing today. It would probably help if you were some kind of white supremacist, going by where Twitter’s moral compass seems to be pointing.


    Last weekend I was looking for ways to extract Twitter search data in a structured, easily manageable format. The two APIs I was using (Twitter Search and Backtweets) were giving good results – but as a non-developer I couldn’t do much with the raw data they returned. Instead, I needed to get the data into a format like CSV or XLS.

    Some extensive googling led me to this extremely useful post on Labnol, where I learnt about how to use the ImportXML function in Google Spreadsheets. Before too long I’d cracked my problem. In this post I’m going to explain how you can do it too.

    Data you can extract from Twitter

    This walkthrough will teach you how to extract two types of Twitter data using Google Spreadsheets – tweets and links.

    Tweets are extracted using the Twitter Search API in conjunction with ImportFeed. This allows Twitter search results to be extracted into a spreadsheet format.

    Links are extracted using the Backtweets API in conjunction with ImportXML. The Backtweets API allows you to find any links posted on Twitter even if they’ve been shortened using services like bit.ly or tinyurl.

    I’m in a hurry, can I just do this right now?

    If you just want to do it – instead of learning how to do it – just open this Google spreadsheet I’ve created. You’ll need to make your own local copy so you can edit it. Instructions can be found in the spreadsheet itself.

    How to extract tweets containing links

    The instructions below will help you create a Google Spreadsheet that pulls in and displays the time, username and text of all tweets containing links to a specified page. Because it uses Backtweets, these tweets will be retrieved even if they used shortened URLs from services like bit.ly or tinyurl.

    1. Create a new spreadsheet in Google Documents.
    2. Enter column labels in this order: “Search criteria”, “Timestamp”, “Username” and “Tweet text” in cells A1 to D1.
    3. In cell B2, underneath Timestamp, insert the following formula:
      =ImportXML("http://backtweets.com/search.xml?itemsperpage=100&since_id=1255588696&key=key&q="&A2,"//tweet_created_at")
    4. In cell C2, underneath Username, insert the following formula:
      =ImportXML("http://backtweets.com/search.xml?itemsperpage=100&since_id=1255588696&key=key&q="&A2,"//tweet_from_user")
    5. In cell D2, underneath Tweet Text, insert the following formula:
      =ImportXML("http://backtweets.com/search.xml?itemsperpage=100&since_id=1255588696&key=key&q="&A2,"//tweet_text")
    6. Now paste a search query into cell A2 – say, http://www.google.com. After a few seconds, you should see columns B, C and D fill up with tweets, looking something like the image below:
      Google Spreadsheet showing Backtweets results

    The formulas pasted into cells B2, C2 and D2 all reference the URL in cell A2, so whenever you paste anything new into A2 the search results should refresh. You can also paste parts of URLs into A2 – not just entire ones – which is useful for seeing all links to a specific directory on your site, for example.

    Finally, this tool can only extract 100 results at a time – but it is possible to set it up to retrieve more than that. Look at my sample Google Spreadsheet if you want to do this.

    Extracting tweets from Twitter search results

    The method for doing this is identical to the above, but uses the ImportFeed function instead of ImportXML.

    1. Create a new spreadsheet in Google Documents.
    2. Enter column labels in this order: “Search criteria”, “Timestamp”, “Username” and “Tweet text”. For the rest of this walkthrough, I’m going to assume that these labels are in cells A1 to D1, but in reality you can put them wherever you like
    3. In cell B2, underneath Timestamp, insert the following formula:
      =ImportFeed("http://search.twitter.com/search.atom?rpp=20&page=1&q="&A2, "items created")
    4. In cell C2, underneath Username, insert the following formula:
      =ImportFeed("http://search.twitter.com/search.atom?rpp=20&page=1&q="&A2, "items author")
    5. In cell D2, underneath Tweet Text, insert the following formula:
      =ImportFeed("http://search.twitter.com/search.atom?rpp=20&page=1&q="&A2, "items title")

    6. Type a search query into cell A2 – say, “Hoth”. Hit enter and the results will load. It should look something like this:
      Google Spreadsheets with data from Twitter search

    Things will go wrong if you insert characters like # or @ into the search query. To get around this, type %23 instead of # and %40 instead of @. This will allow you to search for hashtags and usernames.

    I haven’t been successful in generating more than 20 search results per request, but you can get around this using the page number parameter in the ImportFeed query string, as sketched below. See my own Google spreadsheet to find out how to do this.
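    For example – and this is just the same formula with the page parameter incremented, not something I can promise the API will honour indefinitely – a second set of cells could pull in results 21 to 40:

      =ImportFeed("http://search.twitter.com/search.atom?rpp=20&page=2&q="&A2, "items title")

    Stacking one such block per page, with page=1, 2, 3 and so on, lets you paginate through as many results as the API will serve.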

    I hope these instructions are useful – if you have any comments, questions or feedback, please let me know in the comments.


  8. Charging companies for Twitter – what could it involve?

    Posted February 17, 2009 in social media, strategy  |  No Comments so far

    You’re probably aware that Biz Stone, one of Twitter’s co-founders, told Marketing magazine on February 10th that:

    “We are noticing more companies using Twitter and individuals following them. We can identify ways to make this experience even more valuable and charge for commercial accounts”

    How to decode this quote? It’s fairly vague, but I can think of a few possible charging models that Twitter might adopt. I’ve listed three of them here:


    1) “Twitter tax”

    Twitter will try to identify accounts that are run by companies rather than individuals. It will then attempt to extract money from the owners of these accounts. Failure to pay will result in closure of the account.

    I don’t think this is very likely, however:

    • Distinguishing companies from individuals would be extremely difficult. A lot of anger would come from those who feel they’ve been unfairly classified (e.g. if you’re a consultant and you discuss professional topics on Twitter, are you a “company”?)
    • No value would be added for those who pay
    • A lot of genuinely handy and non-revenue-generating information services would vanish from Twitter, diminishing the value of Twitter as an information utility
    • This diminishing of Twitter’s usefulness would lead many people to desert the service.

    2) “Singling out the marketers”

    Like the first option, Twitter will identify accounts that are run by companies. However, it will draw a line between companies that use it for information services and those who use it as a sales channel. Companies who use it as a sales channel will be penalised while those who use it for information services will not.

    This is a bit more viable than option 1:

    • Distinguishing sales from servicing would be easier than distinguishing companies from individuals. Rules could be defined, e.g. if you are seen to link to product pages or talk about offers or sales then you’ll be penalised
    • It would allow services that people find useful to continue – e.g. getting news updates from the BBC
    • It would encourage companies to use the service in an “ethical” way while heavily penalising spammers
    • As a result, there would be a lower risk of people leaving the service.

    3) “The enhanced service”

    Twitter will not try to distinguish companies from individuals. However, it will create an “enhanced” account which will provide additional features at a cost. Companies will be free to keep using the “basic” service if they want to.

    This is the most likely option, I’d say:

    • The challenge would be to come up with features that would make a paid account compelling
    • These could include things like offering brand protection (the account is marked as ‘official’), ecommerce features (people being able to pay over Twitter), advanced analytics (see reports on your followers and their behaviour etc), tracking abilities (find out how many people clicked the link in the last message you sent, etc)…
    • This would add value for people who chose to pay
    • There would be no need on Twitter’s part to pay people to detect and penalise companies
    • Things like news feeds and so on would continue to operate, meaning that the usefulness of Twitter wouldn’t be too diminished.

    Most of the commentary I’ve read so far seems to assume that something akin to the first option, the “Twitter tax”, would be introduced. But Twitter surely realise that it would be costly to implement and would seriously impact their growth rate. An enhanced service for which companies or individuals could pay is far more likely.

    In particular, keep an eye out for commerce features. Pay-by-Twitter might seem far-fetched at the moment but as the service becomes ever more pervasive a compelling user need for that service will begin to emerge.


  9. Another Twitter visualisation

    Posted February 3, 2009 in social media, visualisation  |  No Comments so far

    I promise I’ll stop posting links to these one day. Anyway, this is from a series of Super Bowl-related interactive visualisations produced by the New York Times:

    Screenshot of NYT Twitter visualisation

    Unlike the visualisation of #inauguration posts I linked to recently, this isn’t based on hash tags but instead uses moving tag clouds to illustrate the volume of Twitter posts on various subjects during the Super Bowl.

    Examples include “Cardinals vs Steelers” (I know the Steelers are from Pittsburgh but from this animation I’d guess the Cardinals are from… Las Vegas? San Diego?), “talking about ads” (it’s vaguely depressing to see how much conversation the ads inspire) and player names (a guy called Fitzgerald obviously does something notable in the fourth quarter).

    This is maybe the most effective use of Twitter data I’ve seen so far, as it is centred around a single event but tracks various subjects of conversation related to that event. A far simpler and less interesting animation would have simply flagged every post with the hash tag #superbowl.


  10. Presidential inauguration – Twitter visualisation

    Posted January 23, 2009 in social media, visualisation  |  No Comments so far

    This animated map from FlowingData shows the global location of each Twitter post tagged as #inauguration between Monday and Wednesday this week.

    Twitter visualisation from FlowingData

    Although the world map isn’t shown, over time the US and the UK become almost perfectly defined by the density of Twitter post markers. You can also see outlines of South America and western Europe.

    http://projects.flowingdata.com/inauguration/

    The big flurry happens when the US wakes up on Tuesday morning…