1. A hedge fund based on Twitter may not be as stupid as it sounds

    Posted May 24, 2011 in comment  |  No Comments so far

    Using online analytics and social media trends to predict real-world events is nothing new. Twitter’s been used to predict box-office sales (story link, detailed paper) and Google search data has been telling us about future flu epidemics for a while now.

    Even I got in on the act, demonstrating back in 2009 that Google Insights could anticipate changes in UK unemployment figures.

    Financial difficulties searches versus unemployment: UK unemployment rate charted against search volumes for 24 related keywords, January 2004 to April 2009. Sources: Office for National Statistics, Google Insights
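    The comparison behind that chart is simple enough to sketch: take a time series of aggregated search volumes and a time series of unemployment rates, and measure how strongly they move together. A minimal sketch using Pearson correlation, with made-up illustrative figures rather than the real ONS or Google Insights data:

    ```python
    from statistics import correlation

    # Hypothetical monthly values for illustration only -- not the real data.
    # Aggregated search-volume index for "financial difficulties"-type keywords:
    search_volume = [52, 55, 61, 70, 78, 85, 91, 96]
    # UK unemployment rate (%) for the same months:
    unemployment = [5.2, 5.3, 5.6, 6.0, 6.4, 6.9, 7.2, 7.6]

    # Pearson's r: +1 means the two series rise and fall in lockstep.
    r = correlation(search_volume, unemployment)
    print(f"correlation: {r:.3f}")
    ```

    A high r on historical data is what suggests the searches might anticipate the official figures, though correlation alone says nothing about the size of any lead or lag.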

    Maybe I should have followed through with that idea, because there’s now a hedge fund that bases its investment decisions on data from Twitter. It’s called Derwent Capital Markets, it opened for business last week, and if its managers end up making a mint there might well be a new bandwagon in town.

    So how do you run a hedge fund based on tweets? From what I understand of Derwent’s methodology, their algorithms measure the “calmness” of the Twittersphere – presumably based on sentiment analysis, which I’m a bit skeptical about. This is used to estimate the volatility of the Dow Jones Industrial Average index, with a three-day time lag.
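    Derwent haven't published their model, but the general shape – a daily mood index on day t used as a signal for volatility on day t + 3 – can be sketched. Everything below (the word list, the scoring, the sample tweets) is invented for illustration; it is not Derwent's actual methodology:

    ```python
    # Toy sketch of a mood-index-with-lag signal. The word list, scoring
    # and data are all invented; Derwent's real model is not public.
    ANXIOUS_WORDS = {"crash", "panic", "fear", "losses", "worried"}

    def calmness(tweets):
        """Fraction of tweets containing no anxious words (1.0 = fully calm)."""
        anxious = sum(1 for t in tweets if ANXIOUS_WORDS & set(t.lower().split()))
        return 1 - anxious / len(tweets)

    # One list of tweets per day, oldest first.
    days = [
        ["markets look fine today", "nice weather"],
        ["panic selling everywhere", "fear in the markets", "heavy losses"],
        ["calm morning", "all quiet"],
    ]

    LAG = 3  # Derwent reportedly use a three-day lag.
    scores = [calmness(d) for d in days]
    # A trading signal for day t + LAG pairs calmness on day t with
    # (expected) index volatility LAG days later:
    signals = {t + LAG: s for t, s in enumerate(scores)}
    print(signals)
    ```

    Real sentiment analysis is of course far more involved than keyword matching, but the structure – score the stream daily, then shift the scores forward by the lag – is the part the press coverage describes.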

    This leaves a lot of unanswered questions. Does a non-calm day of Twitter conversations always correspond to a drop in the DJIA, or just volatility? Are they trying to predict metrics like trade volume and so on as well as broader day-to-day movements in the overall index? And are they ranking Twitter users based on credibility, or are spam bots equal to financial journalists, economists, and prominent investors?

    Obviously algorithmic hedge funds aren’t about to disclose their inner workings, so questions like these will have to remain unanswered for now. But what of the other, larger question – isn’t the whole idea just, well, a bit… silly?

    I can see why people might react in this way, and even I feel a bit skeptical about something that describes itself as a “social media-based hedge fund” and apparently pulls data only from Twitter, when there are lots of other sources that could be tapped. But it would be wrong to dismiss the basic concept.

    Our everyday activities – web searches, page views, purchases, things we say on open social networks – leave a trail of data behind, which we tend to see as ephemeral or throwaway. We severely underestimate the value of this data but Google doesn’t, Facebook doesn’t, and we shouldn’t either. This data becomes even more valuable when aggregated across entire countries, continents, or the planet as a whole. In fact, it could be argued that the predictive potential of aggregated global real-time data has yet to be fully imagined, let alone realised.

    The biggest problem with this resource is that we don’t really know how to exploit it yet. Things like Google Flu Trends or this Twitter-based hedge fund may be crude and experimental, and will definitely look even more so in five years’ time. Along the way there will be hype, bandwagonism, maybe even a stock market bubble, resulting from the application of real-time data to real-world problems.

    But we need to make a start somewhere, and as silly as a Twitter-based hedge fund might sound, it’s as good a place to begin as any.


  2. Gap are targeting annoying people this Christmas

    Posted December 6, 2010 in ephemera, marketing  |  No Comments so far

    I saw this window display at the Moorgate branch of Gap this morning:

    "Social media whiz" wants "epic hat"

    I don’t know where to begin…


  3. Towards a truly social TV experience (part 2)

    Posted November 29, 2010 in media  |  No Comments so far

    This is part two of a three-part post. Click here for part one

    When programmes like Question Time, X Factor or The Apprentice are broadcast, an increasing proportion of the audience are watching with other devices close to hand. Some might have a laptop in front of them, others a mobile phone. As the programme begins, these secondary devices come into play, helping viewers share quips and observations with their friends and with the wider world.

    From a design perspective, it’s not pretty. People are focusing on more than one screen at a time, their attention divided between the programme and the stream of comments. A large number of apps, devices, tools and services are being used. The ergonomics are awkward, with laptops perched on knees and thumbs typing on BlackBerry keyboards. It’s a mess, a dirty hack, a bottom-up kludge where everyone’s improvising with products designed with other things in mind.

    Media owners and interactive designers can react to this with both excitement and frustration: it’s exciting that a mass audience is doing something new with technology, something that has its own momentum; but it’s frustrating that they’re doing it in such an inelegant way. A lot of designers and broadcasters would prefer that the social TV experience was happening within a context they had defined, enabled by tools handed down to audiences for precisely that purpose.

    Audiences, however, might be quite happy doing what they’re doing with the tools they already have. Can media owners offer them something compelling enough to change this? As Lindsey put it in a comment on part 1 of this post:

    …there are serious questions about usability and the role of the TV as an ‘appliance’… simply layering the social web experience on top of a broadcast viewing experience won’t work. …the web people will not be able to assume success on the basis of the same usability principles they’ve always used, when they’re working at 10 feet.

    So a more ambitious approach is needed for any “product” that wants to be a central part of the social TV experience – merely putting a browser on a TV isn’t going to cut it.

    There are lots of reasons for this, not least of which is that the television has never been good at receiving user input. The three-digit page numbers on Ceefax were a pain to type, and searching on the Virgin Media iPlayer is a gruelling chore that demonstrates how little text entry has improved since then. When you’re watching a TV programme while following numerous online conversations about it, you need to comment quickly and the act of typing can’t be too distracting. Until the television gets better at this, secondary devices like phones and laptops are here to stay.

    Browsers on television screens aren’t the only angle being explored. During the UK’s 2010 general election campaign, broadcasters looked at ways of using online conversations about the leaders’ debates to enhance their coverage. Jude summed this up pretty well in another comment on part one:

    The TV networks… each created their own online/social viewing pages with ‘worms’ to show focus group reactions, drawing together social media references, setting up their own chat pages, etc… it was as if they’d tried to do everything at once because they didn’t know what would strike a chord.

    Which is understandable, given that broadcasters are still trying to grasp this phenomenon. Adopting a spirit of experimentation by doing as many things as possible is admirable, but can result in an unfocussed experience where the lack of confidence translates to a sense of desperation.

    Jude also mentions a tactic the TV networks used, where online audiences were surveyed to capture instant public response to the debates. This idea – incorporating elements of the online discussion into the programme itself – works well in principle, but can a TV programme really convey the volume and diversity of a full-fledged online discussion in a spot lasting thirty seconds? Or are the two mediums in conflict here, with the very nature of television encouraging a kind of editorialised “summing up” that runs counter to the nature of the online discussion? As Jude says, this feature came across as fairly “traditional media” when applied to the electoral debates, so maybe the gap between online discussion and linear TV is still too big (although it’s decreasing all the time, as Martin Belam points out in this informative article).

    In part three of this post I’ll talk about some other ways in which people are trying to support new ways of watching and talking about TV, including what I see as potentially the most interesting – fusing the broadcast conversations and the on-demand viewing into one single experience.

    Edit, Jan 2012: OK, so I never got round to the third part of this post! Too much time was spent watching X-Factor and not enough writing blog posts. I’ll revisit this topic at some point though, I promise


  4. Towards a truly social TV experience (part 1)

    Posted November 24, 2010 in media  |  2 Comments so far

    When the concept of on-demand television was still new and exciting, it was tempting to think it might lead to the demise of the mass synchronous experience that was broadcast TV. After all, what value could broadcast TV deliver that on-demand services like the iPlayer couldn’t? And was that value really worth the inconvenience and inflexibility it imposed on the viewer, who had to be in a set place at a set time to view the programme? Apart from sport and news, would anyone really care about the transmission times of programmes once on-demand TV had taken off?

    By now we know that, yes, people do still care about the transmission times of TV programmes, and the synchronous viewing experience of broadcast TV can have a value that justifies the burdens it places on the viewer. But this isn’t because on-demand hasn’t taken off. On-demand services have transformed the way we view television, but the broadcast TV experience has a new lease of life too.

    The internet, unsurprisingly, is the driving force behind both on-demand’s success and the renaissance in broadcast viewing. But two intertwined yet distinct “strands” of the internet are at work here.

    With on-demand, it’s the internet’s infrastructure – content delivery networks, consumer ISPs, the computers and set-top boxes found in the homes of viewers. The nuts and bolts of the internet’s growth have enabled on-demand services and the design of products like the iPlayer.

    But with broadcast TV, it’s not so much the technological or infrastructural “strand” of the internet as its social layer – social use of the internet among the wider public has grown hugely in the last five years. At the same time social interactions have accelerated, becoming more synchronous and less like the newsgroup / messageboard model of old. We post fewer words, more frequently, and the result is a far more conversational mode of online interaction.

    This has introduced a new dimension to the experience of watching broadcast TV. Viewers might not be physically connected to one another, as they were in the heyday of TV with the whole family gathered in the living room. But they’re connected to hundreds, thousands, maybe millions of others, watching the same show as they are. Some of these people are friends and others are strangers, but all are in reach – all are potential contributors to a conversation about the programme. Even the viewer sat alone in their living room can feel connected and involved as they watch, in a way that they couldn’t before.

    So the internet has brought about an alternative to broadcast TV while giving it a new lease of life at the same time. And it’s not just geeks that are engaged in this new way of TV viewing – if you need proof of this, a cursory glance at the #xfactor hashtag on Twitter should do it. The public has raced ahead of the technology here, using whatever gadgets come to hand to keep up with the conversations. No tool or “product” designed for social TV viewing is particularly prominent; it’s something that the public just does, in its own way.

    Is this going to change? Will technology catch up with the public – will new services specifically designed for social TV viewing come along, will they work, and will they bridge the gap between the on-demand and broadcast experiences? I’ll explore these questions in more detail in part 2 of this post.


  5. Wealth of Networks – a semi-account

    Posted July 25, 2008 in conferences, social media  |  1 Comment so far

    Yesterday I went along to the Wealth of Networks conference at Imperial College. I was only able to attend the morning sessions, but it was a valuable chance to hear the thoughts of various individuals with diverse backgrounds but a shared understanding of the power of networks (in the broader sense of the word).

    The Twitter Experiment

    One of the things that I was planning on doing at the conference was to have a go at using Twitter as a means of realtime communication with other attendees, so I used my phone’s browser to run a Twitter Search for the event’s hash tag. I saw a tweet from someone else who was wondering if he’d “still be awake by 10am” and felt a bit excited at the idea of sharing a ‘back-channel’ of conversation during the sessions.

    As it turned out, however, there was some confusion around the hash tags. Most people turned out to have been using #wealthofnetworks rather than the #won tag listed in an email I’d received the day before. Also, I didn’t charge my phone adequately and it died within a couple of hours. Still, I think I’ve got a good understanding of how Twitter can make these sorts of events much more useful from a networking point of view, and I’m looking forward to my next chance to try it out in real time.

    John Varney’s Keynote Address

    John Varney was formerly Chief Technology Officer at the BBC and now runs Maximum Clarity, a consultancy. The earlier part of his talk set out to contextualise recent developments in social media & networks by looking at five key figures who have played a part in their development:

    • David Sarnoff – Founder of NBC, saw television’s potential as a community medium
    • Gordon Moore – Co-founder of Intel, laid foundations for the evolution of the devices we use today
    • Bob Metcalfe – Inventor of Ethernet, founder of 3Com
    • David Reed – Forecast the exponential scalability and utility of social and computer networks (Reed’s Law)
    • Ross Mayfield – cited as the ‘father of the social network’ in the modern sense of the term

    Having given an overview of where we are, he then outlined where he felt we were going. Referencing Yochai Benkler’s book, “The Wealth of Networks” (from which the conference took its name), Varney argued that the wealth-creating potential of networks was now being realised to a significant extent. Benkler had identified three manifestations of the wealth of networks:

    • Social Wealth – for example, Facebook (I’m a bit of a Facebook cynic myself but was hardly going to argue with its prevalence)
    • Monetary Wealth – for example, eBay
    • Creative Wealth – for example, Metacafé (Varney said that he’d considered listing Youtube as the example, but doesn’t really like it)

    Also, we’re currently moving towards a state of ‘information symmetry’, in which powerful, monolithic arbiters of information (he heavily implied that the BBC could be thought of as one of these) are seeing their influence subside as individuals start becoming sources in their own right. The examples Varney cited to illustrate this principle were the September 11th 2001 attacks in the US, the 2004 Boxing Day Tsunami and the 7th July 2005 tube bombings in London.

    • September 11th – this event was covered by the broadcast networks, who were on the scene immediately and were the main source of information (although you could go into more detail here about how forums and mailing lists on the internet became a crucial news source as the BBC, Sky and CNN websites crashed under the strain…)
    • The Asian Tsunami – the broadcast networks weren’t there and didn’t arrive until two days later. The event was covered by people who were at the scene, but these people handed their footage to the networks and relied on them to disseminate it
    • 7th July – in this case, the people who took pictures and filmed video at the scene also published their material themselves via sites like Flickr and Youtube, demonstrating the obsolescence of the broadcast media.

    So given the above observations about wealth-generation and information symmetry, Varney concluded his talk with a number of assertions, among which were:

    • The death of the organisation and rise of the ‘collaboration’ – in a few years we won’t be talking about organisations but instead about more fluid and diffuse ‘collaborations’ (he intentionally used the word as a noun). This will require a “new breed” of management and leadership (I wanted to ask more about this but there wasn’t time)
    • Open source innovation – not limited to software development. Through being open to ideas from people outside one’s organisation or discipline, utilising the power of one’s networks, genuine innovation becomes easier
    • Computing-free enterprise – “if your company still has an IT department by 2015, you’re in trouble.” Cloud computing will cause companies to abandon their own technology infrastructure and move to remotely-served applications.

    Panel Discussion Moderated by Gareth Mitchell

    After a 20-minute break a large panel assembled for the second session, a panel discussion moderated by Gareth Mitchell (lecturer at Imperial as well as presenter of the BBC’s Digital Planet). It’s worth noting here that I heard about this event via a Twitter post from Gareth, which may or may not be of interest to the event’s organisers!

    File-sharing in the UK

    The first subject under discussion was the morning’s story on file-sharing. In short, ISPs are going to send nasty letters to people who use P2P file-sharing applications. A straw poll of the panel & audience revealed that the vast majority of those present used these tools, and the whole initiative was treated with some levity.

    The discussion focused on record labels and their flawed business models, the consensus being that these, rather than rampant criminality, were the real root of the problem. I furrowed my brow at the idea that Led Zeppelin or Bob Dylan could be thought of as being in the “long tail”, and wrote in my notebook, “do these people know how long that tail really is???”. To be fair though I’m an embittered ex-owner of a tiny independent record label so have a bit of a skewed perspective on what music can be seen as obscure…

    One attendee with a Canadian accent mentioned Live Nation’s deal with Madonna and their focus on music at the point of consumption – specifically, concerts – rather than on ownership. Would this model trickle along the “tail” to the point that lesser-known artists would make money this way?

    The panel broadly thought so, but I didn’t – I feel that physical events will remain the domain of either very large or intensely localised acts, and the most interesting thing the internet has done for music has been to decouple fan-bases from geography. You need a critical mass of fans in a certain area to be able to make money through performance, and if your music appeals to people with more specialised tastes, this will always be elusive.

    Internet Blackout

    The second topic was one of “internet blackout” and how the UK would cope with it. One question was: is there a commercial motive for ISPs and telcos to continually upgrade their infrastructure to accommodate the constantly-rising demand? In short – there is, and previous predictions of internet meltdown have come to nothing. John Varney noted that newspapers in particular tend to pick up on these stories as they’re innately attracted to the idea of the internet going away. It’s going to happen though.

    I wanted to ask about a more realistic “internet blackout” scenario than global meltdown, namely the suspected Russian attack on Estonia’s internet infrastructure in 2007. Events like that are likely to become more common in the future, and if botnets continue to grow in size it won’t just be states that are behind them – however, a lot of people raised their hands before I got a chance to pipe up.

    Net Neutrality

    The final topic was net neutrality and the question of whether or not multiple tiers of internet access were emerging. This issue is a very complex one and the panel didn’t have too much time to discuss it in real detail – one interesting question that came up, however, was Nick Leon’s assertion that ISPs should look for ways to provide additional services to their customers and avoid being ‘dumb pipe carriers’, destined to suffer as US telcos had done in the 1990s.

    John Varney seemed to disagree with this view, though, and so do I – surely that’s AOL’s model? There may be a market for that sort of ISP but I feel that consumers have largely become able to navigate the open internet without difficulty, and are therefore unlikely to be attracted back to the “walled garden” regardless of how much proprietary content is offered to them. You never know though.

    In Summary

    I’d like to have been able to stay for the afternoon and join in one of the breakout panels, but as one of the few wage-slaves in attendance (in a show of hands most attendees identified themselves as scientists or entrepreneurs) I had to head back to the office. It was annoying that I’d forgotten to charge my phone too, as it would have been good to carry on the Twitter experiment a bit further and see if I could seek out the flesh-forms of people who’d been tweeting.