1. The city as interface (part one)

    Posted August 2, 2010 in user centred design

    In Responsive Environments: Architecture, Art and Design, writer Lucy Bullivant refers to urban environments as “interfaces in their own right”. Reading this, I found myself wondering – do modern cities function as interfaces? If so, how? And can designers of interactive systems find new inspiration by thinking of cities in this way?

    “The map is not the territory”

    By expressing functionality in a way that’s more suited to our needs, interfaces help us understand and act upon devices and systems that could otherwise be confusing. They’re most helpful when something’s function is not directly expressed by its form: a pencil doesn’t need an interface, but a pencil sharpener might.

    From this perspective, it could be said that the Tube map of London is an interface for the city. After all, it abstracts the tangled system of London streets into a neatly organised network of straight lines, making its complexity manageable even to tourists. But this is incorrect. It’s the Tube network itself – not the Tube map – that acts as an interface for London.

    This is because the Tube map isn’t actually an abstraction: it might distort geography, but it represents the network’s structure faithfully. The network itself is the abstraction, the layer of navigation that helps us forget the confusing mess of streets and avenues above ground. It is interactive: we can use it. The Tube map is an ingenious visualisation of that interface, but is not an interface in itself. As Alfred Korzybski said, the map is not the territory.

    So when we think of cities as interfaces, we should go beyond thinking of visualisations and maps and focus instead on how the physical make-up of the city facilitates its use.

    The problems cities are required to solve

    All human settlements have certain things in common: places for people to eat and sleep, and facilities for producing food, materials and so on. Another shared role is navigation. Even the smallest hamlet has a navigational or wayfinding role to play, acting as a landmark for passing travellers with no interest in local happenings.

    These three basic roles – habitation, production and navigation – apply to almost every place that people live. But in larger settlements additional uses are encountered. A traveller passing through a town might look for medical assistance, or establishments providing food and accommodation. In a larger town, there might be thriving local industries, academic institutions, working artisans.

    As settlements grow in size, these roles explode in number. The new uses in turn attract ever more visitors, many of whom establish businesses and institutions and therefore add to the range of uses on offer. After a while the vast size and complexity of the settlement begins to pose a new challenge: how can anyone possibly understand everything that’s happening?

    It’s in response to this challenge of incomprehensibility that the “interface” of the city has evolved over time.

    The designed environment

    As cities grow, they effectively become designed environments. Rivers are submerged beneath roads, hills and valleys are smoothed over, landfill and burial sites override the natural topography. When we’re surrounded by city we’re in an environment shaped (consciously or not) by humans, an environment whose very structure has a function: to point us towards the roles, uses and amenities that the city offers.

    If the city’s environment fails to do this well, the city itself is failing. Visitors looking to sell won’t locate the businesses willing to buy. People needing help won’t know where to look for it. The “functionality” of the city will go undiscovered and the city won’t be used. The goal of the urban environment is to make the city’s functionality discoverable to its users.

    Patterns

    The structure of a city, then, has objectives in common with “conventional” interfaces – to help users locate and utilise underlying capability. In a city, this capability could be anything from acquiring a visa to buying a rare, imported album. In a computer system, it might be turning the wi-fi antenna off and on. But in cities, the number of capabilities is significantly greater. So how do cities help us make sense of them?

    One way – which can also be seen in interactive interface design – is through patterns that make cities comprehensible. As we enter an unfamiliar city from its outskirts, we will probably know without being told when we’re entering its central district. Other patterns are more specialised: record collectors visiting unfamiliar cities will often find record shops easily, thanks to the patterns that “signpost” those sorts of areas.

    Just as computer interfaces use patterns to accommodate the mental models of their users, cities use them to reward familiarity: the more cities you know, the easier it is to find your way around the ones you don’t. And there’s a functional importance too. These patterns often develop into localised “clusters” within which industries or disciplines are closely concentrated, such as the cluster of media businesses around Soho or the legal profession’s concentration around Chancery Lane.

    Grouping related controls in the Windows 7 control panel

    When this happens the city isn’t just making itself easier to navigate, it’s making itself easier to operate, just as an interactive system is more usable when similar features are placed near one another. And when cities are easy to operate, all constituent parts – from local businesses to visiting strangers – feel the benefits.

    So far we’ve explored how the structures of successful cities share some of the conventions of successful interfaces. But real interfaces are interactive – they aren’t just static maps and informational aids. When we do something with them, we receive feedback and response. The second part of this post will look at how the city provides feedback and how, when we use a city, we become a part of the interface ourselves.

    This is part one of a two-part post.


  2. The keyboard is not going away

    Posted April 8, 2010 in user centred design

    Since the launch of the iPad, hubris and hysteria among technology commentators have been gradually increasing. The device is the future; Rupert Murdoch thinks it’s the saviour of journalism; it will change the world; it “can replace any real-world object you own”.

    One notion that I take exception to, however, is that the iPad signals the death of the keyboard and that touch interfaces are destined for ubiquity.

    Now, I’m no technological conservative – I’ve been using touch-screen phones since before the iPhone came out. But I think a more fundamental point is being missed here, which is that the roles computers play in our lives are multiplying greatly.

    Computers used to play a relatively limited set of roles which could be supported with a common set of interface models, mainly centred on the keyboard and the mouse. The keyboard-and-mouse setup worked when the computer operator sat at a desk, had to enter lots of data from a large character set, and needed direct access to many (maybe even several hundred) on-screen controls offered by their applications.

    Today, not every computer user is sitting at a desk with those needs. Computer users might be on the other end of a phone line from the machine itself, operating it through a (notoriously infuriating) voice interface. They might be delivering a parcel and collecting the recipient’s signature using a handheld computer’s (notoriously infuriating) pen interface. And of course the computer user might be using a personal device like a smartphone, which needs to be small and light and whose functions don’t require the sort of intricate and precise interactions supported by the keyboard/mouse combo.

    But this doesn’t change the fact that some computer users will still be in situations where the keyboard and mouse paradigm is appropriate. Therefore the keyboard will not die.

    What things like the iPad illustrate is that we are using computers more than we used to – in a wider number of contexts and for a wider range of reasons. They don’t replace what we already have, they’re just a new addition to our collection of tools.


  3. Readability of online text

    Posted November 10, 2009 in user centred design

    I’ve been trying to codify some guidelines for writing for the web recently, and came across this 2005 study (PDF) by Wichita State University’s Software Usability Research Laboratory.

    The study involved 66 graduate students with either normal or corrected vision being given a short story to read online. A preliminary reading test established each participant’s baseline reading speed. Different text layouts were used, such as multiple columns, full justification and so on. Participants were tested for both reading speed and reading comprehension.

    • Reading speed: Multiple-column layouts impaired reading speed when text was left-justified. However, left-justified text was read more quickly in a single-column layout than full-justified text. The highest reading speed recorded was 269.33 words per minute, for two-column, full-justified text.
    • Reading comprehension: No significant variation was found across the different text formats.
    • Fast versus slow readers: Faster readers benefited most from the two-column, full-justified layout. Slow readers benefited from one-column, left-justified text.

    The study was perhaps limited by the fact that the participants, as students, were heavier readers of online text than the average member of the population. I’d be interested to see whether any similar studies have been carried out with a larger sample size, a broader age range and a more representative mix of internet ‘natives’ versus internet ‘newbies’. Does anyone know of any? If I find some I’ll post them here.


  4. Ergonomics for interaction designers

    Posted January 26, 2009 in user centred design

    This series of articles from Rob Tannen at Designing for Humans discusses how knowledge of ergonomics is increasingly helpful to people working in interaction design.

    Ergonomics considers the suitability of physical products to the human form in all its varieties. As a result it hasn’t historically been very relevant to interaction designers, who have worked in a more abstract space than those who design chairs, computer mice, monitors and keyboards. But Rob Tannen argues that the advent of ubiquitous computing, and the resulting diversity of form factors (netbooks, phones, touchscreens, kiosks, etc), requires interaction designers to develop their understanding of this field.

    Overview of Anthropometric Design Types

    The three-part series of articles makes for easy reading: an interesting and engaging introduction to the field. It’s also rich with links to more rigorous and in-depth materials for those who want to explore it further. If you want to be able to talk knowledgeably about anthropometrics, satisficing and the flaws of the Procrustean approach, you’ll find Rob’s writings more than helpful.


  5. I’ve seen the future and it’s… a bit like Mac OS X

    Posted August 11, 2008 in projects, user centred design, web

    My friend Lindsey sent me this link earlier on today. It’s a video exploring a future user experience concept, developed by Adaptive Path for Mozilla Labs.

    http://www.vimeo.com/1450211

    Jill looks at the New York Times website

    In the video Jill, the principal user, makes use of a number of futuristic interface devices to:

    • Interact with a friend while browsing
    • Extract and manipulate data sets from within websites
    • Navigate through a vast collection of bookmarks using a 3D interface
    • Migrate her browsing experience seamlessly from desktop to mobile devices

    It’s a bit like Mac OS X

    I initially found myself wondering: is the future really going to look so much like Mac OS X? But looking past the visual treatment, there are some strong concepts here. I particularly like the ability to extract and manipulate data from web pages, the near-removal of the browser interface, and the use of the 3D interface to convey the age of bookmarks.

    That said, not everyone agrees with me – I’ve had a few conversations today about these ideas and there isn’t really a consensus among the people I’ve been talking to.

    http://www.vimeo.com/1450211

    The Z-axis is used to convey the age of a bookmark

    Is 3D ever really going to enter the mainstream as a means of web navigation? I’ve always been quite sceptical, to be honest. It comes down to incentive – if there’s a serious benefit to be had from learning unfamiliar and complex interfaces, then people will do it. People learnt how to use Myspace, after all!

    So, what would have to happen to make us want to learn new, complicated, 3D web interfaces?

    Well, the web (along with our own slice of it; our bookmarks, our browsing histories, our social networks etc) is on its way to becoming unmanageably large. Past a certain point, there may be a real benefit in migrating to more sophisticated – but more complex – interfaces.

    The standard methods of searching and browsing may still be usable, but woefully inefficient – like running a modern computer with only a command-line interface and no GUI. Achievable, but insane.

    The web is growing exponentially – its size in five or ten years’ time could present us with unique problems and challenges. Some of the ideas in this concept video shed some light on how we might solve them. But what are those problems and challenges going to be? I’m probably more interested in them than I am in the solutions.