Tuesday, June 28, 2016

The Rat Trap

Another of the capsule insights I took from The Shape of Actions by Harry Collins (see also Auto Did Act) is the idea that the value some technology gives us is partly a function of the extent to which we are prepared to accommodate its behaviour.

What does that mean? Imagine that you have a large set of data to process. You might pull it into Excel and start hacking away at its rows and columns, you might use a statistical package like R to program your analysis, you might use command line tools like grep, awk and sed to cut out slices of the data for narrower manual inspection. Each of these will have compromises, for instance:
  • some tools have possibilities for interaction that other tools do not have (Excel has a GUI which grep does not)
  • some tools are more specialised for particular applications (R has more depth in statistics than Excel)
  • some tools are easier to plug into pipelines than others (Linux utilities can be chained together in a way that is apparently trickier in R)

These are clear functional benefits and drawbacks, and many others could be enumerated, although they won't be universal: they depend on the user, the task in hand, the data, and so on.

In this book, Collins is talking about a different dimension altogether. He calls it RAT, or Repair, Attribution and all That. As I read it, the essential aspect is that users tend to project unwarranted capabilities onto technology and ignore latent shortcomings.

For example, when a cheap calculator returns 6.9999996 for the calculation (7/11) x 11, we repair its result to 7. We conveniently forget this, or just naturally do not notice it, and attribute powers to the calculator which we are in fact providing, e.g. by translating data on the way in (to a form the technology can accept) and out (to correct the technology's flaws).
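The cheap-calculator effect can be simulated directly. Here is a minimal sketch in Python; the 7-significant-digit storage is my assumption about such a calculator, chosen because it reproduces the 6.9999996 in the example:

```python
def cheap_calc(x: float) -> float:
    """Simulate a calculator that stores only 7 significant digits."""
    return float(f"{x:.7g}")

# The calculator computes 7 / 11 first, losing precision in storage...
step = cheap_calc(7 / 11)   # 0.6363636 rather than 0.636363636...

# ...so multiplying back by 11 does not recover 7 exactly.
result = step * 11          # ~6.9999996, not 7

# The human "repair": we read 6.9999996, silently correct it to 7,
# and then attribute the accurate answer to the calculator.
repaired = round(result)    # 7
```

The repair step is trivial for a person and invisible in use, which is precisely Collins' point: the competence sits with the user, not the device.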

The "all that" is more amorphous but constitutes the kinds of things that need to be done to put the technology in a position to perform. For example, entering the data using very fiddly rubber keys whose multiple functions are represented by indiscernible graphics, and reading the results off a small display which can be hard to read under some lighting conditions.

Because these skills are ubiquitous in humans (for the most part), we think nothing of them. But imagine how useful a calculator would be if a human were not performing those actions.

I had some recent experience of this with a mapping app I bought to use as a Satnav when driving in the USA. I had some functional requirements, including:
  • offline maps (so that I wasn't dependent on a phone or network connection)
  • usable in the UK and the USA (so that I could practise with it at home)
  • usable on multiple devices (so that I can walk with it using my phone, or drive with it on a tablet)

I tried a few apps out and found one that suited my needs, based on short experiments done on journeys around Cambridge. Although I accepted this app, I observed that it had some shortcomings, such as:
  • its built-in destination-finding capacity has holes
  • it is inconsistent in notifications about a road changing name or number while driving along it
  • it is idiosyncratic about whether a bend in the road is a turn or not
  • it is occasionally very late with verbal directions
  • its display can be unclear about which option to take at complex junctions

In these cases I am prepared to do the RAT by, for instance, looking up destinations on Google, reviewing a route myself in advance, or asking a passenger for assistance. Why? Because the functionality I want wasn't as well satisfied by other apps I tried; because in general it is good enough; because overall it is a time-saver; because even flawed it provides some insurance against getting lost; because recovery after taking a wrong turning was generally very efficient; because human navigators are not perfect or bastions of clarity either; because my previous experience of a Satnav (a dedicated piece of hardware) was much, much worse; and because, as I interacted with the software more, I got used to the particular behaviours the app exhibits and could interpret its signals more accurately.

Having just read The Shape of Actions, I found this an interesting experience, meta-experience, and user experience. A takeaway for me is that software which can exploit the human tendency to repair and accommodate and all that - which aligns its behaviour with that of its users - gives itself a chance to feel more usable and more valuable more quickly.
Image: https://flic.kr/p/x2M76w
