Wednesday, March 22, 2017

Can You Afford Me?


I'm reading The Design of Everyday Things by Donald Norman, recommended by the Dev manager and borrowed from our UX specialist. (I have great teammates.)

There's much to like in this book, including
  • a taxonomy of error types: at the top level this distinguishes slips from mistakes. Slips are unconscious and generally due to dedicating insufficient attention to a task that is well-known and practised. Mistakes are conscious and reflect factors such as bad decision-making, bias, or disregard of evidence.
  • discussion of affordances: an affordance is the possibility of an action that something provides, and that is perceived by the user of that thing. An affordance of a chair is that you can stand on it. The chair affords (in some sense is for) supporting, and standing on it utilises that support.
  • focus on mappings: the idea that the layout and appearance of the functional elements significantly impacts on how a user relates them to their outcome. For example, light switch panels that mimic the layout of lights in a room are easier to use.
  • consideration of the various actors: the role of the designer is to satisfy their client; the client may or may not be the user; the designer may view themselves as a proxy user, but is almost never a typical one; the users are the users; and there is rarely a single user (type) to be considered.

But the two things I've found particularly striking are the parallels with Harry Collins' thoughts in a couple of areas:
  • tacit and explicit knowledge: or knowledge in the head and knowledge in the world, as Norman has it. When you are new to some task, some object, you have only knowledge that is available in the world about it: those things that you can see or otherwise sense. It is on the designer to consider how the affordances suggested by an object affect its usability. This might mean - for example - following convention, e.g. the push side of doors shouldn't have handles and the plate to push on should be at a point where pushing is efficient.
  • action hierarchies: actions can be viewed at various granularities. In Norman's model an action has seven stages, and he gives an example of several academics trying to thread an unfamiliar projector. In The Shape of Actions, Collins talks about an experiment attempting to operate a laboratory air pump. Both authors deconstruct the high-level task (operate the apparatus) into sub-tasks, some of which are familiar to some extent - perhaps by analogy, or by theoretical knowledge, or by having seen someone else doing it - and some of which are completely unfamiliar and require explicit experience of that specific task on that specific object.

I love finding connections like this, even if I don't know quite what they can afford me, just yet.

Saturday, March 18, 2017

A Field of My Stone


The Fieldstone Method is Jerry Weinberg's way of gathering material to write about, using that material effectively, and using the time spent working the material efficiently. Although I've read much of Weinberg's work I'd never got round to Weinberg on Writing until last month, and then only after several prompts from one of my colleagues.

In the book, Weinberg describes his process in terms of an extended analogy between writing and building dry stone walls which - to do it no justice at all - goes something like this:
  • Do not wait until you start writing to start thinking about writing.
  • Gather your stones (interesting thoughts, suggestions, stories, pictures, quotes, connections, ideas) as you come across them. 
  • Always have multiple projects on the go at once. 
  • Maintain a pile of stones (a list of your gathered ideas) that you think will suit each project.
  • As you gather a stone, drop it onto the most suitable pile.
  • Also maintain a pile for stones you find attractive but have no project for at the moment.
  • When you come to write on a project, cast your eyes over the stones you have selected for it.
  • Be inspired by the stones, by their variety and their similarities.
  • Handle the stones, play with them, organise them, reorganise them.
  • Really feel the stones.
  • Use stones (and in a second metaphor they are also periods of time) opportunistically.
  • When you get stuck on one part of a project move to another part.
  • When you get stuck on one project move to another project.

The approach felt extremely familiar to me. Here's the start of an email I sent just over a year ago, spawned out of a Twitter conversation about organising work:
I like to have text files around [for each topic] so that as soon as I have a thought I can drop it into the file and get it out of my head. When I have time to work on whatever the thing is, I have the collected material in one place. Often I find that getting material together is a hard part of writing, so having a bunch of stuff that I can play with, re-order etc helps to spur the writing process.
For my blogging I have a ton of open text files:


You can see this one, Fieldstoning_notes.txt, and, to the right of it, another called notes.txt, which is collected thoughts about how I take notes (duh!) that came out of a recent workshop on note-taking (DUH!) at our local meetup.
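
That habit is simple enough to automate, if you want to. Here's a minimal sketch of the idea as a script - the topic-to-filename scheme is just illustrative, not the tooling I actually use:

    # stone.py: drop a stone onto a pile
    # usage: python stone.py <topic> <thought...>
    # appends a dated line to <topic>_notes.txt, creating the file if needed
    import sys
    from datetime import date
    from pathlib import Path

    topic, thought = sys.argv[1], " ".join(sys.argv[2:])
    pile = Path(f"{topic}_notes.txt")
    with pile.open("a") as f:
        f.write(f"{date.today()}: {thought}\n")

Run it as soon as the thought strikes and the stone is out of your head and onto the right pile.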

I've got enough in that file now to write about it next, but first here are a few of the stones I took from Weinberg on Writing itself:

Never attempt to write what you don’t care about.

Real professional writers seldom write one thing at a time.

The broader the audience, the more difficult the writer’s job.

Most often [people] stop writing because they do not understand the essential randomness involved in the creative process.

... it’s not the number of ideas that blocks you, it’s your reaction to the number of ideas.

Fieldstoning is about always doing something that’s advancing your writing projects.

The key to effective writing is the human emotional response to the stone.

If I’ve been looking for snug fits while gathering, I have much less mortaring to do when I’m finishing

Don’t get it right; get it written.

"Sloppy work" is not the opposite of "perfection." Sloppy work is the opposite of the best you can do at the time.

Friday, March 10, 2017

Well, That was a Bad Idea


I was listening to Giselle Aldridge and Paul Merrill on the Reflection as a Service podcast one morning this week as I walked to work. They were talking about ideas in entrepreneurship, assessing their value, when and how and with whom they should be discussed, and how to protect them when you do open them up to others' scrutiny.

I was thinking, while listening, that as an entrepreneur you need to be able to filter the ideas in front of you, seeking to find one that has a prospect of returning sufficiently well on an investment. Sometimes, you'll have none that fit the bill and so, in some sense, they are bad ideas (for you, at that time, for the opportunity you had in mind, at least). In that situation one approach is to junk what you have and look for new ideas.  But an alternative is to make a bad idea better.

I was speculating, as I was thinking, and listening, that there might be heuristics for turning those bad ideas into good ideas. So I went looking, and I found an interesting piece by Alan Dix, a lecturer at Birmingham University, titled Silly Ideas:
Thinking about bad ideas is part brainstorming, but more important about learning to think about any idea, new good ideas you have yourself, other people's existing ideas and products.
Dix suggests that deliberately (stating that you are) starting with bad ideas is itself a useful heuristic. You are naturally less attached to bad ideas; they can provoke you into trains of thought that you might not otherwise have encountered; you will have more confidence that you can improve them; they will likely generate more questions and challenge your assumptions.

He gives a set of questions for interrogating an idea, something like a SWOT analysis:

  • what is good about it? in what contexts? why?
  • what is bad about it? in what contexts? why?
  • in what contexts is it optimal?
  • how would you sell it? how would you defend it?

For me, a key aspect of this analysis is the focus on context. An idea is not necessarily unequivocally good or bad. Aspects of it might be good, or bad, or better or worse, in different scenarios, for different purposes. Dix invites you to discover which aspects might be which, in which contexts, and for which purposes. To draw another parallel, this feels akin to factoring.

Armed with data about the idea, you can now look to change it in ways that keep the good and lose the bad, and maybe change the context or manner in which it's used. Or throw it away completely and use what you've learned about the domain to make a fresh start with a new idea.

The new idea I like best here is that of starting from a point that you assert is bad. I've encountered similar suggestions before: that functional fixedness can be reduced by starting a familiar process from an unfamiliar situation, that in brainstorming you shouldn't reject ideas as you come up with them, and that the rule of three advises against evaluating until you have several options.

I enjoy ideas simply for the sake of having them. I am fascinated by the way in which ideas spawn ideas and by the way that connections are made between them. I celebrate the fact that multiple perspectives on the same idea can differ enormously. I particularly like exploring the ambiguity that can result from those perspectives at work, where the task is often to tease out and then squeeze out ambiguity, or for fun, making up corny puns. And corny puns are never a bad idea.
Image: ITV News

Monday, March 6, 2017

commit -m "My idea is ..."


One of the many things I've learned over the years is that (for me) getting an idea out - on paper, on screen, on a whiteboard, into the air; in words, or pictures, or verbally, ... - is a strong heuristic for making it testable, for helping me to understand it, and for provoking new ideas.

Once out, and once I've forced myself to represent the idea in prose or some other kind of model, I usually find that I've teased out detail in some areas that were previously woolly. I can begin to challenge the idea, to see patterns and gaps between it and the other ideas, to search the space around it and see further ideas, perhaps better ideas.

Once out, I feel like I have freed up some brain room for more thoughts. I don't have to maintain the cloud of things that the idea was when it was only in mind and I was repeatedly running over it to keep it alive, to remember it.

Once out, once I've nailed it down that first time, I have a better idea of how to explain it to someone else. So I can choose to share the idea and get the benefits of others' challenges to it.

Don't get me wrong, I do a lot of thinking in my head. But pulling an idea out, even to somewhere only visible to me, is a commitment to the idea of the idea - which doesn't mean that I think it's a good idea; just that it's worth exploring.
Image: https://flic.kr/p/7nxbXj

Works For Me


There's a tester position open at Linguamatics just now and, as I've said before on here, this usually means a period of reflection for me.

On this occasion the opening was created by someone leaving - I'm pleased to say that it was on good terms, for a really exciting opportunity, a chance to really make a difference at the new place - and so, although I wasn't looking for change, it has arrived. Again.

Change. There was a time when, for me, change was also challenge. Given the choice of change or not, I would tend to prefer not. These days I like to think I'm more pragmatic. Change comes with potential costs and benefits. The skill is in taking on those changes that return the right benefits at the right costs. When change is not a choice the skill is still in trading benefits and costs, but now of the ways you can think of to implement the change.

Change. My team has changed a lot in the last twelve months or so. We grew rapidly and also changed our structure. You may have noticed that I've written a lot about management in the last few months. This is not unrelated to the changes. In search of potential ways to implement change, and ways to assess the benefits and costs of change, and also the risks associated with changing, I read. And I wrote, as I'm writing now.

Change. Linguamatics, the company I co-founded, is also changing. In fact, now I look back I don't think it's ever stopped changing. In 15 years we've gone from the four of us in one tiny room to 100 of us in a couple of office suites and a handful of other locations on either side of the Atlantic. We're encountering some of the difficulties and some of the beauties of expansion, and we're exploring ways to deal with and embrace them.

If you're a tester and able to be responsive to change, if you can be an agent of change for the better, and if you fancy a change of scenery, perhaps you'd like to consider coming to work with us?
Image: https://flic.kr/p/q3xh3Q

Friday, February 24, 2017

The Testing Kraftwerk


If you're around testers or reading about testing it won't be long before someone mentions models. (Probably after context but some time before tacit knowledge.)

As a new tester in particular, you may find yourself asking what they are exactly, these models. It can be daunting when, having asked to see someone else's model, you are shown a complex flowchart, or a state diagram, or a stack of UML, a multi-coloured mindmap, or a barrage of blocked-out architectural components linked by complex arrangements of arrows with various degrees of dottedness.

But stay strong, my friend, because - while those things and many others can be models and can be useful - models are really just a way of describing a system, typically to aid understanding and often to permit predictions about how the system will behave under given conditions. What's more, the "system" need not be the entirety of whatever you're looking at nor all of the attributes of it.

It's part of the craft of testing to be able to build a model that suits the situation you are in at the time. For some web app, say, you could make a model of a text field, the dialog box it is in, the client application that launched it, the client-server architecture, or the hardware, software and comms stacks that support the client and server.

You can model different bits of the same system at the same time in different ways. And that can be powerful, for example when you realise that your models are inconsistent, because if that's the case, perhaps the system is inconsistent too ...

I'm a simple kind of chap and I like simple models, if I can get away with them. Here's a bunch of my favourite simple model structures and some simple ideas about when I might try to use them, rendered simply.

Horizontal Line

You're looking at some software in which events are triggered by other events. The order of the events is important to the correct functioning of the system. You could try to model this in numerous ways, but a simple way, a foothold, a first approximation, might be to simply draw a horizontal line and mark down the order you think things are happening in.


Well done. There's your model, of the temporal relationship between events. It's not sophisticated, but it represents what you think you know. Now test it by interacting with the system. Ah, you found out that you can alter the order. Bingo, your model was wrong, but now you can improve it. Add some additional horizontal lines to show relationships. Boom!
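
You don't even need the pen. Here's a sketch of the same first approximation in a few lines of Python, with made-up event names standing in for whatever your system actually does:

    # the model: a guessed event order on one horizontal line
    expected = ["start", "load config", "open dialog", "save", "exit"]
    print(" -> ".join(expected))

    # after interacting with the system, record what you actually saw ...
    observed = ["start", "open dialog", "load config", "save", "exit"]
    print(" -> ".join(observed))

    # ... and let the mismatches tell you where the model is wrong
    for position, (e, o) in enumerate(zip(expected, observed)):
        if e != o:
            print(f"model wrong at position {position}: expected {e}, saw {o}")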

Edit: Synchronicity. On the day I published this post, Alex Kotliarsky published Plotting Ideas which also talks about how simple structures can help to understand, and extend understanding of, a space. The example given is a horizontal line being used to model types of automated testing.

Vertical Pile

So horizontal lines are great, sure, but let's not leave the vertical out of it. While horizontal seems reasonably natural for temporal data, vertical fits nicely with stacks. That might be technology stacks, or call sequences, process phases, or something else.

Here's an example showing how some calls to a web server go through different libraries, and which might be a way in to understanding why some responses conform to HTTP standards and some don't. (Clue: the ones that don't are the ones you hacked up yourself.)
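
The pile itself is nothing more than an ordered list. A sketch, with invented layer names, mapping two kinds of call to the code they pass through:

    # which layers does each kind of call go through? names are invented
    routes = {
        "/search": ["web server", "standard HTTP library"],   # conforms
        "/admin":  ["web server", "hand-rolled handler"],     # doesn't
    }
    for path, layers in routes.items():
        print(path)
        for layer in layers:
            print("    " + layer)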


Scatter Plot

Combine your horizontal and vertical and you've got a plane on which to plot a couple of variables. Imagine that you're wondering how responsiveness of your application varies with the number of objects created in its database. You run the experiments and you plot the results.


If you have a couple of different builds you might use different symbols to plot them both on the same chart, effectively increasing its dimensionality. Shape, size, annotations, and more can add additional dimensions.

Now you have your chart you can see where you have data and you can begin to wonder about the behaviour in those areas where you have no data. You can then arrange experiments to fill them, or use your developing understanding of the application to predict them. (And then consider testing your prediction, right?)

Just two lines and a few dots, a biro and a scrap of paper. This is your model, ladies and gentlemen.
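
And if the biro lets you down, matplotlib will do the same job. A sketch with made-up numbers for two builds:

    # the scatter model in code; all the numbers are invented
    import matplotlib.pyplot as plt

    objects = [100, 500, 1000, 5000, 10000]   # objects in the database
    build_a = [0.2, 0.4, 0.9, 3.1, 7.5]       # response time (s), build A
    build_b = [0.2, 0.3, 0.5, 1.2, 2.0]       # response time (s), build B

    plt.scatter(objects, build_a, marker="o", label="build A")
    plt.scatter(objects, build_b, marker="x", label="build B")  # symbol adds a dimension
    plt.xlabel("objects in database")
    plt.ylabel("response time (s)")
    plt.legend()
    plt.show()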

Table

A picture is worth a thousand words, they say. A table can hold its own in that company. When confronted with a mass of text describing how similar things behave in different ways under similar conditions I will often reach for a table so that I can compare like with like, and see the whole space in one view. This kind of approach fits well when there are several things that you want to compare in several dimensions.

In this picture, I'm imagining that I've taken written reports about the work that was done to test some versions of a piece of software against successive versions of the same specification. As large blocks of text, the comparisons are hard to make. Laid out as a table I have visibility of the data and I have the makings of a model of the test coverage.


The patterns that this exposes might be interesting. Also, the places that there are gaps might be interesting. Sometimes those gaps highlight things that were missed in the description, sometimes they're disallowed data points, sometimes they were missed in the analysis. And sometimes they point to an error in the labels. Who knows, this time? Well, you will soon. Because you've seen that the gaps are there you can go and find out, can't you?

I could have increased the data density of this table in various ways. I could have put traffic lights in each populated cell to give some idea of the risk highlighted by the testing done, for example. But I didn't. Because I didn't need to yet and didn't think I'd want to and it'd take more time.

Sometimes that's the right decision and sometimes not. You rarely know for sure. Models themselves, and the act of model building, are part of your exploratory toolkit and subject to the same kinds of cost/value trade-offs as everything else.

A special mention here for truth tables, which I frequently find myself using to model inputs and corresponding outcomes. And which tester isn't fascinated by those two little blighters?
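
A truth table is also about the cheapest model there is to generate. A sketch, with an assumed rule for an invented permission check - the rule is the model, and the system gets the last word:

    # enumerate every input combination and the outcome the model predicts
    from itertools import product

    def can_delete(logged_in, admin):
        # the rule I think the system implements; now go and check
        return logged_in and admin

    print("logged_in  admin  | can_delete")
    for logged_in, admin in product([False, True], repeat=2):
        print(f"{str(logged_in):<10} {str(admin):<6} | {can_delete(logged_in, admin)}")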

Circle

The simple circle. Once drawn you have a bipartition, two classes: inside and outside. Which of the users of our system run vi, and which Emacs? What's that? Johnny is in both camps? Houston, we have a problem.


This is essentially a two variable model, so why wouldn't we use a scatter plot? Good question. In this case, to start with, I wasn't so interested in understanding the extent of vi use against Emacs use for a given user base. My starting assumption was that our users are members of one editor religion or another, and I wanted to see who belonged in each set. The circle gives me that. (I also used a circle model for separating work I will do from work I won't do in Put a Ring on It.)

But it also brings Johnny into the open. The model has exposed my incorrect assumption. If Johnny had happened not to be in my data set, then my model would fit my assumptions and I might happily continue to predict that new users would fall into one of the two camps.

Implicit in that last paragraph are other assumptions, for example that the data is good, and that it is plotted accurately. It's important to remember that models are not the thing that they model. When you see something that looks unexpected in your model, you will usefully ask yourself these kinds of questions:

  • is the system wrong?
  • is the data wrong?
  • is the model wrong?
  • is my interpretation wrong?
  • ...
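
In code, by the way, the circle is just set membership. A sketch using invented user data:

    # my assumption: the two camps don't overlap; let the data argue back
    vi_users = {"alice", "bob", "johnny"}
    emacs_users = {"carol", "dave", "johnny"}

    heretics = vi_users & emacs_users
    if heretics:
        print("Houston, we have a problem:", sorted(heretics))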

Venn Diagram

The circle's elder sister. Where the circle makes two sets, the Venn makes arbitrarily many. I used a Venn diagram only this week - the spur for this post, as it happens - to model a collection of text filters whose functionality overlaps. I wanted to understand which filters overlapped with each other. This is where I got to:


In this case I also used the size of the circles as an additional visual aid. I think filter A has more scope than any of the others so I made it much larger. (I also used a kind of Venn diagram model of my testing space in Your Testing is a Joke.)

And now I have something that I can pass on to others on my team - which I did - and perhaps we can treat each of the areas on the diagram as an initial stab at a set of equivalence classes that might prove useful when testing this component.
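
The same question can be asked in code, before or after you draw the picture. A sketch with invented filters and sample inputs, asking which pairs overlap:

    # which sample inputs does each (invented) filter accept?
    from itertools import combinations

    accepts = {
        "A": {1, 2, 3, 4, 5, 6},   # filter A has the widest scope
        "B": {2, 3},
        "C": {3, 4, 7},
        "D": {8},
    }

    for x, y in combinations(accepts, 2):
        overlap = accepts[x] & accepts[y]
        if overlap:
            print(f"filters {x} and {y} overlap on {sorted(overlap)}")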

In this post, I've given a small set of model types that I use frequently. Any of the examples I've given could have been modelled another way, and on any given day I might well have modelled them other ways. In fact, I will often hop between attempts to model a system using different types as a way to provoke thought, to provide another perspective, to find a way in to the problem I'm looking at.

And having written that last sentence I now see that this blog post is the beginnings of a model of how I use models. But sometimes that's the way it works too - the model is an emergent property of the investigation and then feeds back into the investigation. It's all part of the craft.
Image: In Deep Music Archive

Edit: While later seeking some software to draw a more complex version of the Venn Diagram model I found out that what I've actually drawn here is an Euler Diagram.

Sunday, February 19, 2017

Before Testing


I happened across Why testers? by Joel Spolsky at the weekend. Written back in 2010, and - if we're being sceptical - perhaps a kind of honeytrap for Fog Creek's tester recruitment process, it has some memorable lines, including:
what testers are supposed to do ... is evaluate new code, find the good things, find the bad things, and give positive and negative reinforcement to the developers.
Otherwise it’s depressing to be a programmer. Here I am, typing away, writing all this awesome code, and nobody cares.
you really need very smart people as testers, even if they don’t have relevant experience. Many of the best testers I’ve worked with didn’t even realize they wanted to be testers until someone offered them the job.
The job advert that the post points at is still there and reinforces the focus on testing as a service to developers and the sentiments about feedback, although it looks like, these days, they do require test experience.

It's common to hear testers say that they "fell into testing" and I've offered jobs to, and actually managed to recruit from, non-tester roles. On the back of reading Spolsky's blog I tweeted this:
And, while it's a biased and self-selected sample (limited to those who happen to be close enough to me in the Twitter network, those who happened to see it in their timeline, and those who cared to respond) with no statistical validity, I enjoyed reading the responses and wondering about patterns.

Please feel free to add your own story about the years BT (Before Testing) to either the thread or the comments here.
Image: https://flic.kr/p/rgXeNz