Friday, February 24, 2017

The Testing Kraftwerk


If you're around testers, or reading about testing, it won't be long before someone mentions models. (Probably after context but some time before tacit knowledge.)

As a new tester in particular, you may find yourself asking what they are exactly, these models. It can be daunting when, having asked to see someone else's model, you are shown a complex flowchart, or a state diagram, or a stack of UML, or a multi-coloured mindmap, or a barrage of blocked-out architectural components linked by complex arrangements of arrows with various degrees of dottedness.

But stay strong, my friend, because - while those things and many others can be models and can be useful - models are really just a way of describing a system, typically to aid understanding and often to permit predictions about how the system will behave under given conditions. What's more, the "system" need not be the entirety of whatever you're looking at nor all of the attributes of it.

It's part of the craft of testing to be able to build a model that suits the situation you are in at the time. For some web app, say, you could make a model of a text field, the dialog box it is in, the client application that launched it, the client-server architecture, or the hardware, software and comms stacks that support the client and server.

You can model different bits of the same system at the same time in different ways. And that can be powerful, for example when you realise that your models are inconsistent, because if that's the case, perhaps the system is inconsistent too ...

I'm a simple kind of chap and I like simple models, if I can get away with them. Here's a bunch of my favourite simple model structures and some simple ideas about when I might try to use them, rendered simply.

Horizontal Line

You're looking at some software in which events are triggered by other events. The order of the events is important to the correct functioning of the system. You could try to model this in numerous ways, but a simple way, a foothold, a first approximation, might be to simply draw a horizontal line and mark down the order you think things are happening in.


Well done. There's your model, of the temporal relationship between events. It's not sophisticated, but it represents what you think you know. Now test it by interacting with the system. Ah, you found out that you can alter the order. Bingo, your model was wrong, but now you can improve it. Add some additional horizontal lines to show relationships. Boom!
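If it helps to make the model executable, here's a minimal sketch - the event names are invented for illustration - that records the expected order and checks an observed sequence against it:

    # A model of expected event order, checked against what the system
    # actually did. Event names are invented for illustration.
    expected_order = ["login", "load_profile", "render_page", "logout"]

    def order_holds(observed):
        # Keep only the events the model knows about, then compare order.
        relevant = [e for e in observed if e in expected_order]
        return relevant == expected_order

    print(order_holds(["login", "load_profile", "render_page", "logout"]))  # True
    print(order_holds(["login", "render_page", "load_profile", "logout"]))  # False: model wrong, or system?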

Vertical Pile

So horizontal lines are great, sure, but let's not leave the vertical out of it. While horizontal seems reasonably natural for temporal data, vertical fits nicely with stacks. That might be technology stacks, call sequences, process phases, or something else.

Here's an example showing how some calls to a web server go through different libraries, and which might be a way in to understanding why some responses conform to HTTP standards and some don't. (Clue: the ones that don't are the ones you hacked up yourself.)
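To make that concrete, here's a minimal sketch - the endpoints and layer names are invented - that models each request path as an ordered pile of layers and flags the paths that bypass the standards-respecting HTTP library:

    # Each request path is modelled as an ordered list of layers, top to
    # bottom. All names here are invented for illustration.
    paths = {
        "/search": ["nginx", "wsgi", "framework", "http_lib", "app"],
        "/status": ["nginx", "wsgi", "hand_rolled_responder"],  # hacked up
    }

    # Which endpoints skip the library that produces conformant responses?
    for endpoint, stack in paths.items():
        if "http_lib" not in stack:
            print(endpoint, "bypasses http_lib; check its responses against the HTTP spec")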


Scatter Plot

Combine your horizontal and vertical and you've got a plane on which to plot a couple of variables. Imagine that you're wondering how responsiveness of your application varies with the number of objects created in its database. You run the experiments and you plot the results.


If you have a couple of different builds you might use different symbols to plot them both on the same chart, effectively increasing its dimensionality. Shape, size, annotations, and more can add additional dimensions.
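For instance, a minimal matplotlib sketch - the numbers are invented - plotting two builds on the same chart with different markers might look like this:

    import matplotlib.pyplot as plt

    # Invented measurements: response time (ms) against object count,
    # for two builds of the same application.
    objects = [100, 500, 1000, 5000, 10000]
    build_a = [120, 150, 210, 480, 950]
    build_b = [115, 140, 190, 400, 700]

    plt.scatter(objects, build_a, marker="o", label="build A")
    plt.scatter(objects, build_b, marker="x", label="build B")
    plt.xlabel("objects in database")
    plt.ylabel("response time (ms)")
    plt.legend()
    plt.show()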

Now that you have your chart, you can see where you have data and you can begin to wonder about the behaviour in those areas where you have no data. You can then arrange experiments to fill them, or use your developing understanding of the application to predict them. (And then consider testing your prediction, right?)

Just two lines and a few dots, a biro and a scrap of paper. This is your model, ladies and gentlemen.

Table

A picture is worth a thousand words, they say. A table can hold its own in that company. When confronted with a mass of text describing how similar things behave in different ways under similar conditions I will often reach for a table so that I can compare like with like, and see the whole space in one view. This kind of approach fits well when there are several things that you want to compare in several dimensions.

In this picture, I'm imagining that I've taken written reports about the work that was done to test some versions of a piece of software against successive versions of the same specification. As large blocks of text, the comparisons are hard to make. Laid out as a table I have visibility of the data and I have the makings of a model of the test coverage.
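A sketch of the kind of pivot I mean, with invented report data:

    # Each report boils down to a (software version, spec version) pair.
    reports = [("v1.0", "spec1"), ("v1.0", "spec2"),
               ("v1.1", "spec2"), ("v2.0", "spec3")]

    versions = sorted({v for v, s in reports})
    specs = sorted({s for v, s in reports})
    covered = set(reports)

    # Print a coverage grid: x = tested, . = a gap worth asking about.
    print(" " * 6 + "".join("{:>8}".format(s) for s in specs))
    for v in versions:
        cells = "".join("{:>8}".format("x" if (v, s) in covered else ".") for s in specs)
        print("{:<6}".format(v) + cells)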


The patterns that this exposes might be interesting. Also, the places where there are gaps might be interesting. Sometimes those gaps highlight things that were missed in the description, sometimes they're disallowed data points, sometimes they were missed in the analysis. And sometimes they point to an error in the labels. Who knows, this time? Well, you will soon. Because you've seen that the gaps are there you can go and find out, can't you?

I could have increased the data density of this table in various ways. I could have put traffic lights in each populated cell to give some idea of the risk highlighted by the testing done, for example. But I didn't. Because I didn't need to yet, didn't think I'd want to, and it'd take more time.

Sometimes that's the right decision and sometimes not. You rarely know for sure. Models themselves, and the act of model building, are part of your exploratory toolkit and subject to the same kinds of cost/value trade-offs as everything else.

A special mention here for truth tables, which I frequently find myself using to model inputs and corresponding outcomes - and which tester isn't fascinated by those two little blighters?
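Generating one is cheap, too. A minimal sketch, with a stand-in rule in place of your real system:

    from itertools import product

    # Stand-in rule for the system's expected outcome; yours will differ.
    def expected(a, b, c):
        return (a or b) and not c

    # Enumerate every combination of the three boolean inputs.
    print("a     b     c     -> outcome")
    for a, b, c in product([False, True], repeat=3):
        print("{!s:5} {!s:5} {!s:5} -> {}".format(a, b, c, expected(a, b, c)))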

Circle

The simple circle. Once drawn you have a bipartition, two classes. Inside and outside. Which of the users of our system run vi and Emacs? What's that? Johnny is in both camps? Houston, we have a problem.


This is essentially a two variable model, so why wouldn't we use a scatter plot? Good question. In this case, to start with, I wasn't so interested in understanding the extent of vi use against Emacs use for a given user base. My starting assumption was that our users are members of one editor religion or another, and I wanted to see who belonged in each set. The circle gives me that. (I also used a circle model for separating work I will do from work I won't do in Put a Ring on It.)
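In code, a circle is just a set, and the disjointness assumption is easy to check. A minimal sketch with invented user data:

    # Inside the circle and outside it: two sets. Data invented.
    vi_users = {"alice", "bob", "johnny"}
    emacs_users = {"carol", "dave", "johnny"}

    # The model assumes the camps are disjoint; test that assumption.
    both = vi_users & emacs_users
    if both:
        print("Assumption broken; in both camps:", sorted(both))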

But it also brings Johnny into the open. The model has exposed my incorrect assumption. If Johnny had happened not to be in my data set, then my model would fit my assumptions and I might happily continue to predict that new users would fall into one of the two camps.

Implicit in that last paragraph are other assumptions, for example that the data is good, and that it is plotted accurately. It's important to remember that models are not the thing that they model. When you see something that looks unexpected in your model, you will usefully ask yourself these kinds of questions:

  • is the system wrong?
  • is the data wrong?
  • is the model wrong?
  • is my interpretation wrong?
  • ...

Venn Diagram

The circle's elder sister. Where the circle makes two sets, the Venn makes arbitrarily many. I used a Venn diagram only this week - the spur for this post, as it happens - to model a collection of text filters whose functionality overlaps. I wanted to understand which filters overlapped with each other. This is where I got to:


In this case I also used the size of the circles as an additional visual aid. I think filter A has more scope than any of the others so I made it much larger. (I also used a kind of Venn diagram model of my testing space in Your Testing is a Joke.)

And now I have something that I can pass on to others on my team - which I did - and perhaps we can treat each of the areas on the diagram as an initial stab at a set of equivalence classes that might prove useful when testing this component.
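One way to sketch that, with invented data: model each filter as the set of inputs it matches, then group inputs by which filters match them - each distinct grouping is a region of the Venn diagram and a candidate equivalence class:

    from collections import defaultdict

    # Invented data: each filter is the set of inputs it matches.
    filters = {
        "A": {1, 2, 3, 4, 5, 6},
        "B": {4, 5, 7},
        "C": {5, 6, 8},
    }

    # Group inputs by the set of filters that match them.
    regions = defaultdict(set)
    for x in set().union(*filters.values()):
        signature = frozenset(n for n, s in filters.items() if x in s)
        regions[signature].add(x)

    # Each region is a candidate equivalence class for testing.
    for signature, members in sorted(regions.items(), key=lambda kv: sorted(kv[0])):
        print(sorted(signature), "->", sorted(members))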

In this post, I've given a small set of model types that I use frequently. I don't think that any of the examples I've given couldn't be modelled another way and on any given day I might have modelled them other ways. In fact, I will often hop between attempts to model a system using different types as a way to provoke thought, to provide another perspective, to find a way in to the problem I'm looking at.

And having written that last sentence I now see that this blog post is the beginnings of a model of how I use models. But sometimes that's the way it works too - the model is an emergent property of the investigation and then feeds back into the investigation. It's all part of the craft.
Image: In Deep Music Archive


Sunday, February 19, 2017

Before Testing


I happened across Why testers? by Joel Spolsky at the weekend. Written back in 2010, and - if we're being sceptical - perhaps a kind of honeytrap for Fog Creek's tester recruitment process, it has some memorable lines, including:
what testers are supposed to do ... is evaluate new code, find the good things, find the bad things, and give positive and negative reinforcement to the developers.
Otherwise it’s depressing to be a programmer. Here I am, typing away, writing all this awesome code, and nobody cares.
you really need very smart people as testers, even if they don’t have relevant experience. Many of the best testers I’ve worked with didn’t even realize they wanted to be testers until someone offered them the job.
The job advert that the post points at is still there and reinforces the focus on testing as a service to developers and the sentiments about feedback, although it looks like, these days, they do require test experience.

It's common to hear testers say that they "fell into testing" and I've offered jobs to, and actually managed to recruit from, non-tester roles. On the back of reading Spolsky's blog I tweeted this:
And, while it's a biased and self-selected sample (limited to those who happen to be close enough to me in the Twitter network, those who happened to see it in their timeline, and those who cared to respond) with no statistical validity, I enjoyed reading the responses and wondering about patterns.

Please feel free to add your own story about the years BT (Before Testing) to either the thread or the comments here.
Image: https://flic.kr/p/rgXeNz

Tuesday, February 14, 2017

People are Strange


Managers. They're the light in the fridge: when the door is open their value can be seen. But when the door is closed ... well, who knows?

Johanna Rothman and Esther Derby reckon they have a good idea. And they aim to show, in the form of an extended story following one manager as he takes over an existing team with problems, the kinds of things that managers can do and do do and - if they're after a decent default starting point - should consider doing.

What their book, Behind Closed Doors, isn't - and doesn't claim to be - is the answer to every management problem. The cast of characters in the story represent some of the kinds of personalities you'll find yourself dealing with as a manager, but the depth of the scenarios covered is limited, the set of outcomes covered is generally positive, and the timescales covered are reasonably short.

Michael Lopp, in Managing Humans, implores managers to remember that their staff are chaotic beautiful snowflakes. Unique. Individual. Special. Jim Morrison just says, simply, brusquely, that people are strange. (And don't forget that managers are people, despite evidence to the contrary.)

Either way, it's on the manager to care to look and listen carefully and find ways to help those they manage to be the best that they can be in ways that suit them. Management books necessarily use archetypes as a practical way to give suggestions and share experiences, but those new to management especially should be wary of misinterpreting the stories as a how-to guide to be naively applied without consideration of the context.

What Behind Closed Doors also isn't, unlike so much writing on management, is dry, or full of heroistic aphorisms, or preachy. In fact, I found it an extremely easy read for several reasons: it's well-written; it's short; the story format helps the reader along; following a consistent story gives context to situations as the book progresses; sidebars and an appendix keep detail aside for later consumption; I'm familiar with work by both of these authors already; I'm a fan of Jerry Weinberg's writing on management and interpersonal relationships and this book owes much to his insights (he wrote the foreword here); I agree with much of the advice.

What I found myself wanting - and I'd buy Rothman and Derby's version of this like a shot - are more detailed versions of some of the dialogues in this book with commentary in the form of the internal monologues of the participants. I'd like to hear Sam, the manager, thinking through the options he has when trying to help Kevin to learn to delegate and understand how he chose the approach that he took. I'd like to hear Kevin trying to work out what he thinks Sam's motives are and perhaps rejecting some of Sam's premises. I'd also like to see a deeper focus on a specific relationship over an extended period of time, with failures, and techniques for rebuilding trust in the face of them.

But while I wait for that, here's a few quotes that I enjoyed, loosely grouped.

On the contexts in which management takes place:
Generally speaking, you can observe only the public behaviors of managers and how your managers interact with you.

Sometimes people who have never been in a management role believe that managers can simply tell other people what to do and that’s that.

The higher you are in the organization, the more other people magnify your reactions.

Because managers amplify the work of others, the human costs of bad management can be even higher than the economic costs.

Chaos hides problems—both with people and projects. When chaos recedes, problems emerge.

The moral of this fable is: Focus on the funded work.
On making a technical contribution as a manager:
Some first-level managers still do some technical work, but they cannot assign themselves to the critical path.

It’s easier to know when technical work is complete than to know when management work is complete.

The more people you have in your group, the harder it is to make a technical contribution.

The payoff for delegation isn’t always immediate.

It takes courage to delegate.
On coaching:
You always have the option not to coach. You can choose to give your team member feedback (information about the past), without providing advice on options for future behavior.

Coaching doesn’t mean you rush in to solve the problem. Coaching helps the other person see more options and choose from them.

Coaching helps another person develop new capability with support.

And it goes without saying, but if you offer help, you need to follow through and provide the help requested, or people will be disinclined to ask again.

Helping someone think through the implications is the meat of coaching.
On team-building:
Jelled teams don’t happen by accident; teams jell when someone pays attention to building trust and commitment

Over time they build trust by exchanging and honoring commitments to each other.

Evaluations are different from feedback.

A one-on-one meeting is a great place to give appreciations.

[people] care whether the sincere appreciation is public or private ... It’s always appropriate to give appreciation for their contribution in a private meeting.

Each person on your team is unique. Some will need feedback on personal behaviors. Some will need help defining career development goals. Some will need coaching on how to influence across the organization.

Make sure the career development plans are integrated into the person’s day-to-day work. Otherwise, career development won’t happen.

"Career development" that happens only once a year is a sham.
On problem solving:
Our rule of thumb is to generate at least three reasonable options for solving any problem.

Even if you do choose the first option, you’ll understand the issue better after considering several options.

If you’re in a position to know a problem exists, consider this guideline for problem solving: the people who perform the work need to be part of the solution.

We often assume that deadlines are immutable, that a process is unchangeable, or that we have to solve something alone. Use thought experiments to remove artificial constraints,

It’s tempting to stop with the first reasonable option that pops into your head. But with any messy problem, generating multiple options leads to a richer understanding of the problem and potential solutions

Before you jump to solutions, collect some data. Data collection doesn’t have to be formal. Look for quantitative and qualitative data.

If you hear yourself saying, “We’ll just do blah, blah, blah,” Stop! “Just” is a keyword that lets you know it just won’t work.

When the root cause points to the original issue, it’s likely a system problem.
On managing:
Some people think management is all about the people, and some people think management is all about the tasks. But great management is about leading and developing people and managing tasks.

When managers are self-aware, they can respond to events rather than react in emotional outbursts.

And consider how your language affects your perspective and your ability to do your job.

Spending time with people is management work.

Part of being good at [Managing By Walking Around and Listening] is cultivating a curious mind, always observing, and questioning the meaning of what you see.

Great managers actively learn the craft of management.
Image: http://www.45cat.com/record/j45762

Friday, February 10, 2017

The Bug in Lessons Learned


The Test team book club read Lessons Learned in Software Testing the other week. I couldn't find my copy at the time but Karo came across it today, on Rog's desk, and was delighted to tell me that she'd discovered a bug in it...

Saturday, February 4, 2017

Y2K


What Really Happened in Y2K? That's the question Professor Martyn Thomas is asking in a forthcoming lecture and in a recent Chips With Everything podcast, from which I picked a few quotes that I particularly enjoyed.

On why choosing to use two digits for years was arguably a reasonable choice, in its time and context:
The problem arose originally because when most of the systems were being programmed before the 1990s computer power was extremely expensive and storage was extremely expensive. It's quite hard to recall that back in 1960 and 1970 a computer would occupy a room the size of a football pitch and be run 24 hours a day and still only support a single organisation.
It was because those things were so expensive, because processing was expensive and in particular because storage was so expensive that full dates weren't stored. Only the year digits were stored in the data.
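The failure mode itself is easy to reconstruct. This isn't from the lecture; it's a minimal sketch of what goes wrong when only two year digits are stored:

    # Years stored as two digits, as they commonly were.
    def age(birth_yy, current_yy):
        return current_yy - birth_yy

    print(age(60, 99))  # 39: fine in 1999
    print(age(60, 0))   # -60: the year 2000 problem
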
On the lack of appreciation that, despite the eventual understated outcome, Y2K exposed major issues:
I regard it as a signal event. One of these near-misses that it's very important that you learn from, and I don't think we've learned from it yet. I don't think we've taken the right lessons out of the year 2000 problem. And all the people who say it was all a myth prevent those lessons being learned.
On what bothers him today:
I'm [worried about] cyber security. I think that is a threat that's not yet being addressed strategically. We have to fix it at the root, which is by making the software far less vulnerable to cyber attack ... Driverless cars scare the hell out of me, viewed through the lens of cyber security.
We seem to feel that the right solution to the cyber security problem is to train as many people as we can to really understand how to look for cyber security vulnerabilities and then just send them out into companies ... without recognising that all we're doing is training a bunch of people [to] find all the loopholes in the systems and then encourage companies to let them in and discover all their secrets.
Similarly, training lots of school students to write bad software, which is essentially what we're doing by encouraging app development in schools, is just increasing the mountain of bad software in the world, which is a problem. It's not the solution.
On building software:
People don't approach building software with the same degree of rigour that engineers approach building other artefacts that are equally important. The consequence of that is that most software contains a lot of errors. And most software is not managed very well.
One of the big problems in the run-up to Y2K was that most major companies could not find the source code for their big systems, for their key business systems. And could not therefore recreate the software even in the form that it was currently running on their computers.  
The lack of professionalism around managing software development and software was revealed by Y2K ... but we still build software on the assumption that you can test it to show that it's fit for purpose.
On the frequency of errors in software:
A typical programmer makes a mistake in, if they're good, every 30 lines of program. If they're very, very good they make a mistake in every 100 lines. If they're typical it's in about 10 lines of code. And you don't find all of those by testing. 
On his prescription:
The people who make the money out of selling us computer systems don't carry the cost of those systems failing. We could fix that. We could say that in a decade's time - to give the industry a chance to shape up - we were going to introduce strict liability in the way that we have strict liability in the safety of children's toys for example.
Image: https://flic.kr/p/7wbBSu 

Thursday, February 2, 2017

You Rang!


So, last year I blogged about an approach I take to managing uncertainty: Put a Ring on It.

The post was inspired by a conversation I'd had with several colleagues in a short space of time, where I'd described my mental model of a band I put around all the bits of the problem I can't deal with now, leaving behind the bits that are tractable.

After doing that, I can proceed, right now, on whatever is left. I've encircled the uncertainty with a dependency on some outside factor, and I don't need to think about the parts inside it until the dependency is resolved. (Or the context changes.)

And this week I was treated to a beautifully simple implementation of it, from one of those colleagues. In a situation in which many things might need doing - but their number and nature are unknown - she controlled the uncertainty with a to-do list and a micro-algorithm (sketched in code after the list):
  • do the thing now, completely, only if it's easy and important
  • do a pragmatic piece now, if it's needed but not easy, and revisit it later (via the list) 
  • otherwise, put it on the list
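Here's a minimal sketch of that micro-algorithm - the predicates and actions are stand-ins for whatever your situation supplies:

    todo = []  # the ring around the uncertainty

    def do_now(task):
        print("doing completely:", task)

    def do_pragmatic_piece(task):
        print("doing a pragmatic piece of:", task)

    def triage(task, easy, important, needed):
        if easy and important:
            do_now(task)               # do it now, completely
        elif needed and not easy:
            do_pragmatic_piece(task)   # partial progress now...
            todo.append(task)          # ...revisit later via the list
        else:
            todo.append(task)          # otherwise, on the list it goes

    triage("fix typo", easy=True, important=True, needed=True)
    triage("migrate database", easy=False, important=True, needed=True)
    triage("tidy wiki", easy=True, important=False, needed=False)
    print("still on the list:", todo)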

Uncertainty encountered. And ringed with a list. And mental energy conserved. And progress consistently made.

Tuesday, January 31, 2017

Elis, Other People


I've written before about the links I see between joking and testing - about the setting up of assumptions, the reframing, and the violated expectations, amongst other things. I like to listen to The Comedian's Comedian podcast because it encourages comics to talk about their craft, and I sometimes find ideas that cross over, or just provoke a thought in me. Here's a few quotes that popped out of the recent Elis James episode.

On testing in the moment:
Almost everyone works better with a bit of adrenaline in them. In the same way that I could never write good stuff in the house, all of my best jokes come within 20 minutes to performing or within 20 minutes of performing ... 'cos all of my best decisions are informed by adrenaline.
On the value of multiple perspectives, experiences, skills:
I've even tried sitting in the house and bantering with myself like I'm in a pub because I hate the myth that standups are all these weird auteurs and we should do everything on our own. 
The thing with being bilingual is that I have a different personality in Welsh and English. My onstage persona is different.
On the gestalt possibilities of collaboration:
I love collaborating ... being in a room with another comic ... that's the funnest part of comedy, bouncing off each other and developing an idea together. 
The difference between thinking of an idea on your own and wondering if it's funny, and then immediately asking the person next to you, who's a trusted friend whose opinion you respect, and then they say "yeah!" and say one little tweak and it sends you off down a completely different path. 
The king of this is Henry Packer. If you take anything to him he will give you an angle that is from such a bizarre place and suddenly it will be a great routine. 
On actively looking for variety, especially similar-but-different:
I will occasionally write out a routine longhand and I'll put all the words into a thesaurus. The thing with a thesaurus - it's an extraordinary tool - is that the reason that 'seldom' and 'doggerel' are funny is that you know what they mean but you'd never use them. They're not quite on the tip of your tongue, they're sort of half-way back.
Image: https://flic.kr/p/tQup4