Friday, January 12, 2018

On The Wisdom Of Testing


To celebrate its 25th anniversary, EuroSTAR asked 25 testers who have played a big part in its history for a "top tip or piece of advice" that has returned value to them across their careers and compiled the answers into a short (in length and height) publication, The Little Book of Testing Wisdom. Sales of the book raise money for the Saving Linnea campaign.

We put the book on our Test team reading group agenda at Linguamatics for this week with the mission to "read all or some, but bring one (or more) articles you liked!" I decided on the strategy of reading the start of every article and continuing only with those that grabbed me immediately. I didn't seek (and haven't sought) to understand why some grabbed me and some did not, but I did think about whether there was any commonality to the set of four that I particularly liked by the end and took to the meeting ...

For me, advice that will stand the test of time must have inbuilt sensitivity to context, and be valuable enough frequently enough to make it a net positive for the advice-taker.  I can't claim to have been in testing for a quarter-century, but I do feel like I've been around the block a few times and, while walking that beat, and watching the scenery change as the years go by, I've come to feel that any advice with designs on immortality is likely to be meta-advice. That is, advice that guides the making of a decision, rather than advice that dictates the decision to take.

Jerry Weinberg is a master of advice that is not only responsive to the context in which it is to be applied, but which forces the advice-taker to think about that context and the factors at play in it. Here's one gem from a discussion about reporting test results:
When I say start with the most important item, I mean start with the one that's most important to start with.
Even with good intent on the part of the advice-giver, even with inherent adaptivity to circumstance, even with pieces of advice that have served you well for many years, it's worth regarding any kind of lore as heuristic. You and your interpretation of a given situation are only two of the variables in play and you are unlikely to control them all. Put simply, any advice is likely to fail to achieve what you want sometimes.

With that in mind, I've picked out a handful of words of wisdom from the 25 that got past my first-paragraph filter and spoke to me and my own experience.

Rikard Edgren, Understand Your Testing Mission

To do a good job as a tester you need to know what information the relevant stakeholders want from your testing. Unfortunately, you can't just ask them what your mission is because they don't think in terms of test missions. Instead you need to find out what's important to them and then think of ways to find it at an acceptable cost. These are your missions. Personally, I sometimes start with a mission of finding out what my mission should be.

Alan Richardson, It's Simpler Than That

When we begin to learn a skill it feels hard. We are slow and awkward and we do unnecessary work because we haven't identified which aspects are core and which can be glossed ... yet. Take solace, though, from Richardson's advice to tell yourself that, whatever you are doing, however you are doing it, it's simpler than that. With practice and particularly with reflection, you can discover other ways, and those ways will have time, resource, or effort advantages over the current one. This chimes with something I read last year: regard even your best ideas as bad ideas and you'll feel more able to challenge them, alter them, substitute other people's ideas for them.

Fiona Charles, Diversify Your Thinking

There's value in recognising that not all problems are amenable to the same solution, or even the same patterns of finding solutions. It's also worth remembering that different people will respond differently to a problem cast in different forms: verbal, written, or pictorial. Finally, be aware that at different times, changing your own perspective on a problem can provoke you into thinking about it differently too. I love the rule of three as a way to provoke the use of different perspectives in pursuit of more options.

Michael Bolton, Relatively Rational

The advice here is actually Weinberg's. When confronted by what looks like irrationality on the part of others — an ugly codebase, for example — try to view it as "rational from the perspective of a different set of values". This advice requires us to put ourselves in someone else's shoes, and to consider that they were trying to do a good job in spite of whatever constraints (personal or contextual) they were under at the time they did the work in question. I think it's useful also to turn this around and understand your (often implicit) hope that others see your own efforts for what they are: a pragmatic attempt to compromise on a decent solution given all the competing pressures on you at the time.

The full set of contributors is: Michael Bolton, Hans Buwalda, Fiona Charles, Anne-Marie Charrett, Derk-Jan de Grood, Rikard Edgren, Isabel Evans, John Fodeh, Paul Gerrard, Shmuel Gershon, Dorothy Graham, Julian Harty, Anne-Mette Hass, Rob Lambert, James Lyndsay, Rik Marselis, Fran O’Hara, Declan O’Riordan, Stuart Reid, Alan Richardson, Huib Schoots, Ruud Teunissen, Geoff Thompson, Bob van de Burgt and Erik van Veenendaal. (Taken from Derk-Jan de Grood.)
Image: EuroSTAR

Wednesday, January 10, 2018

The State I Am In


As testers we'll generate, compile, inspect, manipulate, and synthesise data on a regular basis. We do this because it helps us to understand a system, to hypothesise about its behaviour, and to support conclusions that we might draw about it and report to others.

As testers we are in a particular system, the testing profession, and the State of Testing survey is a data gathering and synthesis exercise for it. The results are shared and so can be used to help us to understand, to test hypotheses for, and to draw conclusions about the state we are in.
Image: EIL

Tuesday, January 2, 2018

Their Art's in the Right Place


Mike Brearley was a professional cricketer for around 20 years and was England captain for 31 of the 39 test matches that he played. His book, The Art of Captaincy, was recommended to managers by participants at each of the last two CEWTs, so it's been on my reading list for a while.

The book is, as you might expect, heavily biased towards the role of the cricket captain and some of the examples given in it require a bit of cricketing knowledge. Despite this, it has a lot to say to anyone with an interest in interpersonal relationships, particularly those in the workplace, and especially manager-managee interactions. Here, I've collected and grouped a few of the passages that resonated with me.

Professionals should not rely only on practice during their day job

Compton was a genius and thus a law unto himself. But the general belief was then and continued to be that fitness for cricket was achieved simply by playing the game. (p. 54)
All cricketers can enhance their performance by better all-round fitness ... Injuries can be avoided, and an extra foot of speed, an extra inch of suppleness that avoid a narrow run-out or turns a half-chance into a chance can be achieved. (p. 54-5)

Professionals, and those who advise, manage, train, lead, mentor, and encourage them, need to think carefully about the timing and context of learning

Most professional cricketers, myself included, have been unwilling to learn ... We distrust theory, and are apprehensive lest change bring catastrophe in its train. (p. 64)
If we define a good coach as "One who enables the potentialities of others to flower", Tiger Smith certainly qualifies: his advice helped me to my best season yet. It came from the right man ... at the right time ... The horse may not want to drink — or he may be unable to. (p. 68)
Excellent advice — but was this the moment for it? ... I felt that he should try to change at a less critical period in his career. The three of us talked this out thoroughly. (p. 70)

Professionals recognise that there are different roles and responsibilities in their field, and that with them come different challenges

Gale's role in this process was typical of a good chairman of any committee. He had listened to the discussion, and was able to help the participants to see what it was that we were wanting. He did not, in this instance, need to be closely in touch with the facts on which our views were based in order to have this effect. (p. 84)
My own experience of vice-captaincy was in India, in 1976-7 ... I enjoyed the job, and was struck by how simple it is compared with being the man who has to act and "take the can". (p. 95)
In order to find all the evidence he needs a captain must watch the play. (p. 154)

The captain must be considerate, collaborative, and above all congruent

Some might argue that the captain should act like the junior officers in the First World War and never ask of another what he is not prepared to do himself ... However, such a policy may be simply silly. (p. 162)
The captain should expect the fielders to keep their positions, and not wander. He will demand that they are tidy in their work ... even when sloppiness does not cost runs. (p. 166)
I am often struck by the extent to which bowlers earn their attacking fields, and thus their wickets. Too often they deplore their bad luck when the edged shot misses the solitary slip; they forget that if they had bowled fewer half-volleys and long-hops they could have had a whole ring of slips. (p. 167)
... an important aspect of the captain's job: to remind, or even teach, bowlers that they have more resources than they give themselves credit for: that they have more strings to their bow. (p. 168)
... the captain should encourage all the team to think about the game from a tactical point of view; he must also insist that each member of his side plays a part in encouraging and motivating the others. (p. 170)
Bowlers expect their captain to give them a chance to perform at their best, by being aware of their capacities and preferences. They also have a right to a fair chance. The captain should avoid favouritism ... Yet true fairness is hard to assess. It is simply foolish not to give your best bowlers the first chance on a helpful pitch. (p. 172)
Certainly the captain can never please all his bowlers all the time. (p. 174)  
... it was recognised that the captain is likely to get the best out of those he values highly. (p. 88)

The captain needs to maintain perspective

The captain's arc of attention will constantly oscillate between the short-term and other needs: between changes or schemes that concern the next ball or the next over, plans for the disposition of bowlers in the next hour ... or for the rest of the session, and indeed some outline of strategy for the whole day. (p. 182)
There was once a lionkeeper at the Dublin Zoo called Mr Flood, who was remarkable in that over the years he had bred many lion cubs but never lost one. When asked his secret he replied, "No two lions are alike." No doubt he had strategies and general lines of policy. But like a good cricket captain, he responded to each situation afresh. (p. 201)
Bob Paisley has said of managing Liverpool FC, "A manager has to cut his coat according to the cloth — he has to mould his team's style to the players available. The same applies to the individual player. None of them is perfect ... " The illusion of omnipotence is a particular trap for the captain with a well-developed sense of responsibility. (p. 273)
Image: Amazon

Wednesday, December 20, 2017

When Support Calls

Just over a year ago now, I put out an appeal for resources on testers doing technical support. A tester on my team had asked for background material before his first support call and I didn't know of any, beyond our internal doc for support staff.

Turns out there's not much out there: I got a book recommendation, for The Mom Test, which isn't strictly about either testing or technical support, and a couple of offers from local testers Neil Younger and Chris George to pool experiences.

I bought and blogged about The Mom Test, and started a Google doc where Neil, Chris, and I began to deposit notes, stories, and advice (our fieldstones). Some of my own material was culled from blog posts here on Hiccupps (e.g. 1, 2) at a time when I was managing both the Test and Support teams at Linguamatics.

When the doc had got to about 20 pages, I began the painstaking process of editing it into shape. Eventually four broad categories emerged:
  • What even is technical support?
  • What should I do before the call?
  • What should I do on the call?
  • What should I do after the call?
And then I began the next painstaking process of editing that into something coherent, taking guidance from a set of reviewers that included testers, technical support staff, those who have done both, the tester who wanted that first advice, and a technical author.

The Ministry of Testing got interested at this point. They didn't like the title, Technical Support Can Be Testing, but that was just the start of three or four rounds of even more (naturally) painstaking (you know it) editing.

I won't say that I didn't sometimes look up and wonder where I was.

Happily, the MoT editors accepted another title with just as much pun power as the original, When Support Calls, and as the editing process (painstaking, as already noted) drew to a close, they commissioned some ace monster artwork. The piece at the top here is a re-imagining of an xkcd cartoon which inspired one of the drawings used in the four articles that we created.

Then, eventually, today it went live on the Ministry of Testing Dojo. Please enjoy it.

And now excuse me while I walk myself into the sea.
Image: Thomas Harvey from an original by xkcd.
Thanks: all our editors and reviewers, everyone at Ministry of Testing, and especially Neil and Chris.


Wednesday, December 13, 2017

Cambridge Lean Coffee


This month's Lean Coffee was hosted by us at Linguamatics. Here are some brief, aggregated comments and questions on topics covered by the group I was in.

Performance testing

  • We have stress tests that take ages to run because they are testing a long time-out
  • ... but we could test that functionality with a debug-only parameter (see the sketch after this list).
  • Should we do it that way, or only with production, user-visible functionality?
  • It depends on the intent of the test, the risk you want to take, the value you want to extract.
  • Do both? Maybe do the long-running one less often?
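A minimal sketch of the debug-only parameter idea, in Python. The environment variable name and the default value are invented for illustration; the point is only that the production path is untouched unless the override is supplied.

    import os

    PRODUCTION_TIMEOUT_SECONDS = 3600  # the real, slow-to-test value

    def effective_timeout():
        # A (hypothetical) debug-only override: a test can shrink the
        # time-out to seconds, while real deployments never set it and
        # so always get the production value.
        override = os.environ.get("DEBUG_TIMEOUT_SECONDS")
        return float(override) if override else PRODUCTION_TIMEOUT_SECONDS

As the group noted, a test using the override exercises the time-out logic quickly, but only the occasional long-running test exercises the production value itself.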

Driving change in a new company

  • When you join a new company and see things you'd like to change, how do you do it without treading on anyone's toes?
  • How about when some of the changes you want to make are in teams you have no access to, on other sites?
  • Should I just get my head down and wait for a couple of years until I understand more?
  • Try to develop face-to-face relationships.
  • Find the key players.
  • Build a consensus over time; exercise patience.
  • Make changes incrementally so you don't alienate anyone.
  • If you wait you'll waste good ideas.
  • Don't be shy!
  • There's no monopoly on good ideas.
  • Can you do a proof-of-concept?
  • Can you just squeeze a change in?
  • You don't want to be a distraction, so take care.
  • Organise a show-and-tell to put your ideas out there.
  • Give feedback to other teams.
  • Attend other teams' meetings to see what's bothering them.
  • Get some allies. Find people who agree with you.
  • Find someone to give you historical context
  • ... some of your ideas may have been had, or even tried and failed.
  • As a manager, I want you to make some kind of business case to me
  • ... what problem do you see; who does it affect; how; what solution do you propose; what pros and cons does it have; what cost/benefit?
  • Smaller changes will likely be approved more easily.
  • Find small wins to build credibility.

When did theory win over practice?

  • I've been reading a performance testing book which has given me ideas I can take into work on Monday and implement.
  • I've been reading TDD with Python and it's changed how I write code
  • ... and reinvigorated my interest in the testing pyramid.
  • Rapid Software Testing provided me with structure around exploratory testing.
  • ... I now spend my time in the software, not in planning.
  • Sometimes theory hinders practice; I found that some tools recommended by RST just got in my way.
  • I heard about mindmaps for planning testing at a conference.
  • I've been influenced by Jerry Weinberg. His rule of three and definition of a problem help me step back and consider angles
  • ... the theory directly influences the practice.

How many testers is the right number?

  • That's a loaded question.
  • It depends!
  • The quality of the code matters; better code will need less testing
  • ... but could the development team do more testing of their own?
  • How do you know what the quality of the code is, in order to put the right number of testers on it?
  • Previous experience; how many bugs were found in the past.
  • But the number of bugs found is a function of how hard you look.
  • Or how easy they are to find.
  • Or what kinds of bugs you care to raise.
  • You need enough testers to get the right quality out at the end (whenever that is).
  • Our customers are our testers.
  • Our internal customers are our testers.
  • We have no testers
  • ... we have very high expectations of our unit tests
  • ... and our internal customers are very good at giving feedback
  • ... in fact, our product provides a reporting interface for them.
  • Microservices don't need so many testers, but perhaps the developers would benefit from a test coach.
  • If the customers are happy, do you need to do much testing?
  • Customers will work around issues without telling you about them.
  • It's helpful to have a culture of reporting issues inside the company.
  • I see a lot of holes in process as well as software.
  • You don't need any testers if everyone is a tester.

Sunday, December 3, 2017

Compare Testing


If you believe that testing is inherently about information then you might enjoy Edward Tufte's take on that term:
Information consists of differences that make a difference.
We identify differences by comparison, something that as a working tester you'll be familiar with. I bet you ask a classic testing question of someone, including yourself, on a regular basis:
  • Our competitor's software is fast. Fast ... compared to what?
  • We must export to a good range of image formats. Good ... compared to what?
  • The layout must be clean. Clean ... compared to what?
But while comparison is important as a tool for getting clarification through conversation, for me it feels like testing is more fundamentally about comparisons.

James Bach has said "all tests must include an oracle of some kind or else you would call it just a tour rather than a test." An oracle is a tool that can help to determine whether something is a problem. And how is the value extracted from an oracle? By comparison with observation!
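To make that concrete, here's a minimal sketch in Python, assuming a reference implementation is being used as the oracle (just one of many possible kinds of oracle). All of the names are invented for illustration.

    def reference_sort(items):
        # A trusted-enough model of the behaviour we expect: the oracle.
        return sorted(items)

    def check_against_oracle(system_under_test, oracle, test_input):
        # The oracle's value is extracted by comparing its answer with
        # what we actually observe from the system under test.
        observed = system_under_test(test_input)
        expected = oracle(test_input)
        return observed == expected, observed, expected

    def my_sort(items):
        # Imagine this is the product's own code.
        return sorted(items)

    agrees, observed, expected = check_against_oracle(my_sort, reference_sort, [3, 1, 2])

A disagreement here is a trigger for investigation rather than automatically a bug, which is where the next point comes in.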

But we've learned to be wary of treating an oracle as an all-knowing arbiter of rightness. Having something to compare with should not lure you into this appealing trap:
I see X, the oracle says Y. Ha ha! Expect a bug report, developer!
Comparison is a two-way street and driving in the other direction can take you to interesting places:
I see X, the oracle says Y. Ho hum. I wonder whether this is a reasonable oracle for this situation?
Cem Kaner has written sceptically about the idea that the engine of testing is comparison to an oracle:
As far as I know, there is no empirical research to support the claim that testers in fact always rely on comparisons to expectations ... That assertion does not match my subjective impression of what happens in my head when I test. It seems to me that misbehaviors often strike me as obvious without any reference to an alternative expectation. One could counter this by saying that the comparison is implicit (unconscious) and maybe it is. But there is no empirical evidence of this, and until there is, I get to group the assertion with Santa Claus and the Tooth Fairy. Interesting, useful, but not necessarily true.
While I don't have any research to point to either, and Kaner's position is a reasonable one, my intuition here doesn't match his. (Though I do enjoy how Kaner tests the claim that testing is about comparisons by comparing it to his own experience.) Where we're perhaps closer is in the perspective that not all comparisons in testing are between the system under test and an oracle with a view to determining whether the system behaviour is acceptable.

Comparing oracles to each other might be one example. And why might we do that? As Elaine Weyuker suggests in On Testing Non-testable Programs, partial oracles (oracles that are known to be incomplete or unreliable in some way) are common. To compare oracles we might gather data from each of them; inspect it; look for ways in which each has utility (such as which has more predictive power in scenarios of interest).
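Here's a sketch of what that might look like, with made-up oracles and data: run each partial oracle over the same scenarios and score how often its prediction matched what was actually observed.

    scenarios = [
        {"input": 10, "actual": 100},
        {"input": -3, "actual": 9},
        {"input": 0, "actual": 0},
    ]

    def oracle_a(x):
        return x * x                        # covers all of these cases

    def oracle_b(x):
        return x * x if x >= 0 else None    # partial: silent on negatives

    def predictive_power(oracle, scenarios):
        # A score for an oracle only has meaning relative to the others'.
        hits = sum(oracle(s["input"]) == s["actual"] for s in scenarios)
        return hits / len(scenarios)

    ranked = sorted([("A", oracle_a), ("B", oracle_b)],
                    key=lambda pair: predictive_power(pair[1], scenarios),
                    reverse=True)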

And there we are again! The "more" in "which has more predictive power" is relative: it's telling us that we are comparing and, in fact, here we're using comparisons to make a decision about which comparisons might be useful in our testing. I find that testing is frequently non-linear like that.

Another way in which comparison is at the very heart of testing is during exploration. Making changes (e.g. to product, data, environment, ...) and seeing what happens as a result is a comparison task. Comparing two states separated by a (believed) known set of actions irrespective of whether you have an idea about what to expect is one way of building up knowledge and intuition about the system under test, and of helping to decide what to try next, what to save for later, what looks uninteresting (for now).
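At its simplest, that might look like this sketch: snapshot whatever aspects of state you've chosen to watch, act on the system, snapshot again, and report the differences. The state keys here are, of course, invented.

    def diff_states(before, after):
        # Report every watched variable whose value changed across the action.
        keys = before.keys() | after.keys()
        return {k: (before.get(k), after.get(k))
                for k in keys if before.get(k) != after.get(k)}

    before = {"rows": 10, "status": "idle", "cache_size": 4}
    # ... perform some (believed) known set of actions on the system ...
    after = {"rows": 11, "status": "idle", "cache_size": 0}

    print(diff_states(before, after))
    # e.g. {'rows': (10, 11), 'cache_size': (4, 0)}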

Again this throws up meta tasks: how to know which aspects of a system's state to compare? How to know which variables it is even possible to compare? How to access the state of those at the right frequency and granularity to make them usable? And again there's a potential cycle: gather data on what it might be possible to compare; inspect those possibilities; find ways in which they might have utility.

I started here with a Tufte quote about information being differences that make a difference, and said that identifying the differences is an act of comparison. I didn't say so at the time, but identifying the ones that make a difference is also a comparison task. And the same skills and tools that can be used for one can be used for the other: testing skills and tools.
Image: https://flic.kr/p/q8zmqn

Thursday, November 23, 2017

Six & Bugs & Joke & Droll


Hiccupps just turned six years old. Happy birthday to us. And thank you for reading; I hope you're getting something out of it still.

Unwittingly I've stumbled into a tradition of reflecting on the previous 12 months and picking out a few posts that I liked above the others for some reason. Here's this year's selection:

  • What We Found Not Looking for Bugs: a headrush conversation with Anders Dinsen on the nature and timing of testing 
  • The Dots: a headrush conversation with myself on the connections between the connected things 
  • Fix Up, Look Sharp: a headrush reading experience from Ron Jeffries' Extreme Programming Adventures in C# 
  • Quality != Quality: a headrush of being picked up by Hacker News, my page views going nuts, and developers debating quality 
  • A (Definition of) Testing Story: a headrush last-minute conference proposal accepted at UKSTAR 2018 

And in the meantime my mission to keep my testing mind limber with rule-of-three punning continues too. Check 'em out on Twitter. Join in!

(And apologies to Ian Dury.)