
Talking the Fork


Four lightning talks at the Cambridge Tester meetup at Linguamatics last night, four topics apparently unrelated to one another, four slices through testing. Yet, as I write up my notes this morning, I wonder whether there's a common thread ...

Samuel Lewis showed us the money. Or, at least, where the money is held before being dispensed through those holes in the wall. He included some fascinating background information about ATMs (and a scary security story) but the thrust of his talk was the risks and corresponding mitigation strategies in a massive project to migrate the ATMs for a big bank to a new application layer and OS (more scariness: many are still running Windows XP).

Much of the approach involved audit trails of various kinds, with the customer and other stakeholders sharing their road maps and getting a view of the test planning and strategy in return. I enjoyed that the customer themselves was considered a risk (because they had a reputation for changing their minds) and contingency was built in for that. Samuel described the approach as waterfall and spoke in praise of that kind of process for this kind of project (massive, regulated, traditionally-minded customer). I can accept that; I certainly don't have personal experience there to argue against it. But it was striking to me that one of the factors that contributed to the successful completion of the project was the personal relationship with a developer which led to the testers getting an unofficial early build to explore.

If you want to get your way, make your case fit the worldview of the person you need to convince. That's one of the three persuasiveness spanners (Robert Cialdini's principles of persuasion) in Sneha Bhat's toolbox. Another is to set up a context in which the other person feels some obligation to you: help them first and they'll likely help you back. The third she shared with us was to find "social proof", that is, some evidence that someone else, someone respected, endorses the perspective you're proposing.

She touched a little on how persuasion might turn into coercion and gave us a useful acronym from Katrina Clokie for framing a conversation that's requesting something: SPIN. Identify the Situation and the Problem, explain the Implication and describe the Need you have to resolve it. I've heard the talk a couple of times now and, while everything I've said so far is useful, the phrase that sticks in my mind is that it's important to prepare, and then deliver the message with "passion, compassion, and purpose".

Andrew Fraser started his talk with a request that we criticise it. I was already interested (a talk about testing in the abstract, with philosophical tendencies, wrestling with big-picture questions is my thing, and I don't care who knows it) but at that point I was hooked. As far as I understood it, Andrew's basic argument runs something like this: all metrics can be gamed; you can view tests as metrics; so tests can be gamed, i.e. developers will code to the tests; the conditions that the tests check for may not represent anything a customer cares about; ergo software that maximises conformity to the wrong thing is produced.

Phew. I can't pretend to agree, but I enjoyed it so much that afterwards I asked to be a reviewer of the bigger essay that this short piece was abstracted from. From my notes: so this is anti-TDD? so this is like over-fitting to a model? so all the tests need to be specified up front? but surely if you can "train" your developers (in some Skinnerian sense) to code in particular ways you can use it to the advantage of the product?
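To make the coding-to-the-tests step concrete, here's a minimal sketch of my own (not Andrew's example; the function, the pricing rules, and the checks are all invented for illustration). The implementation below maximises conformity to the only conditions the tests check for, while doing nothing a customer would care about:

    import unittest

    def shipping_cost(weight_kg):
        # Gamed implementation: written to satisfy the checks below,
        # not any real pricing rules.
        if weight_kg == 1.0:
            return 4.99
        if weight_kg == 10.0:
            return 12.50
        return 0.0  # every weight nobody thought to check ships for free

    class ShippingCostTest(unittest.TestCase):
        # These assertions are the whole "metric"; the code above
        # conforms to them and to nothing else.
        def test_light_parcel(self):
            self.assertEqual(shipping_cost(1.0), 4.99)

        def test_heavy_parcel(self):
            self.assertEqual(shipping_cost(10.0), 12.50)

    if __name__ == "__main__":
        unittest.main()  # green run, wrong product

The suite passes, the metric looks healthy, and the software is useless for any parcel nobody wrote a test for.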

Finally I ran through an early version of The Anatomy of a Definition of Testing, which I'll be delivering at UKSTAR next month. It's a personal definition, one that helps me to do the work that I need to do in my context.

Four diverse talks then, but what thread did I divine running through them? Well, perhaps it reflects something about me, about what I took from the talks, or about what I want to impose on them. It seems to me that people are at the heart of these stories: a personal relationship delivered the early build, a persuasion conversation involves human emotions on both sides, it's people who intuitively game metrics, and a personal definition is really only about the person. Jerry Weinberg was quoted during the questions for my talk and I doubt he'd be surprised to find this kind of theme in talks around software, his second law of consulting being "No matter how it looks at first, it's always a people problem."
Image: https://flic.kr/p/njHqzD

Comments

  1. Sounds like you had an enjoyable event :)

    I think I agree with Andrew Fraser, but I think he is addressing a more fundamental problem than just TDD.

    Testing, as we perform it, is quantitative. Even exploratory testing is difficult to value. Much testing is driven by a hope that testing might help.

    Scripted and automated tests are in some ways worse, as they only assert what they've been programmed to do. They only produce data, not knowledge.

    But I'm actually inclined to say that most of the testing we do produces data, signals, streams of information, and often only very little real experience about risks of the product we're testing.

    Narratives about the testing help, but only if we chew on them and try to understand the story they tell. I think we need to learn how to qualify testing's ontological results, i.e. make them explicitly valuable in business, operational, and development perspectives.

    The core of the problem, I think, is that there is too little research happening in testing. Probably because IT is still a very young industry, and testing is widely seen as a hands-on activity, not an analytical one.

    Jess Ingrassellino shared an interesting perspective here: jessingrassellino.com/communities-practice/

    /Anders
