Monday, November 28, 2016

Mum's the Word


A few weeks ago I put out an appeal for resources for testers who are pulled into live support situations.
One suggestion I received was The Mom Test by Rob Fitzpatrick, a book intended to help entrepreneurs or sales folk to efficiently validate ideas by engagement with an appropriate target market segment. And perhaps that doesn't sound directly relevant to testers?

But it's front-loaded with advice for framing information-gathering questions in a way which attempts not to bias the answers ("This book is specifically about how to properly talk to customers and learn from them"). And that might be relevant, right?

The conceit of the name, I'm pleased to say, is not that mums are stupid and have to be talked down to. Rather, the insight is that "Your mom will lie to you the most (just ‘cuz she loves you)" but, in fact, if you frame your questions the wrong way, pretty much anyone will lie to you and the result of your conversation will be non-data, non-committal, and non-actionable. So, if you can find ways to ask your mum questions that she finds it easy to be truthful about, the same techniques should work with others.

The content is readable, and seems reasonable, and feels like real life informed it. The advice is - hurrah! - not in the form of some arbitrary number of magic steps to enlightenment, but examples, summarised as rules of thumb. Here's a few of the latter that I found relevant to customer support engagements, with a bit of commentary:
  • Opinions are worthless ... go for data instead
  • You're shooting blind until you understand their goals ... or their idea of what the problem is
  • Watching someone do a task will show you where the problems and inefficiencies really are, not where the customer thinks they are ... again, understand the real problem, gather real data
  • People want to help you. Give them an excuse to do so ... offer opportunities for the customer to talk; and then listen to them
  • The more you’re talking, the worse you’re doing ... again, listen

These are useful, general heuristics for talking to anyone about a problem and can be applied with internal stakeholders at your leisure as well as with customers when the clock is ticking. (But simply remembering Weinberg's definition of a problem and the Relative Rule has served me well, too.)

Given the nature of the book, you'll need to pick out the advice that's relevant to you - hiding your ideas so as not to seem like you're needily asking for validation is less often useful to a tester, in my experience - but as someone who hasn't been much involved in sales engagements I found the rest interesting background too.
Image: Amazon

Wednesday, November 23, 2016

Cambridge Lean Coffee


This month's Lean Coffee was hosted by Abcam. Here's some brief, aggregated comments and questions on topics covered by the group I was in.

Suggest techniques for identifying and managing risk on an integration project.

  • Consider the risk in your product, risk in third-party products, risk in the integration
  • Consider what kinds of risk your stakeholders care about; and to who (e.g. risk to the bottom line, customer data, sales, team morale ...)
  • ... your risk-assessment and mitigation strategies may be different for each
  • Consider mitigating risk in your own product, or in those you are integrating with
  • Consider hazards and harms
  • Hazards are things that pose some kind of risk (objects and behaviours, e.g. a delete button, and corruption of a database)
  • Harms are the effects those hazards might have (e.g. deleting unexpected content, and serving incomplete results)
  • Consider probabilities and impacts of each harm, to provide a way to compare them (see the sketch after this list)
  • Advocate for the resources that you think you need 
  • ... and explain what you won't (be able to) do without them
  • Take a bigger view than a single tester alone can provide
  • ... perhaps something like the Three Amigos (and other stakeholders)
  • Consider what you can do in future to mitigate these kinds of risks earlier
  • Categorise the issues you've found already; they are evidence for areas of the product that may be riskier
  • ... or might show that your test strategy is biased
  • Remember that the stuff you don't know you don't know is a potential risk too: should you ask for time to investigate that?
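
To make that probability-and-impact comparison concrete, here's a minimal sketch in Python. The hazards, harms, and numbers are entirely invented for illustration, and a single probability-times-impact score is just one crude way to rank harms, not the only one:

```python
# Rank hypothetical harms by exposure (probability x impact).
# All harms, probabilities and impacts here are invented.
harms = [
    # (harm, probability 0-1, impact 1-10)
    ("delete button removes unexpected content", 0.10, 9),
    ("corrupt database serves incomplete results", 0.02, 10),
    ("third-party API change breaks the integration", 0.30, 6),
]

# A single exposure score gives a crude basis for comparing harms.
for harm, probability, impact in sorted(
        harms, key=lambda h: h[1] * h[2], reverse=True):
    print(f"{probability * impact:5.2f}  {harm}")
```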

Didn't get time to discuss some of my own interests: How-abouts and What-ifs, and Not Sure About Uncertainty.

Can templates be used to generate tests?

  • Some programming languages have templates for generating code 
  • ... can the same idea apply to tests?
  • The aim is to code tests faster; there is a lot of boilerplate code (in the project being discussed)
  • How would a template know what the inputs and expectations are?
  • Automation is checking rather than testing
  • Consider data-driven testing and QuickCheck
  • Consider asking for testability in the product to make writing test code easier (if you are spending time reverse-engineering the product in order to test it)
  • ... e.g. ask for consistent IDs of objects in and across web pages
  • Could this (perceived) problem be alleviated by factoring out the boilerplate code? (see the sketch below)
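
Here's a minimal sketch of the data-driven and QuickCheck-style ideas in Python, using pytest and the hypothesis library. The function under test, parse_quantity, and its cases are hypothetical; the point is that the boilerplate lives in one place and a table (or a generator) of inputs drives it:

```python
# Data-driven testing: one test function, a table of cases.
import pytest
from hypothesis import given, strategies as st

def parse_quantity(text: str) -> int:
    # Hypothetical function under test.
    return int(text.strip())

@pytest.mark.parametrize("text, expected", [
    ("3", 3),
    ("  7 ", 7),
    ("0", 0),
])
def test_parse_quantity(text, expected):
    assert parse_quantity(text) == expected

# QuickCheck-style: state a property, let the tool generate inputs.
@given(st.integers(min_value=0))
def test_roundtrip(n):
    assert parse_quantity(str(n)) == n
```

Factoring shared setup into fixtures is the pytest-flavoured answer to the boilerplate question in the last bullet.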

How can the coverage of manual and automated testing be compared?

  • Code coverage tools could, in principle, give some idea of coverage (see the sketch after this list)
  • ... but they have known drawbacks
  • ... and it might be hard to tie particular tester activity to particular paths through the code to understand where overlap exists
  • Tagging test cases with e.g. story identifiers can help to track where coverage has been added (but not what the coverage is)
  • What do we really mean by coverage?
  • What's the purpose of the exercise? To retire manual tests?
  • One participant is trying to switch to test automation for regression testing
  • ... but finding it hard to have confidence in the automation
  • ... because of the things that testers can naturally see around whatever they are looking at, that the automation does not give
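
One crude way to attempt the comparison is sketched below in Python, assuming coverage.py 5+ and its CoverageData API, with hypothetical data-file names: record coverage separately for a manual session and an automated run, then diff the executed lines. It shows where the touched code paths overlap, not what the testing meant:

```python
# Diff the lines executed during a manual session vs an automated run.
# Assumes two coverage.py data files already exist, e.g. recorded with
# COVERAGE_FILE=.coverage.manual and COVERAGE_FILE=.coverage.auto.
from coverage import CoverageData

def executed_lines(path):
    data = CoverageData(basename=path)
    data.read()
    return {(f, line)
            for f in data.measured_files()
            for line in (data.lines(f) or [])}

manual = executed_lines(".coverage.manual")
auto = executed_lines(".coverage.auto")

print(f"executed by both:    {len(manual & auto)}")
print(f"manual session only: {len(manual - auto)}")
print(f"automation only:     {len(auto - manual)}")
```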

What are the pros and cons of being the sole tester on a project?

  • Chance to take responsibility, build experience ... but can be challenging if the tester is not ready for that
  • Chance to make processes etc. that work for you ... but perhaps there are efficiencies in sharing process too
  • Chance to own your work ... but miss out on other perspectives
  • Chance to express yourself ... but can feel lonely
  • Could try all testers on all projects (e.g. to help when people are on holiday or sick)
  • ... but this is potentially expensive and people complain about being thinly sliced
  • Could try sharing testing across the project team (if an issue is that there's insufficient resource for the testing planned)
  • Could set up sharing structures, e.g. team standup, peer reviews/debriefs, or pair testing across projects

What do (these) testers want from a test manager?

  • Clear product strategy
  • As much certainty as possible
  • Allow and encourage learning
  • Allow and encourage contact with testers from outside the organisation
  • Recognition that testers are different and have different needs
  • Be approachable
  • Give advice based on experience
  • Work with the tester 
  • ... e.g. coaching, debriefing, pointing out potential efficiency, productivity, and testing improvements
  • Show appreciation
  • Must have been a tester
Image: https://flic.kr/p/bumiPG

Monday, November 21, 2016

A Mess of Fun


In The Dots I referenced How To Make Sense of Any Mess by Abby Covert. It's a book about information architecture for non-information architects, one lesson per page, each page easily digestible on its own, each page informed by the context on either side.

As a tester, I find that there's a lot here that intersects with the way I've come to view the world and how it works and how I work with and within it. I thought it would be interesting to take a slice through the book by noting down phrases and sentences that I found thought-provoking as I went.

So, what's below is information from the book, selected and arranged by one reader, and so it is also information about that reader.

Mess: a situation where the interactions between people and information are confusing or full of difficulties. (p. 169)

Messes are made of information and people. (p. 11)

Information is whatever is conveyed or represented by a particular arrangement or sequence of things. (p. 19)

The difference between information, data, and content is tricky, but the important point is that the absence of content or data can be just as informing as the presence. (p. 21)

Intent is language. (p. 32)

Think about nouns and verbs. (p. 98)

Think about relationships between nouns and verbs. (p. 99)

I once spent three days defining the word "customer". (p. 88)


We create objects like maps, diagrams, prototypes, and lists to share what we understand and perceive. Objects allow us to compare our mental models with each other. (p. 57)

People use aesthetic cues to determine how legitimate, trustworthy, and useful information is. (p. 64)

Ambiguous instructions can weaken our structures and their trustworthiness. (p. 131)

Be careful not to fall in love with your plans or ideas. Instead, fall in love with the effects you can have when you communicate clearly. (p. 102)

Why, what and how are deeply interrelated. (p. 43)

We make places. (p. 86)

No matter what you're making, your users will find spaces between places. (p. 87)

We listen to our users and our guts. There is no one right way. There is only your way. (p. 101)

Murk: What alternative truths or opinions exist about what you're making or trying to achieve? (p. 113)

Uncertainty comes up in almost every project. But you can only learn from those moments if you don't give up. (p. 118)

One tiny decision leads to another, and another. (p. 85)

Perfection isn't possible, but progress is. (p. 148)
Image: Discogs, Amazon

Saturday, November 19, 2016

The Dots


One of the questions that we asked ourselves at CEWT 3 was what we were going to do with the things we'd discovered during the workshop. How would, could, should we attempt to share any insights we'd had, and with who?

One of the answers I gave was that Karo and I would present our talks at Team Eating, the regular Linguamatics brown-bag lunch get-together. And this week we did that, to an audience of testers and non-testers from across the company. The talks were well-received and the questions and comments were interesting.

One of them came from Rog, our UX Specialist. I presented a slide which showed how testing, for me, is not linear or strictly hierarchical, and it doesn't necessarily proceed in a planned way from start to finish, and it can involve people and objects and information outside of the software itself. Testing can be gloriously messy, I probably said:


His comment was (considerably paraphrased) that that's how design feels to him. We spoke for a while afterwards and he showed me this, the squiggle of design:


I saw his squiggle and raised him a ring, showing images from a blog post I wrote earlier this year. In Put a Ring on It I described how I attempt to deal (in testing, and in management) with an analog of the left-hand end of that squiggle, by constraining enough uncertainty that I can treat what remains as atomic and proceed without needing to consider it further, at that time, so that I can shift right:


He reminded me that, perhaps a year earlier, we'd spoken about information architecture and that this was relevant to the discussion we were having right there and then. He lent me a book, How to Make Sense of Any Mess by Abby Covert.


The book discusses information-based approaches to understanding a problem, working out what kinds of changes might exist and be acceptable, choosing a route to achieving a change, monitoring progress towards it, and adapting to whatever happens along the way. I started reading it that evening and came immediately across something that resonated strongly with me:
Intent is Language: Intent is the effect we want to have on something ... The words we choose matter. They represent the ideas we want to bring into the world ... For example, if we say we want to make sustainable, eco-centered design solutions, we can't rely on thick, glossy paper catalogs to help us reach new customers. By choosing those words we completely changed our options.
Covert goes on to suggest that for our designs we list two sets of adjectives: those that describe properties we want and those that describe properties we don't want. The second list should not be simple negative versions of the first and the aim should be that a neutral observer should not be able to tell which is the desired set. In this way, we can attempt to capture our intent in language in a way which can be shared with others and hopefully result in a shared vision of a shared goal.

Later in the book, she suggests some structures for managing the information that is intrinsic to any mess-resolution project. Here I saw a link to another book that I'm reading at the moment, one that I borrowed from Sime, another colleague at Linguamatics: Beautiful Evidence by Edward Tufte.


This book considers ways to improve the presentation of evidence, of information, by removing anti-patterns, by promoting clarity, by exploiting aspects of the human perceptual system. It does this in order to provide increased opportunity for greater data density, enhanced contextual information about the data, the provision of comparative data, and ultimately more useful interpretation of the data presented.

Covert's high-level information structures are useful tools for organisation of thoughts and, in one phrase - "keep it tidy" - with one brief page of prose to accompany it, she opens a door into Tufte's more detailed world.

I had begun to reflect on these things while speaking to another couple of my colleagues and noted that I continue to see value returned to me by reading around testing and related areas. The value is not necessarily immediate, but I perceive that, for example, it adds depth to my analyses, it allows me to make connections that I otherwise would not, it helps me to avoid dead ends by giving a direction that might otherwise not have been obvious.

I was a long way into my career (hindsight now shows me) before I realised that reading of this kind was something that I could be doing regularly rather than only when I had a particular problem to solve. I now read reasonably widely, and also listen to a variety of podcasts while I'm walking to work and doing chores.

And so it was interesting to me that yesterday, with all of the above fresh in my mind, while I was raking up the leaves in our back garden, a recently-downloaded episode of You Are Not So Smart with James Burke came on. In his intro, David McRaney says this, reflecting Burke's own words from a television series made in the 1970s, called Connections:
Innovation took place in the spaces between disciplines, when people outside of intellectual and professional silos, unrestrained by categorical and linear views, synthesized the work of people still trapped in those institutions ...
Innovation, yes, and testing.
Images: Eil, ReVision Lab, Amazon

Edit: after reading this post, Sime pointed out Jon Bach's graphical representation of his exploratory testing, which bears a striking surface resemblance to the squiggle of design:



Thursday, November 17, 2016

Something of Note

The Cambridge Tester meetup last week was a workshop on note-taking for testers by Neil Younger and Karo Stoltzenburg. An initial presentation, which included brief introductions to techniques and tools that facilitate note-taking in various ways (Cornell, mind map, Rapid Reporter, SBTM), was followed by a testing exercise in which we were encouraged to try taking notes in a way we hadn't used before. (I tried the Cornell method.)

What I particularly look for in meetups is information, inspiration, and the stimulation of ideas. And I wasn't disappointed in this one. Here's some assorted thoughts.

I wonder how much of my note-taking is me and how much is me in my context?
  • ... and how much I would change were I to move somewhere else, or do a different job at Linguamatics
  • ... given that I already know that I have evolved note-taking to suit particular tasks over time
  • ... further, I already know that I use different note-taking approaches in different contexts. But why? Can I explore that more deeply?

Is this blog post notes?
  • ... what is a note?
  • ... perhaps this is an article? It doesn't feel like a formal report, although perhaps it could turn into one
  • ... but it's more than simple aides-memoire
  • ... but it's not exactly full sentences 
  • ... but it started as notes. Then I iterated on them and they became a draft, of sorts
  • ... but how? Why? According to who?
  • ... and when do notes turn into something else?
  • ... and when should notes turn into something else?

By writing up my notes for this post I have remembered other things that aren't in my notes
  • ... and thought things that I didn't think at the time
  • ... and, a week later, after discussing the evening with Karo, I've had more thoughts (and taken notes of them)

I showed my notes from CEWT 3 to one of the other participants at the event
  • ... and I realised that my written notes are very wordy compared to others'
  • ... and that I layer on top of them with emphasis, connections, sub-thoughts, new ideas etc

What axes of comparison make sense when considering alternative note-taking techniques?
  • ... what do they give over pen and paper? (which scores on ubiquity and familiarity and flexibility)
  • ... what do they give over a simple use of words? (perhaps transcription of "everything" is a baseline?)
  • ... what about shorthand? (is simple compression a form of note taking?)
  • ... is voice a medium for notes? Some people use voice recorders
  • ... sketchnoting is richer in some respects, but more time-consuming

What advantages might there be of constraining note-taking?
  • ... Rapid Reporter appears to be a line-by-line tool, with no editing of earlier material
  • ... the tooling around SBTM enforces a very strict syntax
  • ... the way mind maps concentrate on structure over text

How might contextual factors affect note-taking?
  • ... writing on graph paper vs lined paper vs plain paper; coloured vs white
  • ... one pen vs many different pens; different colour pens
  • ... a blank page vs a divided page (e.g. Cornell)
  • ... a blank page vs a page populated with e.g. Venn diagram, hierarchical structure, shapes, pie charts
  • ... scrap paper vs a Moleskine
  • ... pencil vs fountain pen vs crayon vs biro

Time allocation during note-taking
  • ... what kinds of techniques/advice are there for deciding how to apportion time to note-taking vs listening/observing?
  • ... are different kinds of notes appropriate when listening to a talk vs watching an event vs interacting with something (I do those differently)

What makes sense to put into notes?
  • ... verbatim quotes?
  • ... feelings?
  • ... questions?
  • ... suggestions?
  • ... connections?
  • ... emotions?
  • ... notes about the notes?
  • ...
  • ... what doesn't make sense, if anything? Could it ever make sense?

I am especially inspired to see whether I can distil any conventions from my own note-taking. I have particular contexts in which I make notes on paper - meetups are one - and those where I make notes straight onto the computer - 1-1 with my team, for instance, but also when testing. I make notes differently on the computer in those two scenarios.

I have written before about how I favour plain text for note-taking on the computer and I have established conventions that suit me for that. I wonder: are any conventions present in more than one of the approaches that I use?

Good thought, I'll just note that down.
Image: https://flic.kr/p/djNq4b

Saturday, November 12, 2016

The Anatomy of a Definition of Testing


At CEWT 3 I offered a definition of testing up for discussion. This is it:
Testing is the pursuit of actual or potential incongruity
As I said there, I was trying to capture something of the openness, the expansiveness of what testing is for me: there is no specific technique; it is not limited to the software; it doesn't have to be linear; there don't need to be requirements or expectations; the same actions can contribute to multiple paths of investigation at the same time; it can apply at many levels and those levels can be distinct or overlapping in space and time.
 
And these are a selection of the comments and questions that it prompted before, during and after the event, loosely grouped:

Helicopter view

  • it is sufficiently open that people could buy into it, and read into it, particularly non-testers.
  • it's accurate and to the point.
  • it has the feel of Weinberg's definition of a problem. 
  • it sounds profound but I'm not sure whether there is any depth.
  • it seems very close to the regular notion of targeting information/unknowns.

Coverage

  • can not testing be part of this idea of testing?
  • how does the notion of tacit testing (from CEWT 3 discussion) fit in?
  • Kaner talks about balancing freedom and responsibility in testing. Is that covered here?
  • the definition doesn't talk about risk.

Practical utility

  • it couldn't be used to help someone new to testing decide what to do when testing.
  • I could imagine putting this onto a sticky and trying to align my actions with it.

Definitional

  • what do you mean by pursuit?
  • incongruity is too complex a word.
  • what other words could replace testing in the definition and it still hold?
  • when I see "or" I wonder whether it's exclusive (in the Boolean sense).

In this post I'm going to talk about just the words. I spent a good deal of time choosing my words - and that in itself is a warning sign. If I have to graft to find words whose senses are subtly tuned to achieve just the interpretation that I want, then I worry that others will easily have a different interpretation.

And, despite this being a definition of testing for me, it's interesting to observe how often I appeal to my feelings and desires in the description below. Could the degree of personal investment compromise the possibility of it having general appeal or utility, I wonder.

"pursuit"

Other definitions use words like search, explore, evaluate, investigate, find out, ... I was particularly keen to find a verb that captured two aspects of testing for me: finding out what is there, and digging into what has been found.

What I like about pursuit is that it permits (at least to me) both, and additionally conveys a sense of chasing something which might be elusive, itinerant, latent or otherwise hard to find. Oxford Dictionaries has these definitions of pursue, amongst others:
  • follow or chase (someone or something)
  • continue to investigate or explore (an idea or argument)

These map onto my two needs in ways that other verbs don't:
  • search: feels more about the former and less about the latter.
  • investigate: feels more natural when there's a thing to investigate.
  • explore: could possibly do duty for me (and it's popular in testing definitions) but exploratory testing can be perceived as cutting out other kinds of testing and I don't want that interpretation.
  • evaluate: needs data; pursuit can gather data.
  • find out: feels like it has finality in it. To reflect the fact that testing is unlikely to be complete I'd want to say something like "Testing is the attempt to find out about actual or potential incongruity"

"incongruity"

As one of the criticisms of my definition points out, this word is not part of most people's standard lexicon. Oxford Dictionaries says that it means this:
 Not in harmony or keeping with the surroundings or other aspects of something.
I like it because it permits nuance in the degree to which something needs to be out of place: it could be completely wrong, or just feel a bit odd in its context. But the price I pay for the nuance is the lack of common currency. On balance I accepted this in order to keep the definition short.

"actual or potential"

I felt unhappy with a definition that didn't include this, such as:
Testing is the pursuit of incongruity
because I wanted testing's possibilities to include suggesting that there might be a problem. If the definition of incongruity I am using permitted possible disharmony then I'd have been happier with this shorter variant.

I have subsequently realised that I am, to some extent, reflecting a testing/checking distinction here too: a check with fixed expectations can find actual incongruity while testing could in addition find potential incongruity.

However, the entire definition is, for me, in the context of the relative rule - so any incongruities of any kind are tied to a context, person, time - and also the need to govern the actions in the pursuit by some notions of what is important to the people who are important to whatever is being tested.

But, even given that, I still find it hard to accept the definition without potential. Perhaps because it flags the lack of certainty inherent in much testing.
Image: https://flic.kr/p/6Hkgyy

Edit: Olekssii Burdin wrote his own definition of testing after reading this.

CEWT 3



CEWT is the Cambridge Exploratory Workshop on Testing, a peer discussion event on ideas in and around software testing. The third CEWT, held a week or so ago, had the topic Why do we Test, and What is Testing Anyway? With six speakers and 12 participants in total, there was scope for a variety of viewpoints and perspectives to be voiced - and we heard them - but I'll pull out just three particular themes in my reflection on the event.

Who

Lee Hawkins viewed testing through the eyes of different players in the wider software development industry, and suggested aspects of what testing could be to them. For tools vendors or commercial conference organisers, testing is an activity from which money can be made; for financial officers, testing is an expense, something to be balanced against its return and other costs; for some managers and developers and even testers, testing is something to be automated and forgotten.

James Coombes also considered a range of actors, but he was reporting on how each of them - at his work - contributes to an overall testing effort: the developer, tester, security expert, technical author, support operator, integration tester, manager and customer. Each person's primary contribution in this approach is their expertise, their domain knowledge, their different emphasis, their different lenses through which to view the product.

In discussion, we noted that the co-ordination of this kind of activity is non-trivial and, to some extent, unofficial and outside of standard process. Personal relationships and the particular individuals concerned can easily make or break it.

There was also some debate about whether customers are testing a product when they use it. It's certainly the case that they may find issues, but should we regard testing as inherent in "normal use"? Does testing require intent on the part of the "tester"?

Why

Karo Stoltzenburg focussed on an individual tester's reasons for testing and concluded that she tests because it makes her happy. Her talk was a description of the kinds of testing she'd done, of herself, to arrive at this understanding and then to try to see whether her own experience can be generalised, and to who. She suggested that we, as testers, sell our craft short and called on us to tell others what a great job it is!

One particularly motivating slide gave a selection of responses from her work colleagues to the question "why do you test?", which included: it's like a puzzle, variety of challenges, a proper outlet for my inner pedant, it's fun. Lee's talk also included a set of people who found testing fun and he characterised them as people like us, people who love the craft of software testing.

Later, in his blog post about this event, Lee described the CEWT participants as "a passionate group of testers". There was an interesting conversation thread in which we asked why we were there, doing what we were doing, and what we'd do with whatever we got from it.

Why? is a powerful question. On a personal level, I enjoy talking about testing, about its practical aspects and in the abstract. I like being exposed to ideas from other practitioners (which is not to say that I get ideas only from other practitioners) and I like to get other perspectives on my own ideas.

And, of course, understanding our own motivations is interesting, but I think the conversations in the event rarely got very deeply into the motivations of other stakeholders who ask for, or just expect, testing. We did discuss questions such as "if the testing is being done by others, why do we need testers?" and again wondered whether what others were doing was testing, or contributing to testing, or both. Harry Collins' The Shape of Actions has things to say about the way in which activities can be broken down and viewed differently at different resolutions.

But to return to Karo's challenge to us: does an event like CEWT essentially have the choir singing to itself? We know why we like testing and we know that we implicitly value testing, because we do it and because we gave up a Sunday to talk about it. But we're self-selecting. The event can help us to support each other and improve ourselves and the work we do, but can it change others' views of testing? Should it? Why?

What

Michael Ambrose described a project concerned with increasing the software development skills of his test team. The aim was to write code in order to reduce the manual effort required to support testing of a wide range of platforms without reducing the coverage (in terms of the platform set).

Naturally, this raises questions such as: is the new software doing testing? is it replacing testing (and if so, to what extent?) or augmenting testing? is it extending testing scope (e.g. by freeing testers to take on work that currently isn't being done at all)? what dimensions of coverage might be increased or reduced? how can its success be evaluated?

We talked a little during the day about the tacit testing that goes on outside of that which was planned or expected or intended by a tester: those things that a human will spot "in passing" that an algorithm would not even consider.

Does that tacit investigation fit into the testing/checking distinction? If so, it's surely in testing. But, again, how important is intent in testing activity? Is it sufficient to set out with the intent to test, and then anything done during that activity is testing?  What kind of stuff that happens in a tester's day might not be regarded as testing? One participant gave an example of projects on which 85% of effort was spent covering their arse.

In his talk, Aleksandar Simic presented a diary of two days in his role as a tester, and categorised what he did in various ways. These categorisations intriguingly included "obsession" which described how he didn't want to let an issue go, and how he spent his own time building background knowledge to help him at work. He talked about how he is keeping a daily diary of work done and his feelings about that work, and looking for patterns across them to help him to improve himself.

This is challenging. It's easy to mislead and be misled by ourselves. Seeing through our own rose-tinted spectacles involves being prepared to accept that they exist and need to be at least cleaned if not removed.

But is that kind of sense-making, data gathering and analysis a testing activity? I would like to regard it as such. In my own talk I explained how I had explored a definition of testing from Explore It! and also explored my reaction to it, and how these processes - and others - ran in parallel, and overlapped with, and impacted on each other. I rejected testing as simply linear and tried to find my own definition that encompasses the gloriously messy activity that it can be (for me).

One comment on my definition - which inverted a concern that I have about it - was that it is sufficiently open that people could buy into it, and read into it, particularly non-testers. This touches again on the topic of taking testing out of the testing bubble.

There was some thought that distinctions like testing vs checking - which have not been universally approved of in the wider testing community, with some thinking that it is simply navel-gazing and semantics - are useful as a tool for conversations with non-testers. An example might be explaining why a unit test suite, valuable as it might be, need not be all that testing is.

Perhaps that's a useful finding for us: that we can get value from events like these by going away and being open to talking about testing, explaining it, justifying it, in ways that other parties can engage with. By doing that we might (a) spread the word about testing, (b) understand what others want and need from it, and (c) have fun.

I am intellectually buoyed by events like this, and also not a little proud to see something I created providing a forum for, and returning pleasure and value to, others. CEWT 4 anyone?