For me, the conference worked really well because of the small scope of topics that most of the talks and workshops were based on. This meant that you got a much deeper understanding of the topics as ideas were discussed, built on and sometimes challenged over the conference.
The only real negative about the conference was the record heatwave that hit Sydney while it was on. The temperature hit 42 degrees on the second afternoon, which made for a pretty tough environment to think and learn in, given we were in a historic cell block building with no aircon…
At least the heat provided an opportunity to introduce some Cucumber basics with a very relevant example of driving aircon to a more comfortable temperature!
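The aircon example might have looked something like this in Gherkin (my own sketch of the idea — the feature wording and steps are hypothetical, not taken from the workshop):

```gherkin
Feature: Air conditioning control
  As an overheated conference attendee
  I want the aircon to cool the room
  So that I can think and learn in comfort

  Scenario: Cool the room during a heatwave
    Given the room temperature is 42 degrees
    When I set the target temperature to 22 degrees
    Then the aircon should start cooling
    And the room should eventually reach 22 degrees
```

Even a trivial example like this shows the Given / When / Then structure that the rest of the conference kept coming back to.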
Because there were so many insightful talks and workshops, I’ve split my highlights into this post covering the talks and a forthcoming one diving into what I learnt from the workshops.
Beyond BDD (Matt Wynne)
Starting at the end. In spite of the stifling afternoon heat, Matt wrapped up the conference with an energising talk that pulled together many of the threads and themes covered at the conference and also looked forward.
Matt looked at different levels of BDD development using a burned toast analogy. This was based on W. Edwards Deming’s “You burn it, I’ll scrape it” quote on tolerating systems that produce substandard results instead of fixing the process. At lower levels of BDD development (i.e. low collaboration, poor communication) the process results in a high number of bugs and rework.
When collaboration is built into the development process (e.g. with 3 Amigos sessions defining examples) more quality is built in from the start. However, it’s when teams take a test-first approach that quality is really baked into the process and teams are really “doing” BDD.
… However, a test-first approach has a flipside … BDD can become addictive, and automated test suites can get too large, too bloated and too brittle. (I’ve certainly seen this with automation efforts that started with the best of intentions, but where the team hasn’t kept control of test quality, relevance and performance.)
This is often the point at which it becomes easy to blame the tool, rather than the process that led you to this point.
The key here is you still have to do software design. Matt highlighted how testing and design are inextricably linked – quality design relies on great feedback from Testers and Developers who understand testing themselves.
The less code the better, and that includes automated test code. With good design you require fewer automated tests.
The shallower your tests (i.e. the lower in the application stack they run), the more targeted they can be. Shallower tests are faster to run and, when they fail, you know why they’ve failed.
Matt finished with a quote from David West’s Object Thinking: “Model the problem well enough, and the solution will take care of itself”.
The 10 Do’s and 500* Don’ts of Automated Acceptance Testing (Alister Scott)
I’ve been following Alister’s blog (Watirmelon.com) and using his testing pyramids to help illustrate automation concepts for some time now, so I was very keen to hear what he had to say.
His talk reinforced some ideas covered in the conference, but also challenged some other ideas and assumptions. “DO run end-to-end tests in production”, in particular generated a lot of discussion in the Q&A. This approach is about maximising your returns on developing test automation, by using the tests wherever they offer value.
Alister delved into scenarios where running end-to-end tests in production has worked for him. The main one was from his time working with Domino’s Pizza, where they ran automated end-to-end tests against the production ordering system. Using test-specific data in the orders meant they could easily be separated from real orders and automatically cancelled.
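A production-safe end-to-end test along those lines might be expressed something like this (a purely illustrative sketch — the tag, customer name and cancellation step are my own, not Domino’s actual suite):

```gherkin
@production @end-to-end
Feature: Production ordering smoke test

  Scenario: Place and cancel a test order in production
    Given I am ordering as the recognised test customer "TEST-AUTOMATION"
    When I place an order for a "Test Only - Do Not Make" pizza
    Then the order should appear in the production ordering system
    And the order should be flagged as test data
    And the order should be automatically cancelled before reaching a store
```

The key design point is that the test data identifies itself, so real orders and test orders can never be confused.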
- DO differentiate between acceptance and end-to-end tests
- DO specify intention over implementation
- DO write independent automated tests
- DO automate (just) enough to eliminate manual regression testing
- DO manual story / exploratory testing
- DO ensure your whole team is responsible for your auto tests
- DO run your tests on every commit
- DO run your tests in parallel
- DO run your tests against a single browser
- DO run end-to-end tests in production
and Alister wrapped the talk nicely with his 1 Don’t …
Alister’s blogged the full text of his talk here
Keynote: When your testing is in a pickle (Anne-Marie Charrett)
In her keynote, Anne-Marie Charrett emphasised the need to focus on bridging communication gaps and enabling collaboration, challenging the situations where BDD or Cucumber get adopted without really understanding what problems you’re trying to solve.
The keynote covered 4 “insights on Cucumbers”, which provided wisdom beyond Cucumber or BDD.
#1 – It’s not just a Cucumber
We need to focus on what problems are we trying to solve, rather than the tool itself.
“Is fast feedback what we really want from automation? more important is quality feedback”
#2 – Cucumbers can’t talk
Don’t look for Cucumber to solve your communication and collaboration problems.
“A lot of problems that occur in BDD are people problems. Not tooling problems”
“the problems we are trying to solve are for people.”
#3 – Cucumbers make lousy hammers
The tool is only as good as the person using it and the specific context they’re working in.
“Do you really need a tool for Given / When / Then if you’re trying to improve communication and collaboration? Maybe start with a document, use that approach to get a shared understanding”
Included in this part of Anne-Marie’s talk were two extremely useful summary slides on questions for testers to ask in “3 Amigos” sessions and important skills for Testers.
Questions for Testers to ask in 3 Amigos sessions
- Business impacts
- Technical complexity – e.g. with cross-system test examples, which ones are really useful? Where is the value in these tests?
- Does it have to be automated?
- Risk Identification
- Analysis of the system

Important skills for Testers

- Business model
- Learning mindset
- Challenging assumptions
- Logical reasoning
- Strategic thinking
Anne-Marie used the metaphor of Cucumber as a base camp on your way to the final destination of a quality product. It can help you along the way, but implementing it is only a step towards your actual goal.
Twelve BDD Anti-Patterns: Stories from the Trenches about how NOT to do Behaviour Driven Development (John Smart)
What I liked most about John’s session was that while he was covering anti-patterns in BDD, the focus was on balance – i.e. understand what the anti-patterns are, but take them as guidelines and direction for improving your process, rather than absolute rules.
John discussed 3 types of balance:
- Timing (#1-3) When do you write your requirements – too early? Too late? Are the right people involved? Are you providing time for feedback?
- Pitch (#4-11) How are you pitching these requirements? Are you giving enough information to develop the features, but also not locking the implementation into a specific solution? Are you focusing on features that will actually make a difference?
- Feedback (#12) Are you providing the most relevant and meaningful feedback to the right people?

The twelve anti-patterns themselves:
- Out to lunch – you need time from the right people, to ensure you’re delivering value.
- Cucumber salad – where detailed requirements are being written as gherkin scenarios up front. The more work done in isolation up front, the more likely you’re heading down the wrong path and wasting effort.
- Cucumber as a Test Tool – If you are just using Cucumber as a test automation tool, you aren’t doing BDD. Cucumber is a collaboration and communication tool and there are better automation tools out there if all you want to do is test automation.
- Aimless requirements – where there is no clear business goal. Make sure the level of requirement abstraction is higher than the action being used.
- No shit Sherlock – don’t waste time on the obvious; prioritise the uncertain.
- Eyes on the screen – Examples tied to implementation, meaning the example has to change if the implementation does. Examples are about what you are trying to achieve, not how you are trying to achieve it
- Top-heavy scenarios – when too many scenarios are automated as UI-tests. UI tests are slow and brittle compared to tests down the stack. Only automate against the UI where the scenario needs UI interaction.
- Not having all the cards – poorly-defined inputs and outcomes. The purpose and value of scenarios is unclear.
- Scenario overload – the more information you have, the less information you actually communicate.
- Cucumber test scripts – writing step-by-step test scripts in Gherkin
- Tech-speak scenarios – scenarios are essentially regression tests, rather than examples relating to business goals.
- Incommunicado scenarios – you want to provide reports that are meaningful to the business.
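To make the “Cucumber test scripts” and “Eyes on the screen” anti-patterns concrete, here’s a hypothetical contrast (my own example, not from John’s talk). The first scenario scripts the implementation step by step and breaks whenever the UI changes; the second expresses the same intent declaratively:

```gherkin
# Anti-pattern: a step-by-step test script tied to the UI
Scenario: Log in (imperative)
  Given I open the browser at "/login"
  When I type "jane" into the "username" field
  And I type "secret" into the "password" field
  And I click the "Sign in" button
  Then I should see the "/dashboard" page

# Better: describe intent, not implementation
Scenario: Log in (declarative)
  Given Jane has a registered account
  When she signs in with valid credentials
  Then she should see her dashboard
```

The declarative version survives a redesign of the login page untouched, because it describes what the user is trying to achieve rather than how the current UI achieves it.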
John’s BDD anti-patterns slides are shared here