CukeUp Australia – The Talks


Three Assurity colleagues (Scottie, Isham & Darren) and I recently spent two days in Sydney at the CukeUp conference.


For me, the conference worked really well because most of the talks and workshops centred on a small set of topics. This meant you got a much deeper understanding as ideas were discussed, built on and sometimes challenged over the course of the conference.

The only real negative about the conference was the record heatwave that hit Sydney while it was on. The temperature hit 42 degrees on the second afternoon, which made for a pretty tough environment to think and learn in, given we were in a historic cell block building with no aircon…

At least the heat provided an opportunity to introduce some Cucumber basics with a very relevant example of driving aircon to a more comfortable temperature!
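In that spirit, a Cucumber scenario for the aircon example might look something like the sketch below (a hypothetical illustration of mine — the feature name, step wording and values are not from the actual exercise used on the day):

```gherkin
# Hypothetical sketch – feature, steps and values are illustrative only
Feature: Air conditioning control

  Scenario: Cool an uncomfortably hot room
    Given the room temperature is 42 degrees
    When I set the aircon target temperature to 22 degrees
    Then the aircon should start cooling
    And the room temperature should eventually reach 22 degrees
```

Each of those plain-language steps would then be bound to automation code, so the same scenario serves as both a shared specification and an executable check.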


Because there were so many insightful talks and workshops to learn new skills in, I’ve split my highlights blog into this one covering the talks and a forthcoming one diving into what I learnt from the workshops.

Beyond BDD (Matt Wynne)


Starting at the end. In spite of the stifling afternoon heat, Matt wrapped up the conference with an energising talk that pulled together many of the threads and themes covered at the conference and also looked forward.

Matt looked at different levels of BDD development using a burned toast analogy, based on W. Edwards Deming's “You burn it, I'll scrape it” quote about tolerating systems that produce substandard results instead of fixing the process. At lower levels of BDD development (i.e. low collaboration, poor communication) the process results in a high number of bugs and a lot of rework.


When collaboration is built into the development process (e.g. with 3 Amigos sessions defining examples) more quality is built in from the start. However, it’s when teams take a test-first approach that quality is really baked into the process and teams are really “doing” BDD.


… However, a test-first approach has a flipside … BDD can become addictive, and automated test suites can get too large, too bloated and too brittle. (I've certainly seen this with automation efforts that started with the best of intentions, but where the team hasn't kept control of test quality, relevance and performance.)


This is often the point at which it becomes easy to blame the tool, rather than the process that led you to this point.

The key here is you still have to do software design. Matt highlighted how testing and design are inextricably linked – quality design relies on great feedback from Testers and Developers who understand testing themselves.


The less code the better, and that includes automated test code. With good design you require fewer automated tests.

The shallower your tests can be (lower in the application stack) the more targeted they can be. Shallower tests are faster to run and when they fail, you know why they’ve failed.


Matt finished with a quote from David West's Object Thinking: “Model the problem well enough, and the solution will take care of itself.”

The 10 Do’s and 500* Don’ts of Automated Acceptance Testing (Alister Scott)


I’ve been following Alister’s blog (and using his testing pyramids to help illustrate automation concepts) for some time now, so I was very keen to hear what he had to say.

His talk reinforced some ideas covered elsewhere in the conference, but also challenged other ideas and assumptions. “DO run end-to-end tests in production”, in particular, generated a lot of discussion in the Q&A. This approach is about maximising your returns on developing test automation by using the tests wherever they offer value.

Alister delved into scenarios where running end-to-end tests in production has worked for him. The main one discussed was from his time with Domino’s Pizza, where they ran automated end-to-end tests against the production ordering system. Using test-specific data in the orders meant they could easily be separated from real orders and automatically cancelled.

The 10 DO’s covered were:
  1. DO differentiate between acceptance and end-to-end tests
  2. DO specify intention over implementation
  3. DO write independent automated tests
  4. DO automate (just) enough to eliminate manual regression testing
  5. DO manual story / exploratory testing
  6. DO ensure your whole team is responsible for your auto tests
  7. DO run your tests on every commit
  8. DO run your tests in parallel
  9. DO run your tests against a single browser
  10. DO run end-to-end tests in production

and Alister wrapped the talk nicely with his 1 Don’t …


Alister’s blogged the full text of his talk here

Keynote: When your testing is in a pickle (Anne-Marie Charrett)


In her keynote, Anne-Marie Charrett emphasised the need to focus on bridging communication gaps and enabling collaboration. This challenged the situations where BDD or Cucumber gets adopted without a real understanding of what problems you’re trying to solve.

The keynote covered 4 “insights on Cucumbers”, which provided wisdom beyond Cucumber or BDD.

#1 – It’s not just a Cucumber

We need to focus on what problems we are trying to solve, rather than the tool itself.

“Is fast feedback what we really want from automation? more important is quality feedback”

#2 – Cucumbers can’t talk

Don’t look for Cucumber to solve your communication and collaboration problems.

“A lot of problems that occur in BDD are people problems. Not tooling problems”

“the problems we are trying to solve are for people.”

#3 – Cucumbers make lousy hammers

The tool is only as good as the person using it and the specific context they’re working in.

“Do you really need a tool for Given / When / Then if you’re trying to improve communication and collaboration. Maybe start with a document, use that approach to get a shared understanding”

Included in this part of Anne-Marie’s talk were two extremely useful summary slides on questions for testers to ask in “3 Amigos” sessions and important skills for Testers.


Questions for Testers to ask in 3 Amigos sessions

  • Variables
  • Data
  • Scenarios
  • Outputs
  • Business impacts
  • Technical complexity – e.g. cross-system test examples, which ones are really useful? where is the value in these tests?
  • Does it have to be automated?


Important Testing Skills
  • Risk Identification
  • Analysis of the system
  • Business model
  • Learning mindset
  • Challenging assumptions
  • Logical reasoning
  • Strategic thinking
#4 – Cucumbers make great pickles

Anne-Marie used the metaphor of Cucumber as a base camp on your way to the final destination of a quality product. It can help you along the way, but implementing it is only a step towards your actual goal.

Twelve BDD Anti-Patterns: Stories from the Trenches about how NOT to do Behaviour Driven Development (John Smart)


What I liked most about John’s session was that while he was covering anti-patterns in BDD, the focus was on balance – i.e. understand what the anti-patterns are, but take them as guidelines and direction for improving your process, rather than absolute rules.


John discussed 3 types of balance:

  • Timing (#1-3) – When do you write your requirements? Too early? Too late? Are the right people involved? Are you providing time for feedback?
  • Pitch (#4-11) – How are you pitching these requirements? Are you giving enough information to develop the features, while not locking the implementation into a specific solution? Are you focusing on features that will actually make a difference?
  • Feedback (#12) – Are you providing the most relevant and meaningful feedback to the right people?

BDD Anti-Patterns

  1. Out to lunch – you need time from the right people, to ensure you’re delivering value.
  2. Cucumber salad – where detailed requirements are written as Gherkin scenarios up front. The more work done in isolation up front, the more likely you’re heading down the wrong path and wasting effort.
  3. Cucumber as a test tool – if you are just using Cucumber as a test automation tool, you aren’t doing BDD. Cucumber is a collaboration and communication tool, and there are better automation tools out there if all you want is test automation.
  4. Aimless requirements – Where there is no clear business goal. Make sure the level of requirement abstraction is higher than the action being used.
  5. No shit Sherlock – don’t waste time on the obvious, prioritise the uncertain
  6. Eyes on the screen – Examples tied to implementation, meaning the example has to change if the implementation does. Examples are about what you are trying to achieve, not how you are trying to achieve it
  7. Top-heavy scenarios – when too many scenarios are automated as UI tests. UI tests are slow and brittle compared to tests lower down the stack. Only automate against the UI where the scenario needs UI interaction.
  8. Not having all the cards – poorly-defined inputs and outcomes. The purpose and value of scenarios is unclear.
  9. Scenario overload – the more information you have, the less information you actually communicate.
  10. Cucumber test scripts – writing step-by-step test scripts in Gherkin
  11. Tech-speak scenarios – scenarios are essentially regression tests, rather than examples relating to business goals.
  12. Incommunicado scenarios – you want to provide reports that are meaningful to the business.
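To illustrate the difference anti-patterns #6 and #10 describe, compare a step-by-step, UI-tied Gherkin script with the same behaviour expressed as intent (both are hypothetical examples of mine, not from John's talk):

```gherkin
# Anti-pattern: an imperative test script in Gherkin, tied to the UI –
# it breaks whenever field names, labels or page flow change
Scenario: Log in (imperative)
  Given I open the browser at "/login"
  When I type "jane" into the "username" field
  And I type "secret" into the "password" field
  And I click the "Sign in" button
  Then I should see the text "Welcome, Jane"

# Preferred: the same behaviour written at the level of intent –
# it describes what is achieved, not how it is achieved
Scenario: Successful login (declarative)
  Given Jane has a registered account
  When she logs in with valid credentials
  Then she should see her dashboard
```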

John’s BDD anti-patterns slides are shared here


Vegetables: CukeUp conference in Sydney


Next week (November 19th/20th), three Assurity colleagues and I will be going to the CukeUp conference in Sydney. The conference is focused on the software development approach of Behaviour-Driven Development (BDD), in particular using the tool Cucumber.

Cucumber is used to capture definitions of application behaviour from a business perspective. These definitions can then be automated to check the software behaves as expected. This enables business and development teams to have a shared understanding of application behaviour, without the implementation having to be specified.

While BDD is aimed at collaborating on business scenarios before and during development, all four of us attending are currently (or have recently been) implementing Cucumber as an automated regression test framework for our clients. While we’re not yet gaining the benefits of Cucumber as a BDD tool from the start of feature development, we have been benefiting from the business-friendly language and easy-to-read reporting, which make the purpose of the tests and the impact of failures clear.

We have taken different approaches to defining our Cucumber scenarios/tests – at Stuff we’ve focused on developing a limited set of User Journeys, with each scenario following a workflow through the application (mostly via the UI) and covering a number of functions. For us these scenarios provide the business-confidence element in our build pipeline, complementing the bulk of our automation (unit and integration tests), which provides build and code confidence and is targeted below the UI.

The others have worked with their scenarios more at a function or feature level. Neither way is more right; both suit the specific contexts we’re working in. I’ll be going into our experience at Stuff implementing a Cucumber acceptance test framework in more detail soon.

Back to the conference … there’s a good mix of short talks and workshops (I like to “do”, so the workshops are a definite focus for me!), and of technical and non-technical sessions. It’ll also be interesting to talk with a wider community of people using Cucumber and developing and testing with BDD, to learn more about how these are used outside the narrow context of functional regression checking. I’ll share anything I learn from the conference here!



Musical Accompaniment:

Beach Boys – Vege-Tables

Jesus & Mary Chain – Vegetable Man (Pink Floyd cover)