A really interesting case study from Eric Brickarp on changing to an exploratory approach. Some great lessons to learn.
Here are my notes.
I’m working in a team that’s developing a mobile application and back-end, and that does not have any testers. We didn’t plan it that way, but temporarily that’s just the way it is. While I can add my testing skills and experience to the team, that’s not the primary reason I’m there, so I’ve been looking at ways in which the whole team can contribute to a quality product.
It’s been interesting. At first, when our tester left, nothing happened. Nothing at all: tickets moved over to the ‘Ready for Testing’ column and there they stayed. Immobile, ignored and unloved. It was as if all eyes were to the left, where the fun stuff for developers was: understanding designs and coding. Although there are unit tests and some automated integration-level tests for our code, it clearly wasn’t enough.
I’ve used session-based testing in the past when I managed test teams, and have found it to be a very useful way of breaking exploratory testing work into understandable pieces and distributing that work out to the team. James and Jon Bach explain the approach far better than I can. The Ministry of Testing also has an excellent resource on SBTM.
Having read some articles from people who had broadened their approach, I thought I’d take the session-based testing concept and roll it out to the whole team as a bug bash.
The key point for me was that any sort of session had to be fun. Initially I feared that not all of the team would see the full value of such a session, since it wasn’t development work per se, so keeping the approach lightweight was key. The ideas for our first bug bash had formed.
Getting away from the everyday work was important, so the bug bash took place in a separate room: close enough to our team area that it was quick to pop out and pick up forgotten devices, but away from the interruptions that phones and the like provide. I made sure we had enough phones, tablets, post-it notes, pens and cake to keep us going. I also brought in some popular testing books and the Test Heuristics Cheat Sheet from Elisabeth Hendrickson, in case people needed some inspiration.
We use JIRA as our project management tool, and since we were testing a whole Epic, it made sense to break the testing down on a story-by-story basis. Working with our lead developer, we took each story in turn and used it to form an exploratory testing charter stored as a Google doc. You can get an example version here. Using charters enabled us to define the boundaries of the testing, ensuring there was not too much overlap. It also meant that people who were not so familiar with a story were able to quickly learn about it, and the areas to test, without needing to spend time reading stories and Epics in JIRA.
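To give a flavour, here is a rough sketch of the kind of charter doc we produced. The story key, feature and headings below are invented for illustration; they are not taken from our real charters or the linked example:

```
Charter: MYAPP-123 – Password reset from the login screen

Mission:      Explore the password reset flow, focusing on error
              handling and what the user sees at each step.
In scope:     Login screen, reset email, deep link back into the app.
Out of scope: Back-end email delivery (covered by another charter).
Test ideas:   Invalid addresses, expired reset links, airplane mode
              mid-flow, switching apps while waiting for the email.
Notes/issues: (recorded in the shared issues doc during the session)
```

The “out of scope” line is what keeps overlap down, and the “test ideas” line is what lets someone unfamiliar with the story get going quickly.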
So, the charters were ready, the room was ready and the people were ready. Time to start.
We decided on two 45-minute sessions with a tea break in between. The whole team attended: PO, designers, UX, developers and myself. Everyone was a tester for the afternoon. We agreed who would pick up which charters and made a start.
I made a Google doc in which people could record potential issues, and kept it projected on the wall throughout, so that we could all see the progress we were making and the issues being raised. I encouraged the team to add screenshots as well as text, and this really helped to make things clear and to prevent duplicate issues making their way into the list. Discussion was also key to the success of the session; having everyone in the room at the same time meant that potential design issues could be shown to the designers straight away for review.
The lead developer for the feature was also present, not testing but helping to explain architectural and development decisions, and also to fix some bugs on-the-fly.
The session went well. We got all the charters finished and the session concluded with a quick retrospective.
Having structure was critical to the bug bash, given that it was a new activity for the team and they were not experienced with exploratory testing. It also meant that we got the most from the session, and that people were not all testing the same thing.
By including examples, and ideas on areas to test, people had a guide to start them off. Given that they were not experienced testers, this really helped.
This meant that it was clear what was to be tested and there was a clear link back to the project management tool people were used to.
This clearly shows the value of group pairing for me.
I had hoped we would get some data on setup time and test execution time, so I had added that to the charters, but it confused most people.
Although it was clear to each person which charter they were working on, we didn’t keep a visible record of who was working on what. Next time I’ll write this down and pin it up on the wall.
For each 45-minute session, one person would pick up and test against a charter. Swapping round could add value by bringing a new set of eyes onto the area.
This was great to see since it showed that the team saw the value in the approach.
This will be our next step for the Bug Bash sessions. Pairing people up could help to add more value by giving multiple viewpoints.
Some charters were bigger than others. People naturally started to go off-charter once their charter was done and this gave multiple viewpoints across the whole feature.
By being one team, with one goal, isolated from our day-to-day activities, we were able to effectively test and find many issues in the feature, without the usual distractions that break concentration and flow. Having lots of cake also helped 🙂
We will definitely do the Bug Bash again, and I see it as becoming a really important part of our delivery processes, even when we get a full time tester back on the team. It really helped everyone to understand the feature, to explore it and to test together, giving multiple viewpoints on potential issues.
With some guidance the whole team made a great contribution. I’d encourage everyone to try a similar approach.
I had the pleasure of attending the London Tester Gathering Workshops last week, organised by Tony Bruce and the team at Skillsmatter. It was a good couple of days, and a good break from the presentation-led conferences that I have mostly attended in the past.
As an offshoot from the London Tester Gatherings, the purpose of the workshops was to enable testers to get more hands-on practice in a group setting, with support from some great testers and presenters. For me it was a good opportunity to get back to being a bit more hands-on, and to improve my knowledge of security testing in particular.
If you were wondering what the venue or some of the attendees looked like 🙂 then take a look at Tony’s blog. He took the pictures, I spent the time learning and talking testing.
There were a couple of workshops that really looked interesting. Black Ops Testing, run by Tony, James Lyndsay, Steve Green and Alan Richardson, and Security Testing for Mobile Apps, run by Bill Matthews.
Black Ops Testing focused on scouting, intrusion and extraction. Or, as the intro said, if you don’t like military metaphors: thinking, exploration, diagnosis. It centred on exploratory techniques and a whole lot more. Using a variety of techniques on a test server meant that we were able to quickly put into practice what we were learning. Sadly I have lost the mindmap I wrote, so you’ll have to take my word for it and wait for the blog post from Dan Ashby.
The Black Ops Testing workshop continued on in the afternoon but sadly clashed with Bill Matthews’ Security Testing for Mobile Applications workshop. Given my focus on mobile testing, both professionally and otherwise, this one couldn’t be missed. Bill built the session around the Mobisec VM and gave us all a large number of hints and tips on security testing for mobile applications.
We focused on testing Android applications, learning basic tools and techniques alongside some application security concepts. It was very useful to be able to set up the Mobisec VM in particular, and then use it to test an application with known vulnerabilities, Security Compass’s ExploitMe (they also have a set of labs you can follow on their site). Using a VM meant we got all the tools we needed in one package, and Bill was on hand to explain, answer questions and make sure we were heading in the right direction. It was a good session with lots to take away and practise.
If you have an interest in mobile security then I would definitely recommend that you take a look at the Mobisec VM, and then head over to the Security Compass site. They also have an iPhone version, together with labs you can go through to help learn the main concepts.
The day concluded with the London Tester Gathering, which is always a good opportunity to meet old friends and new ones over a beer or two.
Day 2 was all about security testing again. Firstly Bug Hunting for Fun and Profit with Martin Hall, then The Evil Testers Guide to Http Proxies with Alan Richardson.
Bug Hunting for Fun and Profit was all about the tools and techniques that would enable testers to find security exploits in popular websites and applications, in order to make some money from bug bounty programs. Martin clearly knew his stuff – he gave us a lot of examples, a whole bunch of tools, and a lot of supporting information on which sites run bounty programs, the best way to approach them, and how to make some spare cash.
I mindmapped my ideas from the workshop although, like Bill Matthews’ workshop the day before, this was just the start of things. There’s a lot of practice to do, both with the tools and the techniques, before going onto any live sites. Fortunately there are a number of sites that one can practise on, and Martin gave us some great tools to use.
The afternoon was spent with Alan Richardson, talking about The Evil Testers Guide to Http Proxies. Having spent both Bill’s and Martin’s sessions using proxies, it was great to have Alan give his ideas and helpful advice. The session was organised around testing the Gruyere web application, a vulnerable app designed for practising web security testing. Alan gave us a lot of documentation and support, far more than I can go through in one blog post.
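For anyone who hasn’t used one, the core idea behind an intercepting proxy can be sketched in a few lines of Python using only the standard library: a server sits between the client and the application, records every request, and can tamper with the response on the way back. This is purely a toy illustration (the local “upstream” server and its payload are invented), not how the real tools we used are built:

```python
import http.server
import threading
import urllib.request

captured = []  # requests seen by the proxy, kept for later review

class Upstream(http.server.BaseHTTPRequestHandler):
    """Stand-in for the real web application under test."""
    def do_GET(self):
        body = b"hello from upstream"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence request logging
        pass

class Proxy(http.server.BaseHTTPRequestHandler):
    """Forwards each request upstream, recording and tampering en route."""
    def do_GET(self):
        captured.append(self.path)  # record the request
        with urllib.request.urlopen(
                f"http://127.0.0.1:{UPSTREAM_PORT}{self.path}") as r:
            body = r.read().replace(b"hello", b"HELLO")  # tamper
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

# Start upstream and proxy on ephemeral local ports, each in a thread.
upstream = http.server.HTTPServer(("127.0.0.1", 0), Upstream)
UPSTREAM_PORT = upstream.server_address[1]
proxy = http.server.HTTPServer(("127.0.0.1", 0), Proxy)
PROXY_PORT = proxy.server_address[1]
threading.Thread(target=upstream.serve_forever, daemon=True).start()
threading.Thread(target=proxy.serve_forever, daemon=True).start()

# The "client" talks to the proxy, not the application.
with urllib.request.urlopen(f"http://127.0.0.1:{PROXY_PORT}/login") as r:
    tampered = r.read()
print(tampered)   # the response, modified in transit by the proxy
print(captured)   # the requests the proxy observed
```

Real proxies add HTTPS interception, breakpoints and request replay on top of this, but the record-and-modify loop is the essence of what makes them so useful for testing.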
The London Tester Gathering Workshops were a great couple of days. I learnt a great deal, and I now have plenty of opportunities to practise further. The presenters were all very knowledgeable, and happy to share that knowledge along with useful tools, slides and experience. I met a lot of good testers who were keen to learn and improve their skills. It was great to meet some old friends, but equally good to see so many testers in the workshops whom I hadn’t met before. Sometimes the testing community can seem a little cliquey, and this workshop certainly was not.
Thanks to Tony and all the other organisers and presenters. If you didn’t go to the workshop this year then make sure you check it out next year. It’s well worth it.
I’ve recently been reading through my ISEB Practitioner notes, which I got when attending a course organised by Grove Consultants a few years back, as I mentioned in my previous post. It’s got me thinking about test cases, and in particular the four criteria of a good test case. Having attended both Rapid Software Testing and Rapid Test Management recently, and having rolled out an exploratory and session-based test strategy in my teams, I’ve been questioning again the validity of test cases and the need for them.
So, to a good test case. Reading my notes, a good test case is, in no particular order, apparently:
* Exemplary
* Evolvable
* Economic
* Effective
Straight from the ISTQB/ISEB, of course, but not without merit. Are these still relevant?
Good test cases can test more than one condition at the same time. This is one good reason for taking the time to design test cases in the first place. Just writing test cases because “it’s the done thing” or “because the policy says so” is a waste of time, but designing them so that the testing is carried out in the most efficient way requires exemplary test cases, and can add value. It is also the case that test cases are shared, and it’s difficult, especially in large organisations, to ensure that all testers have the same basic level of ability. Having test cases can help.
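As a small illustration of one test case covering several conditions at once, here’s a sketch in Python. The `validate_username` function and its rules are invented purely for the example:

```python
# An invented username validator, used only to illustrate the point.
def validate_username(name):
    if not (3 <= len(name) <= 12):
        return "length must be 3-12 characters"
    if not name.isalnum():
        return "letters and digits only"
    return "ok"

# One designed test case exercising several conditions together:
# both length boundaries, the character rule, and the happy path.
def test_username_rules():
    assert validate_username("ab") == "length must be 3-12 characters"
    assert validate_username("a" * 13) == "length must be 3-12 characters"
    assert validate_username("bob!") == "letters and digits only"
    assert validate_username("bob42") == "ok"

test_username_rules()
print("all conditions checked")
```

A single well-designed case like this checks the boundaries and the rules in one pass, rather than spreading them across many near-identical scripts.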
From a contextual point of view, perhaps a good test case is not written down at all but exists merely in the tester’s head, driving the testing into particular areas that the tester feels are worthy of time and effort. Cases where the software is the specification are becoming increasingly common: insufficient or non-existent requirements documentation requires the tester to apply their previous knowledge of the system under test in order to test it effectively. Clearly in this case, if documented test cases are required then they will need to evolve. As Rapid Software Testing puts it: “How do you invent the right tests at the right time – evolve them with an exploratory strategy”.
Time is money, and in testing we often have little time and sometimes little money. Using that time and money in the most efficient way means we minimise the economic impact. Of course, sometimes the best way to get maximum value is to take a purely exploratory approach and spend more of the time and money with the software in hand. That choice is key to a good test strategy, and highly dependent on the industry one is testing within.
Clearly a good test case should be effective. We are not in the business of wasting time, particularly when time is precious, as it often is in testing projects.
I’d argue that the definitions given in ISEB/ISTQB are still relevant and can be a good guide as to what is required of a test case. In a lot of industries test cases are still very much required, particularly where there is strict regulation, such as in areas of financial and even mobile software. The ability to write a good test case is a skill which should not be forgotten.
I’m excited to be able to announce that I’ll be speaking, and sitting on a panel, at the Next Generation Testing Conference which is on Wednesday 23rd May in London.
The conference is in its seventh year and is trying out a new format this year, with more emphasis on panels and discussions, together with case studies. This should hopefully mean that there’s some great audience participation and discussion on software testing.
I’ll be on Panel 1: Testing Today – What are the main challenges? It’ll be moderated by Dr Richard Sykes, and there’ll be a group of us representing various areas of software testing, including Tony Bruce of London Tester Gathering fame. We’ll be debating various topics including:
I’ll then be presenting a case study: Mobile Testing – That’s Just a Smaller Screen, Right? This will go into the background of mobile device testing, how it differs from the desktop world, and give some pointers towards areas to consider when formulating a mobile device strategy.
Hope to see you there. You can find out more about the event at http://next-generation-testing.com/
Last week I had the pleasure of attending Rapid Software Testing training, organised by The Ministry of Testing. Rapid Software Testing is an approach popularised by James Bach and Michael Bolton, and hopefully is not something new to you, but in case it is, I’d recommend looking at James Bach’s website. He explains it much better than I can 🙂
The course was in Cambridge in the UK, and after a quick and easy train ride it was easy to find the hotel, dump the bags, and head out to the Software Testing Club meetup. This was a good start to the course: an initial networking event which James also attended, along with a lot of local testers who were not on the course. We had some great discussions and I met a lot of new people; you can see some photos from the event, posted up by Rosie, the organiser.
Then to the first day of the course itself. From the moment the course started it was apparent that this was not your typical technical course. James’ style is well-known; just search YouTube if you want to see him in action, and he carried this into the course itself. He certainly knows his stuff, presents in typically provocative style, and is capable of causing many eureka moments. It’s very enjoyable, but initially tough, stuff.
In the first day we focused mostly on what Rapid Software Testing is, and the overall philosophy behind the techniques. Rapid Software Testing is most useful in encouraging testers to think for themselves and defend their work from a position of technical authority, which is especially useful in periods of uncertainty. Plenty of examples were given, and James was able to draw upon his many years of experience in testing both hardware and software. The time went by quickly and there was plenty of audience participation. James’ style is very much to put audience members on the spot and ask some very difficult and blunt questions, in order to replicate the pressures that testers can feel as part of project teams. To a few this comes naturally, but for most of the audience it was a long way out of the comfort zone. We tried to help out whoever had been picked for a particular challenge, so that the class as a whole could benefit.
The day concluded with an exercise on testing some everyday objects. Sounds simple, right? Well, no. In case you go on the course yourself I won’t give too much away, but suffice to say that there is much more to testing something which appears simple than one at first thinks. It’s these sorts of exercises that open the mind and help learning.
After a good dinner with some new software testing friends, and a decent night’s sleep, it was time for day 2. Here we went into more detail on Rapid Software Testing and the relevant testing models. Again the examples given were general, intended to make you think like a tester irrespective of your background, and plenty of pressure was applied to those whom James selected from the audience. We looked at the differences between scripted and unscripted testing and exploded some myths about both. We also talked a lot about oracles and why they are essential in testing; as an example, I was surprised to find that a person can be considered an oracle.
We also discussed heuristics a lot. Rapid Software Testing has many heuristics, and the fact that James can remember and explain them all straight from his head is somewhat impressive. As with a lot of the techniques and information, a fair amount of common-sense thinking was clearly applied when inventing the heuristics, but it was good to get names put to techniques I was already using, for consistency if nothing else. There is a danger of quoting too many heuristics, of course, especially when dealing with others within project teams and management. James’ view seems to be that by bombarding those outside of testing with information and explanations, using the relevant heuristics, testers gain legitimacy. I do not agree with his approach to the extent that he presents it; clearly testers need to be able to explain themselves, but there is a danger of losing credibility if too many heuristics are invented and explained which merely represent day-to-day work. Take a look at James’ slides and see if you agree.
By the end of the day we were questioning practically everything about testing and about the way we were working. There is a danger with this course that one starts to question too much, but I think one needs to start small and work up. That’s certainly what I intend to do.
The final day of the course started bright and early with more of the same. We focused on exploratory testing again, in more detail, and talked a lot about documentation, metrics and information. The idea of focusing on a particular testing task using some heuristics, but knowing when it is not working and de-focusing at that point, was a great lesson for me. We also went into more detail on exploratory and session-based techniques, something I wish we had spent a bit more time on in the previous days.
The main exercise of the day was based around finding the pattern in a system based upon dice. I won’t go into too many details (it’s explained pretty well at Better Testing), as I do not want to give away a potential solution. But suffice to say it was a great opportunity to put into practice some of the techniques we had learnt. Our group was not the quickest, but neither were we the slowest, and it was certainly a good challenge.
The day then concluded with a wrap-up and overview of what we had learnt. Then some brief goodbyes and swapping of LinkedIn invites, and home to try to make sense of a busy three days, and of how what I had learnt could be applied to myself and my teams.
If you get the chance to go on Rapid Software Testing then go. Don’t think too hard about it: the course is very worthwhile and you will get a lot out of it. It is not easy; you will most likely feel uncomfortable at times with the training approach, and some of the content may well seem obvious on first pass. But once you think more, and you start to question your own approach, with the techniques, tools, and even just the words to back up what you already know, this course should make you a better tester. It would have been good to have seen a little more on session-based techniques in detail, and more about the tools that can be used, but I understand James does a separate course on this.
Thanks also go to Rosie Sherry, the course organiser. This was the first course that The Ministry of Testing has organised, and if this one is anything to go by then the Ministry of Testing has a bright future. The venue and organisation were great, there was a really friendly, small-company feel about things, and it was very easy to meet new people and learn together. Definitely three days well spent.
Tomorrow I’m leaving the safety of the South to journey far North* for something that I’ve been looking forward to for a long time. I’ve been fortunate enough to be able to sign-up for James Bach’s Rapid Software Testing course which has been arranged by The Ministry of Testing in Cambridge.
To say I’ve been looking forward to this for a while would be an understatement. Unfortunately, due to budget constraints, it’s not been possible for me to get on courses like this in the past few years, but ‘fortunately’, now that things are closing, there’s a bit more money available for training, and this will be money very well spent.
First stop is the Software Testing Club meetup tomorrow in Cambridge, then on Wednesday the course starts. I don’t know what to expect, but if it’s three days of hard but rewarding learning then I will be very happy. Having already taken a look at the course outline and slides, I’m sure it will be.
I’ll try and blog daily about my experiences, assuming I have the time and brain power left to do so 🙂
* (Non-UK readers: we have a big North-South joke thing going on in the UK. If a place is north of Watford, which is a bit north of London, then us native southerners joke that we’re out of the safety of the south and that it’s grim up north 🙂 Even though it isn’t, and Cambridge isn’t really in the north anyway.)
No, not the short, funny-coloured alien guy…
Yesterday I attended an exploratory testing session for the latest change that my team is testing, on new feature phones. So we’re talking mass-market devices, ones where a bug let out into the field can wipe out your profit and destroy your reputation. So we like to get our QA right, you could say.
Overall the session went well. We found some bugs, even though the feature under test is pretty close to release maturity now. There was a great mix of developers, testers and some of the release ops team (aka the CI guys). We like to use these sessions to bring the team together. Using agile SDLCs helps here anyway, but having everyone testing together and helping each other is great for team spirit and togetherness. It was good fun.
So here’s what went well and what to try differently next time:
* Make sure you do not limit your sessions to just the testers in your company. Get everyone together, it’s fun.
* The leader of the session is important to keep things flowing. Our test lead brought ready flashed phones. And food. That helped.
* Make sure you have a structure to the session. We used charters.
And some areas to learn from and change:
* Getting results and statistics from the session wasn’t easy. Next time we will try session-based test management (as I’ve used in other teams).
* Get some commitment from participants and make sure they come on time. People who drift in and out break the focus of those testing.
* Make sure there are chargers and SIMs with relevant features available.
Overall it was most enjoyable. I can’t wait for the next one.
* And in the true spirit of mobile, I’ve written this whole post on a Nokia E7 using cutepress. Yep, I’m still a Nokia user, and still mobile obsessed 🙂