West London Lean Coffee – 15th Dec

We had another great West London Lean Coffee session on 15th December with some excellent discussions.

Here’s what we talked about.

Things We Discussed

Online Boards Vs Physical Boards

We talked about whether it was better for a team to use an online board or a physical one. We felt that a physical board could be better for visibility, although the use of large touch-screen boards means that the line between physical and virtual can now be blurred. The act of physically moving cards on a board has an important effect on people and shouldn’t be discounted; on the other hand, there is more chance of losing cards from a physical board.

We also talked about the various useful JIRA plugins that allow you to photograph your physical board and update the JIRA project automatically. Here’s an example.

Top Three Benefits of Agile For the Customer

We talked about earlier demonstration of value, the adaptive nature of agile in the face of change, and how that can help the customer influence a product. There’s also a greater chance of good quality when using agile methodologies.

Challenges Implementing OKRs

OKRs are Objectives and Key Results. We talked about how roadmaps could be themed to link results and deliverables to metrics and to customer impact.

The Best Test For a Service Company

We talked about how validating a service company really shouldn’t be seen as different from validating a product company, particularly when one looks at how to identify the customer base and test assertions and business strategies against it.

Things We Didn’t Get To

  • Top 3 Benefits Of Agile For People In The Business
  • Lean Today – Is The Waste Really Eliminated?
  • Estimation Costs – Waterfall vs Agile

The next Lean Coffee will be on January 26th. Hope to see you there.

Automated Functional Testing – A Test Activity?

If you are a functional test automation expert then times are good. There are big bucks to be made in the contracting game, and companies are desperate for candidates to ‘automate everything’ and reach that oddly perceived test automation nirvana which those who are either misinformed or have hidden agendas see fit to promote.

This has made me think, primarily about how we got to this state. Is it because, as the testing community, we wanted to own test automation? Is it because those outside the test community see test automation as less important than the production code it tests? Is it that we simply built up an expertise and then protected it for the money?

Some might say that what has actually happened is that second-rate developers now have a great way to stay in the development game. There is a danger, in the apparent supply-side crisis the industry finds itself in, that companies merely employ anyone who says they know something about test automation, without doing the due diligence that one would do for a development position. That would be a mistake.

In my mind there is a solution to all these problems, and it comes from treating test automation just like production code. That means primarily using developers to write it. Sure, you may choose to have testers involved as well, where they have the skills and expertise, but let’s not try to force skills on people who don’t want them, and let’s not accept second-rate people just because they can ‘do some test automation’. One advantage of using developers is that test automation becomes a team thing, and you are less likely to spend time playing catch-up when development slips. One downside: it’s going to look like the team has slowed down. Believe me, it hasn’t. It’s just got more effective, and is playing to the right skill sets.

Don’t believe me? 🙂 Here are a couple more examples, from Rob Lambert and Amy Phillips, which show where continuous or more frequent delivery has been successfully rolled out at New Voice Media and Songkick. The common thread – in both cases the test automation is a development activity.

Fundamentals of a Mobile Testing Lab

Although there can be no doubt that testing mobile applications and websites is a major growth area in the industry at the moment, mobile testing introduces a number of challenges that are unique to the discipline. In particular, there is the need to ensure application functionality and compatibility across a wide range of devices and in a wide range of different situations. To do this efficiently and effectively, it is essential that companies maintain a suitable mobile testing lab.

What Is A Mobile Test Lab?

A mobile test lab is a broad term for the collection of materials and articles that a tester or test group will require in order to effectively test software that is intended to be accessed via a mobile device. This may include a number of different items required to support the testing itself, as well as items required to replicate certain specific test conditions. Many test conditions, especially those for situations like low signal strength and low battery levels, are unique to mobile devices and are unlikely to have been considered if a tester’s experience is in another testing domain.

What Should You Include In One?

The contents of a mobile test lab will of course vary depending on the applications and devices under test. A test lab should contain a suitable number of desktop machines with internet access via Wi-Fi and should be situated in an area where there is access to suitable cellular signals (2G/3G/4G). Situating a mobile lab close to the test team is also a sensible choice.

A wide selection of mobile devices

Probably the most difficult part of a mobile testing strategy is ensuring that there is sufficient device coverage. This is a particular problem when testing the Android ecosystem, where there are a large number of different OS versions and a number of different vendors producing many different sorts of devices. However, it is also an important consideration when testing on other OSes. The primary focus should be on the devices that the application is targeted towards, as well as the version of the OS with the largest deployment in the field. For example, although Jelly Bean is the most recent version of Android, 64% of devices are still running the older Gingerbread version. Ignoring the devices in the field would be a critical mistake.

It is important to ensure that the different screen sizes and mechanical styles are covered, in particular the differences between portrait and landscape screens, and between hardware QWERTY keypads and touch screens. It is also important to include devices from a variety of different carriers. Mobile device manufacturers typically adapt their software to the needs of different carriers, particularly in the US, and therefore devices may behave differently even if the overall OS version is the same. A suitable suite of mobile devices can represent a significant outlay for a company or individual, and an alternative is to use a cloud-based service, although these are not as easy or quick to use as having the device in the hand. A low-cost option is to build a device library from the devices owned by the testers in the team. Obviously this is not ideal, since some test cases may require user data to be removed or device software to be changed, but for some applications it can be a workable, low-cost solution. Whatever solution is chosen, a suitable indexing and booking system should be used to ensure that the usage and whereabouts of each device is tracked.
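
As a rough illustration of the booking side, here’s a minimal Python sketch of a device register (the device IDs, models and fields are purely hypothetical, not from any particular tool); in practice a shared spreadsheet or a page on the team wiki does the same job.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Device:
    """One entry in the lab's device register (fields are illustrative)."""
    device_id: str          # e.g. the asset tag stuck on the back of the phone
    model: str
    os_version: str
    borrowed_by: Optional[str] = None
    borrowed_at: Optional[datetime] = None

class DeviceRegister:
    def __init__(self):
        self._devices = {}

    def add(self, device: Device):
        self._devices[device.device_id] = device

    def check_out(self, device_id: str, tester: str):
        device = self._devices[device_id]
        if device.borrowed_by is not None:
            raise ValueError(f"{device_id} is already with {device.borrowed_by}")
        device.borrowed_by = tester
        device.borrowed_at = datetime.now()

    def check_in(self, device_id: str):
        device = self._devices[device_id]
        device.borrowed_by = None
        device.borrowed_at = None

    def available(self):
        return [d for d in self._devices.values() if d.borrowed_by is None]

# Example usage with a made-up device
register = DeviceRegister()
register.add(Device("LAB-001", "Nexus S", "2.3.6"))
register.check_out("LAB-001", "alice")
print([d.device_id for d in register.available()])
```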

A suitable selection of SIM cards

Not all SIM cards are created equal, and some have more functionality than others. SIM cards which support cellular data and particular network-supported functionality are likely to be required. It is also important to ensure that the test lab has SIM cards that are not only active and usable in the country in which the lab is situated, but that are also relevant to the particular region in which the application will launch.

SIM cards vary greatly in their access speeds. In particular, when testing contacts and address book applications, which may access the SIM card, it is important to ensure that the test lab contains known slower cards which can uncover timing-related bugs. If the test strategy also intends to make use of cloud-based services then it is possible to get access to a number of SIM cards in the cloud-based devices. However, be aware that it is difficult to reproduce timing-related issues on cloud-based services, primarily due to the slower speed of device access.

Memory cards

It is important to ensure that the mobile test lab contains a number of memory cards which can be used to generate test conditions such as a full card, a card removed during a read or write operation, or a card inserted during a specific operation.

Apparatus to simulate low signal strength areas of both the cellular and Wi-Fi signals

One commonly overlooked area of mobile testing is that of low signal strength. Applications are typically tested in the perfect environment of the office, with good cellular and Wi-Fi signals, and bugs are only uncovered after launch when the applications are used in real-world environments. Applications often fail when the signal strength is low, or in areas where there are frequent handovers between 2G and 3G cellular signals or Wi-Fi. Testing for low signal strength can be carried out in a variety of ways. It may well be that ‘dead spots’, areas with no suitable Wi-Fi or 2G/3G cellular signal, exist in the office, or these can be simulated using shielded boxes or even old microwave ovens. Simulating handover from 2G to 3G cellular signals is more difficult and may require that the tester leave the test lab or office in order to find a suitable signal area outside. Mobile operators often have specific low-signal and handover areas, and routes through them are driven whilst testing, specifically for this purpose. It is also possible to purchase mobile network simulators from companies such as Anite and Anritsu which are able to simulate these conditions, but they represent a significant financial outlay.

Apparatus to simulate low battery levels

Testing for battery life considerations is another area which is often overlooked. Poorly written applications can have a significant effect on the battery life of devices, and applications themselves can often exhibit unwanted behaviour when battery levels on devices are low. A mobile test lab should contain both the ability to simulate low battery levels, and the apparatus to monitor the effect that an application can have on battery life. Simulating low battery levels can be achieved either via dummy batteries connected to power supplies on which the voltage and current can be varied, or via a suitable supply of batteries with varying amounts of charge. Monitoring the effect an application has on battery life can be achieved by using the on-device diagnostic tools or applications that are available for most popular mobile device OSes.
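
On Android, some of this can also be scripted rather than done with physical batteries. Here’s a minimal Python sketch, assuming a device connected over adb and a reasonably recent Android build (the dumpsys battery options available vary between versions), which fakes a low battery level for the application under test and then restores normal reporting:

```python
import subprocess

def adb(*args):
    """Run an adb shell command against the connected device and return its output."""
    return subprocess.run(["adb", "shell", *args], check=True,
                          capture_output=True, text=True).stdout

# Pretend the device is unplugged and report a 5% battery level to applications.
adb("dumpsys", "battery", "unplug")
adb("dumpsys", "battery", "set", "level", "5")

# ... run the application under test and observe its low-battery behaviour ...

# Put battery reporting back to normal when the test is finished.
adb("dumpsys", "battery", "reset")
```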

A suitable selection of peripherals

This should include a selection of headsets, both wired and Bluetooth, plus a selection of USB and other connectivity cables which can be used to connect devices to desktop PCs to test upgrade/downgrade and back-up/restore, and to add test data such as address books, photos, videos and music files. There should also be a selection of chargers and enough power sockets to ensure that phones can remain on charge. This is especially important if the test automation solutions also reside within the test lab and mobile devices are required to remain on charge whilst testing is ongoing. If appropriate, a mobile test lab could also contain suitable car kits or other peripherals such as Bluetooth speakers and other audio devices which can connect to the devices under test.

Access to a server or other device to send SMS/ MMS messages

Where it is required to send SMS/MMS messages to the devices under test, it is important to ensure that the test lab supports this. Internet-based servers can be used, or, for smaller numbers of messages, sending can be carried out from other devices situated within the test lab.

Screen capture or screen recording facilities, provided by a video camera if the device OS does not allow screen capture

Capturing screen contents, particularly for inclusion in bug reports, is more difficult when testing mobile devices and applications. Some OSes support screen capture and some do not, and it can be particularly difficult to capture screen grabs when reproducing complicated, timing-dependent bugs, since it is not possible to stop the test in order to record the screen contents. For this reason it is advisable to equip the test lab with a small video camera which, together with a suitable white table top, can be used to video the screen and the steps required to reproduce the bugs.
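
Where the devices under test do allow it, screen capture can also be scripted. A minimal sketch for an Android device, assuming adb is available on the test PC (file names and paths here are just examples):

```python
import subprocess
from datetime import datetime

def capture_screen(prefix="bug"):
    """Capture the current screen of the connected Android device via adb
    and pull it to the local machine for attaching to a bug report."""
    name = f"{prefix}-{datetime.now():%Y%m%d-%H%M%S}.png"
    remote = f"/sdcard/{name}"
    subprocess.run(["adb", "shell", "screencap", "-p", remote], check=True)
    subprocess.run(["adb", "pull", remote, name], check=True)
    subprocess.run(["adb", "shell", "rm", remote], check=True)
    return name

# Example usage: grab the screen while reproducing a hypothetical login crash
print(capture_screen("login-crash"))
```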

Keeping it up-to-date

Once a mobile test lab is set up and being used, it is important to ensure that its contents are regularly reviewed and updated. The pace of change in mobile is very fast, with new devices and OS versions being brought out frequently. Ensuring that the mobile test lab has the latest devices is important, and regular review of its contents is essential to its continued usefulness.

Putting It All Together

A mobile test lab can represent a significant financial outlay for a company, and it is important that the money is spent wisely. Whilst it is possible to build an entire mobile test strategy around cloud-based services, these are often slow and are currently not suitable to completely replace devices ‘in the hand’. Ensuring that there is a sufficient selection of devices is critical to ensuring that a mobile test lab adds value to an organisation, as is a suitable selection of different SIM cards and peripherals. Being able to replicate external situations such as low signal strength, handover between different cellular and Wi-Fi signals, and low battery levels is also necessary if a mobile test strategy is to cover these requirements, which are not typically tested for in the desktop world.

Ensuring that a mobile test lab is suitably sized to support the testing team and the applications under test is also important. For simple applications it may be enough to use personal devices, together with some facility to allow charging and screen capture, and the ability to test low signal strengths. For more complicated applications, or where the application will be available on a wide number of OS versions or devices, a more detailed approach will be required. A good mobile test lab adds value to the test organisation, and the launch of a good quality mobile application is far less likely to succeed without one.

Facebook Overwriting Email Addresses – Breaking Rule #1

As you are probably aware, Facebook has let what looks to be a pretty serious bug out into the wild. First announced over the weekend, and confirmed yesterday, was the news that users who use Facebook as their primary storage for contact information such as email addresses had found that these addresses on their mobile devices had been overwritten with @facebook.com addresses. Without them knowing or accepting any such change.

What Happened?

The official line goes something like this:

Contact synchronization on devices is performed through an API. For most devices, we’ve verified that the API is working correctly and pulling the primary email address associated with the users’ Facebook account.

However, for people on certain devices, a bug meant that the device was pulling the last email address added to the account rather than the primary email address, resulting in @facebook.com addresses being pulled.

We are in the process of fixing this issue and it will be resolved soon. After that, those specific devices should pull the correct addresses.

Now let’s be clear. This is very serious. I’ve worked in the mobile industry for the last 12 years, and if there is one thing that is sacrosanct it is user data. You do not change, delete, update or generally mess around with anything that the user has stored on their device without giving them the opportunity to tell you to stop it first. Users have a much greater emotional attachment to their devices than they do to their desktops. They rely on them. Suddenly finding that you cannot contact someone is extremely annoying. Finding that it is because another company has changed data that is yours is far more serious.

So Why Did This Happen?

I’ve been musing on why this bug may have got released. After all, this seems a pretty obvious use case. Of course, Facebook is famous for a) not having testers as such and b) adopting a test-in-production methodology. Could either of these be to blame?

Would these problems have been found using manual test techniques? Facebook is a primary user of test automation, and I can’t help but feel that a bug this obvious would have been found by a few skilled testers adopting an exploratory test strategy.

I also wonder how much of a place testing in production has in the mobile world. In the desktop world, where users are always connected and backups are plentiful, rolling out updates and observing what happens is OK. Just roll back. It’s not so easy on mobile. Being on a mobile device means that you probably don’t have access to backups right away. You may be in a poor signal area or away from Wi-Fi. And so you are stuck. Stuck out of the office or away from your friends and unable to contact people. And if you are stuck then you will become far more frustrated.

It seems to me that there is a typical gap in test strategy at work here. A bug that only manifested itself on mobile, uncovered as a result of changes made at server/ desktop level. When companies start to move onto mobile then this is pretty common. Failing to adopt a combined test strategy, treating mobile and desktop equally, or pushing more testing towards mobile can leave dangerous gaps.

It’ll be interesting to see how Facebook responds to this situation. I don’t know Facebook’s test strategies in detail but it seems to me that something needs changing to adapt to the world of mobile.

A Tester’s Hierarchy of Needs

As part of my role I manage testers and also those involved in delivery operations. This means thinking not only about testing and test techniques, but also about people management, and the tools and techniques that a good manager uses in order to have a happy and productive team. These two areas should not, of course, be treated in isolation since, in order to have a successful test team, it’s the manager’s job to ensure that the two areas fit well together. If this can be done as seamlessly as possible, maybe without the team members even being aware of it, then my experience tells me that you can hit that sweet spot where the team is technically excellent as well as being the sort of team that testers want to work in, and are proud to work in.

I’ve studied a lot of theories of management in the past, but the one that I keep coming back to again and again is Maslow’s Hierarchy of Needs. At its simplest, it seeks to explain the needs of humans in their most basic form, expressed as a hierarchy in order of those needs.

So, for example, the theory puts forward the hypothesis that it is most important (and therefore at the bottom of the pyramid) that humans experience a need for food, water, etc. Safety needs come next, followed by a need for belonging, and so on.

As a manager, keeping this simple theory in mind can really help. I’ve found numerous occasions in the past where it has helped me understand team members’ actions, and helped ensure the team is working effectively together.

But it has also got me thinking about how one might apply Maslow’s Hierarchy of Needs to testing. More specifically, how a tester in a team or project might experience and visualise those needs, as reflected by their status within the team. This led me to produce this:

Tester’s Hierarchy of Needs

Test Mastery is a place I see the respected, happy tester sitting. Probably people like you, who interact and are involved with the software testing community, who self-learn and who are able to articulate the need for testing effectively.

This is just the start of the theory. I’m sure, as a community, we can add more to this. What would you add to the different levels and why? Leave a comment below.

A Good Test Case Is?

I’ve recently been reading through my ISEB Practitioner notes, which I got when attending a course organised by Grove Consultants a few years back, as I mentioned in my previous post. It’s got me thinking about test cases, and in particular the four criteria of a good test case. Having attended both Rapid Software Testing and Rapid Test Management recently, and having rolled out an Exploratory and Session Based Test Strategy in my teams, I’ve again been questioning the validity of test cases and the need for them.

So, to a good test case. Reading my notes, a good test case is, in no particular order, apparently:

1. Exemplary.
2. Evolvable.
3. Economic.
4. Effective.

Straight from the ISTQB/ ISEB of course. But not without merit. Are these still relevant?

Exemplary

Good test cases can test more than one condition at the same time. This is one good reason for taking the time to design test cases in the first place. Just writing test cases because “it’s the done thing” or “because the policy says so” is a waste of time, but designing them so that the testing can be carried out in the most efficient way requires exemplary test cases and can add value. It is also the case that test cases are shared, and it’s difficult, especially in large organisations, to ensure that all testers have the same basic level of ability. Having test cases can help.

Evolvable

From a contextual point of view, perhaps a good test case is not written down at all but exists merely in the tester’s head, driving the testing into particular areas that the tester feels are worthy of time and effort. Cases where the software is the specification are becoming increasingly common; insufficient or non-existent requirements documentation requires the tester to apply their previous knowledge of the system under test in order to test it effectively. Clearly, if documented test cases are required in this situation then they will need to evolve. As Rapid Software Testing puts it, “How do you invent the right tests at the right time – evolve them with an exploratory strategy”.

Economic

Time is money, and often in testing we have little time and sometimes little money. So using that time and that money in the most efficient way means we minimise the economic impact. Of course, sometimes the best way to get maximum value is to take a purely exploratory approach and spend more of the time and money with the software in hand. That choice is key to a good test strategy and highly dependent on the industry area one is testing within.

Effective

Clearly a good test case should be effective. We are not in the business of wasting time, particularly when time is precious, as it often is in testing projects.

I’d argue that the definitions given in ISEB/ ISTQB are still relevant and can be a good guide as to what is required of a test case. In a lot of industries test cases are still very much required, particularly where there is strict regulation, such as in areas of financial software and even in mobile software. The ability to write a good test case is a skill which should not be forgotten.

Seeing the Wood For the Trees

I’ve recently been reading through my ISEB Practitioner notes, which I got when attending a course organised by Grove Consultants a few years back.

Don’t switch off yet. I know I mentioned ISEB. So before I go further, it’s worth stating my thoughts on the whole ISEB/ISTQB debate. I summed up my frustrations in a previous blog post: the fact that in the UK having ISTQB certification is practically the only way to get past the recruiters’ gate. But I also feel that there is some merit to the courses if they are taught properly and followed up with a context-driven approach such as Rapid Software Testing.

Reading back through my notes from Grove I can now see that this is what they were trying to work towards. At the time I had no idea how important the context driven school of testing was, nor the work of James Bach, Michael Bolton, Cem Kaner and others. But looking in the notes, the names are there. The techniques, albeit in nowhere near the detail that James or Michael teach in class of course, were hinted at and some approaches, particularly Exploratory Testing, are mentioned in some detail. Some slides are directly referenced from James and Michael’s work.

Unfortunately, due to the need to pass the exam, and with these areas being marked as ‘not exam’, I didn’t pay them the attention that they warranted, and so it took a few more years to discover how and why the context-driven approach can be so powerful. Which maybe shows the true problem with ISEB/ ISTQB certification after all.

More Experiences of Rapid Test Management

If you remember from the last post, I recently attended Rapid Test Management, taught by Michael Bolton in London. For a general overview and what happened in the first day then take a look at the previous post. As before, the following caveat applies:

Caveat – this is not a list of exactly what happens in a Rapid Test Management class, nor is it a list of exercises and course material. You won’t become good at Rapid Test Management by reading this post. Sometimes I forget things. If you really want to know about Rapid Test Management then you need to sign up for the class. It will be money well spent and this post will tell you a little about what might happen if you do take the class.

We started day two talking about test strategy and how one might incorporate Rapid Software Testing into a test strategy. The Heuristic Test Strategy Model was introduced and we studied it in some detail, following it up with an exercise to define a strategy for a smartphone. I was happy with the choice of product under test; given my background it was easy to bring my domain knowledge to bear, and so our group could come up with a large number of options to consider. Equally interesting was what the other groups had come up with, and it was fascinating to see how each group had approached the problem differently and arrived at different solutions. This goes to show that diversity in teams and experience is very important in testing.

Back on day 1 we had produced a list of areas that we, as a group, wanted the class to focus on, and the major areas that had not yet been covered were risk and test coverage. As usual, Michael had some great experiences to share and course material to cover risk and the details of risk-based testing. In fact, the amount of class material was another one of the great things about this class – you get a lot, far more than can be looked at or taught in the class itself. Together with some useful testing tools and some more exercises and demos, this gives you a great set of material to use afterwards. Continual learning is important, and gaining access to all the notes, examples and slides is just the start.

Day 2 was concluded with a look at test coverage and ways to visualise test coverage. Rapid Software Testing provides you with some great ways to do this, from the simple to the more complex. It got me thinking about how I might incorporate this into the Kanban project management processes that my own team use to manage our work.

So, should you sign up for the class? You betcha. Rapid Test Management is a class that you should attend. If you want it to, and you take the time to study all the information and material that you receive, then it will make you a better tester and a better test manager. I’m on the way; I’m just beginning to take on board all I’ve learnt and to read through the material we didn’t cover in class, but with everything I read I’m confident that it’s improving my skills.

Smartphone Sales Pass Feature Phone Sales in Japan for the First Time

Something interesting has just happened in the mobile market. You may have missed it or it may surprise you if you live in the US or Western Europe and are a middle class iPhone or Android owner.

What’s happened is that smartphone sales have passed feature phone sales in Japan for the first time. That may surprise you; you probably thought this happened years ago. After all, we’re all smartphone users now, right? Nope. And it didn’t: worldwide, 70% of devices sold are still feature phones (meaning cheaper devices running OSes like Nokia’s S40 or other proprietary OSes, typically with smaller screens and a lack of multi-tasking). Although the amount of money that manufacturers make on these devices is much smaller than on smartphones, they ship in their millions and billions. Nokia recently shipped its 1.5 billionth S40 device, for example.

You can read more in the MobiLens report, which surveyed over 400 Japanese customers and was compiled by market-watcher comScore for the three months to February 2012. A quote from the report:

“Smartphones surpassed feature phones as the most acquired device type in February 2012, signalling an important shift in Japan’s mobile market,” said Daizo Nishitani, vice president of comScore Japan KK. “The rise in smartphone adoption opens the door to tremendous opportunity for publishers and advertisers to expand their reach and increase engagement with key consumer segments through this channel. Japanese mobile phone users were already highly engaged with their devices, but with the added functionality and higher levels of mobile media consumption we should expect to see significant changes in behaviour among the Japanese mobile population in 2012.”

Why Is This a Big Deal?

Japan typically leads the smartphone market and this is therefore a good indicator that slowly but surely the tide is beginning to turn towards smartphones in mature markets.

For testers this means more opportunity – smartphones typically mean a more open OS and therefore a significantly greater number of complicated applications that require testing. Feature phones are typically tested primarily by the manufacturers themselves; the only third-party runtime normally available is the Java ME platform, and whilst there are a lot of applications written in Java (check out GetJar if you want some proof), there’s no evidence to suggest a large testing population at work ensuring that they work. Feature phones are also more likely to be lower powered, with smaller screens, ITU-T keypads and generally a lower spec, without hardware like GPS.

However, with this move away from feature phones come further testing challenges; as the market switches to smartphones it is inevitable that this will mean greater fragmentation of the OSes themselves as manufacturers attempt to cover more and more price segments with different products. This will mean more display sizes, more hardware configurations and more differentiation in mechanics. For testers this will mean increased complication, and mobile device testing strategies will need to evolve further than before to cover a wider range of devices under test.

Also, as the market evolves then so does the installed base on devices. This presents additional challenges and further fragmentation issues. Ignore the devices in the field at your peril.

Test Leadership vs. Test Management – Is the Balance Right?

As I’ve blogged about recently, I’ve been studying for the Level 5 Certificate in Management and Leadership from the Chartered Management Institute. I’m now three sessions into the four-session course, and it’s getting really interesting to see and think about how one might apply general management principles more specifically to the software testing area.

The most recent session was all about Management and Leadership. As part of this we did an exercise where we had to sort a number of different phrases into groups that apply to either ‘Management’ or ‘Leadership’. No sitting on the fence, no spending hours deciding which to put where, but a quick exercise to make you think. There were right and wrong answers (although primarily to seed further discussion).

The list, as I saw it, is below. What do you think? Do you agree?

Leadership:

  • Enables others
  • Inspires vision
  • Encourages head and heart
  • Acts authentically
  • Asks what and why
  • Has long range perspective
  • Focuses on doing the right thing
  • Inspires trust
  • Acts as an innovator
  • Challenges
  • Transforms
  • Committed to the cause
  • Gives purpose and meaning
  • Focus on people

Management:

  • Implements and maintains
  • Focus on systems and structures
  • Adopt short term view
  • Completes transactions
  • Asks how and when
  • Brings order and coordination
  • Focuses on doing things right
  • Accomplishes tasks through others
  • Focuses on performance
  • Provides stability
  • Controls
  • Imitates
  • Complies

After the exercise I got thinking about how I might apply this more generally to software testing. Specifically, to Test Management vs. Test Leadership, i.e. what separates those who run testing projects, probably as part of the design and delivery of a specific product, from those who lead groups and provide inspiration both inside those groups and to the wider testing community. Are the skills needed in those situations different?

We clearly need within our community those who can challenge and innovate. Right now this is coming mostly from the context-driven community, as I see it, where the vision and long range perspective are also clearly evident. These people are driving things along nicely in my opinion. As someone who has recently attended Rapid Software Testing with James Bach, I can say first-hand that he’s certainly encouraging head and heart. I don’t need to mention challenging, right?

The question then becomes: who is taking things further? Leaders need managers to take their vision and turn it into practice. They need people who can focus on the short-term view, provide control and ensure completion. They need people who implement. As a community, do we currently have enough managers listening to and implementing the vision that our leaders are sharing?