
Mobile Testing Is Different

Mobile Testing or Mobile QA has taught us that the traditional testing approaches and methodologies need tweaking or changing. What worked as a defined process for testing desktop, client-server or web applications didn’t exactly fit mobile.

If you are a tester just starting out on mobile, or you simply want a fresh perspective (even, dare I say it, a mobile Developer), then this is for you. Of course, if you are simply interested in the subject, feel free to read on as well.

12 Critical Mobile Testing Issues:

  1. Fragmentation: as a tester you may have to support not only Android and iOS but multiple devices and OS versions. Android especially is a market with hundreds of devices of varying specifications. Screen size, resolution, processors, memory and more all have an impact on device performance and behavior. Now factor in that Apple and Google both release new devices at least once a year, and that, depending on the countries in your market, there may be many Android OEM devices you want or need to support.
  2. Mobile Device Labs: if fragmentation is the problem, this is a solution worth investigating. Some testers have their own devices onsite, but for those who don't have the budget, or who want a more expansive solution, SaaS device labs let the tester connect to live devices through a web application. The tester can choose the devices and, with some solutions, run both manual and automated test scenarios.
  3. Compliance: mobile testing of apps requires the tester to be fully familiar with the rules of the App Store and Play Store (of course there are other stores beyond just Apple and Google). In addition, compliance has grown to include GDPR and disability / accessibility support.
  4. App Types: mobile allows us to create apps of different types and technology bases: native apps, mobile web apps, hybrid apps and, more recently, Progressive Web Apps (PWA). Each of these has its own testing challenges, scenarios and scope.
  5. In-app Purchase: IAP is a niche in e-commerce where the customer can make purchases from within the app. Examples include e-books, multimedia, game purchases (allowing the player to progress in the game faster) and real-world items. IAP has its own compliance standards for each store and can be more complex to test than run-of-the-mill e-commerce.
  6. Alpha / Beta testing: if you are testing an app intended for the App or Play Store, how do you get it into the hands of a small group of alpha or beta testers without publishing the app? Apple allows Developer Account holders to use TestFlight for this, and Google has Play Store Alpha / Beta testing. Once these builds are distributed to your selected testing group, you can easily publish to the stores on completion of your test review, or roll out a new bug-fix version as needed. For testing features like in-app purchase, the tester needs this option to facilitate test purchases.
  7. Crash Logs and Analytics: mobile testing requires a skilled tester to have a strong familiarity with logs on devices and with services that provide crash and user analytics. Crash logs can be accessed from test devices by connecting iOS devices to Xcode and Android devices to Android Studio. There are other methods, but these are often the fastest when you want to add the critical information to the bug you are reporting. Crash analytics are frequently a SaaS solution that shows the tester the frequency of bugs and which devices or OS version(s) a bug reproduces on. Each provides actionable intelligence at a different scale. Crashlytics is one popular example that can be added to your apps.
  8. App Distribution: there are cases where you may not want to publish an app to the App / Play Store or you want to distribute an app for internal or enterprise use. In these cases solutions like HockeyApp, now renamed and revamped as Visual Studio App Center (after being acquired by Microsoft) offer a way to do so. App Center combines several of the other solutions listed here and is cross-device and cross-platform.
  9. Automated Testing: testers will ultimately seek out a framework that enables them to test UI and function, both rapidly and repeatedly. Appium, Protractor and others allow these testing scripts to be run on real mobile devices (see the sketch after this list). Automated testing on mobile has its own challenges and ROI, and there are cases where it will not work.
  10. UI, UX and User tolerance: these three are interconnected facets of the same issue. Ultimately users have a low tolerance for poor UI or UX and are willing to remove an app that displeases them. Android and iOS each have their own design best practices and UI frameworks, e.g. Material Design on Android. Testers should be aware of the design standards, know what the acceptance criteria are, and be able to identify UI / UX bugs that can compromise the quality of the mobile app.
  11. Reverse compatibility: as mobile OS versions advance, certain features, SDKs or APIs become deprecated or out-of-date. Features and functionality that worked in previous versions may then stop working due to these changes. One example was the way iOS handled WebViews in native apps; after iOS 9 this changed (due to a major overhaul to remove serious bugs). Some developers and companies reduce this overhead by supporting only the OS versions after such a change; others only support the latest version of the OS.
  12. Non-functional testing: mobile apps live and die on their scalability, their performance and how well they function on low-spec mobile devices. This type of testing should play a critical part in mobile testing to prevent nasty surprises when your app goes to production.
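To make item 9 a little more concrete, here is a minimal sketch of the kind of script such a framework runs, assuming the Appium .NET (C#) client and its 4.x-era API; the capability values and element ids (login_button and friends) are hypothetical placeholders, not a recipe for any specific app.

```csharp
using System;
using OpenQA.Selenium.Appium;
using OpenQA.Selenium.Appium.Android;

class LoginSmokeTest
{
    static void Main()
    {
        // Capabilities tell the Appium server which device and app build to drive.
        var options = new AppiumOptions();
        options.AddAdditionalCapability("platformName", "Android");
        options.AddAdditionalCapability("deviceName", "Pixel_3a_API_30");  // hypothetical emulator name
        options.AddAdditionalCapability("app", @"C:\builds\MyApp.apk");    // hypothetical build path

        // Point at a local Appium server; a cloud device lab just swaps this URL.
        using var driver = new AndroidDriver<AndroidElement>(
            new Uri("http://127.0.0.1:4723/wd/hub"), options);

        // Locate UI elements and simulate real user behavior.
        driver.FindElementByAccessibilityId("username_field").SendKeys("qa_user");
        driver.FindElementByAccessibilityId("password_field").SendKeys("not-a-real-password");
        driver.FindElementByAccessibilityId("login_button").Click();

        // A crude check: the home screen title should now be visible.
        var title = driver.FindElementByAccessibilityId("home_title");
        Console.WriteLine(title.Displayed ? "PASS" : "FAIL");

        driver.Quit();
    }
}
```

The same script can be pointed at a cloud device lab, which is where items 1, 2 and 9 meet.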

Mobile testing - infographic

The Warptest POV

Mobile testing shares as much with testing on other platforms as it differs from it. The biggest issue is perhaps how rapidly the mobile landscape changes: processors and GPUs shrink, battery life grows, screens lose their bezels, handsets lose headphone jacks, and data / charging cables change their connectors. Meanwhile, wireless charging is becoming mainstream and of course camera resolutions keep increasing. Augmented, Virtual and Mixed Reality all impact the capabilities, functionality and spec of mobile devices.

Mobile testing has to evolve as rapidly as the devices we test on.

This post is a sampler or taste-test. The solutions listed are not exclusive; each exists in a competitive ecosystem which mobile testers need to be continually learning about. Some solutions are offered by Apple and Google, others by 3rd parties or Microsoft. Hopefully this is a starting point that whets your appetite for exploration. I’m always happy to hear from you about alternatives and learn something new.

Some of the topics I have written about here will get their own, more detailed posts or vlogs, so keep watching this space.

That’s Right Unity Is Not Dealing With This Critical Issue

Unity is one of the biggest, if not the biggest, 2D & 3D game engines on the market. The company supports every conceivable mainstream platform, from mobile through desktop and web to Virtual / Mixed Reality headsets.

Unity logo

This is the company that Apple & Google speak to when they want to roll out ARKit & ARCore. By the time WWDC or Google's annual conference rolls around, there is a beta version of Unity waiting in the wings with support for these changes.

Unity - ARCore
Unity - ARKit

If you have used a mainstream VR/MR app on Oculus Rift, HoloLens or HTC Vive then the likelihood is, that it was built in the Unity Editor.

Over the last year, a lot has changed for the company. Instead of saving big changes for major version roll-outs, each sub-version released into Beta is no longer just bug fixes and iterations. These Beta sub-versions contain major new features, and Unity isn’t shy about being transparent with its roadmap.

The big news is that Unity is making major inroads into the movie industry, teaming up with Neill Blomkamp to create the amazing animations for his Oats Studios movie shorts.

Many pundits believe that Unity is preparing itself for an IPO with these changes, several major new strategic alliances and changes to the company board.

What Does Unity Need To Fix?

Ultimately, Unity provides a software development platform in its Editor. Code is written in C# and, depending on the platform you are building your game or app for, is converted (e.g. to C++ for iOS via IL2CPP). The Editor's inability to give Developers a mechanism to select all the platforms they need and run one cross-platform build script is a pain, but it is not the major failing.

The Editor has a built-in facility for unit testing via NUnit, as part of the Unity Test Runner.

Unity - Test Runner
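For context, this is roughly what the Test Runner executes: plain NUnit tests over your game logic, run inside the Editor. DamageCalculator here is a hypothetical class standing in for your own code; the point is simply that unit-level testing is well covered.

```csharp
using NUnit.Framework;

// Hypothetical game logic under test; in a real project this lives in your own assembly.
public class DamageCalculator
{
    public int Apply(int health, int damage) => health - damage < 0 ? 0 : health - damage;
}

public class DamageCalculatorTests
{
    [Test]
    public void Apply_ReducesHealthByDamage()
    {
        var calc = new DamageCalculator();
        Assert.AreEqual(70, calc.Apply(100, 30));
    }

    [Test]
    public void Apply_NeverGoesBelowZero()
    {
        var calc = new DamageCalculator();
        Assert.AreEqual(0, calc.Apply(10, 50));
    }
}
```

This covers logic inside the Editor; the gap the rest of this post is about is that nothing equivalent exists for driving the built app's UI on a real device.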

QA can preview scenes in the Editor, but Unity does not support automated UI testing on devices.

Once you have built your apps, you have no way except hands-and-eyeballs testing (more commonly argued over as “manual testing”) to check whether your iOS / Android / WebGL app functions or delivers the UI as expected.

Anyone who works in QA knows that standard practice is to incorporate automated UI tests using frameworks like Selenium for web testing and/or Appium for mobile apps.

These frameworks rely on the ability to recognize and map UI elements and objects within the app UI, but Unity apps are a black box as far as Selenium or Appium are concerned. If you can’t map the UI elements and objects, then you can’t script clicks, swipes, text inputs or other simulations of real user behavior.
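To see what “mapping UI elements” means in practice, here is a minimal Selenium sketch using the C# bindings (the page URL and element ids are hypothetical). It only works because the browser exposes a DOM to query; a Unity WebGL build renders everything into a single canvas, so there is nothing equivalent for Selenium or Appium to find.

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class WebLoginTest
{
    static void Main()
    {
        using var driver = new ChromeDriver();
        driver.Navigate().GoToUrl("https://example.com/login");  // hypothetical page

        // Scripted user behavior is only possible because these elements exist in the DOM.
        driver.FindElement(By.Id("username")).SendKeys("qa_user");
        driver.FindElement(By.Id("password")).SendKeys("not-a-real-password");
        driver.FindElement(By.CssSelector("button[type='submit']")).Click();

        // Against a Unity WebGL build the page is essentially one <canvas> element,
        // so the FindElement calls above would have nothing to locate.
        driver.Quit();
    }
}
```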

This leaves game and app makers with 3 alternatives:

Unity - Testing

Manual testing alone is labor intensive, time consuming and repetitive. Cost aside, it depends on skilled testers with the ability to catch and report bugs.

Customer testing is an oxymoron and often a disaster waiting to happen and yet some companies have no issue releasing their applications to their customers after only catching the critical issues.

Crowdsource testing is a good interim solution for companies lacking testing personnel: you pay a 3rd party crowdsource company to deliver the warm bodies needed to test on their personal devices, for what amounts to first-to-find bug bounties.

The Warptest POV

Over the last year, I, along with one of my QA Engineers, tested several so-called automated testing solutions for Unity apps. Most didn’t make it out of the starting gate. Others showed early promise but needed extensive investment in development and testing to be anything more than a proof-of-concept.

All you need to do is search Unity’s Community Forums to see that this is in high demand. Many companies and Unity personnel I spoke to online were interested to hear what we had discovered, but if Unity wants its community of 2D / 3D game, application and movie animation makers to deliver robust, well-tested products, then automated UI testing needs to happen.

Unity - GDC

Today at 6.30pm Pacific Time, GDC, the Game Developers Conference kicks off. Don’t disappoint me Unity.


This is Unity’s summary blog post of their keynote at GDC. Color me disappointed: no mention of automated UI testing. Now I get it, automated testing in VR is a big challenge, but between doing nothing and at least supporting web / mobile automated testing on device, the choice is simple. FWIW, if I had to choose between Unity and a platform that supports automated testing, the choice would be equally simple.

This is me throwing down the gauntlet, Unity.

I had my first cup of coffee when I was 25 and that was it.

It was a cold rainy day, early morning, in the desert. I was on a training exercise with the Army and we had stopped our jeep for a break. One of the guys fished out a small gas stove, a tin pot and made Turkish Coffee with cardamom. He offered me a small glass full of coffee and a heaped spoon of sugar and I took my first sip. The rest as they say, is history.

As a Manchester boy, I grew up in a house where a nice hot cuppa tea was the staple. Usually PG Tips. Coffee in the 70’s, 80’s and even 90’s in England was Nescafe if you were lucky, and had no attraction at all.

After tasting my first strong, black, rich Turkish coffee I knew I needed to try more real coffee, and nothing with foam, frothed milk, syrups, flavourings; just shots of the good stuff. I tried espresso and I was totally hooked. Suddenly I was in a meaningful relationship with ground, brewed beans.

Luckily I lived in Israel, a country which takes its coffee seriously. This may be one of the few issues the whole Middle East can agree on.

Over the last few months I’ve graduated from grinding store-bought coffee beans to getting interested in home roasting.

Home roasted coffee - software tester 1

Software Testing and Coffee Roasting?

As a software tester I approach new projects with research, online and by word of mouth. I discovered that for the “hobbyist” the best start is either a pan on the gas or, better, a popcorn popper. As I’ve written in the past, testing is improved when it becomes like kata.

Of course, the beans are everything. I planned the following:

Keep a note of all tests and test results: I used Microsoft Office for this (see the table below)


  1. Make a list of available green (unroasted) beans
  2. Test the quantity of beans in the popcorn popper that produce optimum results
  3. Make sure all beans are bought equally fresh (as much as you can) and stored the same way. Fresh = flavor.
  4. Define optimum results: evenly roasted, the coffee bean oil still present on the beans, no burnt taste. All beans ground for 11 seconds in the same Bosch coffee grinder.

The popcorn popper has a functional constraint: after 3 minutes, or if overloaded, it overheats and shuts down until it has cooled off.


| Bean | 2:00 min / 75 grams | 2:30 min / 75 grams | 3:00 min / 75 grams | 2:00 min / 150 grams | 2:30 min / 150 grams | 3:00 min / 150 grams |
| --- | --- | --- | --- | --- | --- | --- |
| Kenya AA | | | | | | |
| Costa Rica | | | | | | |

Why do I mention these constraints? The last time I roasted, I was in a hurry and overloaded the popcorn popper. It shut off to cool down at 1:45 min. The beans were under-roasted, so I siphoned off half into my cast iron skillet, turned on the gas and roasted them for another minute, then finished the rest in the popcorn popper once it had cooled down and would restart.

The Warptest POV

If the popper is science, using the skillet is an art. You are roasting the curve of the bean against the flat skillet. It reaches a higher heat and roasts quicker, so you need to keep the beans moving and flip them over to get an even roast.


By comparison, using the skillet gave better results. You can see exactly what’s happening in the skillet whereas the popcorn popper has a translucent, orange cover.

As for the beans, I got a better espresso from the Kenya AA but, that’s always been my favorite. Family and friends have been treated to espressos, cappuccinos, iced coffees and the ubiquitous Israeli Hafuch when visiting.

My plan is to finish the Sumatra and order Puerto Rican or Colombian green beans next and keep on testing. One thing, home roasting is seductive in its own way. I’ve found myself on Amazon and specialty coffee sites absentmindedly pondering 5kg bean roasters and bulk coffee grinders.

When I find my perfect roast I’ll be sure to let you know.

The World of Testers Has Something to Learn from James Bond…

CAUTION: SPOILERS ahoy. If you haven’t seen SPECTRE yet, you may not want to read this post.

It’s that time of year when we roll out the same tired, old arguments:

  • The Agile purists try to drive a stake through the role of QA Manager.
  • Outsource companies say having in-house QA is redundant.
  • The Crowdsourcers agree but say crowdsource beats outsource hands down.
  • The Automated Testing purists take potshots at the Manual Testing crowd over the huge investment needed to provide test coverage that their scripts deliver faster.
  • The Manual Testing purists snipe back at Automated Testing over ramp-up time and several other alleged flaws.

Testers Arguing - James Bond

Don’t get me wrong, there is validity to multiple points of view, and the testing industry, like any other, needs challenging to grow and evolve, but regurgitation is just that: the absence of new points of view on the same weary subjects.

So, Where Do James Bond and SPECTRE Come Into It?

Here come those SPOILERS… turn back while you still can.

In the new James Bond film, SPECTRE, we find Bond and MI6 assailed by the threat of obsolescence. HUMINT (human-acquired intelligence) has been declared redundant, and a senior Whitehall official, “C”, is pushing for a unified ELINT (Electronic Intelligence) effort between 9 major nations, all under the umbrella of a shiny, hi-tech National Intelligence Center. Obviously, “C” will be the one running this multinational, NSA-like organization, and the 00 Section is to be shut down because “C” sees no need for men like 00 agents in the field when tech can do all the work.

Testers James Bond SPECTRE

Meanwhile, Bond seems to have gone rogue, hunting a shadowy criminal enterprise connected to his past. Faster than you can say “Goodbye, Mister Bond” we discover this is SPECTRE, and that they and their leader, Franz Oberhauser (Bond’s pseudo foster brother), are the ones poised to take control of this unified ELINT center once it goes live.

Oberhauser or (redacted, I’m not going to spoil everything) Blofeld, is a staunch believer that pure ELINT will grant him control over the world.

Nutshell: SPECTRE, Oberhauser and “C” are the purists of automation who advocate the replacement and obsolescence of eyes-on / hands-on testing. Real testers are not needed in their world. ELINT, akin to automated testing, can do it all (which is ironic considering the sheer number of armed henchmen SPECTRE employs, not even counting their assassin du jour, Mr. Hinx).

Bond, M et al rely on Q to provide their automated solutions but acknowledge the world for what it is. Neither approach alone can get the job done. Only a holistic mix of an agent licensed to kill with tech backup will work, just as only a holistic mix of both testing types will work. However, this is not the crucial lesson testers need to learn from James Bond.

The Warptest POV

Several years ago, I heard a kickass Marketing Professional talk to early-stage Startups about blogging. The point he made was to blog about your niche, NOT about you or your product.

Reading a post on a QA Outsourcing company’s site deriding in-house QA with the conclusion that you are better off taking their services is ridiculous and counter-productive. (You know who you are..)

Sometimes testers are our own worst enemy. These regurgitated arguments don’t benefit us. If there is nothing new to add to these issues, then let them lie.

Rather than showing we can evangelize a holistic approach and best practices, and provide tailored testing solutions to suit each product, these arguments reflect an immaturity in parts of our industry.

We need to do better because at the end of the day it’s all about ROI and demonstrating that testing is a mission critical investment. My hat is off to those testers who share, engage, encourage others and build a sense of community. This is clearly the way forward.

The Art of Software Testing Relies On…

Several critical truths. One of these is, “A bug not reported will never get fixed.”

The corollary of this, according to Schrödinger-Murphy, is, “This bug will return to bite you in the * at the worst possible time.”

Never Has There Been A Tale Of More Woe…

(Poetic license and changes of name and gender have been used to protect the innocent in this story)

Once upon a time, Bob the tester was working on a testing project with a new feature. Bob was testing this feature which relied on a 3rd party backend service and another 3rd party client plugin.

Bob had tested a prior version and declared the feature as working but in the latest version he found bugs with UI and function.

His manager, Jim, got involved after he heard Bob explain the problems to the Developer, and the Dev and their manager said, “This is an issue with the 3rd party integrations. We can’t do anything.”

Software Testing - We can't fix this

Jim asked Bob one question, “Are the issues documented in our Bug Tracking?”

Bob shook his head and could see Jim was not pleased.

Software Testing - Jim Khaaaan

Image screen captured from a YouTube clip of Star Trek II: The Wrath of Khan

“Bob. I must have said this a hundred times. Dev doesn’t decide if a bug gets reported, bug reporting means all bugs with the appropriate severity, Bob.”

Bob went back to his computer and was about to document the bugs when he said, “Hey Jim should these bugs all be reported as one bug?”

Jim came over and sat down with Bob, drained his coffee and said, “If they are all facets or symptoms of the same bug then maybe but ask yourself this Bob. If a Developer marks the bug fixed and you have multiple issues in there, how do you know which are fixed? More to the point, if some of these issues aren’t fixed what status does the bug acquire?”

Bob thought about it for a few seconds, grinned and told Jim he was going to open a bug for each. Jim slapped him on the back and went back to his desk.

The Warptest POV

Software Testing and Bug Reporting sit somewhere between an art and a science. They are rule-based, and if you don’t want to cock up, these fundamental rules need following.

What happens after you document bugs you discover and allocate the right priority is the next step in delivering a robust, sellable product that makes happy customers.

The basics of Software Testing can be learnt and then the skills and experience acquired through hands-on practice. Luckily, the nature of Software Testing is repetitive like Kata.

So as you sit down with your coffee to test the latest deliverable, make sure you are sharing information with good bug reporting.

Happy Testing.

Bug Reporting Is As Much An Art As A Science

… As a result sometimes running a refresher / brainstorming session on best practices in bug reporting for your team is a must.

As I’ve mentioned in the past, the testers and the person presenting can benefit hugely from the interaction.

The Primer

Embedded here is a primer presentation I use for this refresher on aspects of bug reporting I want my team to focus on:

The Warptest POV

Whether you are working with onsite developers or offshore, the need for sound observation and good bug reporting is critical.

A bug not reported or not reported properly will never get fixed. If your bug reports don’t give objective analysis or stress the severity / cost to the end-users then the bug may never get fixed.

So maximize your testing ROI and make sure every bug discovered and reported gets a fair chance at being fixed.

Do you refresh your bug reporting skills at least once a year?


Testing Isn’t Always Easy…

Once upon a time, in a testing lab far, far away was a young tester who sat each and every day testing his company’s apps.

(For the sake of argument) let’s call our tester Bill.

Bill was young and relatively new and had been assigned what he thought was the most repetitive and boring of all test plans.

However, Bill was not deterred and each day he would start anew and add every single problem, flaw, defect or bug in function, UI, UX, Load, Stress or against spec he could find to the company bug tracking system.

Bill’s greatest joy was adding these bugs to the bug tracking system and assigning severity. As Bill was still learning his job, he was concerned that not every bug would be fixed, and so he marked each and every one as “critical”.

Bill Learns A Tough Testing Lesson…

After several days, Bill was drinking his coffee and thinking about how many times the Developers had come running over to talk to him about his bugs and, strangely, how many times they had left with a grumpy look on their faces after claiming the bug was a feature, worked according to spec, wasn’t critical at all, or simply only happened under the rarest of conditions.

Bill was a little confused and didn’t really understand the negativity about his bugs or their severity.

Just then, Bill’s boss, the QA Manager walked in, gave him a big smile and sat down with his double espresso opposite Bill.

“So Bill, I hear you’ve been keeping our Developers busy with lots of bugs, right?” Bill’s boss gave him a huge grin.

“I guess so…” Bill replied.

“Well, I wanted to talk to you about the fact that I’m pleased that you are so dedicated and I know the bugs are all important but are they really all critical?”

Bill thought about this for a moment.

“How do I know?”

Bill’s boss sipped his espresso, “Well Bill, ask yourself what does the bug do to the App, to the user or to the system it’s running on. Once you look at the impact you can get a better idea of severity. Do you know why I’m telling you this?”

“Umm” Bill scratched his head, puzzled.

Bill’s boss put his finished espresso down, “If we mark every bug as critical then the Developers won’t take the really critical bugs seriously because we overused the definition and made them drop new work to fix some bugs that could wait. Luckily the Product Owner and I discuss the bugs and he sets priority with the R&D Manager but we need to check the spec to be sure if something is a bug or not as well… You know the story of the boy who cried Wolf right Bill?”

Bill nodded.

“As you get more experience you’ll learn not to be the boy who cries bug and be more confident about what severity each bug is. Today we are going to test together and see whether we agree on each bug or its severity. Let’s see how the Developers respond to that.”

From that day Bill worked harder than ever to learn what was and wasn’t a bug and to report each bug with the right severity. The QA Manager continued to be happy with Bill’s work, even had Bill train new testers and the Developers would treat each bug reported by Bill with seriousness.

… and they all worked happily ever after.

The Warptest POV

Learning how to write bug reports (beyond the uncompromising brevity we’ve learned from Twitter) also involves knowing whether your observation is truly a bug and how to define its severity.

So the next time you are about to hit save on that bug, think of Bill and just review what you are reporting.

(This Grimm tale is based on past, real-life events. Names have been changed to protect the innocent.)


Being A Tester Is A Profession…

Those of us in the profession who embrace it with passion sometimes see it as a calling, and when we read certain stories we feel their pain but are often unsurprised.

Never Was A Tale Of More Woe

TechCrunch and other blogs have recently reported on two high-profile Startups, Clinkle (over the last few days) and Snapchat (at the end of December), suffering hacks / data breaches of differing scales.

A Venture Capitalist I follow on Twitter postulated that these events call into question the skill level of the Startup Devs, allowing user or payment data to be compromised.

The Usual Suspects

Usual Suspects - Tester

In both of these cases I found it hard to point the finger (exclusively) at the Devs, and suspected one of the following:

  1. The Startups had no testers and didn’t test.
  2. The Startups employed non-testers to do the testing: all hands on deck.
  3. The Startups had testers who reported the bugs but their reports went unheeded.

Testing? We’re Not There Just Yet.

Many startups are known for considering testing an activity that is best left until late in the day. Something the company just doesn’t have the money for but will get to, one of these days.

Money tester dudes

After looking at the LinkedIn profiles of Clinkle and Snapchat, I couldn’t find any employees listed at either company as testers. The TechCrunch article on Clinkle refers to “employee testers”; clearly they went for option 2 above, calling a bunch of random dudes in their employ testers without knowing what testing actually is.

The Warptest POV

I asked my peers in the Israel QA / Software Testing Forum Facebook Group, “How do you respond when a company says they aren’t ready for testing yet?” The discussion is mainly in Hebrew, but it was a fascinating insight: some people felt this was a reality to be accepted, others felt it was unacceptable.

My opinion is that if your product relies on the trust of the people exposed to it to build your user base, then it is never too soon for testing, but I’ll return to this premise.

The idea that Startup Devs are lacking if they allow these breaches fails to address one important fact: Devs are not Testers. They aren’t trained to be and in fact, they are trained to work with testers who provide backup / cover. In a nutshell, the testers are there to find bugs, report them and ensure the bug is dealt with.

The moment the founders remove these checks and balances then the whole product lifecycle is out of kilter and it is only reasonable to expect a major bug to slip through.

Remember, if you treat testing as a second class activity then don’t be surprised if you create a second class product.

Returning to my premise that it’s never too soon to test: does this mean Startups need to magically find the money to employ a full-time tester or testing team? If you are iterating a web / mobile application, then contracting either an early-stage, one-shot testing cycle or testing on demand until you raise further funding is an affordable option. The testing will either be done for you, or your non-testers can be guided and managed to provide better testing coverage.

If you need testing for your app then contact me and I can help you ensure your product doesn’t launch untested or reach critical bug mass.

Test Cases…

If the STP (Software Test Plan) is the strategy we apply to our testing efforts, then Test Cases (STC) define our tactical efforts.

In the past couple of years more and more testing projects require definition of tests at Test Case level.

Added Value?

Using a Test Case Management System or incorporating Test Cases into your ALM grants you:

  • Traceability from Use Cases, Requirements and Specifications through to Test Cases (see the toy sketch after this list).
  • Coverage analysis.
  • Metrics: whilst I have stated clearly in an earlier post that failing to understand test metrics can lead to abuse, there is incredible value in being able to monitor and evaluate the time and effort required to test either at feature or version level. Should you make changes to the Test Suite containing your Test Cases then prior test cycles will allow you to give a better estimate when planning the next iteration.
  • Analysis / BI: simply creating Test Cases that provide optimal test coverage isn’t enough. There is an ongoing process of refinement based on product maturation, bug discovery and tester experience (as the tester gains deeper understanding of the product / technology being tested they gain a deeper understanding of what and how the Test Cases should be defined and run).
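As a toy illustration of the first two bullets (made-up data, not any particular ALM's schema), traceability comes down to keeping a link from each requirement to the Test Cases that exercise it, at which point coverage analysis is a simple query:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical, minimal records standing in for what an ALM stores.
record Requirement(string Id, string Title);
record TestCase(string Id, string Title, string CoversRequirementId);

class CoverageReport
{
    static void Main()
    {
        var requirements = new[]
        {
            new Requirement("REQ-1", "User can log in"),
            new Requirement("REQ-2", "User can reset password"),
        };

        var testCases = new[]
        {
            new TestCase("STC-10", "Login with valid credentials", "REQ-1"),
            new TestCase("STC-11", "Login with wrong password",    "REQ-1"),
            // REQ-2 has no Test Case yet: a coverage gap.
        };

        // Coverage analysis: which requirements have no Test Case tracing back to them?
        var uncovered = requirements
            .Where(r => !testCases.Any(tc => tc.CoversRequirementId == r.Id));

        foreach (var r in uncovered)
            Console.WriteLine($"No coverage for {r.Id}: {r.Title}");
    }
}
```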

All Test Cases Are Created Equal?

With a hat tip to George Orwell’s Animal Farm let’s say,

All Test Cases are equal, but some Test Cases are more equal than others.

In several cases I’ve encountered this concept on projects I’ve been hired to troubleshoot.

The final bullet above, “Analysis / BI”, offered me the best solution in these cases. The Test Cases that are more equal than others are the ones built on shared steps.

If you are prone to what I called Testing Kata in my previous post, then after analyzing the Test Cases various patterns should become apparent. If you haven’t built your test suite from scratch this way, or if, as in my case, you inherited a project, then you will discover Test Cases that have shared steps. This can lead to several choices:

Shared Steps - Test Cases

If you don’t have a Test Case Manager / ALM that supports creating Shared Steps as distinct entities to add to Test Cases, then don’t despair; fall back on using Excel to map them out.
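The same idea carries straight into an automated suite: a Shared Step is just a single definition reused by many Test Cases. Here is a minimal NUnit sketch where a hypothetical LogInAsStandardUser helper plays that role.

```csharp
using NUnit.Framework;

public class CheckoutTests
{
    // The "shared step": every test that needs a logged-in user reuses this one definition,
    // so a change to the login flow is made in exactly one place.
    private void LogInAsStandardUser()
    {
        // ... drive the app / API to reach a logged-in state (omitted) ...
    }

    [Test]
    public void AddItemToCart()
    {
        LogInAsStandardUser();      // shared step
        // ... steps unique to this Test Case ...
        Assert.Pass();
    }

    [Test]
    public void ApplyDiscountCode()
    {
        LogInAsStandardUser();      // shared step
        // ... steps unique to this Test Case ...
        Assert.Pass();
    }
}
```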

The Warptest POV

The likelihood is that if this isn’t your first testing rodeo, you already have Shared Steps and related tests mapped out (at least in your head) for UI elements, so you have a good starting point.

Like many things we do, Test Cases involve repetition, both during writing and running. With good analysis and understanding you can extract the Shared Steps and save a lot of time.

The UI example I used in my previous post is a classic example of this:

UI STC Sample - Grid

Once you try this you might be surprised at what a difference having Shared Steps in your Test Cases will make to your efficiency.

If you are having some trouble with this then get in touch. Help is just an email away.

Automated Testing…

Is an incredibly powerful alternative to manual testing or is it?

Manual testing involves employing testing personnel to interact with the Application being tested, and is considered labor- and time-consuming.

Automated Testing is considered a huge time saver, and implementing it is seen as a way for Startups / Developers to make big savings with limited resources.

Colonel Nathan Jessup Strikes Again…

The truth that most of you can’t handle is that pure Automated Testing is not a sound, standalone solution; it is not a magic bullet. Automated Testing has several drawbacks:

You can't handle the automated testing

  • It takes time to set up: you still need to write test cases that provide optimal coverage and then create the automated tests based on each of these.
  • It takes time to stabilize: if your code is still not stable and your product is evolving then today’s automated scripts can be made redundant by tomorrow’s iteration or pivot.
  • It can take time to analyze the results: manual testing sees the results during each individual test where the tester observes the bugs in situ and is able to report them within that context. Let’s say Tester “Bob” runs a test cycle overnight and the next morning has several hundred Automated Test results to analyze. Each failed test will require analysis, drill down and the tester getting into context to understand the results.
  • Technical Debt: when the maintenance of your Automated Testing demands more attention than developing the features or the time available to test, you may not be ready for Automation.

The Warptest POV

Don’t get me wrong, I’m not saying abandon hope or Automated Testing as a viable option for testing your Application. Just don’t drink the Kool-Aid. To run efficient testing you need to take a holistic approach combining targeted Manual and Automated Testing.

Not just automated testing

As soon as you begin creating Test Cases based on Use Cases, Requirements and Spec, you should be planning which Test Cases can logically be automated and at what stage of the lifecycle.

If you know a feature or UI will not be stable until an expected date, then the target should be to aim for optimum Automated Testing leading up to that date. By ensuring that efficient Manual Testing is coupled with this effort, you are able to provide better test coverage than with Automated Testing alone.

Some of you are no doubt disagreeing with this. To the rest who have been sold on the idea that Automated Testing is indeed a magic bullet let me state,

“There are scenarios where a skilled, attentive Manual Testing expert will give you speedier results. There is NO magic bullet here. Each product demands a tailored testing strategy.”

So, in a nutshell, I’m suggesting that if you are developing your App / Software and you are ready to invest in QA:

  1. Don’t assume that you are at the stage you need Automated Testing.
  2. Don’t assume that Automated Testing alone is the best solution.
  3. Let whoever you are hiring to run your QA get the earliest possible look at your product and roadmap so you have matched expectations regarding testing strategy.

If this brief insight into a holistic approach to testing intrigues you and you believe this offers you the best option for your product’s success then …

Contact me - automated testing


… Test Metrics Quantify Our Work

Simply put they allow Testers and Testing Managers to measure status at any given moment.

Usually these are standard queries or reports made in whichever ALM or Defect (Bug) Tracking Solution you have implemented.

Standalone Bug Tracking Systems can limit your effectiveness as there is a disconnect from Test Plans / Test Cases. How do you easily trace test coverage between your cases and your bugs?

There are solutions for this but let’s not digress…

A Cautionary Tale

Stop Sign Test Metrics

I was in a meeting a while ago discussing preparations for certification compliance with Bob (Product Owner), Lucy (Compliance Officer) and Mary (Certification Manager).

Mary was asking questions about procedures, documentation, methodology and then she arrived at test metrics…

I keep queries and reports to show:

  • Test Case coverage vs Specifications
  • Test Case Progress / version by module / tester
  • Bugs / version by severity by module / tester
  • Total bugs / version by severity by status… and a few others (a rough sketch of this last roll-up appears after this list).
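Assuming you can export bugs from your tracker or ALM as a simple list, the query behind that roll-up is trivial; the fields and sample data here are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical shape of a bug exported from your tracker.
record Bug(string Id, string Version, string Severity, string Status);

class BugMetrics
{
    static void Main()
    {
        var bugs = new List<Bug>
        {
            new("BUG-1", "1.2", "Critical", "Open"),
            new("BUG-2", "1.2", "Minor",    "Fixed"),
            new("BUG-3", "1.3", "Critical", "Open"),
        };

        // Total bugs per version, broken down by severity.
        var report = bugs
            .GroupBy(b => new { b.Version, b.Severity })
            .OrderBy(g => g.Key.Version).ThenBy(g => g.Key.Severity);

        foreach (var g in report)
            Console.WriteLine($"{g.Key.Version}  {g.Key.Severity}: {g.Count()}");
    }
}
```

The numbers are easy to produce; as the rest of this post argues, the hard part is making sure they are read in context.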

Mary was happy, and then the other boot dropped. She recommended I use my Test Metrics to monitor which testers weren’t working fast enough or reporting enough bugs, and which developers were generating just too many bugs.

The next few moments of discussion are best left to your imagination.

After demonstrating onscreen that not all test cases are created equal, and that prior knowledge is required to make sense of the scale of each module’s test cases in relation to the others, Mary was willing to listen to a more detailed explanation of how these test metrics should be used in a constructive, ethical manner.

The Warptest POV

Test Metrics are a useful tool, but QA / Testing Managers need to apply caution when defining the metrics they measure and who has access to them, and to ensure a well thought out, structured explanation of the impact of the test metrics and what they indicate about overall status at any given stage of testing.

Metrics man test metrics

To paraphrase Churchill,

There are lies, damn lies and then there are testing metrics.

Make sure your metrics aren’t misleading you, but more importantly make sure no one is abusing, or being misled by, the data you are feeding them.

Names were changed to protect the innocent and slightly guilty.

No ALM or Bug Trackers were harmed in writing this post.


Testing Is An Ever Evolving Field …

… And yesterday I was able to clarify something that wasn’t exactly new to me but needed phrasing accurately. You might have noticed that recently I have been writing a bit more about testing (see here, here and here). Here we go again…

At 3am yesterday morning I woke up to the sound of water dripping in my apartment; never a good middle-of-the-night noise.

I discovered that while our laundry was running the main drainage pipe had spontaneously blocked and the backed up water was all over the kitchen and laundry room.

Luckily Israeli floors are marble tiles so the biggest problem was waking up enough to mop up the mess and not to wake up the kids.

A Good Story, What’s The Punchline?

  • Punchline #1: I owe a big thank you to The Missus, who stayed home and dealt with the aftermath of the plumber snaking the drain, with nary a gripe.
  • Punchline #2: I spent part of the morning drinking large amounts of espresso to stave off my exhaustion and thinking about what the plumber was doing.

Our plumber explained to The Missus, after snaking some pretty vile stuff out of the drains, that due to a mind-bogglingly stupid construction flaw the drainpipe runs uphill and will block up about once every couple of years.

The first conclusion is: Doh! Whatever happened to building to spec? More importantly, whoever defined the spec took for granted that the “implementation team” knew water cannot drain uphill for long without depositing the waste it carries. Having background (but basic) technical knowledge related to one’s job description is often taken for granted, but this is a secondary point.

Everyone who tests software (client, web or mobile apps) knows to test functionality and I’m not talking about UI element verification or basic behavioral responses.

A second and slightly more intricate layer of testing is applying tests to business logic; the tester needs to get into the head of the different people who will use the product under test and understand the hows and whys of their use cases.

The Warptest POV

What occurred to me while thinking about all this was that it accurately described a third layer of testing: test the unimpeded flow of data from entry point to exit point of each process. In an allegorical nutshell, “no blocked pipes”.

Most, if not all, apps we encounter are data driven, whether it’s a check-in, a status, a post, a purchase or something else. Whichever it is, data is valuable, and our testing needs to reflect how it moves, behaves and is handled. The best methodology to apply is subject to a variety of factors, but that’s for a different post.

I’m sure some of you are struggling not to yell out the other various types of testing I haven’t mentioned here. I’m not ignoring them nor do I when testing.

Thanks to my plumber I was able to visualize and grasp this idea properly.

So the next time you are writing your test cases, keep your plumber in mind. And if you need a good plumber and live nearby, contact me; I have a strong recommendation for ours.