You need to know what's on the cutting-edge of technology. Find out what's coming and the unique Warptest POV with just one click on the "Blog" tile.

All posts tagged QA

I had my first cup of coffee when I was 25 and that was it.

It was a cold rainy day, early morning, in the desert. I was on a training exercise with the Army and we had stopped our jeep for a break. One of the guys fished out a small gas stove, a tin pot and made Turkish Coffee with cardamom. He offered me a small glass full of coffee and a heaped spoon of sugar and I took my first sip. The rest as they say, is history.

As a Manchester boy, I grew up in a house where a nice hot cuppa tea was the staple. Usually PG Tips. Coffee in the 70’s, 80’s and even 90’s in England was Nescafe if you were lucky, and had no attraction at all.

After tasting my first strong, black, rich Turkish coffee I knew I needed to try more real coffee, and nothing with foam, frothed milk, syrups, flavourings; just shots of the good stuff. I tried espresso and I was totally hooked. Suddenly I was in a meaningful relationship with ground, brewed beans.

Luckily I lived in Israel, a country which takes its coffee seriously. This may be one of the few issues the whole Middle East can agree on.

Over the last few months I’ve graduated from grinding store-bought coffee beans to getting interested in home roasting.

Home roasted coffee - software tester 1

Software Testing and Coffee Roasting?

As a software tester I approach new projects with research: online and word of mouth. I discovered that for the “hobbyist” the best start is either a pan on the gas or, better, a popcorn popper. As I’ve written in the past, testing is improved when it becomes like kata.

Of course, the beans are everything. I planned the following: –

Keep a note of all tests and test results: I used Microsoft Office for this (see the table below)


  1. Make a list of available green (unroasted) beans
  2. Test the quantity of beans in the popcorn popper that produce optimum results
  3. Make sure all beans are bought equally fresh (as much as you can) and stored the same way. Fresh = flavor.
  4. Define optimum results: evenly roasted, the coffee bean oil still present on the beans, no burnt taste. All beans ground for 11 seconds in the same Bosch coffee grinder.

The popcorn popper has a functional constraint: after 3 minutes, or if overloaded, it overheats and shuts down until it has cooled off.
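The plan above is essentially a parameterized test matrix. A minimal sketch in Python (the bean names, times and quantities come from my notes; the helper names are my own):

```python
from itertools import product

# Test parameters taken from the roasting plan.
BEANS = ["Kenya AA", "Costa Rica"]
DURATIONS_SEC = [120, 150, 180]   # 2:00, 2:30, 3:00
QUANTITIES_G = [75, 150]

MAX_RUN_SEC = 180  # the popper overheats and shuts down after 3 minutes

def build_matrix():
    """Enumerate every bean/duration/quantity combination to test,
    skipping any run that would exceed the popper's overheat limit."""
    return [
        {"bean": b, "duration_sec": d, "quantity_g": q}
        for b, d, q in product(BEANS, DURATIONS_SEC, QUANTITIES_G)
        if d <= MAX_RUN_SEC
    ]

matrix = build_matrix()
print(len(matrix))  # 2 beans x 3 durations x 2 quantities = 12 runs
```

Each entry in the matrix corresponds to one cell in the results table below.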


| Green beans | 2:00 min / 75 g | 2:30 min / 75 g | 3:00 min / 75 g | 2:00 min / 150 g | 2:30 min / 150 g | 3:00 min / 150 g |
| --- | --- | --- | --- | --- | --- | --- |
| Kenya AA | | | | | | |
| Costa Rica | | | | | | |
Why do I mention these constraints? The last time I roasted I was in a hurry and overloaded the popcorn popper. It subsequently shut off to cool down at 1:45 min. The beans were under roasted so I siphoned off half into my cast iron skillet, turned on the gas and roasted half in the skillet for another minute and the rest in the popcorn popper when it cooled down and would restart.

The Warptest POV

If the popper is science, using the skillet is an art. You are roasting the curve of the bean against the flat skillet. It reaches a higher temperature and roasts quicker. You need to keep the beans moving and flip them over to get an even roast.


By comparison, using the skillet gave better results. You can see exactly what’s happening in the skillet whereas the popcorn popper has a translucent, orange cover.

As for the beans, I got a better espresso from the Kenya AA but, that’s always been my favorite. Family and friends have been treated to espressos, cappuccinos, iced coffees and the ubiquitous Israeli Hafuch when visiting.

My plan is to finish the Sumatra and order Puerto Rican or Colombian green beans next and keep on testing. One thing, home roasting is seductive in its own way. I’ve found myself on Amazon and specialty coffee sites absentmindedly pondering 5kg bean roasters and bulk coffee grinders.

When I find my perfect roast I’ll be sure to let you know.

The World of Testers Has Something to Learn from James Bond…

CAUTION: SPOILERS ahoy. If you haven’t seen SPECTRE yet, you may not want to read this post.

It’s that time of year when we roll out the same tired, old arguments:

  • The Agile purists try to drive a stake through the role of QA Manager.
  • Outsource companies say having in-house QA is redundant.
  • The Crowdsourcers agree but say crowdsource beats outsource hands down.
  • The Automated Testing purists take potshots at the Manual Testing crowd over the huge investment needed to provide the test coverage their scripts grant faster.
  • The Manual Testing purists snipe back at Automated Testing over ramp-up time and several other alleged flaws.

Testers Arguing - James Bond

Don’t get me wrong, there is validity to multiple points of view, and the testing industry, like any other, needs challenging to grow and evolve, but regurgitation is just that: the absence of new points of view on the same weary subjects.

So, Where Do James Bond and SPECTRE Come Into It?

Here come those SPOILERS… turn back while you still can.

In the new James Bond film, SPECTRE, we find Bond and MI6 assailed by the threat of obsolescence. HUMINT (human-acquired intelligence) has been declared redundant and a senior Whitehall official, “C”, is pushing for a unified ELINT (Electronic Intelligence) effort between 9 major nations, all under the umbrella of a shiny, hi-tech National Intelligence Center. Obviously, “C” will be the one running this multinational NSA-like organization, and the 00 Section is to be shut down because “C” sees no need for men like 00 agents in the field when tech can do all the work.

Testers James Bond SPECTRE

Meanwhile, Bond seems to have gone rogue, hunting a shadowy, criminal enterprise connected to his past. Faster than you can say “Goodbye Mister Bond” we discover this is SPECTRE and they and their leader, Franz Oberhauser (Bond’s pseudo foster brother) are the ones poised to take control of this unified ELINT center once it goes live.

Oberhauser or (redacted, I’m not going to spoil everything) Blofeld, is a staunch believer that pure ELINT will grant him control over the world.

Nutshell: SPECTRE, Oberhauser and “C” are the purists of automation who advocate the replacement and obsolescence of eyes-on / hands-on testing. Real testers are not needed in their world. ELINT, akin to automated testing, can do it all (which is ironic considering the sheer number of armed henchmen SPECTRE employs, not even counting their assassin du jour, Mr. Hinx).

Bond, M et al rely on Q to provide their automated solutions but acknowledge the world for what it is. Neither approach alone can get the job done. Only a holistic mix of an agent licensed to kill with tech backup will work just as only a holistic mix of both testing types will work. However, this is not the crucial lesson testers need to learn from James Bond.

The Warptest POV

Several years ago, I heard a kickass Marketing Professional talk to early-stage Startups about blogging. The point he made was to blog about your niche, NOT you or your product.

Reading a post on a QA Outsourcing company’s site deriding in-house QA with the conclusion that you are better off taking their services is ridiculous and counter-productive. (You know who you are..)

Sometimes testers are our own worst enemy. These regurgitated arguments don’t benefit us. If there is nothing new to add to these issues, then let them lie.

Instead of evangelizing a holistic approach and best practices, and providing tailored testing solutions to suit each product, this behavior reflects an immaturity in parts of our industry.

We need to do better because at the end of the day it’s all about ROI and demonstrating that testing is a mission critical investment. My hat is off to those testers who share, engage, encourage others and build a sense of community. This is clearly the way forward.

The Art of Software Testing Relies On…

Several critical truths. One of these is, “A bug not reported will never get fixed.”

The corollary of this according to Schroedinger-Murphy is, “This bug will return to bite you in the * at the worst possible time.”

Never Has There Been A Tale Of More Woe…

(Poetic license and changes of name and gender have been used to protect the innocent in this story)

Once upon a time, Bob the tester was working on a testing project with a new feature. Bob was testing this feature which relied on a 3rd party backend service and another 3rd party client plugin.

Bob had tested a prior version and declared the feature as working but in the latest version he found bugs with UI and function.

His manager, Jim, got involved after he heard Bob explain the problems to the Developer, whereupon the Dev and their manager said, “This is an issue with the 3rd party integrations. We can’t do anything.”

Software Testing - We can't fix this

Jim asked Bob one question, “Are the issues documented in our Bug Tracking?”

Bob shook his head and could see Jim was not pleased.

Software Testing - Jim Khaaaan

Image captured from a YouTube clip of Star Trek II: The Wrath of Khan

“Bob. I must have said this a hundred times. Dev doesn’t decide if a bug gets reported, bug reporting means all bugs with the appropriate severity, Bob.”

Bob went back to his computer and was about to document the bugs when he said, “Hey Jim should these bugs all be reported as one bug?”

Jim came over and sat down with Bob, drained his coffee and said, “If they are all facets or symptoms of the same bug then maybe but ask yourself this Bob. If a Developer marks the bug fixed and you have multiple issues in there, how do you know which are fixed? More to the point, if some of these issues aren’t fixed what status does the bug acquire?”

Bob thought about it for a few seconds, grinned and told Jim he was going to open a bug for each. Jim slapped him on the back and went back to his desk.
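Jim’s point can be sketched in a few lines: when each symptom is its own report, every bug carries an unambiguous status. (The `Bug` class and status names here are illustrative, not any particular tracker’s API.)

```python
# One report per symptom means one unambiguous status per issue.

class Bug:
    def __init__(self, title, severity):
        self.title = title
        self.severity = severity
        self.status = "Open"

    def mark_fixed(self):
        self.status = "Fixed"

# One bug per symptom, as Bob decided (sample issues are invented):
bugs = [
    Bug("Plugin UI misaligned on load", "Medium"),
    Bug("Backend service call times out", "High"),
]

bugs[1].mark_fixed()

# Each issue now carries its own state -- no guessing which half of a
# combined report the Developer actually fixed.
print([(b.title, b.status) for b in bugs])
```

Had both symptoms lived in one report, “Fixed” would have been half true and the remaining issue invisible.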

The Warptest POV

Software Testing and Bug Reporting is somewhere between an art and a science. It is rule-based, and if you don’t want to cock up, these fundamental rules need following.

What happens after you document bugs you discover and allocate the right priority is the next step in delivering a robust, sellable product that makes happy customers.

The basics of Software Testing can be learnt and then the skills and experience acquired through hands-on practice. Luckily, the nature of Software Testing is repetitive like Kata.

So as you sit down with your coffee to test the latest deliverable, make sure you are sharing information with good bug reporting.

Happy Testing.

Bug Reporting Is As Much An Art As A Science

… As a result sometimes running a refresher / brainstorming session on best practices in bug reporting for your team is a must.

As I’ve mentioned in the past, the testers and the person presenting can benefit hugely from the interaction.

The Primer

Embedded here is a primer presentation I use for this refresher on aspects of bug reporting I want my team to focus on:

The Warptest POV

Whether you are working with onsite developers or offshore, the need for sound observation and good bug reporting is critical.

A bug not reported or not reported properly will never get fixed. If your bug reports don’t give objective analysis or stress the severity / cost to the end-users then the bug may never get fixed.
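A report that gives objective analysis and stresses severity and end-user cost needs a handful of fields at minimum. A sketch of such a structure (the field names are my illustration, not any specific tracker’s schema):

```python
# A minimal bug-report structure: objective reproduction data plus the
# severity / user-impact fields that must not be left out.

from dataclasses import dataclass

@dataclass
class BugReport:
    title: str
    steps_to_reproduce: list
    expected_result: str
    actual_result: str
    severity: str          # e.g. Critical / High / Medium / Low
    user_impact: str       # the cost to end-users if left unfixed
    environment: str = "unspecified"

# An invented example report:
report = BugReport(
    title="Checkout button unresponsive on retry",
    steps_to_reproduce=["Add item to cart", "Tap Checkout", "Tap Checkout again"],
    expected_result="Order confirmation screen",
    actual_result="Button stays disabled; no feedback",
    severity="High",
    user_impact="Users cannot complete purchases after a failed first tap",
)
print(report.severity)
```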

So maximize your testing ROI and make sure every bug discovered and reported gets a fair chance at being fixed.

Do you refresh your bug reporting skills at least once a year?


Testing Isn’t Always Easy…

Once upon a time, in a testing lab far, far away was a young tester who sat each and every day testing his company’s apps.

(For the sake of argument) let’s call our tester Bill.

Bill was young and relatively new and had been assigned what he thought was the most repetitive and boring of all test plans.

However, Bill was not deterred and each day he would start anew and add every single problem, flaw, defect or bug in function, UI, UX, Load, Stress or against spec he could find to the company bug tracking system.

Bill’s greatest joy was adding these bugs to the bug tracking system and assigning severity. As Bill was still learning his job he was concerned not every bug would be fixed and so he marked each and every one as “critical”.

Bill Learns A Tough Testing Lesson…

After several days Bill was drinking his coffee and thinking how many times the Developers had come running over to talk to him about his bugs and strangely how many times they left with a grumpy look on their faces after claiming the bug was a feature, or worked according to spec, wasn’t critical at all or simply only happened under the rarest of conditions.

Bill was a little confused and didn’t really understand the negativity about his bugs or their severity.

Just then, Bill’s boss, the QA Manager walked in, gave him a big smile and sat down with his double espresso opposite Bill.

“So Bill, I hear you’ve been keeping our Developers busy with lots of bugs, right?” Bill’s boss gave him a huge grin.

“I guess so…” Bill replied.

“Well, I wanted to talk to you about the fact that I’m pleased that you are so dedicated and I know the bugs are all important but are they really all critical?”

Bill thought about this for a moment.

“How do I know?”

Bill’s boss sipped his espresso, “Well Bill, ask yourself what does the bug do to the App, to the user or to the system it’s running on. Once you look at the impact you can get a better idea of severity. Do you know why I’m telling you this?”

“Umm” Bill scratched his head, puzzled.

Bill’s boss put his finished espresso down, “If we mark every bug as critical then the Developers won’t take the really critical bugs seriously because we overused the definition and made them drop new work to fix some bugs that could wait. Luckily the Product Owner and I discuss the bugs and he sets priority with the R&D Manager but we need to check the spec to be sure if something is a bug or not as well… You know the story of the boy who cried Wolf right Bill?”

Bill nodded.

“As you get more experience you’ll learn not to be the boy who cries bug and be more confident about what severity each bug is. Today we are going to test together and see whether we agree on each bug or its severity. Let’s see how the Developers respond to that.”

From that day Bill worked harder than ever to learn what was and wasn’t a bug and to report each bug with the right severity. The QA Manager continued to be happy with Bill’s work, even had Bill train new testers and the Developers would treat each bug reported by Bill with seriousness.

… and they all worked happily ever after.

The Warptest POV

Learning how to write bug reports with more than the uncompromising brevity derived from using Twitter also involves knowing whether your observation is truly a bug and how to define its severity.
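The boss’s rule from the story, ask what the bug does to the App, to the user or to the system, can be sketched as a simple impact-to-severity mapping. The categories and thresholds here are my illustration, not a standard:

```python
# A sketch of "judge severity by impact", from the story of Bill.

def classify_severity(crashes_app=False, blocks_core_flow=False,
                      has_workaround=True, cosmetic_only=False):
    """Derive severity from what the bug does to the app and its users."""
    if crashes_app or (blocks_core_flow and not has_workaround):
        return "Critical"
    if blocks_core_flow:
        return "High"
    if cosmetic_only:
        return "Low"
    return "Medium"

print(classify_severity(crashes_app=True))        # Critical
print(classify_severity(blocks_core_flow=True))   # High
print(classify_severity(cosmetic_only=True))      # Low
```

Not every bug maps neatly, but starting from impact beats marking everything “critical”.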

So the next time you are about to hit save on that bug, think of Bill and just review what you are reporting.

(This Grimm tale is based on past, real-life events. Names have been changed to protect the innocent.)


Being A Tester Is A Profession…

Those of us in the profession who embrace it with passion sometimes see it as a calling, and when we read certain stories we feel their pain but are often unsurprised.

Never Was A Tale Of More Woe

TechCrunch and other blogs have recently reported on two high-profile Startups, Clinkle (over the last few days) and Snapchat (at the end of December), suffering hacks / data breaches of differing scales.

A Venture Capitalist I follow on Twitter postulated that these events call into question the skill level of the Startup Devs, allowing user or payment data to be compromised.

The Usual Suspects

Usual Suspects - Tester

In both of these cases I found it hard to point the finger (exclusively) at the Devs, and suspected one of the following: –

  1. The Startups had no testers and didn’t test.
  2. The Startups employed non-testers to do the testing: all hands on deck.
  3. The Startups had testers who reported the bugs but their reports went unheeded.

Testing? We’re Not There Just Yet.

Many startups are known for considering testing an activity that is best left until late in the day. Something the company just doesn’t have the money for but will get to, one of these days.

Money tester dudes

After looking at the LinkedIn profiles of Clinkle and Snapchat I couldn’t find any employees listed at either company as testers. The TechCrunch article on Clinkle refers to “employee testers”; clearly they went for option 2 above, calling a bunch of random dudes in their employ testers without knowing what testing actually is.

The Warptest POV

I asked the question “How do you respond when a company says they aren’t ready for testing yet?” to my peers in the Israel QA / Software Testing Forum Facebook Group. The discussion is mainly in Hebrew but some people felt this was a reality to be accepted, others felt this was unacceptable and it was a fascinating insight.

My opinion is that if your product relies on the trust of the people exposed to it to build your user base then it is never too soon for testing but I’ll return to this premise.

The idea that Startup Devs are lacking if they allow these breaches fails to address one important fact: Devs are not Testers. They aren’t trained to be and in fact, they are trained to work with testers who provide backup / cover. In a nutshell, the testers are there to find bugs, report them and ensure the bug is dealt with.

The moment the founders remove these checks and balances then the whole product lifecycle is out of kilter and it is only reasonable to expect a major bug to slip through.

Remember, if you treat testing as a second class activity then don’t be surprised if you create a second class product.

Returning to my premise that it’s never too soon to test, does this mean Startups need to magically find the money to employ a full-time tester or testing team? If you are iterating a web / mobile application then contracting either an early stage, one shot testing cycle or testing on demand until you raise further funding is an affordable option. The testing will either be done for you or your non-testers can be guided and managed to provide better testing coverage.

If you need testing for your app then contact me and I can help you ensure your product doesn’t launch untested or reach critical bug mass.

Test Cases…

If the STP (Software Test Plan) is the strategy we apply to our testing efforts then Test Cases (STC) define our tactical efforts.

In the past couple of years more and more testing projects require definition of tests at Test Case level.

Added Value?

Using a Test Case Management System or incorporating Test Cases into your ALM grants you: –

  • Traceability from Use Cases, Requirements, Specifications through to Test Cases.
  • Coverage analysis.
  • Metrics: whilst I have stated clearly in an earlier post that failing to understand test metrics can lead to abuse, there is incredible value in being able to monitor and evaluate the time and effort required to test either at feature or version level. Should you make changes to the Test Suite containing your Test Cases then prior test cycles will allow you to give a better estimate when planning the next iteration.
  • Analysis / BI: simply creating Test Cases that provide optimal test coverage isn’t enough. There is an ongoing process of refinement based on product maturation, bug discovery and tester experience (as the tester gains deeper understanding of the product / technology being tested they gain a deeper understanding of what and how the Test Cases should be defined and run).

All Test Cases Are Created Equal?

With a hat tip to George Orwell’s Animal Farm let’s say,

All Test Cases are equal, but some Test Cases are more equal than others.

In several cases I’ve encountered this concept on projects I’ve been hired to troubleshoot.

The final bullet above “Analysis/BI” offered me the best solution in these cases. Those Test Cases that are more equal than others are the ones built on shared steps.

If you are prone to what I called Testing Kata in my previous post then after analyzing the Test Cases various patterns should become apparent. If you haven’t built your test suite from scratch this way or as in my case, you inherited a project then you will discover Test Cases that have shared steps. This can lead to several choices: –

Shared Steps - Test Cases

If you don’t have a Test Case Manager / ALM that supports creating Shared Steps as distinct entities to add to Test Cases then don’t despair; fall back on using Excel to map them out.

The Warptest POV

The likelihood is that if this isn’t your first testing rodeo then at least you have Shared Steps and related tests mapped out (at least in your head) for UI Elements so you have a good starting point.

Like many things we do, Test Cases involve repetition, both during writing and running. With good analysis and understanding you can extract the Shared Steps and save a lot of time.

The UI example I used in my previous post is a classic example of this:

UI STC Sample - Grid

Once you try this you might be surprised at what a difference having Shared Steps in your Test Cases will make to your efficiency.

If you are having some trouble with this then get in touch. Help is just an email away.

Automated Testing…

Is an incredibly powerful alternative to manual testing or is it?

Manual testing involves employing testing personnel to interact with the Application being tested and is considered labor-intensive and time-consuming.

Automated Testing is considered a huge time saver and implementing it is seen as a way of Startups / Developers making big savings on limited resources.

Colonel Nathan Jessup Strikes Again…

The truth that most of you can’t handle is that pure Automated Testing is not a sound solution; it’s no magic bullet. Automated Testing has several drawbacks:

You can't handle the automated testing

  • It takes time to setup: you still need to write test cases that provide optimal coverage and then create the automated tests based on each of these.
  • It takes time to stabilize: if your code is still not stable and your product is evolving then today’s automated scripts can be made redundant by tomorrow’s iteration or pivot.
  • It can take time to analyze the results: manual testing sees the results during each individual test where the tester observes the bugs in situ and is able to report them within that context. Let’s say Tester “Bob” runs a test cycle overnight and the next morning has several hundred Automated Test results to analyze. Each failed test will require analysis, drill down and the tester getting into context to understand the results.
  • Technical Debt: when the maintenance of your Automated Testing demands more attention than developing the features or the time available to test, you may not be ready for Automation.

The Warptest POV

Don’t get me wrong I’m not saying abandon hope or Automated Testing as a viable option for testing your Application. Just don’t drink the Kool Aid. To run efficient testing you need to take a holistic approach combining targeted Manual and Automated Testing.

Not just automated testing

As soon as you begin creating Test Cases based on Use Cases, Requirements and Spec you should be planning which Test Cases logically can be automated and at what stage of the lifecycle.

If you know a feature or UI will not be stable until an expected date then the target should be optimum Automated Testing coverage leading up to this date. By ensuring that efficient Manual Testing is coupled with this effort you are able to provide better test coverage than with Automated Testing alone.
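A rough sketch of that planning step: only schedule a Test Case for automation once its feature’s expected stability date has passed, and prefer cases that repeat often enough to repay the scripting effort. The scoring rule and sample cases here are my illustration, not a formula from any methodology:

```python
import datetime as dt

def ready_to_automate(case, today):
    """A case is an automation candidate once its feature is expected
    to be stable and it runs often enough to repay scripting it."""
    return case["stable_from"] <= today and case["runs_per_cycle"] >= 3

# Invented sample Test Cases:
cases = [
    {"name": "Login smoke",   "stable_from": dt.date(2015, 1, 1),  "runs_per_cycle": 10},
    {"name": "New beta flow", "stable_from": dt.date(2015, 12, 1), "runs_per_cycle": 5},
    {"name": "One-off check", "stable_from": dt.date(2015, 1, 1),  "runs_per_cycle": 1},
]

today = dt.date(2015, 6, 1)
candidates = [c["name"] for c in cases if ready_to_automate(c, today)]
print(candidates)  # ['Login smoke'] -- the rest stay manual for now
```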

Some of you are no doubt disagreeing with this. To the rest who have been sold on the idea that Automated Testing is indeed a magic bullet let me state,

“There are scenarios where a skilled, attentive Manual Testing expert will give you speedier results. There is NO magic bullet here. Each product demands a tailored testing strategy.”

So in a nutshell I’m suggesting that if you are developing your App / Software and you are ready to invest in QA: –

  1. Don’t assume that you are at the stage you need Automated Testing.
  2. Don’t assume that Automated Testing alone is the best solution.
  3. Let whoever you are hiring to run your QA get the earliest possible look at your product and roadmap so you have matched expectations regarding testing strategy.

If this brief insight into a holistic approach to testing intrigues you and you believe this offers you the best option for your product’s success then …

Contact me - automated testing


… Test Metrics Quantify Our Work

Simply put, they allow Testers and Testing Managers to measure status at any given moment.

Usually these are standard queries or reports made in whichever ALM or Defect (Bug) Tracking Solution you have implemented.

Standalone Bug Tracking Systems can limit your effectiveness as there is a disconnect from Test Plans / Test Cases. How do you easily trace test coverage between your cases and your bugs?

There are solutions for this but let’s not digress…

A Cautionary Tale

Stop Sign Test Metrics

I was in a meeting a while ago discussing preparations for certification compliance with Bob (Product Owner), Lucy (Compliance Officer) and Mary (Certification Manager).

Mary was asking questions about procedures, documentation, methodology and then she arrived at test metrics…

I keep queries and reports to show: –

  • Test Case coverage vs Specifications
  • Test Case Progress / version by module / tester
  • Bugs / version by severity by module / tester
  • Total bugs / version by severity by status

… and a few others.
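Most of the queries above boil down to counting bug records grouped by a couple of dimensions. A minimal sketch of the “bugs / version by severity” style of query over plain records (the sample bugs are invented):

```python
from collections import Counter

# Invented sample bug records:
bugs = [
    {"version": "1.2", "severity": "Critical", "status": "Open",  "module": "Login"},
    {"version": "1.2", "severity": "High",     "status": "Fixed", "module": "Login"},
    {"version": "1.2", "severity": "High",     "status": "Open",  "module": "Billing"},
    {"version": "1.1", "severity": "Low",      "status": "Fixed", "module": "Billing"},
]

def bugs_by(dimensions, records):
    """Count bugs grouped by the given record fields."""
    return Counter(tuple(r[d] for d in dimensions) for r in records)

# Total bugs / version by severity:
print(bugs_by(("version", "severity"), bugs))

# Bugs / version by severity by module:
print(bugs_by(("version", "severity", "module"), bugs))
```

An ALM runs these as canned reports; the grouping logic is the same either way.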

Mary was happy, and then the other shoe dropped. She recommended I use my Test Metrics to monitor which testers weren’t working fast enough or reporting enough bugs, and which developers were generating just too many bugs.

The next few moments of discussion are best left to your imagination.

After I demonstrated onscreen that not all test cases are created equal, and that prior knowledge is required to make sense of the scale of each module’s test cases relative to the others, Mary was willing to listen to a more detailed explanation of how these test metrics should be used in a constructive, ethical manner.

The Warptest POV

Test Metrics are a useful tool, but QA / Testing Managers need to apply caution when defining the metrics they measure and who has access to them, and to ensure a well-thought-out, structured explanation of the impact of the test metrics and what they indicate about overall status at any given stage of testing.

Metrics man test metrics

To paraphrase Mark Twain,

There are lies, damn lies and then there are testing metrics.

Make sure your metrics aren’t misleading you but more importantly make sure no one is abusing or being misled by the data you are feeding them.

Names were changed to protect the innocent and slightly guilty.

No ALM or Bug Trackers were harmed in writing this post.


Testing Is An Ever Evolving Field …

… And yesterday I was able to clarify something that wasn’t exactly new to me but needed phrasing accurately. You might have noticed that recently I have been writing a bit more about testing (see here, here and here). Here we go again…

At 3am yesterday morning I woke up to the sound of water dripping in my apartment; never a good middle-of-the-night noise.

I discovered that while our laundry was running the main drainage pipe had spontaneously blocked and the backed up water was all over the kitchen and laundry room.

Luckily Israeli floors are marble tiles so the biggest problem was waking up enough to mop up the mess and not to wake up the kids.

A Good Story, What’s The Punchline?

  • Punchline#1 is I owe a big thank you to The Missus who stayed home and had to deal with the aftermath of the plumber snaking the drain with nary a gripe.
  • Punchline#2 is I spent part of the morning drinking large amounts of espresso to stave off my exhaustion and considering what the plumber was doing.

Our plumber explained to The Missus after snaking out some pretty vile stuff from the drains that due to a mind-bogglingly stupid construction flaw the drainpipe runs uphill and will block about once every couple of years.

The first conclusion is: Doh! Whatever happened to building to spec? More importantly, whoever defined the spec took for granted that the “implementation team” knew water cannot drain uphill for long without depositing the waste it carries. Having background (but basic) technical knowledge related to one’s job description is often taken for granted, but this is a secondary point.

Everyone who tests software (client, web or mobile apps) knows to test functionality and I’m not talking about UI element verification or basic behavioral responses.

A second and slightly more intricate layer of testing is applying tests to business logic; the tester needs to get into the head of the different people who will use the product under test and understand the hows and whys of their use cases.

The Warptest POV

What occurred to me thinking about all this was that this was an accurate description of a third layer of testing, test the unimpeded flow of data from entry to exit point of each process. In an allegorical nutshell “no blocked pipes”.

Most if not all apps we encounter are data driven, whether it’s a Checkin, a Status, a post, a purchase or something else. Whatever the case, data is valuable, and as such our testing needs to reflect how it moves, behaves and is handled. The best methodology to apply is subject to a variety of factors, but that’s for a different post.
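The “no blocked pipes” layer can be sketched as pushing a known record through every stage of a processing flow and asserting it arrives at the exit point intact, not silently dropped or mangled along the way. The pipeline stages below are invented for illustration:

```python
# Trace one record from entry to exit of a (made-up) pipeline.

def validate(record):
    if "user" not in record:
        raise ValueError("dropped: missing user")
    return record

def enrich(record):
    return {**record, "source": "mobile"}

def store(record, sink):
    sink.append(record)
    return record

def run_pipeline(record, sink):
    return store(enrich(validate(record)), sink)

sink = []
entry = {"user": "bob", "status": "Testing my pipes"}
exit_record = run_pipeline(entry, sink)

# The entry data must still be present and unchanged at the exit point.
assert all(exit_record[k] == v for k, v in entry.items())
print(len(sink), exit_record["source"])  # 1 mobile
```

The same end-to-end assertion applies whether the “pipe” is a form submission, a purchase or a status update.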

I’m sure some of you are struggling not to yell out the other various types of testing I haven’t mentioned here. I’m not ignoring them nor do I when testing.

Thanks to my plumber I was able to visualize and grasp this idea properly.

So the next time you are writing your test cases keep your plumber in mind, and if you need a good plumber and live nearby, contact me; I have a strong recommendation for ours.

angry tester

Software Testing…

…can often be perceived as much an art as a science, but there are certain patterns and phrases that reproduce from project to project: –

Image thanks to Office365.

  1. This bug doesn’t reproduce in my Development Environment. The flip side of this is, “It’s a known bug”: when either of these are uttered you can pretty much assume a unicorn just died… horribly.
  2. Automated testing will solve all our testing issues and find all the bugs.
  3. We have a zero tolerance attitude to bugs.
  4. Unit testing? We don’t need that here, it only slows things down.
  5. Who is Manuel Testing?
  6. Can you please stop talking to the developers about bugs? It stops them working.
  7. We invented Agile but we have a unique blend of it with Waterfall and bits of CMMI.
  8. No, no, no! I’m telling you. It’s a feature not a bug.
  9. Why do you need to understand how the customer will use the software? You’re Testing not Product or Sales.
  10. We maintain a proprietary Bug Tracking System we built on top of Excel.

But my all time favorite for real world testing albeit not software testing is:

Here’s your bulletproof vest. If it doesn’t work then bring it back and we’ll exchange it.

Think … about … it. (This really happened to me).

The Warptest POV

Whilst these are amusing to encounter they are also warning signs, particularly for the Software Testing professional who works on projects. It is vitally important to be aware of the work culture, of management buy-in / comprehension of what Testing entails, of why a specific tool or methodology is being recommended and of whether all your efforts will result in a positive change or not.

When I was doing my certification as a Scrum Master the group were asked the best response to this scenario:

Your team are working successfully and to schedule based on their commitments. A manager walks in and takes two members of your team away mid-sprint. When you discuss changes to the schedule due to loss of resources the answer you get is, “We expect you to cope and meet the deadlines regardless”.

Many of the class came up with creative responses involving negotiation, refactoring schedules etc. but the Agile Coach giving the class responded,

“You are all wrong. You update your resume.”

There are software testing jobs like any other jobs that you simply shouldn’t take unless you are prepared for failure on the way.

Sometimes it is possible to pull a testing rabbit out of the hat, but it requires a strong team and most likely a high level of collaboration from the Developers you are working with.

Is this something unique to Software Testing or have you encountered this elsewhere?


You Know You Need Testing…

Like it says below, “How do you envision your product success?”

So if your product is in development and you are either uncertain how to give it the testing it deserves or you don’t have the resources then drop me a line.

Don’t let your software, web site, web or mobile app fail for lack of optimal testing.