
All posts in Testing

Automated QA Yada Yada Yada

Another year of recurring discussions online about how the silver bullet of automated QA is killing manual QA. This ongoing trend has had several impacts on the market, not all of them positive.

The Impact of Automated QA

1. Craftsmanship is dying, not manual testing. Do you know even one person who is a craftsman in anything, or have they all been replaced by mass industrialized processes? This is progress, you say?

Junior testers feel driven to dive head-first into automated QA before they have learnt the craft of testing. They see automation as the end product, not as a means of implementing the tests and methodology they never had a chance to learn. So what do they spend their time learning: how the product works under the hood, or the intricacies of their chosen automated QA framework?

2. QA Managers are super-powered, high-octane ninjas, but at a high cost. If you are a QA Manager then you had best be a full-stack test professional who can manage a team, function as a mentor, be an oracle for the product technology, generate dashboards, and manage DevOps / ALM tools, but also dedicate yourself to hands-on implementation of automated QA.

Why? Because QA professionals have sold companies on the idea that it is possible, and even optimal, for one person to do all of this without compromising the quality (ironic much?) of their management. QA Managers are being forced to sacrifice their strategic responsibilities for tactical operations, and for what? Burn rate, or simply because this idea stands unopposed?

By analogy, when was the last time your Development Manager spent a meaningful portion of their work day writing code?

3. Don’t shoot the messenger. Did we forget that the essence of our job is to report our findings? Often we report defects and the overall quality of a build, but we also raise red flags. We provide preventive treatment for issues before they reach production, but if our underlying methodology is flawed, where does that leave us?

When the silver bullet doesn’t live up to expectations, the messenger is not always viewed favorably.

The Warptest POV

We frequently cite Michael Bolton on his philosophy regarding testing vs checking and where automated QA fits into this. Here is one of his tweets, yet do we really work this way or evangelize this philosophy to our companies? It doesn’t seem so.

Recently, Elon Musk was asked about delivery problems at his company Tesla, and his answer was that he had relied too much on automation. If Elon Musk can admit that automation is not the magical solution (read the tweet below), then isn’t it time we considered the same?

Elon Musk’s admission has far wider implications than simply acknowledging over-reliance on automation: Musk admits that the role of human workers in successful delivery is underrated.

How do we refactor our methodology to redeem the human role in testing and QA?

In a nutshell, the perceptual bias towards automated QA needs a stake driven thru its heart.

Whilst I have used existing terminology and semantics to avoid confusion (manual vs automated QA, testing, etc.), this is clearly a major reason why Michael Bolton goes to great lengths to use accurate and appropriate terminology (again, see his tweet above and many of his other posts and tweets for more).

In fact, there are technologies that cannot be tested with automated QA because the existing frameworks aren’t mature enough to provide a solution. We can create partial solutions, but we run the risk of becoming over-enamored with the solution as a product instead of executing our tests on the actual product.

If we can get past this misconception about how we test, then maybe we can get to a meaningful discussion about how to reframe the optimal use and understanding of automated QA.

Two ideas I discussed today in a Facebook group for Israeli testers (in Hebrew) were:

  • Test planning and design is an umbrella that provides coverage for your tests:

automated QA - umbrella

  • Testing debt – testing should be agile, but not just in the sense of testers in the agile team, and not just in the sense of testing as part of the sprint. Every tester, regardless of the tools they use, must be aware of how they and their tests integrate into the testing process.

It’s 2018. Are you ready to focus on the optimal and correct way to use automated QA?

Boston Dynamics Can Solve Autonomous Vehicles’ Biggest Problem

The Boston Dynamics robots are successors to Robby the Robot, and they are not cute; they are often just a bit worrying to see in action.


Visions of robot uprisings, SkyNet and Terminators aside, these robots are astounding and can fill many roles that will complement humans or protect them. The US Navy announced SAFFIR, their firefighting robot prototype, in 2015. The idea is that SAFFIR can go into enclosed, smoke-filled spaces aboard ships and fight fires instead of risking sailors.

BigDog, and its successor the LS3, were DARPA-funded robots designed to carry heavy loads in the field, accompanying soldiers or Marines.

We have military applications of cutting-edge robotics, and yes, these robots make a lot of people uncomfortable, but military technology (when not weaponized) often migrates into the civilian market. How can these robots play a meaningful role in civvy street?

What’s the problem?

Yesterday, I posted about the first autonomous vehicle related fatality and how technological disruption without ethical exploration of its impact could be our Frankenstein’s Monster. In a related Facebook discussion with friends, it occurred to me that calling the phase of releasing these vehicles onto city streets, albeit with human monitors, “testing” is optimistic at best. That said, I have found it very hard to find details online about the end-to-end testing methodology employed. The one source I did find was the California DMV’s regulations for testing autonomous vehicles.

Autonomous vehicle testing - Boston Dynamics

I came away with several big questions:

Testing questions - Boston Dynamics

Besides questions there are some assumptions:

Stages of testing: one can safely assume that software is unit tested by developers and then the autonomous integrated systems are tested through simulators. What other stages of testing are there prior to testing in the real-world?

What is being tested: one can assume that the comprehensive list of features that allow an autonomous vehicle to function and interact in real time is under test.

Test-cases: the different scenarios tested will range from functional tests, thru load and stress on the system, to emergency scenarios.

Testing success: what are the pass / fail criteria for approving an autonomous vehicle for general, real-world use? One assumes the tolerance for error is almost zero.

The Warptest POV

Autonomous vehicles certainly fit the description of technological disruption, and their impact on the real world can be wondrous or catastrophic. A lot of which depends on the depth to which they are tested.

Whilst I am certain crash-test dummies were used, as in any automotive testing, this does not deliver the level of testing that IMHO is needed. Boston Dynamics has the solution. Before real-world testing, where human drivers in other cars, bikes and trucks, and pedestrians are all involved, there should be a stage built around a testing environment that replicates the real world; to mitigate the risk to human testers, Boston Dynamics robots would be the test data used to run the different test cases.

The test-cases would have to provide optimal coverage of every conceivable scenario, but that data is waiting to be analyzed and derived by a good data scientist. Every recorded traffic mishap, accident, crime or fatality is a test case waiting to be run, and this can be done in a testing environment that replicates all weather conditions (and other variables). Another layer of testing will have to be the behavioral algorithms that allow autonomous vehicles to make critical decisions. If a vehicle is placed in a no-win scenario where either a passenger or a pedestrian is sure to be hurt or killed, does the vehicle respond as expected, and what is the expected behavior? Is it based on learning or something else?

The good news is that Boston Dynamics, or someone like them, can provide a critical facet of this testing so that spontaneous pedestrian actions can be tested without risk.

SkyNet not - Boston Dynamics

Image via YouTube: with thanks to Terminator 2: Judgment Day.

Instead of being creepy robots that make some think we are one step away from SkyNet, these robots can be our path to safer autonomous vehicles.

Let me know if you think Boston Dynamics can solve this, if these robots creep you out or if you are building your post-SkyNet bunker after seeing the videos above.

That’s Right Unity Is Not Dealing With This Critical Issue

Unity are one of the biggest, if not the biggest, 2D & 3D game engines on the market. The company supports every conceivable mainstream platform, from mobile thru desktop and Web to Virtual / Mixed Reality headsets.

Unity logo

This is the company that Apple & Google speak to when they want to roll out ARKit & ARCore. By the time WWDC or Google’s annual conference rolls around, there is a beta version of Unity waiting in the wings with support for these changes.

Unity - ARCore
Unity - ARKit

If you have used a mainstream VR/MR app on Oculus Rift, HoloLens or HTC Vive, then the likelihood is that it was built in the Unity Editor.

Over the last year, a lot has changed for the company. Each sub-version released into Beta is no longer just bug fixes and iterations on the last major version; instead, these Beta sub-versions contain major new features, and Unity isn’t shy about being transparent with their roadmap.

The big news is that Unity are making major inroads into the movie industry. They teamed with Neill Blomkamp to create the amazing animations for his OATS Studios movie shorts.

Many pundits believe that Unity is preparing for an IPO, given these changes, several major new strategic alliances and changes to the company board.

What Does Unity Need To Fix?

Ultimately, Unity provides a software development platform in their Editor. Code is written in C# and, depending on the platform you are building your game or app for, the code is converted (e.g. to C++ for iOS via IL2CPP). Notwithstanding the Editor’s inability to provide Developers with a mechanism to select all the platforms needed and run a cross-platform build script, this is not the major failing.

Their Editor has a built-in facility for Unit Testing via NUnit as part of the Unity Test Runner.

Unity - Test Runner
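
For context, here is a minimal sketch of what such an Editor-side unit test can look like using the NUnit attributes the Test Runner exposes; the DamageCalculator class and its members are hypothetical, invented purely for illustration:

    using NUnit.Framework;

    // Hypothetical game-logic class, defined here only so the example is self-contained.
    public class DamageCalculator
    {
        public int Health { get; private set; }
        public DamageCalculator(int startingHealth) { Health = startingHealth; }
        public void Apply(int damage) { Health -= damage; }
    }

    public class DamageCalculatorTests
    {
        [Test]
        public void Apply_ReducesHealthByDamageDealt()
        {
            var calculator = new DamageCalculator(startingHealth: 100);

            calculator.Apply(damage: 30);

            // Plain NUnit assertion; the Test Runner discovers and runs [Test] methods in the Editor.
            Assert.AreEqual(70, calculator.Health);
        }
    }

This works for Editor-side logic, but as the next paragraphs explain, nothing comparable exists to drive the built app’s UI on a device.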

QA can preview the scenes in the Editor, but Unity does not support automated UI testing on devices.

Once you have built your apps, you have no way except hands-and-eyeballs testing (more commonly argued over as “manual testing”) to check whether your iOS / Android / WebGL app functions or delivers the UI as expected.

Anyone who works in QA knows that standard practice is to incorporate automated UI tests using frameworks like Selenium for web testing and/or Appium for mobile apps.

These frameworks rely on the ability to recognize and map the UI elements and objects in the app UI, but Unity apps are a black box as far as Selenium or Appium are concerned. If you can’t map the UI elements and objects, then you can’t script clicks, swipes, text inputs or other simulations of real user behavior.
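
To make the black-box point concrete, here is a rough Selenium WebDriver sketch in C# (the URL and element ID are placeholders I invented): against an ordinary web page the driver can map and click elements, but a Unity WebGL build typically renders the whole UI into a single canvas, so there is nothing for the driver to find.

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    class UnityCanvasProbe
    {
        static void Main()
        {
            using (IWebDriver driver = new ChromeDriver())
            {
                // Placeholder URL for a Unity WebGL build.
                driver.Navigate().GoToUrl("https://example.com/my-unity-webgl-game");

                // The entire game is drawn into one canvas element...
                IWebElement canvas = driver.FindElement(By.TagName("canvas"));

                // ...so a lookup for an in-game button (placeholder ID) comes back empty:
                // the UI exists only as pixels painted by Unity, not as DOM elements.
                var buttons = driver.FindElements(By.Id("start-button"));

                Console.WriteLine($"Canvas displayed: {canvas.Displayed}, mappable buttons found: {buttons.Count}");
            }
        }
    }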

This leaves game and app makers with 3 alternatives:

Unity - Testing

Manual testing alone is labor-intensive, time-consuming and repetitive. Cost aside, it depends on skilled testers with the ability to catch and report bugs.

Customer testing is an oxymoron and often a disaster waiting to happen, and yet some companies have no issue releasing their applications to their customers after catching only the critical issues.

Crowdsource testing is a good interim solution for companies lacking the testing personnel: you pay a 3rd party crowdsource company to deliver the warm bodies needed to test on their personal devices for what amounts to first-to-find bug bounties.

The Warptest POV

Over the last year, I, along with one of my QA Engineers, tested several so-called automated testing solutions for Unity apps. Most didn’t make it out of the starting gate. Others showed early promise but needed extensive investment in development and testing to be anything more than a proof of concept.

All you need to do is search Unity’s Community Forums to see that this is in high demand. Many companies and Unity personnel I spoke to online were interested to hear what we had discovered, but if Unity want their community of 2D / 3D game, application and movie animation makers to deliver robust, well-tested products, then automated UI testing needs to happen.

Unity - GDC

Today at 6.30pm Pacific Time, GDC, the Game Developers Conference kicks off. Don’t disappoint me Unity.

UPDATE

This is Unity’s summary blog post of their keynote at GDC. Color me disappointed: no mention of automated UI testing. I get it, automated testing in VR is a big challenge, but between doing nothing and at least supporting web / mobile automated testing on device, the choice is simple. FWIW, if I had to choose between Unity and a platform that supports automated testing, the choice would be just as simple.

This is me throwing down the gauntlet Unity.

I had my first cup of coffee when I was 25 and that was it.

It was a cold rainy day, early morning, in the desert. I was on a training exercise with the Army and we had stopped our jeep for a break. One of the guys fished out a small gas stove, a tin pot and made Turkish Coffee with cardamom. He offered me a small glass full of coffee and a heaped spoon of sugar and I took my first sip. The rest as they say, is history.

As a Manchester boy, I grew up in a house where a nice hot cuppa tea was the staple. Usually PG Tips. Coffee in the 70’s, 80’s and even 90’s in England was Nescafe if you were lucky, and had no attraction at all.

After tasting my first strong, black, rich Turkish coffee I knew I needed to try more real coffee, and nothing with foam, frothed milk, syrups, flavourings; just shots of the good stuff. I tried espresso and I was totally hooked. Suddenly I was in a meaningful relationship with ground, brewed beans.

Luckily I lived in Israel, a country which takes its coffee seriously. This may be one of the few issues the whole Middle East can agree on.

Over the last few months I’ve graduated from grinding store-bought coffee beans to getting interested in home roasting.

Home roasted coffee - software tester 1

Software Testing and Coffee Roasting?

As a software tester, I approach new projects with research: online and word of mouth. I discovered that for the “hobbyist” the best start is either to use a pan on the gas or, better, a popcorn popper. As I’ve written in the past, testing is improved when it becomes like kata.

Of course, the beans are everything. I planned the following:

Keep a note of all tests and test results: I used Microsoft Office for this (see the table below)

 

  1. Make a list of available green (unroasted) beans
  2. Test the quantity of beans in the popcorn popper that produce optimum results
  3. Make sure all beans are bought equally fresh (as much as you can) and stored the same way. Fresh = flavor.
  4. Define optimum results: evenly roasted, the coffee bean oil still present on the beans, no burnt taste. All beans ground for 11 seconds in the same Bosch coffee grinder.

The popcorn popper has a functional constraint: after 3 minutes, or if overloaded, it would overheat and shut down until it cooled off.

Bean       | 2:00 min, 75 g | 2:30 min, 75 g | 3:00 min, 75 g | 2:00 min, 150 g | 2:30 min, 150 g | 3:00 min, 150 g
-----------|----------------|----------------|----------------|-----------------|-----------------|----------------
Kenya AA   |                |                |                |                 |                 |
Sumatra    |                |                |                |                 |                 |
Costa Rica |                |                |                |                 |                 |
Colombia   |                |                |                |                 |                 |
Brazil     |                |                |                |                 |                 |
Ethiopia   |                |                |                |                 |                 |

Why do I mention these constraints? The last time I roasted I was in a hurry and overloaded the popcorn popper. It subsequently shut off to cool down at 1:45 min. The beans were under-roasted, so I siphoned off half into my cast iron skillet, turned on the gas and roasted that half in the skillet for another minute, and roasted the rest in the popcorn popper once it had cooled down and would restart.

The Warptest POV

If the popper is science, using the skillet is an art. You are roasting the curve of the bean against the flat skillet. It heats up to a higher temperature and roasts quicker. You need to keep the beans moving and flip them over to get an even roast.


By comparison, using the skillet gave better results. You can see exactly what’s happening in the skillet whereas the popcorn popper has a translucent, orange cover.

As for the beans, I got a better espresso from the Kenya AA but, that’s always been my favorite. Family and friends have been treated to espressos, cappuccinos, iced coffees and the ubiquitous Israeli Hafuch when visiting.

My plan is to finish the Sumatra and order Puerto Rican or Colombian green beans next and keep on testing. One thing, home roasting is seductive in its own way. I’ve found myself on Amazon and specialty coffee sites absentmindedly pondering 5kg bean roasters and bulk coffee grinders.

When I find my perfect roast I’ll be sure to let you know.

The World of Testers Has Something to Learn from James Bond…

CAUTION: SPOILERS ahoy. If you haven’t seen SPECTRE yet, you may not want to read this post.

It’s that time of year when we roll out the same tired, old arguments:

  • The Agile purists try to drive a stake thru the role of QA Manager.
  • Outsource companies say having in-house QA is redundant.
  • The Crowdsourcers agree but say crowdsource beats outsource hands down.
  • The Automated Testing purists take potshots at the Manual Testing crowd over the huge investment needed to provide test coverage that their scripts deliver faster.
  • The Manual Testing purists snipe back at Automated Testing over ramp-up time and several other alleged flaws.

Testers Arguing - James Bond

Don’t get me wrong, there is validity to multiple points of view, and the testing industry, like any other, needs to be challenged to grow and evolve, but regurgitation is just that: the absence of new points of view on the same weary subjects.

So, Where Do James Bond and SPECTRE Come Into It?

Here come those SPOILERS… turn back while you still can.

In the new James Bond film, SPECTRE, we find Bond and MI6 assailed by the threat of obsolescence. HUMINT (human-acquired intelligence) has been declared redundant, and a senior Whitehall official, “C”, is pushing for a unified ELINT (electronic intelligence) effort between nine major nations, all under the umbrella of a shiny, hi-tech National Intelligence Center. Obviously, “C” will be the one running this multinational NSA-like organization, and the 00 Section is to be shut down because “C” sees no need for 00 agents in the field when tech can do all the work.

Testers James Bond SPECTRE

Meanwhile, Bond seems to have gone rogue, hunting a shadowy criminal enterprise connected to his past. Faster than you can say “Goodbye, Mister Bond”, we discover this is SPECTRE, and they and their leader, Franz Oberhauser (Bond’s pseudo foster brother), are poised to take control of this unified ELINT center once it goes live.

Oberhauser or (redacted, I’m not going to spoil everything) Blofeld, is a staunch believer that pure ELINT will grant him control over the world.

Nutshell: SPECTRE, Oberhauser and “C” are the purists of automation who advocate the replacement and obsolescence of eyes-on / hands-on testing. Real testers are not needed in their world. ELINT, akin to automated testing, can do it all (which is ironic considering the sheer number of armed henchmen SPECTRE employs, not even counting their assassin du jour, Mr. Hinx).

Bond, M et al. rely on Q to provide their automated solutions but acknowledge the world for what it is. Neither approach alone can get the job done. Only a holistic mix of an agent licensed to kill and tech backup will work, just as only a holistic mix of both testing types will work. However, this is not the crucial lesson testers need to learn from James Bond.

The Warptest POV

Several years ago, I heard a kickass Marketing Professional talk about blogging to early stage Start Ups. The point he made was to blog about your niche, NOT you or your product.

Reading a post on a QA Outsourcing company’s site deriding in-house QA with the conclusion that you are better off taking their services is ridiculous and counter-productive. (You know who you are..)

Sometimes testers are our own worst enemy. These regurgitated arguments don’t benefit us. If there is nothing new to add to these issues, then let them lie.

Instead of demonstrating the ability to evangelize a holistic approach and best practices, and to provide tailored testing solutions to suit each product, this reflects an immaturity in parts of our industry.

We need to do better because at the end of the day it’s all about ROI and demonstrating that testing is a mission critical investment. My hat is off to those testers who share, engage, encourage others and build a sense of community. This is clearly the way forward.

The Art of Software Testing Relies On…

Several critical truths. One of these is, “A bug not reported will never get fixed.”

The corollary of this according to Schroedinger-Murphy is, “This bug will return to bite you in the * at the worst possible time.”

Never Has There Been A Tale Of More Woe…

(Poetic license and changes of name and gender have been used to protect the innocent in this story)

Once upon a time, Bob the tester was working on a testing project with a new feature. Bob was testing this feature which relied on a 3rd party backend service and another 3rd party client plugin.

Bob had tested a prior version and declared the feature working, but in the latest version he found bugs in the UI and functionality.

His manager, Jim, got involved after he heard Bob explain the problems to the Developer, and the Dev and their manager said, “This is an issue with the 3rd party integrations. We can’t do anything.”

Software Testing - We can't fix this

Jim asked Bob one question, “Are the issues documented in our Bug Tracking?”

Bob shook his head and could see Jim was not pleased.

Software Testing - Jim Khaaaan

Image screen captured from a YouTube clip from Star Trek II: The Wrath of Khan

“Bob. I must have said this a hundred times. Dev doesn’t decide if a bug gets reported, bug reporting means all bugs with the appropriate severity, Bob.”

Bob went back to his computer and was about to document the bugs when he said, “Hey Jim should these bugs all be reported as one bug?”

Jim came over and sat down with Bob, drained his coffee and said, “If they are all facets or symptoms of the same bug then maybe but ask yourself this Bob. If a Developer marks the bug fixed and you have multiple issues in there, how do you know which are fixed? More to the point, if some of these issues aren’t fixed what status does the bug acquire?”

Bob thought about it for a few seconds, grinned and told Jim he was going to open a bug for each. Jim slapped him on the back and went back to his desk.

The Warptest POV

Software Testing and Bug Reporting are somewhere between an art and a science. They are rule-based, and if you don’t want to cock things up, these fundamental rules need following.

What happens after you document bugs you discover and allocate the right priority is the next step in delivering a robust, sellable product that makes happy customers.

The basics of Software Testing can be learnt and then the skills and experience acquired through hands-on practice. Luckily, the nature of Software Testing is repetitive like Kata.

So as you sit down with your coffee to test the latest deliverable, make sure you are sharing information with good bug reporting.

Happy Testing.

Bug Reporting Is As Much An Art As A Science

… As a result sometimes running a refresher / brainstorming session on best practices in bug reporting for your team is a must.

As I’ve mentioned in the past, the testers and the person presenting can benefit hugely from the interaction.

The Primer

Embedded here is a primer presentation I use for this refresher on aspects of bug reporting I want my team to focus on:

The Warptest POV

Whether you are working with onsite developers or offshore, the need for sound observation and good bug reporting is critical.

A bug not reported or not reported properly will never get fixed. If your bug reports don’t give objective analysis or stress the severity / cost to the end-users then the bug may never get fixed.

So maximize your testing ROI and make sure every bug discovered and reported gets a fair chance at being fixed.

Do you refresh your bug reporting skills at least once a year?

 

Zombie testing applies…

To a state of mindlessness in the tester where certain scenarios or observations are missed.

Sadly, it can be incredibly infectious in the workplace.

When does this happen?

This can happen for a variety of reasons but let’s examine a special case where names have been changed to protect the innocent and guilty:

Elizabeth Bennet was a testing manager and was discussing web application testing with Darcy and Catherine, established members of her team.

She asked about cross browser compatibility testing and was greeted with a variety of the usual trollish comments about Internet Explorer.

Internet Explorer - zombie testing

One of the many examples of this kind of IE trolling

Elizabeth felt Darcy was especially annoying on the subject but suppressed her irritation and scheduled a test session for the team to test the web app cross browser.

Her suspicions were confirmed when they discovered a variety of hitherto undiscovered defects in IE.

Elizabeth sat with her testing team and asked each tester what web apps they work with and on which browsers. No one on the team actually used IE, and once again Darcy piped up, confidently stating that the “lame” browser had so little market share it wasn’t relevant to test.

George the Product Manager was passing by and heard Darcy. He was quick to jump in and correct this misconception and stated that he was glad Elizabeth had decided to correct this oversight as many of the company’s end-users were IE users.

Bingley the web app Developer was building a new version based on all the new defects found.

Darcy came over to Elizabeth privately to apologize for his mistake and asked for responsibility for testing this in future.

Elizabeth was pleased and agreed but only if they pairwise tested the next version.

The Warptest POV

Testing requires a variety of skills, some of which I’ve addressed in the past but a tester cannot afford to compromise their objectivity by bringing their prejudices into the workplace.

Doing so may leave them open to being infected with the kind of zombie testing mindlessness mentioned in the story above.

pride prejudice and zombie testing

Image with thanks to Amazon.com

In a nutshell, good testing is not zombie testing and it’s worth asking yourself if there is a product or technology you have a blindspot or bias against.

The question you need to ask yourself is: if you don’t use it for your own work, will you even think of including it in your testing efforts?

So check the pride, check the prejudice and stow the zombie testing.

 

Regression Testing…

I recently discovered that some team members needed a basic run-through of the ideas behind this. With this in mind, I used the incredible new Office web application, Sway, to create a presentation I could use to get across the basics.

Sidebar: Sway

Sway is available to anyone with a Microsoft account (which also means you have free Outlook.com, OneDrive and Office.com), and it comes with a strong Warptest recommendation.

This is not PowerPoint Online (again, you can find that in Office.com). It is clearly still in beta but allows the creation of simple, elegant presentations that can be shared or embedded.

Sway - regression testing 101

If you don’t know about the free, yes FREE version of Office.com then check it out ASAP.

Regression Testing 101

The Sway presentation works better with the accompanying talk but the core ideas behind the Regression Testing 101 talk are:

CIR- regression testing 101

 

The Warptest POV

After running two sessions on regression testing, the issue is refreshed in my mind and clear to the testers who needed the extra information.

Sway makes for an easy to use tool for creating elegant visual presentations online.

Clearly the benefits of creating presentations and training talks are two-fold: for the person giving the information and for the person receiving it.

 

 

Testing Isn’t Always Easy…

Once upon a time, in a testing lab far, far away was a young tester who sat each and every day testing his company’s apps.

(For the sake of argument) let’s call our tester Bill.

Bill was young and relatively new and had been assigned what he thought was the most repetitive and boring of all test plans.

However, Bill was not deterred; each day he would start anew and add to the company bug tracking system every single problem, flaw, defect or bug he could find in function, UI, UX, load, stress or against spec.

Bill’s greatest joy was adding these bugs to the bug tracking system and assigning severity. As Bill was still learning his job, he was concerned that not every bug would be fixed, and so he marked each and every one as “critical”.

Bill Learns A Tough Testing Lesson…

After several days, Bill was drinking his coffee and thinking about how many times the Developers had come running over to talk to him about his bugs and, strangely, how many times they had left with a grumpy look on their faces after claiming the bug was a feature, worked according to spec, wasn’t critical at all or simply only happened under the rarest of conditions.

Bill was a little confused and didn’t really understand the negativity about his bugs or their severity.

Just then, Bill’s boss, the QA Manager walked in, gave him a big smile and sat down with his double espresso opposite Bill.

“So Bill, I hear you’ve been keeping our Developers busy with lots of bugs, right?” Bill’s boss gave him a huge grin.

“I guess so…” Bill replied.

“Well, I wanted to talk to you about the fact that I’m pleased that you are so dedicated and I know the bugs are all important but are they really all critical?”

Bill thought about this for a moment.

“How do I know?”

Bill’s boss sipped his espresso, “Well Bill, ask yourself what does the bug do to the App, to the user or to the system it’s running on. Once you look at the impact you can get a better idea of severity. Do you know why I’m telling you this?”

“Umm” Bill scratched his head, puzzled.

Bill’s boss put his finished espresso down, “If we mark every bug as critical then the Developers won’t take the really critical bugs seriously, because we overused the definition and made them drop new work to fix bugs that could wait. Luckily the Product Owner and I discuss the bugs and he sets priority with the R&D Manager, but we also need to check the spec to be sure whether something is a bug or not… You know the story of the boy who cried wolf, right Bill?”

Bill nodded.

“As you get more experience you’ll learn not to be the boy who cries bug and be more confident about what severity each bug is. Today we are going to test together and see whether we agree on each bug or its severity. Let’s see how the Developers respond to that.”

From that day Bill worked harder than ever to learn what was and wasn’t a bug and to report each bug with the right severity. The QA Manager continued to be happy with Bill’s work, even had Bill train new testers, and the Developers treated each bug Bill reported with seriousness.

… and they all worked happily ever after.

The Warptest POV

Learning how to write bug reports involves more than the uncompromising brevity derived from using Twitter; it also means knowing whether your observation is truly a bug and how to define its severity.

So the next time you are about to hit save on that bug, think of Bill and just review what you are reporting.

(This Grim tale is based on past, real life events. Names have been changed to protect the innocent).

 

What Can Twitter Possibly Teach Testers?

When working with testers who are still learning (all of us, right?), one of the challenges is finding a common language between testers and with Developers. This came up today and sparked this post:

This is especially important when it comes to writing test cases and reporting bugs.

In an age of “You had me at …” it’s crucial to get the point across rapidly and efficiently without compromising the information.

So.. Twitter Huh?

Twitter, with its 140-character restriction on tweets, is a natural teaching tool for anyone who needs to learn the discipline of brevity.

Twitter logo - testers

The thing is when reporting a bug it’s important to remember that you are imparting a story; a story that allows the Developer to know what should be fixed and under what scenario(s).

Imagine an ALM built on Twitter functionality (see the sketch after this list):

  • User stories, spec, test cases and bug reports all limited to 140 characters
  • Hashtags to make all of the above searchable by keywords.
  • Groups and Lists based on teams (e.g. QA, Dev, Product, Sales) and team members with usernames prefixed by the Twitter @.
  • Retweet, favorite or Direct Message (DM) other users.
  • Attachable images and URL shortening.
  • Trending subjects based on traffic within the ALM.
  • Analytics (of course).
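
Purely as a thought experiment, here is a tiny C# sketch of what such a Twitter-style bug record might look like; every name here is invented for illustration:

    using System;
    using System.Linq;

    // Hypothetical 140-character bug report, in the spirit of the list above.
    public class TweetBug
    {
        public const int MaxLength = 140;

        public string Reporter { get; }   // e.g. "@qa_dana"
        public string Summary { get; }    // the whole report, hashtags included

        public TweetBug(string reporter, string summary)
        {
            if (summary.Length > MaxLength)
                throw new ArgumentException("Over 140 characters? It might be more than one issue.");
            Reporter = reporter;
            Summary = summary;
        }

        // Hashtags make the report searchable by keyword, e.g. #login #crash.
        public string[] Hashtags =>
            Summary.Split(' ').Where(w => w.StartsWith("#")).ToArray();
    }

The 140-character guard is the whole point: anything longer probably describes more than one issue, which echoes the rule of thumb further down.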

The Warptest POV

As you read this keep one eye on the Agile Manifesto and see how Twitter Teaches Testers in an Agile manner. If you don’t see it, then it’s time to reread the manifesto:

agile manifesto - testers

The beauty of this method is not just the brevity but the speed at which it allows your testing to progress as everyone gets on board with this manner of communication.

Obviously, there are exceptions: intricate or complex issues that require greater detail. But the rule of thumb is,

“If you can’t sum up your bug or test case in 140 characters then it might be more than one issue.”

Does this speak to you? Feel free to offer your own experiences and ideas on the subject.

 

 

Testing Is Not Just For Amateurs…

… When is the last time you heard someone say,

“I know we’ll do the basic coding but get in someone with no experience to finish our disruptive app for us”?

Right. NEVER.

What Am I Doing Wrong?

Since you asked, here’s a short list to read, digest and nail with a rivet gun to your hiring manager’s table-top:

Testing - 5 wrong things

So what do you do?

The Warptest POV

The value of Beta Testing is in taking the observations you receive and turning them into actionable intelligence for your Developers. Having a tester who adds these observations as bugs in your bug tracking and ensures they aren’t duplicates will make your life a lot easier.

Having a tester to plan and execute your testing in a professional manner will ensure a logical, efficient effort is made to provide optimal test coverage.

As for Cross Browser compatibility testing, I singled it out after reading yet another blog post about Best Cross Browser Testing Tools. Once again it’s Mashable writing an article that implies testing can be performed by any random bunch of dudes.

To their credit, they incorporate two of my favorites: BrowserStack and Browsera. Both are elegant in their simplicity if all you want to do is perform basic visual validation or, in BrowserStack’s case, add a layer of automation thanks to integration with Selenium WebDriver (an automation API to drive the browser natively).

BrowserStack supports 300 or so desktop and mobile browsers. Browsera makes it easy to test sites with logins, compare layouts and find scripting errors.
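
For anyone who hasn’t seen that Selenium layer in action, here is a minimal C# sketch that runs the same basic layout check across two locally installed browsers; the URL and element ID are placeholders, and a service like BrowserStack would simply swap these local drivers for a remote one pointed at its cloud:

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;
    using OpenQA.Selenium.Firefox;

    class CrossBrowserSmokeTest
    {
        static void Main()
        {
            // The same check, repeated per browser; add more drivers as coverage demands.
            IWebDriver[] drivers = { new ChromeDriver(), new FirefoxDriver() };

            foreach (IWebDriver driver in drivers)
            {
                using (driver)
                {
                    driver.Navigate().GoToUrl("https://example.com"); // placeholder URL

                    // Placeholder element ID for a header we expect to render in every browser.
                    IWebElement header = driver.FindElement(By.Id("site-header"));

                    Console.WriteLine($"{driver.GetType().Name}: displayed = {header.Displayed}, " +
                                      $"size = {header.Size.Width}x{header.Size.Height}");
                }
            }
        }
    }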


The big problem with all these blog posts on best tools … for Cross Browser compatibility is that they fail to explain the psychological impact of layout bugs on your users.

Content is king, but if it’s displayed in a messy, visually irritating manner then you have just lost the user you forced to endure those bugs.

All because you didn’t understand the ramifications of not having these tests performed, or didn’t know which platforms / browsers / browser versions are your priority.

Isn’t this a laborious process? If planned and executed efficiently, even by a manual tester, then depending on the complexity of your site or app this shouldn’t be too time-consuming.

So what have we learnt?

  • Don’t expect the same results from using amateurs as using professionals.
  • Use the right tools for the job.
  • Plan the work and work the plan.
  • Gareth Mallory says it best in the latest James Bond movie, Skyfall when addressing 007:

So what are you going to do to ensure your product’s success?