As IT consultants, we’re often asked to help test software we build or configure or to act as a neutral
third party evaluating software or configurations developed by others. Here are seven of the most common
mistakes we see software executives make and tips for avoiding those pitfalls.
1. SAYING “LET’S SHORTEN TESTING” WHEN A PROJECT IS FALLING BEHIND.
Any time a project schedule is delayed, testing is the first thing people want to shorten to stay on track and
make the launch date. Adding more testers isn't a good choice either, because there are always defects, and the
developers need time to fix them before the testers can re-execute their scripts.
Testing needs to be thorough. If you want to find and fix the bugs, allot enough time to do the job right. Many
variables shift when you move a project schedule, so be careful deciding how to proceed. Give yourself adequate
time to test so your customers and clients get the best possible product or service you can deliver.
2. THE BALANCE BETWEEN YOUR MANUAL AND AUTOMATED TESTING IS OFF.
Some companies think automation is a silver bullet, eliminating the need for manual testing. But there's a lot
you can't automate and plenty more that doesn't make sense to automate because the setup is too complicated or
too expensive to maintain. Front-end versus back-end testing should also factor into how you split the work
between automation and manual testing. I've seen companies spend more time maintaining an automation suite than
they would have spent testing manually. Develop a strategy first and aim to strike a balance between automated
and manual testing.
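As a rough illustration (the login() function below is a hypothetical stand-in for your own application code, not anything from a specific product), the kind of check worth automating is a stable, repetitive rule that needs no human judgment, while look-and-feel questions stay with your manual testers:

```python
# A minimal sketch of a check that is cheap to automate: a stable,
# repeatable rule that never needs human judgment. Run with pytest.

def login(username: str, password: str) -> bool:
    """Hypothetical application logic, included only so the example runs."""
    return username == "demo" and password == "s3cret"


def test_valid_credentials_are_accepted():
    assert login("demo", "s3cret") is True


def test_wrong_password_is_rejected():
    assert login("demo", "wrong") is False

# What stays manual: does the login page look right on a phone, is the
# error message understandable, does the flow feel slow? Those judgments
# are hard to automate and expensive to maintain.
```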
3. YOU DON’T TEACH YOUR TESTERS YOUR BUSINESS.
The goal of the testers is to find defects before the business testers find them during user acceptance testing
(or once you’re in production). If the testers don’t know your business, they won’t function like a true user,
and your test results will be skewed. When your testers know your business, you get more accurate results,
uncover more potential problems, and allow your actual business testers to get back to helping run your company
versus troubleshooting.
Invest in high-quality testers, teach them your business, and let them execute test scripts as if they are
business users. If money is an issue, you can leverage offshore talent as long as you know what you’re doing.
4. SAYING “CHEAP” AND “TESTING” IN THE SAME SENTENCE.
It floors me when companies hire testers based on price rather than capabilities. I once had a client move from
one testing company to another to save $1 an hour, even though the first company was better. Hiring cheap
testers is like putting cheap tires on your car: the difference between poor tires and good tires could be the
difference between having an accident and avoiding one.
Testing is the last stop before heading into production. Hiring testers based on price may save money in the
short term, but you run the risk of the software going to production and failing. Repairing the damage to your
product's and your company's reputation can cost far more than you saved with the cheap testers.
5. THE BALANCE BETWEEN RECORDING TEST CASES AND TRUSTING TESTERS IS OFF.
Recording your testers' keystrokes can be time-consuming and expensive, but it's the best way to prove a test
case actually passed. After an audit or a test failure, it's great to be able to pull up a recording of the
tester running the test script and check whether they followed the right steps. One caveat: you can end up with
hundreds or even thousands of recordings. Create a naming convention everyone understands so you can quickly
locate the recordings you need.
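As one possible illustration, with fields and format that are assumptions rather than any standard, a convention might encode the release, module, test case ID, date, and result so a recording can be found without opening it:

```python
from datetime import date

def recording_name(release: str, module: str, case_id: str, result: str) -> str:
    """Build a recording filename from a simple, searchable convention.

    Fields and their order are illustrative; use whatever your team agrees on.
    Example: "R2024.2_invoicing_TC-0142_<today's date>_PASS.mp4"
    """
    return f"{release}_{module}_{case_id}_{date.today().isoformat()}_{result}.mp4"

print(recording_name("R2024.2", "invoicing", "TC-0142", "PASS"))
```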
The alternative is to skip recordings, put your trust solely in your testers, and assume everyone is honest and
good.
The middle-of-the-road solution is to categorize the test scripts as “to be recorded” versus “not.”
That saves time and money while giving you the comfort of knowing your testing was done correctly and was
successful.
6. SKIPPING PARALLEL TESTING.
When you’re doing a large-scale upgrade or installation, parallel testing uncovers flaws and provides system
verification against production. Done properly, parallel testing can be easy and produce results against all
your production systems.
Your approach needs to be well thought out, setup is critical, and the right tools make an enormous difference.
The value of parallel testing led 1Rivet to create 1DataServices, a product that automates parallel testing
across newly changed systems and production systems, eliminating the need for manual validation of both.
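1DataServices' internals aren't described here, so the sketch below is only a generic illustration of the idea behind parallel testing: feed the same inputs to the production logic and the newly changed logic, then report every case where the results disagree. All function names and data are invented for illustration.

```python
# A rough sketch of parallel testing: run the same inputs through the
# current production logic and the newly changed logic, then report any
# records where the two disagree.

def production_invoice_total(order: dict) -> float:
    # Stand-in for the existing production calculation.
    return round(order["qty"] * order["unit_price"], 2)

def candidate_invoice_total(order: dict) -> float:
    # Stand-in for the upgraded or reconfigured calculation under test.
    return round(order["qty"] * order["unit_price"] * (1 - order.get("discount", 0)), 2)

def compare(orders: list[dict]) -> list[str]:
    """Describe every order where the two systems produce different results."""
    mismatches = []
    for order in orders:
        old, new = production_invoice_total(order), candidate_invoice_total(order)
        if old != new:
            mismatches.append(f"order {order['id']}: production={old} candidate={new}")
    return mismatches

orders = [
    {"id": "A-100", "qty": 3, "unit_price": 19.99},
    {"id": "A-101", "qty": 1, "unit_price": 250.00, "discount": 0.10},
]
for line in compare(orders):
    print(line)
```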
7. ONLY TAKING THE HAPPY PATH RATHER THAN TRYING TO BREAK THE PRODUCT.
The more users you plan to have, the more likely it is one of them will break your application.
The goal of testing is to find problems with your software based on the expected path of your users, as well as
any negative paths they might stumble down (otherwise known as negative testing). After you test for what a
normal user would do, test for what a normal user should not do.
Can you enter numbers in a field where you should be entering letters? Can you send negative invoices to
customers? Can you click "approved" in a workflow before it should be approved? Ensure your company has the
right level of negative test cases; that full inventory of test scripts is what gives your testing complete
coverage.
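As a hedged sketch of what those negative cases might look like in practice (validate_customer_name(), validate_invoice_amount(), and approve() are hypothetical stand-ins for your own application code, not a real library), each test asserts that the system refuses something a normal user should never be able to do:

```python
import pytest

# Minimal negative tests mirroring the three questions above. Run with pytest.

def validate_customer_name(raw: str) -> str:
    """Reject numbers in a field that should contain letters."""
    if any(ch.isdigit() for ch in raw):
        raise ValueError("name may not contain numbers")
    return raw.strip()

def validate_invoice_amount(amount: float) -> float:
    """Reject negative invoice amounts."""
    if amount < 0:
        raise ValueError("invoice amount may not be negative")
    return amount

def approve(completed_steps: set) -> None:
    """Refuse approval before the prerequisite review step is complete."""
    if "review" not in completed_steps:
        raise PermissionError("cannot approve before review is complete")

def test_numbers_are_rejected_in_a_letters_only_field():
    with pytest.raises(ValueError):
        validate_customer_name("J0hn Sm1th")

def test_negative_invoices_cannot_be_sent():
    with pytest.raises(ValueError):
        validate_invoice_amount(-125.00)

def test_approval_is_blocked_before_review():
    with pytest.raises(PermissionError):
        approve(completed_steps=set())
```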