Effective Application
Testing Methodology & Standards
Edward Van Orden,
Dulcian, Inc.
Overview
Effective testing is a critical success
factor for any development effort, large or small. Thorough testing is often
more difficult with a small development team. Both a solid testing methodology
and proper testing techniques are needed to ensure final project success. This
paper outlines a testing methodology tailored to the needs of a small
development team working on large development projects. The following topics
will be discussed:
I. Components of an effective testing
plan
A. Finding and
classifying bugs
Bugs can be grouped into a number of different classifications, as described in the discussion of bug prioritization below.
1. What must be known
to successfully locate bugs?
The most important system aspects for the
tester to be familiar with are the business rules supported by the application.
If the tester isn’t familiar with the requirements that the application is
intended to satisfy, then an accurate round of testing can’t be successfully
accomplished. A complete understanding of Windows screen items is another
crucial factor for effective testing. Knowing what each GUI item type is used
for aids the tester in understanding the application.
2. What are some of
the problem areas where bugs lurk?
The following are situations where bugs are
frequently found:
3. How are bugs
prioritized?
To get a system up and running, it is not
always possible to fix all of the bugs that are found. The first version of the
system is considered complete when all type 1 bugs have been fixed. Type 1 bugs
are bugs that prevent the system from going into production. For example, bugs
that corrupt the data or prevent basic system functionality are type 1. Type 2
bugs represent a failure to meet a system requirement. Examples include
inadequate performance or a bug in a non-critical system feature. Type 3 bugs
represent previously undiscovered system requirements that would significantly
improve the effectiveness of the system. Type 4 bugs are any other desirable
modifications to the system. Whether type 2 bugs are fixed for the first production release is a decision for the project leader. Type 3 and 4 bug fixes are typically deferred to later versions of the
system.
B. GUI design standards
A complete discussion of GUI design standards
would fill a book. However, the following paragraphs present some basic
principles and strategies relative to GUI standards to keep in mind when
testing applications. There is no industry standard for GUI development.
Different companies have radically different standards for how screens should look. Thinking through how screens should be created helps avoid fundamental differences in the way screens look within the same application. In the old
character-based environment, screens within the same company usually had a
reasonably consistent look and feel. With GUI applications, organizations often
have many applications that are grossly dissimilar. Look and feel standards
should be determined for each type of application.
All books about GUI design recommend keeping
the number of colors and fonts to a minimum. Many hours can be wasted thinking
about appropriate fonts and colors. The best the developer can hope for is that
no one hates the choices. It often pays to make boring selections, since
different machines with different video boards (or different browsers for
web-enabled applications) display things differently. The monitor used also affects
the display. The developer should create samples and test them on the client
machines with the users.
The following design elements are important
in the testing process:
1. Why are GUI design
standards important to the tester?
Every time users interact with software, they
are interacting with the GUI designs. Eventually, the user will notice even the
smallest inconsistencies within the system. When a client notices that their
production applications aren’t internally consistent, it makes the people who
created them appear incompetent. Thus, it is important for client satisfaction
that testers check carefully for uniformity of GUI standards across
applications. Also, users are likely to use core applications for hours at a single sitting, so uniform standards make applications more user-friendly. Consistent GUI design standards are an integral part of some applications. For example, if the application requires a master-detail relationship and the corresponding GUI notation isn’t present, that is a bug. However, if the tester doesn’t know that a GUI standard is designated for this purpose, the bug will be overlooked.
2. What GUI design
standards should the tester look for?
A good tester should look for the following
elements in the applications being tested:
C. Understanding
Application Functionality
There are numerous benefits realized when the
tester has a complete understanding of the application’s functionality and
purpose. For example, a time-consuming problem can arise when the tester doesn’t understand application functionality and the developer isn’t available to answer questions. More often than not, the tester will guess at the intended functionality or spend an inordinate amount of time trying to figure out what the application is supposed to do. This scenario is counter-productive. The
tester must be able to get substantive answers to all of his/her questions when
testing of any given application begins. A session should be scheduled where
the tester can ask the developers questions about the application(s). Also, the
developer should include all relevant information in the description sheet that
accompanies the application. A thorough understanding of application
functionality also enables the tester to learn how and why applications perform
the way they do, making future testing sessions easier. This is because many
systems have similar requirements and common business rules that applications
are designed to support.
The tester must check to see that all core
application functionality is working properly and that all declared business
rules are supported by that application as defined in its corresponding
description sheet. The tester should also go out of his/her way to tweak the application in order to make it break. The idea is to make the module produce errors so that every possible instance of any type of bug can be identified and subsequently fixed. A good way to implement this strategy is to push the application beyond normal operations by attempting to use it in ways that were not originally intended.
If the tester or the developer doesn’t find these subtle bugs, the user will.
II. Levels of testing
You must perform unit-level tests application
by application. The testing process should be meticulous, using test data,
automated testing scripts, code walkthroughs, and interviews with the system
developer(s).
There are three levels of testing, differentiated by complexity: low, middle, and advanced. The levels are distinguished because, as testing complexity increases, so does the technical knowledge required. For instance, in order to verify correct locator results, the tester must be reasonably competent with Structured Query Language (SQL). A locator within a form lets the user enter multiple types of search criteria to find the parent records that the application supports. This is accomplished by dynamically building a WHERE clause for the result block based upon the data chosen as search constraints. The tester is required to duplicate the locator query in Oracle SQL*Plus or SQL Navigator in order to guarantee that the data returned from a query is correct and valid.
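For illustration, suppose the locator lets the user search for customers by partial name and/or city. (The CUSTOMER table and its columns here are hypothetical; the actual tables and criteria come from the application under test.) If the user enters both criteria, the tester would reconstruct the equivalent query in SQL*Plus and compare its results with what the form returns:

    SELECT cust_id, cust_name, city, status
      FROM customer                                -- hypothetical table
     WHERE UPPER(cust_name) LIKE UPPER('Smith%')   -- name criterion entered by the user
       AND city = 'TRENTON'                        -- city criterion entered by the user
     ORDER BY cust_name;

If a criterion is left blank, the corresponding condition is simply dropped from the WHERE clause, and the tester should verify that case as well.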
A. Low Level Testing
An illustration of low level testing is
ensuring that tab orders are logical and correct in both forward and reverse
orientations. Other examples of low level testing include making sure that the header field at the top of the module stays current when Update and Insert functions are performed on its attached block, and ensuring that the module’s "Back" button returns to the system’s main menu without producing errors. GUI design flaws are often discovered during low level
testing.
B. Middle Level Testing
An instance of middle level testing is checking that inactive reference table records don’t show up in other areas of the system. Another case of middle level testing involves a recursive block structure on a scrolling canvas in Forms. The top-level parent block is intended to contain only records where the recursive foreign key is null. This restriction needs to be placed in the block’s WHERE Clause property in the Property Palette. If the clause is omitted, the parent block doesn’t perform properly; however, this bug doesn’t prevent a module from going into the first production release of the system it is part of.
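As a sketch of what the tester is checking for, suppose the recursive structure is built on a hypothetical ORG_UNIT table in which each record points to its parent through a nullable PARENT_ORG_ID column (the names are illustrative only). The block’s WHERE Clause property would carry the restriction PARENT_ORG_ID IS NULL, and the expected contents of the parent block can be confirmed in SQL*Plus:

    SELECT org_id, org_name
      FROM org_unit                -- hypothetical self-referencing table
     WHERE parent_org_id IS NULL   -- only root records belong in the parent block
     ORDER BY org_name;

If the block displays records that this query does not return, the WHERE Clause property has probably been omitted or mistyped.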
C. Advanced Level
Testing
Advanced level testing includes tasks such as ensuring that the module’s locator driving query is logically sound and correct. An often-overlooked error is forgetting to place an outer join on the join condition for nullable foreign key columns of the table being queried. This and any other complex query need to be thoroughly inspected by the tester to confirm that they are correct.
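For example, suppose the locator drives off a hypothetical ORDERS table with a nullable REP_ID foreign key to SALES_REP (both names invented for illustration). Without the outer join marker on the nullable side of the join, orders that have no sales rep silently drop out of the result set:

    SELECT o.order_id, o.order_date, r.rep_name
      FROM orders o, sales_rep r
     WHERE o.rep_id = r.rep_id (+)   -- outer join keeps orders whose rep_id is null
     ORDER BY o.order_id;

The tester can compare the row count of this query against SELECT COUNT(*) FROM orders to confirm that no parent rows are being lost.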
Another type of advanced level testing involves a running total. The running total has been one of my most challenging structures to code. This type of program needs to be thoroughly tested and tweaked to make sure that the proper calculations are performed when a value is entered or changed, when a new record is added, or when a record is deleted. The running total procedure needs to be called from many different triggers in order to keep the total amount current in the module at all times.
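A minimal sketch of such a routine is shown below, assuming a form with a detail block named LINE_ITEMS, an AMOUNT item, and a control item CTRL.TOTAL_AMOUNT; all of these names are hypothetical. One way to keep the total current is to adjust it incrementally from every trigger that changes a detail amount:

    PROCEDURE adjust_running_total (p_old_amount IN NUMBER,
                                    p_new_amount IN NUMBER) IS
    BEGIN
       -- Subtract the value being replaced and add the new one so the
       -- control-block total always reflects the current detail rows.
       :CTRL.TOTAL_AMOUNT := NVL(:CTRL.TOTAL_AMOUNT, 0)
                             - NVL(p_old_amount, 0)
                             + NVL(p_new_amount, 0);
    END;

The procedure would be called, for example, when an AMOUNT value is validated (old value versus new value) and when a record is deleted (its amount versus zero). The tester’s job is to insert, change, and delete records in every combination and confirm that the displayed total always matches a SUM over the detail rows.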
The previous examples are a few of the
important items that a software tester needs to be concerned with.
III. The Application Testing Process
Testing is one of the most important but
usually most poorly conducted phases in the system design process. The key to
proper testing is to use multiple tests. No single test, no matter how
carefully conducted, will find all of the errors in a system. It is better to perform several different, less exhaustive tests; together they usually catch more errors at less cost to the organization. Testing can be done by both technical and
non-technical people. In fact, non-technical users can be very effective
testers. In some organizations the Quality team may participate in the Testing
phase. One of the alternatives available in the Test phase is to employ users
as testers. This is often appropriate for small systems if the development team
has a close enough relationship with the new system users. This can also be
done if other testers are unavailable. However, some users may not test as
thoroughly as dedicated testers. What is required are individuals who will click on every button, try every function and combination of functions, and find as many bugs as possible. Leaving testing solely to users means that bugs might not show up for weeks or even months, perhaps long after the people who developed the code are gone. There are also automated testing tools that can help with many aspects of the testing process.
1. What role should
developers play in a successful testing program?
In order to implement a successful testing
strategy, developers need to perform certain tasks before passing applications
to the tester(s). First, the developer should include full comments throughout
the code attached to triggers as well as a descriptive paragraph at the top of
each package, procedure and function which precisely explains that object’s
purpose in the overall application. Each application should be accompanied by a
thorough explanation of its functionality, features and any other pertinent
information. Each developer is responsible for performing his/her own unit
testing. For example, every IF condition and LOOP in the application should be
thoroughly tested by the developer before the application is handed off to the
tester. The developer is responsible for testing all queries contained within a
particular application. Each individual bug should be given a detailed description and placed on its own bug log sheet. These bugs should be addressed one by one, because logic errors can become very time-consuming to fix. The idea is to keep open logic errors to an absolute minimum. All of these testing operations performed by developers can be referred to as application internal testing.
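As a hedged illustration of this kind of application internal testing, a developer might exercise each branch of a packaged function from SQL*Plus before hand-off. The function CALC_DISCOUNT and the expected values below are purely hypothetical; the point is simply to drive every IF branch with known inputs and flag any mismatch:

    SET SERVEROUTPUT ON
    DECLARE
       PROCEDURE check_result (p_label    IN VARCHAR2,
                               p_actual   IN NUMBER,
                               p_expected IN NUMBER) IS
       BEGIN
          IF p_actual = p_expected THEN
             DBMS_OUTPUT.PUT_LINE(p_label || ': OK');
          ELSE
             DBMS_OUTPUT.PUT_LINE(p_label || ': FAILED (got ' || p_actual ||
                                  ', expected ' || p_expected || ')');
          END IF;
       END;
    BEGIN
       check_result('zero quantity', calc_discount(p_qty => 0),   0);   -- boundary case
       check_result('small order',   calc_discount(p_qty => 5),   0);   -- below discount threshold
       check_result('bulk order',    calc_discount(p_qty => 100), 10);  -- above discount threshold
    END;
    /

The same pattern extends naturally to LOOP boundaries and to the queries contained in the application.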
2. Documenting findings
The tester should always convey any findings
in a concise, clearly written document. In most cases, when the developer goes
to de-bug the application, the tester isn’t present to clarify any ambiguities.
The overall goal is to keep the written bug log document as the sole
communications vehicle between the tester and the developer(s). If testers and developers work in close proximity, it may be tempting to communicate some findings orally. However, these can easily be lost or forgotten, resulting in a flawed final product. When the tester encounters error messages,
he/she should consult the appropriate Messages and Codes guide and write down
the exact explanation of the error message encountered. Bug explanations should
be kept as straightforward as possible without any subjective slant.
At Dulcian, we have developed a precise
methodology for documenting bugs in the applications we develop. First, each bug is logged on an individual log sheet. Second, all log sheets are sorted by module name and inserted into the client notebook under a separate tab entitled "Name of the application_bugs." When the tester fills out fifteen log sheets, these sheets are bundled and handed off to the developer. Fifteen is viewed as a manageable number of log sheets for any one application at one time. At this point, all testing on the application is halted until the listed bugs are remedied. In the meantime, the tester can begin testing another application or move on to other tasks. When those fifteen bugs have been remedied, testing of the application can resume. When the developer declares
that a bug has been fixed, that bug sheet is given back to the tester for a
second round of testing. This is to ensure that each bug has definitely been
eliminated. Any recurring bugs are logged on the bug sheet and handed back to
the developer for correction.
Eventually, the tester will determine that a particular application is ready for production. At this point, the application is handed to the Director of Quality Assurance for the final round of testing. If the QA Director finds bugs or other inconsistencies within an application, he/she writes up a bug sheet for each one and hands those sheets back to the tester, who sends them through the initial testing process; the testing process for that application then starts all over again. The only way an application will be handed to a client and declared production is with a signature of approval from the Quality Assurance Director. After the application has been declared production, the tester
will gather all associated bug sheets and log them into the bug log tracking
system.
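As a minimal sketch of what such a tracking system might store (the table and column names are invented for illustration and are not Dulcian’s actual schema), a single table can capture the contents of each log sheet along with the bug types described earlier:

    CREATE TABLE bug_log (
       bug_id        NUMBER         PRIMARY KEY,
       module_name   VARCHAR2(60)   NOT NULL,            -- application or module tested
       bug_type      NUMBER(1)      NOT NULL
                     CHECK (bug_type IN (1, 2, 3, 4)),   -- priority types 1 through 4
       description   VARCHAR2(2000) NOT NULL,            -- objective description from the log sheet
       logged_by     VARCHAR2(30),                       -- tester who wrote the sheet
       logged_date   DATE           DEFAULT SYSDATE,
       fixed_date    DATE,                               -- null until the developer fixes the bug
       retest_passed VARCHAR2(1)    CHECK (retest_passed IN ('Y', 'N'))
    );

Logging the sheets after production hand-off also gives the team a searchable history of bugs by module and type.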
This testing program is set up so that even a tester with no technical knowledge can successfully test highly sophisticated software modules. With successful implementation of the testing methodology presented here, any small development shop will be able to achieve a higher degree of quality in its software development efforts. An additional benefit is that this methodology reduces the amount of post-production application maintenance.
Conclusions
Application testing is just part of the
overall Test Plan for any system. The integration of the applications must be
clean and seamless. The most complex and failure-prone portions of the
application are the interfaces between parts of the new application and the
existing systems. Even though all modules have passed unit testing, you need to ensure that all of the interfaces are correct. The essence of system testing is not to test individual modules in isolation; instead, entire business transactions must be processed through the system.
The application must also be tested at full
production loads--not just at today's production capacity but also at projected
levels of capacity for the life of the system.
There are many ways of checking the
application portion of the system (that is, the code). The principle behind
good testing is the one auditors use to find errors in large accounting
systems. Many, many tests are run looking for the same errors in different
ways. The logic is that if errors are not caught one way, they will be caught
in another. Therefore, applications can be tested by running lots of little
tests and looking for evidence that shows how well the system is working.
About the Author
Edward Van Orden is a Developer and Director
of Quality Assurance for Dulcian, Inc. He is proficient in the Oracle Developer
2000 and Designer 2000 arenas. He has several years of application testing
experience. This is his first ECO presentation. He can be contacted at evanorden@dulcian.com or through Dulcian’s Website at www.dulcian.com.
© 1999 Dulcian, Inc.