Session Based Test Management Computer Science Essay


Bach has done extensive work developing a management system specifically for Exploratory Testing, called Session-Based Test Management (SBTM), and we will briefly describe some of its highlights here. A key element of SBTM is the session report. A session report gathers basic information about the test session, such as the names of the testers, the title of the test charter, and the date, time, and duration of the session.

In addition, it contains a sentence summarizing the hypothesis and a list of the tests that were performed. One of the most important parts of the session report, however, is a list of open questions and issues that came up during the exploration. One of the goals is to learn about the software, and coming up with a list of questions to take back to the subject matter expert is a great way to learn. Some of those open questions will have simple answers and can be dismissed, but others will turn out to be genuine bugs, which get entered into the bug tracking system.

SBTM also keeps track of certain metrics about the test session. Since testers are encouraged to explore, and exploring testers can easily find themselves off on a tangent, there needs to be a balance between focused work on the test charter and roaming into unknown territory. For example, it is interesting to see what percentage of time is spent on-charter versus off-charter: exploratory managers would like to know how much time was spent pursuing the goals of the test session, as opposed to how much effort went into exploring new areas. Another term for off-charter testing is opportunity testing, because seizing opportunities is the core of exploration.

Experienced exploratory testers have developed a sense of how long to explore in opportunity mode before saying, "Let's take note of this opportunity, since we just found a new test charter for our next test session, and get back to testing our original charter."

Another metric that SBTM tracks is the division of time between Testing, Bug Hunting and Reporting, and Setup.

Testing refers to the tasks done to check the software, both on-charter and as opportunity testing. When done correctly, this will often lead to periods of Bug Hunting, where the tester notices something is wrong with the software and runs on-the-spot experiments to try to reproduce the issue. This is one of the most rewarding aspects of exploratory testing: finding a bug and learning how to reproduce it.

Setup means anything that needs to be done in advance so that testing may continue, including system or network configuration. Often, the setup for tests takes longer than the tests themselves, especially during the first few test sessions of a project.

This information is very interesting to the test manager: it indicates that the product might be in its early stages and that additional test sessions may need to be planned. If a lot more time is spent on Bug Hunting than on testing, you may want to schedule another session with the same charter; alternatively, the charter may have been too broad, in which case you'll want to narrow it down a bit.

The testing mission is the underlying motivation for your testing. To know what your test mission is, you need to provide a clear, articulated, and above all bluntly honest answer to the question, "Why am I testing this?" You may need to drill down a few levels, asking "why" three or four times, in order to get to the real underlying mission. Without a doubt, the most important prerequisite for successful testing is for all the testers on the team not only to know, but to understand and appreciate, the test mission.

Test Charters

ET is a skilled and disciplined approach to testing, and one of the skills exploratory testers master is the ability to manage the scope of testing so that the software is tested in a thorough and appropriate manner. Testers manage the scope of exploratory testing using a concept called a "charter."

A charter is a mission statement consisting of two or three sentences to guide your testing for the next 90 minutes. A charter might say "Analyze the X function. Make note of any risks, claims in the spec, or areas of instability. Be on the lookout for latency when all "Submit" buttons are pressed."

Charters are statements of what aspects of the system are to be tested. Unlike what are called "scripts" in scripted testing, charters do not specify how the system is to be tested, only that some aspect of the system is to be tested. For example, a charter might say "test the login functionality", where a script might say "Type "Administrator" into the User field and "h@XX0rz" into the Password field." A charter leaves the actual steps of the testing up to the skilled and disciplined tester. The reason for this is that such a tester might notice that an extra character at the end of the password still lets the user log on, whereas a scripted test would never expose such an error. The sketch below makes the contrast concrete.
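To illustrate the distinction, here is a minimal, hypothetical sketch in Python. The scripted check pins down exact steps and data, while the charter is just a prompt the tester interprets; the `login_page` object and its methods are placeholders invented for illustration, not a real UI-automation API.

```python
# Hypothetical illustration only: login_page and its methods are placeholders.

# A scripted test encodes exact steps and data:
def scripted_login_test(login_page):
    login_page.type("User", "Administrator")   # exact field and value
    login_page.type("Password", "h@XX0rz")     # exact password from the script
    login_page.click("Log in")
    assert login_page.is_logged_in()           # single pass/fail check

# A charter only states *what* to examine; the tester decides *how*:
LOGIN_CHARTER = (
    "Test the login functionality. "
    "Vary credentials, trailing characters, and field lengths; "
    "note any risks, spec claims, or areas of instability."
)
```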

Designing charters for exploratory testing is one of the most difficult aspects of the work. It is hard to know how much testing is enough, or what aspects of the system need more coverage, or how long a tester should spend examining any particular aspect of the system. However, agile software projects often produce exactly the charters that an exploratory tester needs to work.

The charter is designed to be open-ended and inclusive, prompting the tester to explore the application and affording opportunities for variation. Charters are not meant to be comprehensive descriptions of what should be done, but the total set of charters for the entire project should include everything that is reasonably testable.

Test charters might be feature-driven, component-driven, or test-driven. For example, a feature-driven test charter might be to check a system's login feature under different user permissions, or to test the "checkout" function in an online shopping cart.

A component-driven charter might be to check the accuracy of the system's computation engine, or check the GUI presentation layer for ease of use and accessibility. Other examples of component-driven charters could be "Check every error message", or "Check what happens when dataflow between components becomes interrupted". Sometimes, test charters can be driven by the tests themselves, such as, "Check the test coverage of the automated test suite", or "Try to reproduce the bugs marked as irreproducible in the bug database". The level of generality or detail in a test charter corresponds to how long the testing takes.

Test Session

Testing occurs in dedicated time boxes called test sessions. A test session is a period of uninterrupted time in which exploration occurs, usually 60 to 120 minutes long. The time box is some period between 45 minutes and 2 ¼ hours, where a short session is one hour (+/- 15 minutes), a long session is two hours, and a normal session is 90 minutes. The intention is to make the session short enough for accurate reporting and for changes in plans (such as a session becoming impossible due to a broken build, or its charter changing because of a new priority), but long enough to perform appropriate setup, get some good testing done, and make debriefing efficient. Excessive precision in timing is discouraged.
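As a minimal sketch of the ranges above, the helper below maps a session length in minutes to the short/normal/long labels; the exact cut-offs between the labels are assumptions chosen for illustration.

```python
# Minimal sketch; the cut-off values between labels are illustrative assumptions.
def session_length_label(minutes: int) -> str:
    if not 45 <= minutes <= 135:          # 45 minutes to 2 1/4 hours
        return "outside the recommended time box"
    if minutes <= 75:
        return "short"                    # one hour, +/- 15 minutes
    if minutes < 105:
        return "normal"                   # about 90 minutes
    return "long"                         # about two hours

print(session_length_label(90))           # -> "normal"
```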

A time box is a defined period of time during which a task must be accomplished. Time boxes are commonly used to manage software development risk: development teams are repeatedly tasked with producing a releasable improvement to the software, time-boxed to a specific number of weeks. In ET, the time box is the time allotted to testing the software, and it is usually measured in minutes rather than weeks.

As testers begin charters, they make note of the time. With Session-Based Test Management, keeping precise time with a stopwatch is not important, but testers need a general sense of how long the session is taking. Sometimes sessions last only 30 minutes; sometimes they take two hours. Longer than this might mean that the charter is too vague. Shorter than 30 minutes may mean the charter is too specific - that is, it may not have fostered much of an exploration.

Sometimes test sessions are done individually, where the tester sits at the computer and becomes engaged with the software, exploring the test charter. Other times, exploration is done in pairs, where one tester sits at the keyboard and explains the test ideas and hypotheses that emerge, while another sits alongside, taking notes and suggesting additional ideas along the way.

Paired exploratory testing has proven to be quite a valuable approach. The goal is to work on one test charter per session. What often happens, as is typical of exploration, is that as testing proceeds it becomes evident that additional test charters are necessary. This is a classic example of the exploration feedback loop, with its emphasis on learning: the testers have learned about a new area of the software that needs to be tested in a way no one had thought of before. This is one of the biggest benefits of the exploratory approach.

Reviewable result

The reviewable result takes the form of a session sheet, a page of text (typically ASCII) that follows a formal structure. This structure includes:

Charter

Coverage areas (not code coverage; typically product areas, product elements, quality criteria, or test techniques)

Start Time

Tester Name(s)

Time Breakdown

session duration (long, normal, or short)

test design and execution (as a percentage of the total on-charter time)

bug investigation and reporting (as a percentage of the total on-charter time)

session setup (as a percentage of the total on-charter time)

charter/opportunity (expressed as a percentage of the total session, where opportunity time does not fit under the current charter, but is nonetheless useful testing work)

Data Files

Test Notes

Bugs (where a "bug" is a problem that the tester and the test manager reasonably believe represents a threat to the value of the product)

Issues (where an "issue" is a problem that threatens the value of the testing process: missing information, tools that are unavailable, expertise that might be required, or questions that the tester develops through the course of the session)

There are two reasons for this structure. The first is simply to provide a sense of order and completeness for the report and the debrief. The second is to allow a scripting tool to parse tagged information from the session sheets, such that the information can be sent to other applications for bug reporting, coverage information, and inquiry-oriented metrics gathering.
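Because the sheet is plain text with a fixed structure, such a scripting tool is straightforward. Below is a minimal sketch in Python, assuming a simple layout in which each section begins with an upper-case tag (such as CHARTER, TESTER, BUGS, or ISSUES) on its own line; the tag names and layout are assumptions for illustration, not the official SBTM sheet format.

```python
from collections import defaultdict

# Assumed section tags; adjust to whatever tags your session sheets actually use.
SECTION_TAGS = {"CHARTER", "AREAS", "START", "TESTER", "TASK BREAKDOWN",
                "DATA FILES", "TEST NOTES", "BUGS", "ISSUES"}

def parse_session_sheet(text: str) -> dict:
    """Group the lines of a plain-text session sheet under their section tags."""
    sections = defaultdict(list)
    current = None
    for line in text.splitlines():
        stripped = line.strip()
        tag = stripped.rstrip(":").upper()
        if tag in SECTION_TAGS:
            current = tag                       # start of a new section
        elif current and stripped:
            sections[current].append(stripped)  # content line under current tag
    return dict(sections)

sheet = """CHARTER
Analyze the X function; note risks and instability.
TESTER
A. Tester
BUGS
Submit button hangs when double-clicked.
"""
print(parse_session_sheet(sheet)["BUGS"])
```

A tool along these lines can feed the parsed fields to a bug tracker or a metrics spreadsheet, which is the kind of downstream use described above.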

Debrief

The debrief is a conversation between the tester who performed the session and someone else, ideally a test lead or a test manager, but perhaps simply another tester. In the debrief, the session sheet is checked to make sure that it is readable and understandable; the manager and the tester discuss the bugs and issues that were found; the manager makes sure that the protocol is being followed; and coaching, mentoring, and collaboration happen. A typical debrief lasts between five and ten minutes, but several things may add to the length. Incomplete or poorly written session sheets produced by testers new to the approach will prompt more questions until the tester learns the protocol. A highly complex or risky product area, a large number of bugs or issues, or an unfamiliar product may also lead to longer conversations. Scheduling time for debriefings is difficult when more than three or four testers report to the test manager or test lead, or when the test manager has other responsibilities. In such cases, it may be possible to have the testers debrief each other.

Metrics

In the report, the tester estimates the time spent on three tasks: Test Execution and Design (T), Session Setup (S), and Bug Investigation and Reporting (B). These "gut feeling" estimates, or "TBS metrics", are a way to give stakeholders an idea of how the test effort is going.

Setup (S)

With charter in hand, the tester makes note of the time and starts testing. If they need to print out any documents that help them fulfill their charter, they do it. This is just one of many setup activities they might need to perform, depending on their testing style and what is helpful to them. Others may include configuring the machine, installing the build, or changing product settings.

Test Execution and Design (T)

As they test, they think of ideas and questions to ask of the software, just as in any manual testing, because this *is* manual testing, only governed by a time box and a charter.

Bug Investigation and Reporting (B)

Bugs found during testing need to be logged in the session report *and* in the bug database. A good practice is to write up the bug right there in the session report and then copy it into the database after the session is over, while the details are still fresh.

In their best estimation, the tester asks how often they stopped to investigate something weird and took the time to write it up. Any time spent doing so interrupted testing. This isn't a bad thing in and of itself, but it is meaningful to report because it stopped testing coverage for a while.

The same is true for setup activities. How much time did they spend setting up and configuring for the session once it started? Was there any time during the session when they stopped testing to set something up or reconfigure? That time interrupted testing, too. So, in effect, B and S time during a session is an interruption of the third metric: T. A manager might look at a session report where a tester reported 50% B time and 30% S time, which means only 20% was spent on T. That is important to know, because high B and S times may prompt the manager to press the programming team for better builds with fewer bugs, or to think of resources that would make setup take less time.

It all comes down to T. Test Design and Execution time is the amount of time a tester spent covering their charter; T is the progress they made. If T time is high, that may mean the thing they were testing wasn't all that buggy, or that setup was minimal or non-existent. Together, T, B, and S are our best way to represent what testers actually do when they explore.
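The arithmetic behind the TBS metrics is simple; here is a minimal sketch of it in Python, reusing the 50% B / 30% S / 20% T example above. The warning thresholds are illustrative assumptions, not part of SBTM.

```python
# Minimal sketch of the TBS breakdown; threshold values are illustrative assumptions.
def tbs_breakdown(test_min: float, bug_min: float, setup_min: float) -> dict:
    total = test_min + bug_min + setup_min
    pct = lambda minutes: round(100 * minutes / total, 1)
    report = {"T%": pct(test_min), "B%": pct(bug_min), "S%": pct(setup_min)}
    if report["B%"] > 40:
        report["note"] = "High bug-investigation time: press for more stable builds."
    elif report["S%"] > 30:
        report["note"] = "High setup time: consider better environments or tooling."
    return report

# The example from the text: 50% bug investigation, 30% setup, 20% testing.
print(tbs_breakdown(test_min=18, bug_min=45, setup_min=27))
```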

Test Heuristics

The Heuristic Test Strategy Model is a set of patterns for designing a test strategy. The immediate purpose of this model is to remind testers of what to think about when they are creating tests. Ultimately, it is intended to be customized and used to facilitate dialog, self-directed learning, and more fully conscious testing among professional testers.

Test ideas are experiments that testers perform to provide evidence for, or to disprove, a hypothesis about the software. Test ideas are usually driven by a set of heuristics, which have been defined as "a fallible idea or method which may help you simplify or solve a problem". In other words, heuristics can be thought of as rules of thumb that drive your test ideas. As an example, imagine that you are testing a database-driven application such as an inventory management system with a front-end GUI and a relational database on the back end. You may be familiar with the CRUD heuristic for database operations; CRUD stands for the different operations a database application performs on its records: Create, Read, Update and Delete. This heuristic will drive your test ideas, serving as a reminder to explore what happens to the inventory management system when each of these operations is performed (a minimal sketch appears after the list below). There are many heuristics available to the exploratory tester, too many to list here in detail. James Bach, a major proponent of the exploratory testing approach, famously uses a mnemonic to remember heuristics for the different test aspects of any product, called "San Francisco Depot", or SFDPOT:

Structure

Function

Data

Platform

Operations

Time

Each of these categories is an exploration path; that is, an area in which tests can be developed and executed in real time. According to Bach, each of these unique dimensions of a software product should drive a set of test charters, to reduce the possibility of missing important bugs.
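Returning to the CRUD heuristic mentioned above, here is a minimal sketch of the test ideas it suggests for the hypothetical inventory system; the `inventory` object and its methods are placeholders for illustration only.

```python
# Hypothetical illustration: inventory and its methods are placeholders.
def crud_test_ideas(inventory):
    # Create: what happens with minimal, boundary, and duplicate records?
    item_id = inventory.create(name="Widget", quantity=0)

    # Read: is the record retrievable immediately, and through every view or report?
    assert inventory.read(item_id)["name"] == "Widget"

    # Update: do repeated or extreme updates leave the record consistent?
    inventory.update(item_id, quantity=999_999)
    assert inventory.read(item_id)["quantity"] == 999_999

    # Delete: does deletion cascade correctly, and does the GUI reflect it?
    inventory.delete(item_id)
    assert inventory.read(item_id) is None
```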

While exploratory testing does rely on the skill and freedom of a tester to think of meaningful test ideas and execute them, it is not "random" or "thoughtless" testing, though it can seem that way. It is certainly not unmanageable or unmeasurable.

When to Apply Exploratory Testing

The main question in the minds of testers and management is when one should use ET, and when it can be most useful. While the following list is not exhaustive, it suggests several situations where ET can best be incorporated into the overall test approach:

It's an agile project: there is little room for documentation, so the ET technique is followed.

A new tester enters the team: ET can make the learning phase an active testing and exploration experience. With a little help from a senior tester on the team or a developer, the new tester can learn the application.

A quick assessment of the application is needed: ET offers fast insight into product quality in the short term, when there is no time for test preparation.

Validation for the work of another tester is required: ET lets us explore the feature that the other tester has tested.

There is no test basis: ET is useful when there is no documentation or other sources that can define clear, expected results for the tests.

You want to isolate and investigate a particular defect.

You want to determine the status of a particular risk in order to evaluate the need for scripted testing (ST) in that area.

It's an early iteration in which the product is not stable enough for ST.

It's a beta test, where users are invited to provide early feedback on a prototype or a preliminary test version.

New information comes to light during the execution of scripted tests: The new information might suggest another test strategy that would warrant switching to an exploratory mode.

You want to augment ST to diversify the testing: the combination of ST and ET is appropriate for testing features with a high risk and/or priority profile.


A More Formal Structure for Exploratory Testing

Making Exploratory Testing Interesting and Effective

There is a myth in the testing industry that there is no end to testing a product or an application. Over the last two decades, testing has evolved into a separate discipline, and these days it includes quality consulting, test strategy, test planning, traceability, and requirements analysis, all of which are breaking this myth. Exploratory testing has withstood the test of time, yielding good results and helping the tester think outside the box. It is relied upon over formal testing techniques in situations where the product requirements aren't clear or the release timeline is very short. Given such importance and value, how can you encourage your teams to use this technique, and how can you better interpret and communicate the results of exploratory testing?

Here are some simple yet effective techniques to motivate your team into exploratory testing:

Quite often there is no room for exploratory testing because the team is busy running scripted tests. Set aside half an hour each day for exploratory testing, during which the team focuses solely on that. Encourage creativity and put your team in the end user's shoes. Exploratory testing may not be required on a daily basis and can instead be done periodically.

For a heavily role-based product that requires simultaneous interactions, assign roles to your testers during the exploratory phase and periodically swap roles among them.

Conduct bug bashes, whether you are a product or a services company; this adds value to your overall engagement with your customer. Give prizes to testers for various categories of bugs. Even appreciating a tester in front of the team, or giving a certificate, can motivate the entire team.

Cross-assign testers across products. If a project doesn't restrict external team members, encourage cross-sharing of testers to promote more creativity. This motivates testers, breaks monotony, and brings fresh perspectives to testing the product.

Try involving non-testers to play around with the product. In one past project, when we tested a mobile application, test managers and directors participated in the bug bash, which helped us emulate some user scenarios.

Encourage exploratory testing for all kinds of testing assignments: black, gray, and white box. It is a misconception that exploratory testing can be done only as black-box testing. If you understand exploratory testing as something that encourages the tester to be extemporaneous rather than bound by a formally written test case, you will appreciate this technique in all testing assignments.

Next, I want to touch upon how to interpret the results of exploratory testing, because unless this is done in an informed manner it may, to the untrained eye, reflect badly on the overall test coverage:

1. Interpreting the results is as important as the timing of the testing. Some teams take on this testing technique even before a formal test pass starts; others take it on after formal testing has been completed, as a supplemental method for finding bugs. Most teams that adopt an agile methodology follow the former approach. Regardless of the approach, many bugs will be found through exploratory testing, but this is in no way a reflection of poor test coverage.

2. Scripted tests often follow product specifications. Exploratory testing, however, knows no bounds and often finds bugs off the beaten path. Thus a combination of scripted and exploratory testing helps drive the quality of the product. To make tests repeatable and consistent, the tester should add test cases for valid exploratory bugs filed, ensuring they are not missed during the regression cycle.

3. Bugs found close to product release greatly influence the product's ship date. When analyzing bugs from bug bashes, involve a triage team representing the program and project management teams, development, and test.

4. Exploratory testing does, however, have one important limitation. In scripted testing, formal design and review methods are applied, and test cases are vetted for validity before they are executed. These stringent reviews ensure that the bugs found have a very high validity rate. Because exploratory testing does not follow such set bounds, there is a higher chance of false positives among the bugs filed, which may lower the standing of the exploratory team among the rest. To keep such false positives low, peer reviews, or reviews with the test lead or manager, are recommended for exploratory bugs.

Bug Advocacy

As testers, we all agree that the basic aim of the tester is to uncover bugs. Whenever a build arrives for testing, the primary objective is to find as many bugs as possible, from every corner of the application. To come close to perfection in this task, we test from various perspectives. We pass the application through various filters: boundary value analysis, validation checks, verification checks, GUI checks, interoperability, integration tests, functional and business-logic checks, backend testing (such as running SQL commands or injections against the database), security tests, and many more. This makes us drill deep into the application as well as the business.

We would also agree that bug awareness is of no use until it is well documented. This is where BUG REPORTS come in. Bug reports are our primary work product; they are what people outside the testing group notice. These reports play an important role throughout the Software Development Life Cycle, referenced by testers, developers, managers, executives, and, not least, the clients who these days demand test reports. So the bug reports are what is remembered most.

Once bugs are reported by the testers and submitted to the developers to work on, we often see confrontations: testers sometimes face humiliation, cold wars break out, and discussions take the shape of mini quarrels. Yet often the testers and developers are saying the same thing, or both are correct, but they depict their understanding differently, and that makes all the difference. In such situations we come to realize that the best tester is not the one who finds the most bugs, or the one who embarrasses the most programmers, but the one who gets the most bugs fixed.

Bug Reporting - An Art:

The first aim of the bug report is to let the programmer see the failure. The bug report gives a detailed description so that the programmers can make the bug fail for them. If the bug report does not accomplish this mission, there will be pushback from the development team: "not a bug", "cannot reproduce", and many other responses.

Hence it is important that the BUG REPORT be prepared by the testers with the utmost proficiency and specificity. It should describe the famous three "What's":

What we did:

Module, Page/Window - names that we navigate to

Test data entered and selected

Buttons and the order of clicking

What we saw:

GUI Flaws

Missing or No Validations

Error messages

Incorrect Navigations

What we expected to see:

GUI Flaw: give screenshots with highlight

Incorrect message - give correct language, message

Validations - give correct validations

Error messages - justify with screenshots

Navigations - mention the actual pages

Pointers to effective reporting can be derived from the three "What's" above. These are:

1. The BUG DESCRIPTION should be clearly identifiable - a bug description is a short statement that briefly describes exactly what the problem is. A problem might require 5-6 steps to reproduce, but this statement should still clearly identify it. The problem might be a server error, but the description should say clearly, for example, "Server error occurs while saving a new record in the Add Contact window."

2. A bug should be reported after building a proper context - the PRE-CONDITIONS for reproducing the bug should be defined so that the reader can reach the exact point where the bug can be reproduced. For example, if a server error appears while editing a record in the contacts list, then the pre-conditions should state: create a new contact and save it successfully, double-click this created contact in the contacts list to open the contact details, make changes, and hit the Save button.

3. STEPS should be clear, with short and meaningful sentences - nobody wants to study an entire paragraph of long, complex words and sentences. Make your report step-wise by numbering 1, 2, 3…Make each sentence short and clear. Only write those findings or observations that are necessary for the bug in question. Writing facts that are already known, or details that do not help in reproducing the bug, makes the report unnecessarily complex and lengthy.

4. Cite examples wherever necessary - combinations of values, test data: most of the time, a bug can be reproduced only with a specific set of data or values. Hence, instead of writing an ambiguous statement like "enter an invalid phone number and hit save", one should mention the data/value entered, e.g. "enter the phone number as 012aaa@$%.- and save".

5. Give references to specifications - if a bug contradicts the SRS or any functional document of the project, it is always proactive to mention the section and page number for reference. For example: refer to page 14 of the SRS, section 2-14.

6. Report without passing any kind of judgment in the bug description - the bug report should not be judgmental, as this leads to controversy and gives an impression of bossiness. Remember, a tester should always be polite, to keep the bug credible and meaningful. Being judgmental makes developers feel that testers think they know more than they do, which breeds psychological adversity. To avoid this, we can phrase such findings as suggestions and discuss them with the developers or team lead. We can also refer to another application, or to another module or page in the same application, to strengthen our point.

7. Assign severity and priority - SEVERITY is the state or quality of being severe. Severity tells us HOW BAD the bug is: it defines the importance of the bug from a FUNCTIONALITY point of view. Severity levels can be defined as follows:

Urgent/Show-stopper: for example, a system crash or an error message forcing the window to close; the system stops working totally or partially. A major area of the user's system is affected by the incident, and it is significant to business processes.

Medium/Workaround: a problem with something required by the specs, but the tester can continue testing. It affects a more isolated piece of functionality, occurs at only one or two customers, or is intermittent.

Low: failures that are unlikely to occur in normal use. Such problems do not impact use of the product in any substantive way and have no or very low impact on business processes.

In all cases, state the exact error messages.

PRIORITY means something deserves prior attention. It represents the importance of a bug from the customer's point of view: it voices the precedence established by urgency and is associated with scheduling the fix. Priority levels can be defined as follows:

High: This has a major impact on the customer. This must be fixed immediately.

Medium: This has a major impact on the customer. The problem should be fixed before release of the current version in development or a patch must be issued if possible.

Low: This has a minor impact on the customer. The flaw should be fixed if there is time, but it can be deferred until the next release.

Provide screenshots - this is the best approach. Any error we can see, such as object reference errors, server errors, GUI issues, or message prompts, should always be captured as a screenshot and attached to the bug as proof. It helps the developers understand the issue more precisely.
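As a minimal sketch, the structure below expresses a bug report built around the three "What's" and the severity and priority levels described above; the field names and the Python representation are illustrative assumptions, not a mandated schema.

```python
# Minimal sketch of a bug-report structure; field names are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    URGENT = "Urgent/Show-stopper"
    MEDIUM = "Medium/Workaround"
    LOW = "Low"

class Priority(Enum):
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

@dataclass
class BugReport:
    description: str                     # short, clearly identifiable summary
    preconditions: str                   # context needed to reach the bug
    steps: list[str]                     # what we did, numbered and concise
    observed: str                        # what we saw
    expected: str                        # what we expected to see
    severity: Severity
    priority: Priority
    screenshots: list[str] = field(default_factory=list)

report = BugReport(
    description="Server error occurs while saving a new record in the Add Contact window",
    preconditions="A contact has been created and saved successfully",
    steps=["Open the created contact from the contacts list",
           "Edit the phone number to 012aaa@$%.-",
           "Click Save"],
    observed="Server error page is shown",
    expected="Validation message rejecting the invalid phone number",
    severity=Severity.MEDIUM,
    priority=Priority.HIGH,
)
```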

In summary, SBTM is built around the following elements:

Sessions: 90-minute time boxes for ET

Charters: A clear mission for the session describing what to test, how to test, what bugs to look for, what risks are involved, and what documents to examine.

Session sheets: Reviewable results of a session, including notes, bugs, issues, and basic metrics such as time spent on set-up, test execution, and bug reporting

Session logs: Hansel and Gretel-like 'breadcrumb trails' used during test execution

Debriefings: Meetings with the test manager

Dashboards for reporting purposes

