The demand for Quality Assurance has grown significantly among today’s enterprises. In parallel, dynamic market trends and an ever-changing technology landscape, shaped by disruptive technologies such as cloud, mobility, and social media, have had a profound impact and necessitated a change in the approach, methodology, and delivery of testing services.

SoftServ’s Testing Services are backed by a strong legacy of testing expertise and are recognized by leading analysts as a sound choice for customers, endorsed for their broad service spectrum, quality of resources, and wide delivery capabilities. We bring a flexible, experienced, and dedicated QA team whose services and solutions help you build robust, dependable, scalable, and secure software products and applications. We combine extensive industry experience with sophisticated tools, both commercial and open source, to perform comprehensive software testing that uncovers the risks associated with your software product or application.
Project Testing Process
A product’s test strategy ties the project’s release and sign-off criteria to its business objectives. The overall testing strategy is defined in collaboration with the customer. Test planning, test-case design, test automation, and test execution are aligned with the development schedule so that key dependencies are taken into account.
We identify the features, components, sub-components, and items to be tested and the range of tests to be carried out. In addition to available automation, we also estimate other required and possible automation. We catalog the tools used by the customer, potentially useful off-the-shelf tools, and internal tools that may be used for the project.
To summarize, our key steps for Test strategy are as follows:
- Define project scope & commitments
- Define terms of reference
- Set customer expectations
- Tie the project’s business objectives to the release/sign-off criteria and the associated testing activity
- Integrate the processes with development lifecycle
- Partition the problem into manageable test plans
- Identify key dependencies & trade-offs
- Scope resource requirements
- Define release criteria
- Outline and prioritize the testing effort
- Chart test automation requirements
- Identify resource requirements at various stages of testing
- Set up a calendar-based activity plan
- State reporting mechanism & establish communication model
- Configure the team, including the number, type, and seniority of resources and the length of time required, mapping each resource onto the activity plan.
- Prepare comprehensive test plan specifications and test cases for each level of testing.
- Review all test plans and test cases
- Prepare test data and test logs.
- Set up the test environment so that all operations can be completed promptly, accurately, and efficiently.
- Execute Error/Trap tests to ensure testers’ accuracy.
- Execute tests as described, noting where test cases require revision and updating.
- Report all bugs in the manner agreed upon with the customer, following all defect management protocols, informing the customer of current status, monitoring and driving to resolution all red-flag conditions, and ensuring that communication between all parties is complete and accurate.
- Run spot checks to ensure quality.
- Update weekly the Project Health Status document for Internal Audit & Tracking.
- When the project has been completed, review all aspects of the project and submit a Project Retrospective report to the customer that objectively evaluates the project’s execution.
The best approach to testing is to start with basic functionality and gradually add levels of complexity at each successive stage. As each test is completed, the results should be documented and verified against the project requirements. Any problems should be investigated and resolved. We follow this well-documented testing model for our client testing and can plug into your testing cycle at any stage.
Unit Testing consists of a focused set of tests that each target a single operation, function, or process. Here the primary goal of the SoftServ testing team is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as expected. Each unit is tested separately before units are integrated into modules and the interfaces between modules are tested.
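As a minimal sketch of this level of testing, the following Python example isolates one small function and checks it with `unittest`; the `apply_discount` function and its behavior are invented for illustration:

```python
import unittest

# Hypothetical function under test (invented for illustration):
# applies a percentage discount to a price.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    """Each test isolates and checks a single behavior of the unit."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Because the unit is exercised in isolation, any failure here points directly at the unit itself rather than at surrounding code.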
Integration Testing is a logical extension of unit testing: two or more units that have already been tested are combined into a component, and the interface between them is tested. A component, in this sense, refers to an integrated aggregate of more than one functional unit. Because each unit has already been verified in isolation, a failure at this stage points to the interface between them, which reduces the number of possible causes and permits a far simpler level of analysis for our projects.
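A sketch of the idea in Python, using two hypothetical units (both invented for illustration) that are assumed to have passed unit testing already, so the test exercises only the interface between them:

```python
import unittest

# Unit 1 (assumed already unit-tested): split a CSV row into trimmed fields.
def parse_csv_row(row: str) -> list:
    return [field.strip() for field in row.split(",")]

# Unit 2 (assumed already unit-tested): build "Last, First" from [first, last].
def make_full_name(fields: list) -> str:
    first, last = fields
    return f"{last}, {first}"

class NameFormattingIntegrationTest(unittest.TestCase):
    """Integration test: exercises the interface between the two units."""

    def test_parsed_fields_feed_the_formatter(self):
        # The component under test is the aggregate of both units; since
        # each unit passed in isolation, a failure here points at the
        # interface between them (e.g. field order or count).
        self.assertEqual(
            make_full_name(parse_csv_row("Ada, Lovelace")),
            "Lovelace, Ada")
```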
System Testing confirms that all code modules work as specified, and that the system as a whole performs adequately on the platform on which it will be deployed. It is performed by our testers, who are trained to plan, execute, and report on application and system code. They are aware of scenarios that might not occur to the end user, such as testing for null, negative, and format-inconsistent values.
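The kinds of inputs mentioned above can be captured in a scenario table. A minimal sketch in Python, with a hypothetical `parse_age` validator invented for illustration:

```python
# Hypothetical validator for an "age" field, used to illustrate the inputs a
# trained tester probes: null, negative, and format-inconsistent values.
def parse_age(raw):
    """Parse a user-supplied age; return an int or raise ValueError."""
    if raw is None:
        raise ValueError("age is required")
    try:
        age = int(str(raw).strip())
    except ValueError:
        raise ValueError(f"not a number: {raw!r}")
    if age < 0 or age > 150:
        raise ValueError(f"out of range: {age}")
    return age

# Scenario table: input -> expected outcome ("ok" or "rejected").
scenarios = [
    (None, "rejected"),      # null value
    (-5, "rejected"),        # negative value
    ("30", "ok"),            # numeric string (lenient format)
    ("thirty", "rejected"),  # format-inconsistent value
    (" 42 ", "ok"),          # stray whitespace
]

for raw, expected in scenarios:
    try:
        parse_age(raw)
        outcome = "ok"
    except ValueError:
        outcome = "rejected"
    assert outcome == expected, (raw, outcome, expected)
```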
Acceptance Testing is formal testing conducted to determine whether a system satisfies its acceptance criteria and thus whether the customer should accept the system.
Sustained Engineering

In an ever-changing technological landscape, you need the strategic edge of Sustained Engineering Services to emerge as a leader. But with your resources locked into maintaining existing product lines, you have limited time and energy to devote to productizing your vision into a product targeted for a future release. The SoftServ QA team takes care of your non-core testing requirements so that your company’s in-house QA team can focus on its core functions.
Regression Testing is required any time we modify an implementation within a program. Regression tests must include tests written to verify bugs that have been fixed during the product cycle. Achieving adequate coverage without wasting time is a primary consideration when conducting regression tests.
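A minimal sketch of a regression test in Python; the `paginate` function and the referenced defect are invented for illustration:

```python
import unittest

# Hypothetical function that once had a bug (the issue id below is invented):
# it used to drop the trailing partial page when splitting a list.
def paginate(items, page_size):
    """Split a list into pages of at most page_size items."""
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

class PaginateRegressionTest(unittest.TestCase):
    def test_bug_1234_last_partial_page_not_dropped(self):
        # Regression test written to verify a fixed defect: a trailing
        # partial page was once silently discarded.
        pages = paginate([1, 2, 3, 4, 5], 2)
        self.assertEqual(pages, [[1, 2], [3, 4], [5]])

    def test_existing_behavior_unchanged(self):
        # Unmodified behavior re-verified after every change.
        self.assertEqual(paginate([], 3), [])
        self.assertEqual(paginate([1, 2, 3], 3), [[1, 2, 3]])
```

Tagging each regression test with the defect it guards keeps the suite lean: coverage is targeted at known failure modes rather than spread thinly everywhere.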
Documentation & Reporting

Our project manager sets QA metrics during the design phase of the software, based on the software quality requirements described in the SRS. Some common examples of software quality metrics are as follows:
- Code Coverage
- Defect distribution and density
- Defect trend analysis
- Mean time to discover the next K faults
- Mean time to failure
- Mean time to recover
- Failure analysis
- Root-cause analysis
- Quality gap analysis (release criteria vs. actual test status)
- DRE (Defect Removal Efficiency), i.e. Internal Defect Count / (Internal Defect Count + Leaked Defects)
- Defect Density, i.e. Number of Value-Add Defects / Total Number of Test Cases
- Effort Variance, i.e. (Actual Effort - Planned Effort) / Planned Effort
- Test Execution Coverage, i.e. Test Cases Executed / Test Cases Planned
- Cost Productivity Index, i.e. ((Total Execution Hrs / # of Test Cases) * No. of Test Cases Executed) / Actual Cost (# of Hrs Exhausted for Test Execution)
- NVA (Non-Value-Add) Defects, i.e. (Total Defects - Value-Add Defects) / Total Defects
- QA Cost (as % of SDLC), i.e. Total Number of QA Hrs / Total Number of SDLC Hrs
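Several of the formulas above can be computed directly. A short Python sketch with invented sample numbers (not real project figures):

```python
# Illustrative computation of several of the metrics above.
# All numbers are invented sample data for demonstration only.
internal_defects = 45        # defects found by QA before release
leaked_defects = 5           # defects found after release
value_add_defects = 40
total_defects = internal_defects + leaked_defects
test_cases_planned = 200
test_cases_executed = 180
planned_effort_hrs = 400
actual_effort_hrs = 460

# DRE = Internal Defect Count / (Internal Defect Count + Leaked Defects)
dre = internal_defects / (internal_defects + leaked_defects)

# Defect Density = Value-Add Defects / Total Number of Test Cases
defect_density = value_add_defects / test_cases_planned

# Effort Variance = (Actual Effort - Planned Effort) / Planned Effort
effort_variance = (actual_effort_hrs - planned_effort_hrs) / planned_effort_hrs

# Test Execution Coverage = Test Cases Executed / Test Cases Planned
execution_coverage = test_cases_executed / test_cases_planned

# NVA Defects = (Total Defects - Value-Add Defects) / Total Defects
nva_rate = (total_defects - value_add_defects) / total_defects

print(f"DRE: {dre:.0%}")                                # 90%
print(f"Defect density: {defect_density:.2f}")          # 0.20
print(f"Effort variance: {effort_variance:+.0%}")       # +15%
print(f"Execution coverage: {execution_coverage:.0%}")  # 90%
print(f"NVA defect rate: {nva_rate:.0%}")               # 20%
```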