Service Validation and Testing Tutorial

Service Validation and Testing

Welcome to lesson 4 of the ITIL Intermediate RCV tutorial, which is a part of ITIL Intermediate RCV Foundation Certification course. This lesson introduces the service validation and testing (SVT) process and how it contributes to RCV.

Let us look at the objectives of this lesson.

Objectives

By the end of this ‘Service Validation and Testing’ lesson, you will be able to:

  • Describe the purpose, objectives, scope, and importance of SVT as a process, along with the various test models and test and validation conditions.

  • Explain the SVT policies, principles, concepts, activities, methods, and techniques in relation to RCV practices and to building and achieving quality of service.

  • Review the efficient use of SVT metrics in terms of business value contribution and internal efficiency.

Moving on, in the next section, we will look at the purpose and objective of SVT.

Purpose of Service Validation and Testing Process (SVT)

The purpose of the Service Validation and Testing process is to:

  • Ensure that a new or changed IT service matches its design specification and will meet the needs of the business.

The objectives of Service Validation and Testing are to:

  • Plan and implement a structured validation and test process that provides objective evidence that the new or changed service will support the customer’s business and stakeholder requirements, including the agreed service levels

  • Quality assure a release, its constituent service components, the resultant service and service capability delivered by a release

  • Identify, assess and address issues, errors, and risks throughout Service Transition.

  • Provide confidence that a release will create a new or changed service or service offerings that deliver the expected outcomes and value for the customers within the projected costs, capacity and constraints

  • Validate that the service is ‘fit for purpose’ – it will deliver the required performance with desired constraints removed

  • Assure a service is ‘fit for use’ – it meets certain specifications under the specified terms and conditions of use

  • Confirm that the customer and stakeholder requirements for the new or changed service are correctly defined and remedy any errors or variances early in the service lifecycle as this is considerably cheaper than fixing errors in production.

As we are now aware of the purpose and objective of SVT, let’s learn about its scope in the next section.

Scope of the Service Validation and Testing Process

The service provider takes responsibility for delivering, operating and/or maintaining customer or service assets at specified levels of warranty, under a service agreement.

The scope of the Service Validation and Testing process includes:

  • Service Validation and Testing can be applied throughout the service lifecycle to quality assure any aspect of the service and the service providers’ capability, resources and capacity to deliver a service and/or service release successfully.

  • In order to validate and test an end-to-end service, the interfaces to suppliers, customers and partners are important. Service provider interface definitions define the boundaries of the service to be tested, e.g., process interfaces and organizational interfaces.

  • Testing is equally applicable to in-house or externally developed services, hardware, software or knowledge-based services. It includes the testing of new or changed services or service components and examines their behavior in the target business unit, service unit, deployment group or environment. This environment could have aspects outside the control of the service provider, e.g., public networks, user skill levels or customer assets.

  • Testing directly supports the release and deployment process by ensuring that appropriate levels of testing are performed during the release, build and deployment activities.

  • It evaluates the detailed service models to ensure that they are fit for purpose and fit for use before being authorized to enter Service Operations, through the service catalog. The output from testing is used by the evaluation process to provide the information on whether the service is independently judged to be delivering the service performance with an acceptable risk profile.

In the next section, let us understand the value of the SVT process to the business.

Value to Business of Service Validation and Testing Process

Service failures can harm the service provider’s business and the customer’s assets and result in outcomes such as loss of reputation, loss of money, loss of time, injury and death.

The key value to the business and customers from Service Validation and Testing is the established degree of confidence that a new or changed service will deliver the value and outcomes required of it, together with an understanding of the risks.

Successful testing depends on all parties understanding that it cannot give, indeed should not give, any guarantees but provides a measured degree of confidence. The required degree of confidence varies depending on the customer’s business requirements and pressures of an organization.

Let us look at the policies of this process in the next section.

Service Validation and Testing Policies

Policies that drive and support Service Validation and Testing include service quality policy, risk policy, Service Transition policy, release policy and Change Management policy. Let us discuss the policies below:

Service quality policy

Senior leadership will define the meaning of service quality. Service Strategy discusses the quality perspectives that a service provider needs to consider.

In addition to service level metrics, service quality takes into account the positive impact of the service (its utility) and the certainty of that impact (its warranty).

The Service Strategy publication outlines four quality perspectives:

  • Level of excellence

  • Value for money

  • Conformance to specification

  • Meeting or exceeding expectations

One or more, if not all four, perspectives are usually required to guide the measurement and control of Service Management processes. The dominant perspective will influence how services are measured and controlled, which in turn will influence how services are designed and operated. Understanding the quality perspective will influence the Service Design and the approach to validation and testing.

Risk policy

Different customer segments, organizations, business units and service units have different attitudes to risk. Where an organization is an enthusiastic taker of business risk, testing will be looking to establish a lower degree of confidence than a safety critical or regulated organization might seek.

The risk policy will influence control required through Service Transition including the degree and level of validation and testing of service level requirements, utility and warranty, i.e., availability risks, security risks, continuity risks and capacity risks.

Service Transition policy

The Service Transition policy is defined, documented and approved by the management team, who ensure it is communicated across the organization and to relevant suppliers and partners.

Release policy

The type and frequency of releases will influence the testing approach. Frequent releases, such as daily releases, drive requirements for re-usable test models and automated testing.

Change Management policy

The use of change windows can influence the testing that needs to be considered. For example, if there is a policy of ‘substituting’ a release package late in the change schedule, or if the scheduled release package is delayed, then additional testing may be required to cover the resulting combination if there are dependencies.

In the next section let us learn about Test Models of SVT.

Test Models

A test model includes a test plan, what is to be tested and the test scripts that define how each element will be tested. A test model ensures that testing is executed consistently in a repeatable way that is effective and efficient. The test scripts define the release test conditions, associated expected results and test cycles.

To ensure that the process is repeatable, test models need to be well structured in a way that:

  • Provides traceability back to the requirement or design criteria

  • Enables auditability through test execution, evaluation, and reporting

  • Ensures the test elements can be maintained and changed.
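
To make the idea of a structured, repeatable test model more concrete, here is a minimal illustrative sketch (not part of the ITIL guidance) of how a test model with traceability back to requirements might be represented; all class names and fields are assumptions made for this example.

```python
# Illustrative sketch only: one way to represent a reusable test model with
# traceability back to requirements. Names and fields are assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TestCondition:
    requirement_id: str        # traceability back to the requirement/design criterion
    description: str
    expected_result: str


@dataclass
class TestScript:
    script_id: str
    conditions: List[TestCondition]
    steps: List[str]           # how each element will be tested


@dataclass
class TestModel:
    name: str
    test_plan: str             # what is to be tested, cycles, schedule
    scripts: List[TestScript] = field(default_factory=list)

    def requirements_covered(self) -> set:
        """Support auditability: which requirements the model exercises."""
        return {c.requirement_id for s in self.scripts for c in s.conditions}


# Example: a single script tracing two conditions to their requirements
model = TestModel(
    name="Release 2.1 regression model",
    test_plan="Regression cycle for service acceptance",
    scripts=[TestScript(
        script_id="TS-001",
        conditions=[
            TestCondition("REQ-12", "Login completes", "User session created"),
            TestCondition("REQ-15", "Report export", "CSV file produced"),
        ],
        steps=["Open portal", "Authenticate", "Export monthly report"],
    )],
)
print(model.requirements_covered())   # {'REQ-12', 'REQ-15'}
```

Because the model carries its own traceability, the same structure can be re-used for regression testing in later releases and audited against the original requirements.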

Now let us look at various validation and testing perspectives.

Various Validation and Testing Perspectives and Their Purpose

Effective validation and testing focuses on whether the service will deliver as required. This is based on the perspective of those who will use, deliver, deploy, manage and operate the service. The test entry and exit criteria are developed as the Service Design Package is developed.

Validation and Testing Perspectives
These will cover all aspects of the service provision from different perspectives including:

  • Service Design – functional, management and operational

  • Technology design

  • Process design

  • Measurement design

  • Documentation

  • Skills and knowledge

Service acceptance testing starts with the verification of the service requirements. For example, customers, customer representatives and other stakeholders who sign off the agreed service requirements will also sign off the service Acceptance Criteria and service acceptance test plan.

The stakeholders include:

  • Business customers or customer representatives

  • Users of the service within the customer’s business who will use the new or changed service to assist them in delivering their work objectives and deliver service and/or product to their customers

  • Suppliers

  • Service provider/service unit.

Now that we have looked at the validation and testing perspectives, let us understand the requirements of the different stakeholder groups that must be addressed.

The stakeholder groups' requirements to be addressed

The business involvement in acceptance testing is central to its success and is included in the Service Design package, enabling adequate resource planning.

From the business’s perspective, this is important in order to:

  • Have a defined and agreed means for measuring the acceptability of the service including interfaces with the service provider, e.g., how errors or queries are communicated via a single point of contact, monitoring progress and closure of change requests and incidents.

  • Understand and make available the appropriate level and capability of resource to undertake service acceptance.

From the service provider’s perspective, the business involvement is important to:

  • Keep the business involved during build and testing of the service to avoid any surprises when service acceptance takes place

  • Ensure the overall quality of the service delivered into acceptance is robust since this starts to set business perceptions about the quality, reliability, and usability of the system, even before it goes live

  • Deliver and maintain solid and robust acceptance test facilities in line with business requirements

  • Understand where the acceptance test fits into an overall business service or product development-testing activity.

We have already learned about test models. In the next section, let us learn about levels of testing and relevant test models.

Test levels and Test models

Testing is related directly to the building of service assets and products so that each one has an associated acceptance test and activity to ensure it meets requirements. This involves testing individual service assets and components before they are used in the new or changed service.

Each service model and associated service deliverable is supported by its own reusable test model that can be used for regression testing during the deployment of a specific release as well as for regression testing in future releases.

Test models help build quality into the service lifecycle early, rather than waiting for the results of tests on a release at the end. The levels of testing to be performed are defined by the selected test model.

Continuing from this, let us learn about the service V-model in the next section.

Test levels and Test models : Service V Model

Using a model such as the V-model builds in Service Validation and Testing early in the service lifecycle. It provides a framework to organize the levels of configuration items to be managed through the lifecycle and the associated validation and testing activities both within and across stages. The level of test is derived from the way a system is designed and built up.

The following image shows the V-model, which maps the types of test to each stage of development.

 Service V Model

The V-model provides one example of how the Service Transition levels of testing can be matched to corresponding stages of service requirements and design.

The left-hand side represents the specification of the service requirements down to the detailed Service Design.

The right-hand side focuses on the validation activities that are performed against the specifications defined on the left-hand side.

At each stage on the left-hand side, there is direct involvement by the equivalent party on the right-hand side. It shows that service validation and acceptance test planning should start with the definition of the service requirements. For example, customers who sign off the agreed service requirements will also sign off the service Acceptance Criteria and test plan.
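
As an illustration of the pairing between the two sides of the V, the sketch below lists the commonly cited service V-model levels as a simple mapping; the exact labels are assumptions based on typical renderings of the model and may differ from the image referenced above.

```python
# Minimal sketch: commonly cited service V-model pairings between the
# specification level (left-hand side) and the corresponding validation or
# test level (right-hand side). Labels are illustrative and may vary.
SERVICE_V_MODEL = {
    "Define customer/business requirements": "Validate service packages, offerings and contracts",
    "Define service requirements":           "Service acceptance test",
    "Design service solution":               "Service operational readiness test",
    "Design service release":                "Service release package test",
    "Develop service solution":              "Component and assembly test",
}

for specification, test_level in SERVICE_V_MODEL.items():
    print(f"{specification:40s} <-> {test_level}")
```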

Moving on, let us discuss the key activities of the validation and testing process.

Service Validation and Testing Process : Key Activities

The testing process is represented graphically as shown below:
Key Activities of Service Validation and Testing

The picture shows that the test activities are not undertaken in a strict sequence. Several activities may be done in parallel; for example, test execution can begin before all of the test design is complete.

In the subsequent sections, we will understand the key activities of service validation and testing process.

Service Validation and Testing Process : Key Activities - Validation and Test Management

Test management includes the planning, control, and reporting of activities through the test stages of Service Transition. These activities include:

  • Planning the test resources

  • Prioritizing and scheduling what is to be tested and when

  • Checking that incoming known errors and their documentation are processed

  • Monitoring progress and collating feedback from validation and test activities

  • Management of incidents, problems, errors, non-conformances, risks, and issues discovered during the transition

  • Consequential changes, to reduce errors going into production

  • Capturing configuration baseline

  • Test metrics collection, analysis, reporting, and management.

Test management includes managing issues, mitigating risks, and implementing changes identified from the testing activities as these can impose delays and create dependencies that need to be proactively managed.

Test metrics are used to measure the test process and to manage and control the testing activities. They enable the test manager to determine the progress of testing, the earned value and the outstanding testing, which helps the test manager estimate when testing will be completed.

Good metrics provide information for management decisions that are required for prioritization, scheduling and risk management. They also provide useful information for estimating and scheduling for future releases.
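
To illustrate how such metrics can be derived, here is a minimal sketch that computes test progress, pass rate and outstanding tests from a set of results; the status values and field names are assumptions for this example, not prescribed ITIL metrics.

```python
# Illustrative sketch: simple progress and pass-rate metrics from test results.
# Status values and field names are assumptions, not prescribed ITIL metrics.
def test_progress(results, planned_total):
    """results: list of dicts like {'id': 'TC-1', 'status': 'pass'|'fail'|'blocked'}."""
    executed = len(results)
    passed = sum(1 for r in results if r["status"] == "pass")
    failed = sum(1 for r in results if r["status"] == "fail")
    return {
        "executed_pct": 100.0 * executed / planned_total if planned_total else 0.0,
        "pass_rate_pct": 100.0 * passed / executed if executed else 0.0,
        "outstanding": planned_total - executed,
        "open_failures": failed,
    }


results = [
    {"id": "TC-1", "status": "pass"},
    {"id": "TC-2", "status": "fail"},
    {"id": "TC-3", "status": "pass"},
]
print(test_progress(results, planned_total=10))
# e.g. {'executed_pct': 30.0, 'pass_rate_pct': 66.7, 'outstanding': 7, 'open_failures': 1}
```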

In the next section, we will discuss plan and design test activity.

Service Validation and Testing Process : Key Activities - Plan and Design Test

Test planning and design activities start early in the service lifecycle and include:

  • Resourcing

  • Capacity – hardware, networking, staff numbers and skills, etc.

  • Business/customer resources required, e.g., components or raw materials for production control services, cash for ATM services

  • Supporting services including access, security, catering, communications

  • Schedule of milestones, handover and delivery dates

  • Agreed time for consideration of reports and other deliverables

  • Point and time of delivery and acceptance

  • Financial requirements – budgets and funding.

Once the test planning and design is executed, the next activity would be to verify test plan and test design.

Let us learn more activities in the next section.

Service Validation and Testing Process : Key Activities - Verify Test Plan and Test Design

Verify the test plans and test design to ensure that:

  • The test model delivers adequate and appropriate test coverage for the risk profile of the service

  • The test model covers the key integration aspects and interfaces, e.g., at the service provider interfaces (SPIs)

  • The test scripts are accurate and complete
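
As a simple illustration of checking that a test model provides adequate coverage for the risk profile of the service, the sketch below flags requirements above a given risk level that no test condition traces to; the risk levels, requirement identifiers and coverage rule are assumptions for this example.

```python
# Illustrative sketch only: verifying that the test model gives adequate
# coverage for the risk profile of the service. The risk levels, requirement
# IDs and minimum-coverage rule are assumptions for this example.
def verify_coverage(risk_profile, covered_requirements):
    """risk_profile: {'REQ-12': 'high', 'REQ-15': 'medium', ...}"""
    gaps = [req for req, risk in risk_profile.items()
            if risk in ("high", "medium") and req not in covered_requirements]
    return {"adequate": not gaps, "uncovered_requirements": gaps}


risk_profile = {"REQ-12": "high", "REQ-15": "medium", "REQ-20": "low"}
covered = {"REQ-12"}                     # e.g. derived from the test model's scripts
print(verify_coverage(risk_profile, covered))
# {'adequate': False, 'uncovered_requirements': ['REQ-15']}
```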

To carry on these tests, let us look at preparing test environment activity in the next section.

Service Validation and Testing Process : Key Activities - Prepare Test Environment

The test environment is prepared by using the services of the build and test environment resource. Use the release and deployment processes to prepare the test environment where possible. Capture a configuration baseline of the initial test environment.
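
As a simple illustration of capturing a configuration baseline of the initial test environment, the sketch below records a timestamped snapshot of configuration items and their versions; the CI names and storage format are assumptions for this example.

```python
# Illustrative sketch only: capturing a configuration baseline of a test
# environment as a timestamped snapshot of CI names and versions. The CI
# data and storage format are assumptions for this example.
import json
from datetime import datetime, timezone


def capture_baseline(environment_name, configuration_items, path):
    """Record a point-in-time snapshot so the environment can be audited or restored."""
    baseline = {
        "environment": environment_name,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "configuration_items": configuration_items,  # e.g. {"app-server": "2.4.1", ...}
    }
    with open(path, "w") as f:
        json.dump(baseline, f, indent=2)
    return baseline


baseline = capture_baseline(
    "UAT-environment-1",
    {"app-server": "2.4.1", "database": "12.3", "os-image": "ubuntu-22.04"},
    "uat1_baseline.json",
)
print(baseline["captured_at"])
```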

The next activity is to perform tests, let’s look into the details.

Service Validation and Testing Process : Key Activities - Perform Tests

The tests are performed as follows:

Carry out the tests using manual or automated techniques and procedures. Testers must record their findings during the tests. If a test fails, the reasons for failure must be fully documented. Testing should continue according to the test plans and scripts, if at all possible.

When part of a test fails, the incident or issues should be resolved or documented (e.g., as a known error) and the appropriate re-tests should be performed by the same tester.
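
The sketch below illustrates this flow in a hypothetical test harness: tests are executed, findings are recorded, and failures are documented (e.g., as known errors) and queued for re-test; all identifiers and structures are assumptions for this example.

```python
# Illustrative sketch only: executing test scripts, recording findings, and
# documenting failures (e.g. as known errors) so re-tests can be scheduled.
# The script structure and logging approach are assumptions for this example.
def run_tests(test_scripts):
    findings, known_errors, retest_queue = [], [], []
    for script in test_scripts:
        actual = script["run"]()                      # manual result entry or automation hook
        passed = actual == script["expected_result"]
        findings.append({"script_id": script["id"], "actual": actual, "passed": passed})
        if not passed:
            # Document the reason for failure fully and flag for re-test
            known_errors.append({
                "script_id": script["id"],
                "expected": script["expected_result"],
                "actual": actual,
            })
            retest_queue.append(script["id"])
    return findings, known_errors, retest_queue


scripts = [
    {"id": "TS-001", "expected_result": "OK", "run": lambda: "OK"},
    {"id": "TS-002", "expected_result": "OK", "run": lambda: "TIMEOUT"},
]
findings, known_errors, retests = run_tests(scripts)
print(retests)   # ['TS-002'] -- to be re-tested once the error is resolved
```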

The deliverables from testing are:

  • Actual results showing proof of testing with cross-references to the test model, test cycles, and conditions

  • Problems, errors, issues, non-conformances, and risks remaining to be resolved

  • Resolved problems/known errors and related changes

  • Sign-off

Let us now look at the evaluate exit criteria and report in the validation and testing process.

Service Validation and Testing Process : Key Activities - Evaluate Exit Criteria and Report

To evaluate the exit criteria and report, the actual results are compared with the expected results. The results may be interpreted in terms of pass or fail; risk to the business or the service provider; or a change in a projected value, e.g., a higher cost to deliver the intended benefits. To produce the report, gather the test metrics and summarize the results of the tests.

Examples of exit criteria are:

  • The service, with its underlying applications and technology infrastructure, enables the business users to perform all aspects of function as defined.

  • The service meets the quality requirements.

  • Configuration baselines are captured into the CMS.
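
As an illustration of how exit criteria might be evaluated against actual results, here is a minimal sketch; the criterion names and thresholds are assumptions for this example and would normally be defined in the test plan.

```python
# Illustrative sketch only: evaluating exit criteria by comparing actual
# results against expected results and summarizing pass/fail for the report.
# Criterion names and thresholds are assumptions for this example.
def evaluate_exit_criteria(test_summary, criteria):
    """Return per-criterion pass/fail plus an overall verdict for the test report."""
    evaluation = {name: check(test_summary) for name, check in criteria.items()}
    evaluation["overall"] = "pass" if all(evaluation.values()) else "fail"
    return evaluation


criteria = {
    "all_acceptance_tests_passed": lambda s: s["failed"] == 0,
    "quality_threshold_met":       lambda s: s["pass_rate_pct"] >= 98.0,
    "baseline_captured_in_cms":    lambda s: s["baseline_in_cms"],
}

summary = {"failed": 0, "pass_rate_pct": 99.2, "baseline_in_cms": True}
print(evaluate_exit_criteria(summary, criteria))
# {'all_acceptance_tests_passed': True, 'quality_threshold_met': True,
#  'baseline_captured_in_cms': True, 'overall': 'pass'}
```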

The next activity is test clean up and closure.

Service Validation and Testing Process : Key Activities - Test Cleanup and Closure

Test clean-up and closure is done to ensure that the test environments are cleaned up or initialized. Review the testing approach and identify improvements as input to design/build and buy/build decision parameters, and to future testing policies and procedures.

So far, we have learned about the different activities of validation and testing. Moving ahead, let us look at the triggers, inputs, and outputs of this process in the next section.

Service Validation and Testing Process Triggers, Inputs, and Outputs

Let us discuss the Triggers, Inputs, and Outputs one by one:

Trigger

The trigger for testing is a scheduled activity on a release plan, test plan or quality assurance plan.

Inputs

The key inputs to the process are:

  • The service package – This comprises a core service package and re-usable components, many of which themselves are services, e.g., supporting service. It defines the service’s utilities and warranties that are delivered through the correct functioning of the particular set of identified service assets. It maps the demand patterns for service and user profiles to SLPs.

  • Service Level Package (SLP) – One or more SLPs provide a definitive level of utility or warranty from the perspective of outcomes, assets and patterns of business activity (PBA) of customers.

  • Service provider interface definitions – These define the interfaces to be tested at the boundaries of the service being delivered, e.g., process interfaces, organizational interfaces.

  • The Service Design package – This defines the agreed requirements of the service, expressed in terms of the service model and Service Operations plan. It includes:

  • Operation models (including support resources, escalation procedures and critical situation handling procedures)

  • Capacity/resource model and plans – combined with performance and availability aspects

  • Financial/economic/cost models (with TCO, TCU)

  • Service Management model (e.g., integrated process model as in ISO/IEC 20000)

  • Design and interface specifications.

  • Release and deployment plans – These define the order that release units will be deployed, built and installed.

  • Acceptance Criteria – These exist at all levels at which testing and acceptance are foreseen.

  • RFCs – These instigate required changes to the environment within which the service functions or will function.

Outputs:

The direct output from testing is the report delivered to service evaluation. This sets out:

  • Configuration baseline of the testing environment

  • Testing carried out (including options chosen and constraints encountered)

  • Results from those tests

  • Analysis of the results, e.g., comparison of actual results with expected results, risks identified during testing activities.

After the service has been in use for a reasonable time, there should be sufficient data to perform an evaluation of the actual vs. predicted service capability and performance. If the evaluation is successful, an evaluation report is sent to Change Management with a recommendation to promote the service release out of early life support and into normal operation.

Other outputs include:

  • Updated data, information, and knowledge to be added to the service knowledge management system, e.g., errors and workarounds, testing techniques, analysis methods

  • Test incidents, problems and error records

  • Improvement ideas for Continual Service Improvement to address potential improvements in any area that impacts on testing:

  • To the testing process itself

  • To the nature and documentation of the Service Design outputs

  • Third-party relationships, suppliers of equipment or services, partners (co-suppliers to end customers), users and customers or other stakeholders.

In the next section, we will discuss Service validation and testing process interfaces with other lifecycle stages.

Service Validation and Testing Interfaces to Other Lifecycle Stages

Testing supports all of the release and deployment steps within Service Transition. Although this lesson focuses on the application of testing within the Service Transition phase, the test strategy will ensure that the testing process works with all stages of the lifecycle.

Service Validation and Testing works with Service Design to ensure that designs are inherently testable, and it provides positive support in achieving this. Examples range from including self-monitoring within hardware and software, through the re-use of previously tested and known service elements, to ensuring rights of access to third-party suppliers so that inspection and observation of delivered service elements can be carried out easily.

Service Validation and Testing also works closely with CSI, feeding in failure information and improvement ideas resulting from testing exercises.

Service Operation will use maintenance tests to ensure the continued efficacy of services; these tests will require maintenance to cope with innovation and change in environmental circumstances.

Service Strategy should accommodate testing in terms of adequate funding, resource, profile, etc.

In the next section, let us discuss the information management in SVT process.

Service Validation and Testing Information Management

The nature of IT Service Management is repetitive, and this ability to benefit from re-use is recognized in the suggested use of transition models. Testing benefits greatly from re-use and to this end it is sensible to create and maintain a library of relevant tests and an updated and maintained dataset for applying and performing tests.

The test management group within an organization should take responsibility for creating, cataloging and maintaining test scripts, test cases and test data that can be reused. Similarly, the use of automated testing tools (Computer Aided Software Testing – CAST) is becoming ever more central to effective testing in complex software environments. Equivalently standard and automated hardware testing approaches are fast and effective.

Test data - However well a test has been designed, it relies on the relevance of the data used to run it. This clearly applies strongly to software testing, but equivalent concerns relate to the environments within which hardware, documentation, etc. are tested. Testing electrical equipment in a protected environment, with a smoothed power supply and dust, temperature and humidity control, will not be a valuable test if the equipment will be used in a normal office.

Test environments - Test environments must be actively maintained and protected. For any significant change to a service, the question that should be asked is: ‘If this change goes ahead, will there need to be a consequential impact on the test data?’

If so, it may involve updating test data as part of the change, and the dependency of a service, or service element, on test data or test environment will be evident from the SKMS, via records and relationships held within the CMS.

Outcomes from this question include:

  • Consequential updating of the test data

  • A new separate set of data or new test environment, since the original is still required for other services

  • Redundancy of the test data or environment – since the change will allow testing within another existing test environment, with or without modification to that data or environment (this may, in fact, be the justification behind a perfective change – to reduce testing costs)

  • Acceptance of a lower level of testing, since the test data or environment cannot be updated to deliver equivalent test coverage for the changed service.

In the next section, let us learn about practices for maintaining test data and test environments.

Practices of maintaining Test Data and Test Environments

Maintenance of test data should be an active exercise and should address relevant issues including:

  • Separation from any live data, and steps to ensure that it cannot be mistaken for live data when being used, and vice versa (there are many real-life examples of live data being copied and used as test data and being the basis for business decisions)

  • Data protection regulations – when live data is used to generate a test database, if information can be traced to individuals, it may well be covered by data protection legislation that, for example, may forbid its transportation between countries

  • Backup of test data, and restoration to a known baseline for enabling repeatable testing; this also applies to initiation conditions for hardware tests that should be baselined. An established test database can also be used as a safe and realistic training environment for a service.
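
To illustrate these practices, the sketch below masks live data before it is used as test data and restores the test dataset to a known baseline for repeatable testing; the field names, masking rules and file paths are assumptions for this example.

```python
# Illustrative sketch only: masking live data before it is used as test data,
# and restoring the test dataset to a known baseline for repeatable testing.
# Field names, masking rules and file paths are assumptions for this example.
import copy
import json


def mask_personal_data(live_records):
    """Produce test data that cannot be traced back to individuals."""
    masked = []
    for i, record in enumerate(copy.deepcopy(live_records), start=1):
        record["name"] = f"Test User {i}"
        record["email"] = f"user{i}@test.invalid"
        record["source"] = "TEST-DATA"        # make it obvious this is not live data
        masked.append(record)
    return masked


def save_baseline(records, path="test_data_baseline.json"):
    with open(path, "w") as f:
        json.dump(records, f, indent=2)


def restore_baseline(path="test_data_baseline.json"):
    """Reload the known baseline so a test run always starts from the same state."""
    with open(path) as f:
        return json.load(f)


live = [{"name": "Jane Doe", "email": "jane@example.com", "balance": 120.5}]
test_data = mask_personal_data(live)
save_baseline(test_data)
print(restore_baseline()[0]["email"])   # user1@test.invalid
```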

In the next section, as with any other process, let us look at the CSFs and KPIs of the SVT process.

Critical Success Factors (CSFs) and Key Performance Indicators (KPIs)

The following list includes some sample CSFs for the Service Validation and Testing process. Each organization should identify appropriate CSFs based on its objectives for the process. Each sample CSF is followed by a small number of typical KPIs that support the CSF. These KPIs should not be adopted without careful consideration.

Each organization should develop KPIs that are appropriate for its level of maturity, its CSFs and its particular circumstances. Achievement against KPIs should be monitored and used to identify opportunities for improvement, which should be logged in the continual service improvement (CSI) register for evaluation and possible implementation.

The following are a few sample CSFs, each followed by its supporting KPIs.

CSF: Understanding the different stakeholder perspectives that underpin effective risk management for the change impact assessment and test activities

KPIs:

  • Roles and responsibilities for impact assessment and test activities have been agreed and documented

  • Increase in the number of new or changed services for which all roles and responsibilities for customers, users and service provider personnel have been agreed and documented

  • Increase in the percentage of impact assessments and test activities where the documented roles have been correctly involved

  • Increase in satisfaction ratings in stakeholder surveys of the Service Validation and Testing process.

CSF: Building a thorough understanding of the risks that have impacted, or may impact, successful Service Transition of services and releases

KPIs:

  • Reduction in the impact of incidents and errors for newly transitioned services

  • Increased number of risks identified in Service Design or early in Service Transition compared with those detected during or after testing

  • Increased ratio of errors detected in Service Design compared with Service Transition, and of errors detected in Service Transition compared with Service Operation.

CSF: Encouraging a risk management culture where people share information and take a pragmatic and measured approach to risk

KPIs:

  • Increase in the number of people who identify risks for new or changed services

  • Increase in the number of documented risks for each new or changed service

  • Increase in the percentage of risks on the risk register that have been managed

CSF: Providing evidence that the service assets and configurations have been built and implemented correctly, in addition to the service delivering what the customer needs

KPIs:

  • Increased percentage of service acceptance criteria that have been tested for new and changed services

  • Increased percentage of services for which build and implementation have been tested separately from any tests of utility or warranty

CSF: Developing reusable test models

KPIs:

  • Increased number of tests in a repository of reusable tests

  • Increased number of times that tests are re-used

CSF: Achieving a balance between the cost of testing and the effectiveness of testing

KPIs:

  • Reduced variance between test budget and test expenditure

  • Reduced cost of fixing errors, due to earlier detection

  • Reduction in business impact due to delays in testing

  • Reduced variance between the planned and actual cost of customer and user time to support testing

Moving on, let us now learn about the challenges and risks faced by the SVT process.

Challenges and Risks

Challenges to effective Testing:

The most frequent challenges to effective testing still stem from a lack of respect for, and understanding of, the role of testing. Traditionally, testing has been starved of funding, and this results in:

  • Inability to maintain test environment and test data that matches the live environment

  • Insufficient staff, skills and testing tools to deliver adequate testing coverage

  • Projects overrunning and allocated testing time frames being squeezed to restore project go-live dates but at the cost of quality

  • Developing standard performance measures and measurement methods across projects and suppliers

  • Projects and suppliers estimating delivery dates inaccurately and causing delays in scheduling Service Transition activities.

Risks to successful Service Validation and Testing include:

  • Unclear expectations/objectives

  • Lack of understanding of the risks means that testing is not targeted at critical elements that need to be well controlled and therefore tested

  • Resource shortages (e.g., users, support staff) introduce delays and have an impact on other Service Transitions.

With this, we have come to the end of module 4. Let us quickly summarize.

Summary

Here is the recap of the SVT module:

  • The purpose of the Service Validation and Testing (SVT) process is to plan and implement a structured validation and test process that provides objective evidence that the new or changed service will support the customer’s business and stakeholder requirements, including the agreed service levels.

  • The scope of the SVT process includes the testing of new or changed services or service components and examines the behavior of these in the target business unit, service unit, deployment group or environment.

  • SVT policies include service quality policy, risk policy, service transition policy, release policy, and change management policy.

  • Testing supports all of the release and deployment steps within Service Transition, and the test strategy will ensure that the testing process works with all stages of the lifecycle.

  • Testing is about measuring the ability of a service to perform as required in a simulated (or occasionally the actual) environment, and so to that extent is focused on measurement.

Now let’s move to module 5 on Release and Deployment Management.
