
Testing Agility in the Cloud: The 4Cs Framework


Application development and delivery teams are under constant pressure to release quality features as quickly as possible. CIOs rate delivering applications faster, with higher quality and with strong control over application development, as their key priorities. What’s more, supporting this type of agile environment is particularly complex for IT teams that are also tasked with supporting multiple, older versions of applications.

Moving faster, with higher quality and stronger control over costs, is a common mantra in enterprise application development and delivery (AD&D) teams today. However, these requirements often pull teams in different directions. To release faster, teams often skip portions of testing to compress timelines, which results in costly customer issues later. Conversely, to achieve the required quality, teams often have to sacrifice features, thereby impacting business deliverables. Lastly, in order to achieve business deliverables with the desired quality, teams tend to be forced to spend heavily, in both resources and people.

To avoid being forced to sacrifice quality for speed, and vice versa, I recommend a 4Cs framework. This framework eliminates common constraints faced by AD&D teams looking to adopt DevOps practices like continuous integration and continuous delivery, and helps deliver agility in the cloud. Many enterprises today are adopting this framework to evaluate the variety of tools and resources in the ecosystem that can help them deliver business value faster, with higher quality and lower costs.

In this article, we introduce the 4Cs framework, and use it in the context of four transformations — each addressing a given set of problems, with an appropriate array of tools for a desired end result — that teams are trying to achieve in their software delivery pipelines.

The 4Cs Framework for the Software Delivery Lifecycle

The 4Cs framework is a set of simple questions that teams should ask when evaluating which tools to implement in an agile software development lifecycle (SDLC) in order to achieve faster releases, with higher quality and optimal costs. The 4Cs are:

  • Configurability
  • Consistency
  • Collaboration
  • Control

Configurability

The key question to ask here is:

Can I have test environments that capture the complexity of my application at each stage of testing?

environments: We are talking about multiple development and test environments that are needed across the SDLC by various teams.

complexity: We need to capture the complexity of the entire application. This includes:

  • Topology of the application, i.e. multiple networks, VPN connections, open ports, etc.
  • The scale of the application, i.e. size of VMs being used in RAM, CPU, Storage, the number of VMs, etc.
  • The platforms and components being used, i.e. OSs, middleware, databases, appliances, etc.

application: An enterprise application consists of multiple components, even products, delivered by different product teams.

stage: Each stage of testing entails the testing of different application components by different teams at different levels (functional, systems, integration, performance, etc.).

Consistency

The key question to ask here is:

Can I depend on my test environments to be in the exact state that I need them to be, and whenever I want?

exact state: Consistent test results require testing an application in a known state. This includes not just the infrastructure topology to be in the desired state but also the application (OS and up) to be configured correctly.

whenever: Being able to test continuously and also as needed — based on priorities — is key to achieving continuous integration/continuous delivery and DevOps workflows.

Collaboration

The key question here is:

Can I make it easier for my team (devs + qa + ops) to work together more productively? 

work together: Feedback loops are important for DevOps. Finding bugs, reproducing them quickly, fixing them and verifying them happens continuously in the SDLC. The shorter and faster these loops are, the more agility there is in the SDLC.

Control

The key question here is:

Can I ensure the right people have the appropriate resources to do their jobs?

ensure: Being in control of providing resources for AD&D throughout the SDLC while still servicing the needs of various development and test teams in a self-service and agile fashion.

right people: Being able to secure access to resources, so that only the teams/users that need access to a set of resources have it. This means being able to secure your enterprise resources from the outside world.

appropriate resources: Being able to use resources in the most optimal way, keeping in mind the needs of the AD&D teams and the budgetary constraints of the organization. This means being able to proactively monitor the usage of resources, and reactively report on the efficacy and ROI of the resources used.

Transformations to the SDLC

The picture below depicts a typical SDLC pipeline.

There are four stages in this pipeline (note: this is just an example; your own pipeline may have more stages).

  • Development stage: Developers and testers are working on their individual features and automation tools. Developers from various product teams are checking code into their feature branches. There is some unit testing happening. QA teams for a feature are performing functional testing at a feature level. Multiple teams may be using a set of shared services as well, comprising centralized services like source control systems, build services and centralized databases.
  • Integration stage: Code from multiple features gets integrated and larger scale integration testing is
    conducted to ensure the quality of the entire product. There may be multiple QA teams working on various aspects of quality of the product in this stage.
  • Pre-Production stage: This is the last stage before releasing the product to customers. Typically, the most complex testing is done in production-like environments, hoping to weed out complex bugs that can only be found when testing at production scale.
  • Production: The final stage, when the product is deployed for customers. Services are managed by an IT/Ops team, and release teams manage a boxed product.

As we move from left to right in this pipeline, the following changes are observed:

  • Complexity: Increases from left to right. Complexity of the application applies to both the topology and the application configuration. As more and more features come together and as more intensive tests are performed, complexity increases.
  • Churn: Decreases from left to right. More code is added more frequently towards the left. More bugs are found and fixed towards the left. This results in lots of churn in the application.
  • People: Decrease from left to right. There are more developers and testers touching the application code towards the left. By contrast, on the other side in production, the goal is to have as few people touching the application code and configuration as possible.

With this context, let’s take a look at a set of transformations targeted for areas of this SDLC pipeline that are ripe for change. In each transformation we will discuss the problem, introduce the class of tools at our disposal to address the issues, and use the 4Cs framework for evaluation.

It is worth mentioning that we will go through these transformations in order, from left to right. You may choose to adopt some or all of them, and in a different order than discussed here. We have indeed seen customers start this journey of transformations at different points.

Transformation 1: On-demand Test Environments

In this transformation, we focus on the individual dev and test teams on the left.

The problems in this stage are:

  • Developers are checking in code and doing unit tests on their own machines.
  • There is no consistency in test environments used by developers and testers.
  • There may be one lab shared between application teams for running functional tests.
  • There are lots of delays between test passes, either due to lack of environments or due to environments not being properly configured.
  • There is contention between users and teams trying to use test environments, which results in lower quality and delays in testing.

The tools available for this transformation fall into the following categories:

  • Infrastructure Platforms: In addition to common on-premise infrastructure management tools, leading cloud platforms — especially the IaaS portions of those platforms — are well-suited to provision lab infrastructure on demand.
  • Configuration Management and Deployment tools: These tools are used to configure software components on top of raw infrastructure and also to deploy application components. Examples of such tools are: Chef, Puppet, Ansible, Salt, UrbanCode, etc.
  • Task Management Tools: These tools are used for tracking work items, bugs, etc. An efficient task management system is needed to ensure that quality issues don’t fall through the cracks and that the right people are working on the right set of work items at any given time.
  • Unit Testing: A robust set of unit tests for each piece of code being checked in is a basic requirement for continuous delivery and DevOps models. There are various unit testing platforms available today, like JUnit, NUnit, Cucumber, etc., that make the task of writing unit tests easy for development teams.
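The unit-testing frameworks named above are language-specific (JUnit for Java, NUnit for .NET, and so on). As an illustrative sketch, here is what a small pytest-style suite might look like in Python; the function under test, `apply_discount`, is hypothetical:

```python
def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_typical_discount():
    # Normal case: a 25% discount on 100.0 yields 75.0.
    assert apply_discount(100.0, 25) == 75.0


def test_zero_discount_leaves_price_unchanged():
    # Edge case: a 0% discount should be a no-op.
    assert apply_discount(49.99, 0) == 49.99


def test_invalid_percent_is_rejected():
    # Bad input should fail loudly, not silently corrupt a price.
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Because a suite like this runs in seconds, it is cheap enough to gate every single check-in, which is what makes it a basic building block for the continuous delivery practices discussed below.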

With the application of these tools, the desired end result is:

  • More testing being done by individual developers and testers in consistent environments
  • Lower wait times between test passes
  • Bugs found, fixed, and validated faster
  • Features getting into the integration stage faster and with a higher level of quality

The 4Cs criteria to evaluate the tools to achieve the end result are as follows:

Configurability: Give each team or even an individual developer or tester a complete test environment for their component.

Consistency: Create base environments in the desired state quickly, within seconds or minutes. Incremental changes can be applied on top of this known state, and testing can be conducted efficiently and consistently.

Collaboration: Ability to easily share one’s test environment with other team members to collaborate on testing and bug fixing. Ability to share a set of common services like databases, source control systems, and build servers with other teams and users.

Control: Ability to provide such test environments whenever needed, stow away when not in use and rehydrate quickly in a consistent state, and optimize spending.
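To make the consistency and control criteria above concrete, here is a toy model of an on-demand test environment; no real cloud API is involved, and all names are hypothetical. It captures a known base state, incremental changes on top, a reset back to the base state, and suspend/resume for cost control ("stow away and rehydrate"):

```python
import copy


class TestEnvironment:
    """Toy model of an on-demand test environment."""

    def __init__(self, base_state):
        # The known base state is kept immutable for consistent resets.
        self.base_state = copy.deepcopy(base_state)
        self.state = copy.deepcopy(base_state)
        self.running = True

    def apply(self, change):
        """Layer an incremental change over the current state."""
        self.state.update(change)

    def reset(self):
        """Return to the known base state for a consistent test run."""
        self.state = copy.deepcopy(self.base_state)

    def suspend(self):
        # Stow away: stop paying for idle capacity.
        self.running = False

    def resume(self):
        # Rehydrate in exactly the state it was left in.
        self.running = True
```

For example, a tester could apply a change such as seeding a database, run a test pass, then call `reset()` to get back to the known state before the next pass, and `suspend()` the environment overnight to optimize spending.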


Transformation 2: Continuous Integration

For this transformation, we will place our focus on the integration stage.

The problems in this stage are:

  • Integration environments are typically complex to set up, so there is only one. Teams forgo doing integration tests at the feature level and wait to integrate at a much later stage.
  • Integration environment may not run reliably.
  • Components integrate infrequently, causing breaks and integration being blocked for several components.
  • Bringing the integration environment to a known good state takes a long time, thus increasing the time for feature integration.
  • After transformation #1, features are getting into integration faster but hitting a choke point here.

Along with the tools discussed in transformation #1, there is an additional class of tools available to set up this transformation:

  • Continuous Integration tools: These tools are designed as workflow engines to make the task of automated integration testing easier. They can help you set up workflows to track changes to any feature in the product and run a battery of tests, from the simplest to the most complex, as needed, to validate the quality of the change. Tools in this category include Jenkins, Visual Studio, TeamCity, and Bamboo. Recently, there have been a host of new service-based tools in this category — TravisCI, CodeShip, etc. — catering to applications built on the PaaS model.
  • Static Analysis tools: These tools give you ‘quality for free’. They are designed to weed out bugs just by analyzing the source code, without you having to write any test cases, and they can be easily integrated into continuous integration cycles to improve the quality of the code. Examples of tools in this category are SonarQube, FxCop, Fortify, and Parasoft Static Analysis tools.
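Conceptually, a CI workflow engine acts as a gate over every check-in: run a battery of checks in order, stop at the first failure, and route the result back to the author. The sketch below is a toy illustration of that loop in Python; the check functions and the fields on `change` are hypothetical, and real engines such as Jenkins express this as pipeline configuration rather than application code:

```python
def run_ci_gate(change, checks):
    """Run named checks in order; attribute the first failure.

    `change` is a dict describing a check-in; `checks` is a list of
    (name, predicate) pairs, ordered from the cheapest check
    (static analysis) to the most expensive (integration tests).
    """
    for name, check in checks:
        if not check(change):
            # Short, fast feedback loop: tell the author immediately.
            return {"status": "failed", "check": name,
                    "notify": change["author"]}
    return {"status": "passed", "check": None, "notify": None}


# Hypothetical checks, ordered cheap-to-expensive so failures
# surface before expensive resources are consumed.
checks = [
    ("static-analysis", lambda c: c["lint_clean"]),
    ("unit-tests", lambda c: c["unit_tests_pass"]),
    ("integration-tests", lambda c: c["integration_tests_pass"]),
]
```

A failing unit test stops the gate before the expensive integration run even starts, and the result names the right person to notify, which is exactly the fast attribution described in the desired end results below.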

The desired end result of this transformation is:

  • Run automated tests for every check-in, for every feature in a representative test environment. The test passes are usually very fast — generally under a couple of minutes — so the results of each check-in can be communicated quickly.
  • Run a more intensive set of integration tests at a periodic interval — e.g., daily — on all changes made to the product since the last run, in a full scale application deployment. These tests usually take hours, if not days.
  • Ability to point out the cause of failure, immediately attribute the failure to the right person and communicate enough information (logs, repro, data, etc.) to resolve the issue quickly.
  • This results in early and intensive testing of a large portion of the application, mostly in an automated fashion.
  • The saving of valuable QA time that can be spent on creative testing areas like exploratory testing.


The 4Cs criteria to evaluate the tools to achieve the end result are as follows:

  • Configurability – Ability to support a large variety of continuous integration testing workflows. On one end of the spectrum are simple one-box CI environments, where all application components are deployed at a smaller scale on one machine along with the integration test suite. On the other end of the spectrum are more complex environments that represent a production-like deployment of the application, including multiple VMs, networks, external interfaces, VPN connections, appliances, etc.
  • Consistency – The need is to complete the integration runs as fast as possible, and ensure that they’re of reliable quality. Each integration run should start off as a brand new environment that is configured in the base state required by the test. The latest build is deployed and configured on top of this base state and tests are run.
  • Collaboration – Results of CI runs should be disseminated to teams quickly, with pointers to builds, results, and test environments, especially in case of failures.
  • Control – Should be able to consume use-and-throw CI environments and save them off when needed (e.g. failures). Should be able to scale up the resources needed for CI based on business needs and scale them down when required.

Transformation 3: Testing at Production Scale

For this transformation we will focus on the pre-production stage. However, this transformation can be applied anywhere in the pipeline. The earlier this type of testing is done, the better.

The problems in this stage are:

  • Teams often have a pre-production environment that is not up to production standards.
  • It is hard to build such an environment, given the complexity of the application, the scale and the configuration of the application and data.
  • Maintaining such an environment in a consistent state is hard, given that there are intensive tests being run at this stage and there are a number of teams that work together in these environments.
  • Product upgrades, even seemingly simple ones like OS upgrades, can sometimes leave the environments in a broken state for a long time, delaying product releases.

Along with the tools discussed in Transformations 1 and 2, we should also add the following class of tools to our arsenal:

  • Test Data Management tools: Should be able to create production-like environments and populate them with data that is at the scale and state of production. This should be an easily repeatable process as well.
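One common test data management technique is to mask sensitive fields of production data deterministically, so the pre-production dataset keeps production's scale, shape, and referential consistency without exposing customer information, and the process is repeatable. A minimal sketch of the idea follows; the field names and record shape are hypothetical:

```python
import hashlib


def mask_record(record, sensitive_fields):
    """Replace sensitive values with deterministic pseudonyms.

    Hashing (rather than random values) means the same input always
    maps to the same pseudonym, preserving joins across tables.
    """
    masked = dict(record)
    for field in sensitive_fields:
        if field in masked:
            digest = hashlib.sha256(str(masked[field]).encode()).hexdigest()[:12]
            masked[field] = f"{field}_{digest}"
    return masked


def build_preprod_dataset(production_rows, sensitive_fields=("email", "name")):
    """Repeatable: the same input always yields the same masked dataset."""
    return [mask_record(row, sensitive_fields) for row in production_rows]
```

Because the masking is deterministic, rebuilding the pre-production environment before each release produces the same data every time, which supports the consistency criterion discussed below.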

The desired end result of this transformation is:

  • Ability to create production-level environments at any point in the pipeline
  • Ability to create production-like environments in a consistent state (application and data) and apply product changes to them
  • Run intensive, and even destructive tests in this environment
  • Discover hard-to-find bugs before reaching production


The 4Cs criteria to evaluate the tools to achieve the end result are as follows:

  • Configurability: Should be able to handle complex networking topologies, access policies, data management, scale of resource requirements, and monitoring.
  • Consistency: Should be able to ensure that pre-production truly reflects the state of production before each release.
  • Collaboration: Should be able to share early access to pre-production with internal stakeholders, like feature teams, QA teams, and Ops, as well as external stakeholders like customers, contractors and partners.
  • Control: Should be able to provide access to pre-production environments when needed and stow away when not needed. Should be able to limit access to certain components to specific users, teams or departments.

Transformation 4: Parallel Testing

This transformation applies across the SDLC pipeline. Parallelism can be introduced at any stage in the pipeline to accelerate the process without compromising quality.

The problem we are trying to address is:

  • The business needs to grow over time. This results in an increase in the products/features being developed and in the number of people working on them. More components and more people mean more features and more check-ins.
  • A single-threaded pipeline (single CI environment, single staging/pre-production environment) does not suffice if the team has to deliver these growing requirements with the same quality in the same amount of time.
  • Teams that are geographically spread end up accessing single testing environments that may be remote to them. This creates more inefficiencies in the testing process.

In terms of tools for this transformation, we will discuss a set of practices that can be implemented with the class of tools discussed previously:

  • Patterns: This deals with using code to create multiple copies of the same application environment. This includes infrastructure-as-code and application configuration-as-code. Combining these produces the complete application stack needed for testing. This code can be run over and over again to create environments.
  • Clones: This implies cloning an existing application stack that has been built either manually or with a pattern. The tools being used for cloning usually take care of the cloning process without any special knowledge needed by the user performing the clone.

Both of these practices can be used exclusively or, more effectively, together in different parts of the SDLC, based on teams’ needs. Patterns create and validate code that can be propagated throughout the SDLC and can even be used for production (continuous delivery). However, each run can take a long time. Cloning makes the process much faster and easier for end users (devs and testers).
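The two practices can be sketched as follows, using plain Python dictionaries to stand in for real infrastructure (all names are hypothetical): a pattern is code that is re-run to build an environment from scratch, while a clone copies an already-built environment, renaming resources to avoid collisions between parallel copies:

```python
import copy


def build_from_pattern(pattern):
    """'Pattern': run code (here, a spec dict) to produce a fresh
    environment. Slow in real life, since provisioning happens on
    every run."""
    return {
        "vms": [{"name": f"{pattern['app']}-vm{i}", "ram_gb": pattern["ram_gb"]}
                for i in range(pattern["vm_count"])],
        "network": f"{pattern['app']}-net",
    }


def clone_environment(env, suffix):
    """'Clone': copy an existing environment, renaming resources so
    parallel copies (e.g. network address spaces) do not collide."""
    cloned = copy.deepcopy(env)
    for vm in cloned["vms"]:
        vm["name"] += f"-{suffix}"
    cloned["network"] += f"-{suffix}"
    return cloned
```

In this sketch, a team would build and validate one environment from the pattern, then hand each feature team a fast clone of it, which mirrors how the two practices are combined across the SDLC.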

The desired end result of this transformation is:

  • More feature teams (existing and future) are onboarded quickly and are productive sooner.
  • More features make it through the pipeline and/or features take less time in the pipeline.


The 4Cs criteria for evaluation are:

Configurability: Should be able to handle parallel, possibly identical, environments, e.g., the complexity of managing network address spaces and duplicate application components.

Consistency: Should be able to ensure that testing happens in environments with a consistent state, especially if teams are in different geographic locations throughout the world.

Collaboration: Should be able to handle dependency management for components moving through parallel testing environments.

Control: Should provide oversight of resource utilization. Increased parallelism increases expenditure on resources, which must be managed judiciously.

Transformations at a glance

In summary, we have gone through four transformations to introduce the following changes into a typical SDLC pipeline:

  • On-demand self-service test environments
  • Continuous integration and continuous quality
  • Earlier testing in production-like environments
  • Parallel testing

We have typically seen teams start this journey from two points:

  • Testing in production-like environments: Enterprise teams typically face much difficulty when testing in production-like circumstances, and they have taken on this problem as the first step toward transforming their SDLC. Once successful with the right set of tools, they quickly graduate to parallelism in these production-like environments. Subsequently, as they become more efficient, they start thinking about breaking up the monolithic application architecture into more modular blocks. With these modular components and modular teams, it becomes easier to equip those teams with on-demand self-service environments, in order to implement continuous integration/delivery practices.
  • On-demand, self-service test environments: In this path, teams are usually on a journey to modularize their application more from a dev/test perspective than from an IT/Ops perspective. They are implementing continuous integration/delivery practices earlier in the cycle. Once these practices are honed, they are promoted to stages and teams further on the right of the SDLC. Production-like environments are also included in the mix as the complexity of testing increases.

Each team may take the journey through a different path and with different tools, but the end goal is always faster, higher, stronger.

Enterprises want to produce business results faster with good ROI. A key enabler of that is the speed at which software is delivered, the quality at which it is produced — and the cost incurred. It is important for development and test teams to think about the ways they can transform their software delivery lifecycles to achieve those objectives. There is a large ecosystem of patterns, tools and processes that are available to accomplish that goal. In this paper we talked about four such transformations, and the 4Cs framework for evaluating the tools that can help you achieve those transformations.

Sumit Mehrotra

Sumit Mehrotra is Sr. Director of Product Management at Skytap, a role in which he is responsible for product strategy and roadmaps. Prior to Skytap, Sumit worked at Microsoft in various roles and shipped a number of products, including Windows Azure and the Windows operating system. Sumit holds an MBA from the University of Chicago Booth School of Business and a Master’s in Computer Science from Boston University.

