
Continuous Delivery with TFS: Pausing to Consider the Big Picture

6 Mar 2015

In this fifth post in my series about building a continuous delivery pipeline with TFS, we pause the practical work and consider the big picture. If you have taken my advice and started to read the Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation book, you will know that continuous delivery (or deployment) pipelines are all about speeding up the process of getting code from being an idea to working software in the hands of end users. Specifically, continuous delivery pipelines are concerned with moving code from development, through testing and into production. In the olden days, when our applications were released once or twice a year, it didn’t much matter how long this phase took because it probably wasn’t the bottleneck. Painful, yes, but a bottleneck, probably not. However, with the increasing popularity of agile methods of software development such as Scrum, where deployment to live can sometimes be as frequent as several times a day, the journey from development to live can become a crucial limiting step and needs to be as quick as possible. As well as being quick, the journey also needs to be repeatable and reliable, and the answer to these three requirements is automation, automation, automation.

The delivery pipeline we are building in this series will use a sample ASP.NET MVC web application that talks to a SQL Server database. The hypothetical requirements are that on the journey from development to production, the application is deployed to an environment where automated acceptance tests are run and then, optionally (according to an approvals workflow), to an environment where manual and exploratory tests can be carried out. I’ve chosen this scenario because it’s probably a reasonably common one and it illustrates many of the facets of delivery pipelines and the TFS tooling used to manage them. Your circumstances can and will vary, though, and you will need to take the ideas and techniques I present and adapt them to your situation.
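To make the scenario a little more concrete, here is a rough sketch of the kind of page the sample application might serve – an MVC controller reading rows from a SQL Server table. The controller name, table name and connection string are purely illustrative and are not the actual sample code for this series.

```csharp
// Illustrative only: a minimal ASP.NET MVC controller that reads from SQL Server.
// HomeController, dbo.Contact and "DefaultConnection" are hypothetical names.
using System.Collections.Generic;
using System.Configuration;   // needs a reference to System.Configuration
using System.Data.SqlClient;
using System.Web.Mvc;

public class HomeController : Controller
{
    public ActionResult Index()
    {
        var contacts = new List<string>();
        var connectionString =
            ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Name FROM dbo.Contact", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    contacts.Add(reader.GetString(0));
                }
            }
        }

        // Pass the list to the Index view as its model.
        return View(contacts);
    }
}
```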

The starting point of the pipeline is the developer workstation – what I refer to as the DEV environment. I’m slightly opinionated here in that my view of an ideal application is one that can be checked out from version control and then run entirely in DEV with only minimal configuration steps. If there is some complicated requirement to hook into other machines or applications, then I’d want to be taking a hard look at what is going on. An example of an acceptable post-checkout configuration step would be creating a database in LocalDB from the publish profile of a SQL Server Database Project. Otherwise, everything else just works. The solution uses automated acceptance tests? They just work. The automated acceptance tests need supporting data? They handle that automatically. The application talks to external systems? It’s all taken care of automatically through service virtualisation. You get the idea…
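To give a feel for what automating that post-checkout step might look like, here is a minimal sketch that deploys the compiled dacpac from a SQL Server Database Project to LocalDB using the DacFx API (the Microsoft.SqlServer.DacFx NuGet package). The LocalDB instance name, dacpac path and database name are assumptions for the sketch rather than values from the actual sample application.

```csharp
// A minimal sketch, assuming the Microsoft.SqlServer.DacFx NuGet package.
// Instance name, dacpac path and database name are hypothetical.
using Microsoft.SqlServer.Dac;

public static class DevDatabaseSetup
{
    public static void Deploy()
    {
        // Connect to the developer's LocalDB instance.
        var services = new DacServices(
            @"Data Source=(localdb)\ProjectsV12;Integrated Security=True");

        // Load the dacpac produced by building the SQL Server Database Project.
        using (var package = DacPackage.Load(@"Database\bin\Debug\MyApp.Database.dacpac"))
        {
            // Create the database if it doesn't exist, or upgrade it in place if it does.
            services.Deploy(package, "MyApp", upgradeExisting: true);
        }
    }
}
```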

Moving on, when code is checked back into version control from DEV, all of the changes from each developer need merging and building together in a process known as continuous integration. TFS handles this for us very nicely and can also run static code analysis and unit tests as part of the CI process. The result of CI is a build of all of an application’s components that could potentially be released to production. (This answers a question I grappled with early on – build as debug or release?) These components are then deployed to increasingly live-like environments where code and configuration can be tested to gain confidence in that build. One of the core tenets of continuous delivery pipelines is that the same build should be deployed to successive environments in the pipeline. If any of the tests fail in an environment, the build is deemed void and the process starts again.
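As a simple illustration of the sort of unit test TFS can run during CI, here is a minimal MSTest example. The PriceCalculator class is hypothetical and exists only to make the sketch self-contained.

```csharp
// A minimal MSTest sketch; PriceCalculator is a hypothetical class under test.
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class PriceCalculator
{
    private readonly decimal _vatRate;

    public PriceCalculator(decimal vatRate)
    {
        _vatRate = vatRate;
    }

    public decimal Total(decimal netPrice)
    {
        return netPrice * (1 + _vatRate);
    }
}

[TestClass]
public class PriceCalculatorTests
{
    [TestMethod]
    public void Total_AddsVatToNetPrice()
    {
        var calculator = new PriceCalculator(vatRate: 0.20m);

        Assert.AreEqual(120m, calculator.Total(netPrice: 100m));
    }
}
```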

The next environment in the pipeline is one where automated acceptance tests will be run. Typically, this will be an overnight process, especially if the tests number in their hundreds and runs take some hours to complete. I think of this environment as answering one question: have the latest changes broken the acceptance tests, meaning either the code needs fixing or the tests need updating to accommodate the changed behaviour? To that end, all variables that could affect the tests need to be controlled. This includes data, interfaces to external systems and, in some cases, the environment itself, if poor performance of the environment might cause tests to fail. I refer to this environment as DAT – development automated test.
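To show what it might look like for the tests to handle their own supporting data, here is a rough sketch of an automated acceptance test that seeds the record it expects before driving the browser. It assumes Selenium WebDriver and MSTest; the site URL, connection string, table and element IDs are invented for the example.

```csharp
// A hedged sketch of a data-controlling acceptance test, assuming Selenium WebDriver
// and MSTest. All names, URLs and connection strings are hypothetical.
using System.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestClass]
public class ContactSearchAcceptanceTests
{
    private const string SiteUrl = "http://dat-web01/MyApp";
    private const string DatConnectionString =
        "Data Source=dat-sql01;Initial Catalog=MyApp;Integrated Security=True";

    private IWebDriver _driver;

    [TestInitialize]
    public void SeedDataAndStartBrowser()
    {
        // Put the data this test relies on in place, rather than trusting shared state.
        using (var connection = new SqlConnection(DatConnectionString))
        using (var command = new SqlCommand(
            "IF NOT EXISTS (SELECT 1 FROM dbo.Contact WHERE Name = 'Jane Tester') " +
            "INSERT INTO dbo.Contact (Name) VALUES ('Jane Tester')", connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }

        _driver = new FirefoxDriver();
    }

    [TestMethod]
    public void SearchingByName_ShowsTheSeededContact()
    {
        _driver.Navigate().GoToUrl(SiteUrl + "/Contacts");
        _driver.FindElement(By.Id("search")).SendKeys("Jane Tester");
        _driver.FindElement(By.Id("search-button")).Click();

        Assert.IsTrue(_driver.PageSource.Contains("Jane Tester"));
    }

    [TestCleanup]
    public void StopBrowser()
    {
        _driver.Quit();
    }
}
```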

If code passes all the tests in the DAT environment, a build can optionally be deployed to an environment where exploratory testing or manual test cases can be carried out. I call this DQA – development quality assurance. This environment should be more live-like than DAT and could contain data that is representative of production, along with live links to any external systems. For some pipelines, DQA could be the final environment before deploying to production; for others, further environments might be needed for load testing, for example, or to satisfy an organisational governance requirement.

So that’s the big picture of what this series is all about – back to building stuff in the next post.

Cheers – Graham

The post Continuous Delivery with TFS: Pausing to Consider the Big Picture appeared first on Please Release Me.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)