Introduction
Anyone who has to manage database changes would love to be living the "continuous delivery" dream: being able to upgrade a production database using a deployment mechanism so finely honed that any change can be made in the version control system (VCS), integrated, tested rigorously, and deployed to production, with an appropriate rollback route, quickly and with confidence.
In a recent article, Phil Factor described the challenges and benefits of Continuous Delivery for the Database, noting in particular that, "...if teams start deploying early, and continue to refine their build, test and deploy processes, the most obvious benefit becomes predictability."
If continuous delivery for the database is the dream, the reality of releasing database changes for many teams is rather more akin to "push the Go button while chanting the age-old mantra please don’t break, please don’t break". Instead of a rapid, automated and predictable process, a database deployment represents a 'roadblock' that throws up numerous last-minute problems, results in late nights of frantic recoding, and delays getting functionality to customers.
The immediate aftermath of such a chaotic release is punctuated by periodic "explosions" as users discover bugs and other issues. At this point, the temptation is to circumvent the "official" deployment process, which caused so much pain in the first place, and perform any subsequent upgrades, especially those "quick but urgent" fixes, directly to the production database.
Even where a reliable and automated deployment process exists, making and fully testing a change to a database of any size, through source control, can take hours rather than minutes. If a bug in the live database application is directly affecting customers, and, therefore, the business as a whole, then there is likely to be a strong demand to make the fix "now", not in several hours' time.
So, as we fight towards the goal of continuous delivery, with all database deployments fully automated and tested through source control, how do we best deal with the reality? Perhaps you don’t have database source control set up at all, no one on your team uses the database VCS consistently, or you’re trying but just haven’t gotten around to learning it properly. The continuous delivery goal may be the light at the end of the tunnel, but you may not have reached the entrance yet.
Here are seven things you should be doing to 'get your house in order' and avoid disaster when forced into a direct production upgrade, in response to one of those business-critical database bugs that come a-calling after an initial deployment.
1. Adopt a VCS and keep tabs on the schema versions.
If your database is not even in source control, the absolute best thing you can do for your database development and deployment processes, without question, is to put your database scripts and files in a VCS. Source control makes database development more reliable, auditable, and easier to deploy. It provides the features that will help you manage the complexity of databases as they grow, and make them easier to maintain.
Some advice if you aren’t using database source control consistently: your team should be able to build any version of a database from what is in source control. This means that the schema of version "2.1.0.1" of a production database should match exactly the "2.1.0.1" schema in the VCS. If you’re ever forced to modify the production database directly, you need to reintegrate the change into the VCS immediately to avoid version drift.
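How you record the version is up to you. As a minimal sketch (SQL Server, with a made-up property name and version number), you could stamp the database with an extended property as the last step of each deployment, and read it back whenever you need to confirm what’s live:

```sql
-- Stamp the database with the schema version just deployed
-- ('SchemaVersion' and '2.1.0.1' are illustrative)
EXEC sys.sp_addextendedproperty
     @name  = N'SchemaVersion',
     @value = N'2.1.0.1';

-- Later: check which schema version production claims to be
SELECT value AS SchemaVersion
FROM   sys.extended_properties
WHERE  class = 0                    -- database-level properties
  AND  name  = N'SchemaVersion';
```

Subsequent deployments would update the stamp with sys.sp_updateextendedproperty; a one-row version table works just as well. The mechanism matters less than the habit of keeping it in lockstep with the VCS.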
2. Take a database snapshot or backup the moment before you make your changes.
I did not expect this to be a problem that many people faced, but I keep hearing about disasters made worse by not taking a database backup prior to deployment. If it's an emergency fix on a big database, and you're running Enterprise Edition SQL Server, then a database snapshot can be a useful and faster alternative.
A freelance developer told me the story of a client he worked for who patently refused to take backups pre-deployment. When something went wrong with the deployment, the client would come back to him, asking him to restore their system. This happened so often and wasted so much of his time that he initiated an "unauthorized backup regime," which has come in handy countless times now. The lesson is simple -- take a database backup, or snapshot, before making that change!
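Both options are a single statement in SQL Server. A minimal sketch, with made-up database and file names (for the snapshot, NAME must be the logical name of the source database’s data file):

```sql
-- Option 1: a copy-only full backup, so the regular backup chain is untouched
BACKUP DATABASE Sales
TO DISK = N'E:\Backups\Sales_PreDeploy.bak'
WITH COPY_ONLY, CHECKSUM, INIT;

-- Option 2 (Enterprise Edition): a database snapshot, much faster on a big database
CREATE DATABASE Sales_PreDeploy
ON ( NAME = Sales_Data,                          -- logical data file name of Sales
     FILENAME = N'E:\Snapshots\Sales_PreDeploy.ss' )
AS SNAPSHOT OF Sales;

-- If the deployment goes badly wrong, revert the database to the snapshot:
-- RESTORE DATABASE Sales FROM DATABASE_SNAPSHOT = 'Sales_PreDeploy';
```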
3. When updating the production database, it helps to have the insurance of a well-tested rollback script.
Or to put it another way, every "up" (migration) script needs a "down" (rollback) script. If you don’t have your database in source control, the rollback script is, for all intents and purposes, your insurance policy. Some people will tell you that you don’t need a rollback script if you have a good backup, but this is bad advice. Restoring from a backup is an all-or-nothing proposition that can waste loads of time when it matters most, and you lose all the data changes that happened since the backup was started. Before you get into a situation where you have to rely on them, check that your rollback scripts work the way you expect. Check that parachute for holes before the plane catches fire.
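As a simple illustration (the table and column are hypothetical), every change script should ship with its mirror image:

```sql
-- UP (2.1.0.1 -> 2.1.0.2): add a nullable column
ALTER TABLE dbo.Customers
ADD LoyaltyTier tinyint NULL;
GO

-- DOWN (2.1.0.2 -> 2.1.0.1): remove it again
ALTER TABLE dbo.Customers
DROP COLUMN LoyaltyTier;
GO
```

A pure schema rollback like this is the easy case. If the up script also transforms data, the down script has to put that data back, which is exactly why you test it before you need it.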
4. Wrap the "up" script in a transaction.
When it comes time to deploy the database change to production, execute the script within a BEGIN TRANSACTION…ROLLBACK TRANSACTION block. Execute the script (minus the ROLLBACK) and validate that the changes to the objects and rows are exactly what you expected. Commit if they are; roll back, and perform further testing, if they aren't. Keep in mind that if the database is online, this could block any concurrent transactions on the affected tables.
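In T-SQL, the pattern looks something like this (the schema change and validation queries are illustrative):

```sql
SET XACT_ABORT ON;   -- if any statement errors, the whole transaction rolls back

BEGIN TRANSACTION;

-- the change being deployed (hypothetical)
ALTER TABLE dbo.Customers
ADD LoyaltyTier tinyint NULL;

-- validate: did the column arrive, and is the row count untouched?
SELECT name, is_nullable
FROM   sys.columns
WHERE  object_id = OBJECT_ID(N'dbo.Customers')
  AND  name = N'LoyaltyTier';

SELECT COUNT(*) AS CustomerRows FROM dbo.Customers;

-- If the output is exactly what you expected, run: COMMIT TRANSACTION;
-- If not, run: ROLLBACK TRANSACTION;
```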
5. Make any database upgrades through scripts that you've saved to the VCS. Never use the GUI to deploy changes.
The GUI is not your friend. Sure, it’ll save you time during day-to-day database design and development, but when deploying database changes it can turn on you viciously. You need to work from fully-documented migration scripts and store them in the VCS. Script-driven deployments are repeatable, and easier to document and check. They’re also less prone to human error (see also tips 6 and 7). When explosions start going off, you’re going to appreciate having the change scripts in the VCS, where anyone on the team can reference them quickly.
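A documented, guarded migration script might look something like this sketch (the header convention and object names are hypothetical):

```sql
/*
  Migration : 2.1.0.2 (hypothetical numbering)
  Author    : <name>
  Ticket    : <issue reference>
  Purpose   : Add LoyaltyTier to dbo.Customers
*/
IF NOT EXISTS (SELECT 1
               FROM   sys.columns
               WHERE  object_id = OBJECT_ID(N'dbo.Customers')
                 AND  name = N'LoyaltyTier')
BEGIN
    ALTER TABLE dbo.Customers
    ADD LoyaltyTier tinyint NULL;
END
```

The existence check makes the script safe to re-run, which matters when you’re deploying under pressure and can’t remember whether a half-finished run already applied it.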
6. Test the script on a non-production system first.
"Works on my machine" is not going to cut it when those direct upgrades cause massive performance degradation on the live database. You have to have some sort of realistic test environment, with a close approximation to the live data in terms of volume and distribution, where you can perform some basic testing. Otherwise, there is no way to tell how changes will perform in the wild world of production.
7. Where possible, have a second set of eyes validate what you’re going to do, prior to deployment.
This goes together with point 6, because no one is perfect. If at all possible, get someone to look over what you’re planning to push to production before you deploy; trivial mistakes can cause big problems. Some advice: put in place code review practices for your team. They aren’t hard to manage, and they can make a big difference.
SQL Source Control Basics
If you're looking to continue along the path of database source control, this eBook gives a detailed walkthrough of the concepts, complete with code samples.
Download the free eBook
This article was originally posted on Simple-Talk, a technical journal and community hub.