The tenth factor in a 12-factor app is dev/prod parity: the app should behave the same way in local development as it does in the production environment. That way we run into fewer issues and can be more confident that processes will run the same way regardless of the environment in which they are running.
This is the tenth video in our 12-factor Application Modernisation series. In this video, Marc Firth (Managing Director at Firney) explains software engineering best practices for building scalable, reliable web services that are efficient to work with.
A transcript of the video above is included below.
Identical application environments
Marc: So we want our dev, test and production environments to be as similar as possible, because it leads to more reliable and predictable deployments and a greater level of efficiency within the engineering team.
Today we’re discussing why.
This is our series on application modernisation, or the 12-factor app, which is a methodology to help make your web services more reliable, scalable and efficient to work with.
So we want parity between dev and prod environments, or more accurately, dev, test and prod environments. But what do we mean when we say dev/prod parity?
I’ll start with something we mentioned earlier in the series, which is immutability.
We want immutability in our deployments because we want the code that we run locally to be the same code that we run in production, so that we’re guaranteed it’s going to run the same way. That means there won’t be any differences in logic between local and production.
But it’s not only the code that we want to be immutable, we also want things such as our database schemas and our config as well.
Ideally, we want dev, prod and every environment in between to be as similar as possible, so that there aren’t any quirks that come up due to running something in a different environment.
But identical environments are not always possible, for reasons of speed and cost.
Why do we even care?
Gaps between Dev and Prod environments
Historically, there were substantial gaps between development and production. A developer would work on some code and it could take days, weeks or even months for that code to actually make it into production.
Not only that, but once a developer had finished working on the code, an ops person, a separate individual, would deploy it into production.
That developer might be running an environment that’s substantially different to the production environment.
They might be running Nginx, SQLite and OS X locally, whilst the production environment is running Apache, MySQL and Linux. That led to many mismatches between environments, where little quirks appear and something doesn’t quite work the same way in each environment.
Even though you’ve tested it and all your tests pass in development, it just doesn’t work in production.
In a 12-factor app, we want to make those iterations of pushing code into production as fast as possible by implementing continuous deployment.
You want a developer to be able to write code and deploy that code into production hours or even minutes later so that we have this continuous cycle of iterative improvement.
We want those developers who wrote the code to be involved in its deployment and actually watch that code as it makes its way into production to make sure that everything’s working okay.
The tools gap
We should keep the tools gap small by making the dev environment as similar as possible to the production environment. And that’s particularly important with backing services such as databases, queues or cache services.
Many languages offer adapters to different types of services, which means the devs don’t have to change their code in order for it to work with multiple backing services. But little discrepancies and quirks appear as you work with different services.
So in order to avoid that and to keep everything running as smoothly as possible, we want to make sure that we’re using consistent tooling in the different environments that we’re working with.
Developers often see great appeal in running lightweight services locally, such as an in-memory cache in development while production runs Redis. But in a 12-factor app, we want to resist those urges, as tiny inconsistencies do crop up.
Code that passed tests in development or staging might fail in production. It’s these little errors that break the chain of continuous deployment and disincentivize the use of a continuous deployment strategy. Those delays are costly, and they go against the methodology we want everyone to adopt, which is getting changes into production as quickly as possible.
How to achieve consistency between environments
So I’ve got five tips for you for achieving consistency or parity between your environments.
Tip 1: Use the same software versions
The first tip is to use the same type and version of software everywhere you can in your environments. Now, modern backing services are much easier to install thanks to packaging systems, and you can also use provisioning tools to set up the dev, stage and production environments to all be the same.
We used to do that for many years and we had a really thorough process for it.
Now, wherever possible, we use a tool like Docker or Vagrant to snapshot a service image and make sure it’s the same image that’s running in every environment.
It does take a little bit more work upfront, but the payoff is massive.
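One lightweight way to sketch this idea is to pin your backing-service versions in a single committed file that every environment sources, so dev and prod can never silently drift apart. The file and variable names below are illustrative, not from the video:

```shell
#!/bin/sh
# Sketch: pin backing-service versions in one committed file so every
# environment (dev, stage, prod) resolves exactly the same image tags.
# File name and variable names here are illustrative assumptions.

cat > versions.env <<'EOF'
REDIS_IMAGE=redis:7.2.5
POSTGRES_IMAGE=postgres:16.3
EOF

# Every environment sources the same pinned versions...
. ./versions.env

# ...so the image a developer runs locally is the image prod runs, e.g.
#   docker run -d "$REDIS_IMAGE"      (local dev)
#   image: <same tag> in the prod deployment manifest
echo "dev and prod both use $REDIS_IMAGE"
```

The point is that a version bump becomes a reviewed change to one file, rather than something each environment picks up at a different time.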
Tip 2: Deployable config
My second tip is to make your config deployable too. Now, this is a little bit more difficult to keep consistent in your development environments, where you are authoring that config. For that, we’d recommend having a sample.env file that you commit into the repository, so you can put changes to your config values in there and make sure everybody gets a copy.
Then when that config goes into the first staging environment, we can use a tool like Helm to make sure that every environment gets that config consistently, and that would automatically create the ConfigMaps in Kubernetes, for example.
If you haven’t seen my video on config, I will link to that here.
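The committed-sample pattern can be sketched in a few lines. The key names below are placeholders; the essential part is that sample.env lives in version control while the real .env stays git-ignored:

```shell
#!/bin/sh
# Sketch of the committed sample.env pattern (key names are illustrative).
# sample.env is committed to the repo; the real .env is git-ignored.

cat > sample.env <<'EOF'
# Copy to .env and fill in real values for your environment.
DATABASE_URL=postgres://user:password@localhost:5432/app
CACHE_URL=redis://localhost:6379
EOF

# A new developer bootstraps their local config from the sample, so
# everyone starts from the same set of expected keys:
[ -f .env ] || cp sample.env .env
```

Whenever someone adds a config value, they add it to sample.env in the same commit, and every other environment picks up the new key through the normal deployment path.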
Tip 3: Use Migrations and Seeding
My third tip is to make sure you’re using migrations and seeding in your databases if you can’t deploy the whole database into production.
If you’re capturing any sort of data in production, it’s likely that you can’t just deploy the whole database, because you’d lose all your data.
We used to do that on occasion with some of our own content management system databases where we would just push them straight into production and overwrite any other content. Any data we were capturing in production was stored in a separate database.
If you do need to make updates to a schema in production without losing any of the data, make sure you take a regular automated backup of the production database. Then you can write your schema changes using migrations which are applied to each environment on its way to production.
Then you can test that they work through that process before they reach production.
If you’re unable to pull down the production database locally so that you can work with the latest version of it, then make sure you use a seeder to fill your local database with data representative of what you have in production.
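In practice you’d use your framework’s migration tool, but the core idea — numbered changes applied once, in order, in every environment — can be sketched tool-agnostically. The file names below are invented for illustration:

```shell
#!/bin/sh
# Sketch of a minimal migration runner (use your framework's real tool in
# practice). Each numbered migration runs once per environment, in order,
# and is recorded so re-running the script is a no-op.

mkdir -p migrations
printf 'echo "001: add users table"\n'  > migrations/001_add_users.sh
printf 'echo "002: add email column"\n' > migrations/002_add_email.sh

touch .migrations_applied
for m in migrations/*.sh; do
  if ! grep -qx "$m" .migrations_applied; then
    sh "$m"                         # apply the schema change
    echo "$m" >> .migrations_applied
  fi
done
```

Because the same ordered list of migrations flows through dev, stage and prod, each schema change gets exercised in every environment before it touches production data.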
Tip 4: Use infrastructure as code
My fourth tip is: if you’re deploying similar projects repeatedly, like websites for different customers, then make sure you’re using Terraform to deploy all the web services for those projects so that it’s repeatable and consistent.
If you’re not familiar with Terraform, it lets you automate the setup of your cloud resources so that it’s repeatable.
Note the difference: Terraform is for setting up your cloud resources, and Helm is for setting up your Kubernetes config.
By using Terraform, you make sure that all your services are set up in the same way every time you deploy them.
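As a rough sketch of what that repeatability looks like, a shared Terraform module can be instantiated once per customer, with only the per-customer values changing. The module path and variable names below are hypothetical:

```hcl
# Sketch only: a hypothetical Terraform module reused per customer site,
# so each deployment provisions identical resources from one definition.
# Module path and variable names are illustrative assumptions.

module "customer_site" {
  source      = "./modules/website"   # one shared, versioned definition
  project_id  = var.project_id        # only per-customer inputs change
  domain_name = var.domain_name
}
```

Because every site comes from the same module, an infrastructure fix or upgrade lands in one place and reaches every deployment the next time it is applied.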
Tip 5: Use CI/CD as much as you can
Now, my fifth and final tip is, of course, to use CI/CD as much as you can.
Continuous integration is a good first step.
Using CI/CD means you can continuously make that pipeline better, make the solution better, and the more you get into that cycle, the better your overall solution will become as you continuously work on it and improve it.
In other news, we’ve got a poll going to see how many of you are actually elevating your use of the cloud. I’ll leave a link to that in the comments.
Have a great day. Don’t forget to Like, Subscribe and Share and I’ll see you in the next video.