Do you do continuous delivery of your infrastructure? Say you merge a change to a module like `ecs_cluster`: would you ever have a CD process that recursively ran `terraform apply` on every `.tfvars` file in a whole environment directory? I’d like to be continuously deploying small changes to our infrastructure, but I don’t know if it’s practical.
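The recursive-apply idea in the question could be sketched as a small shell function. This is a hypothetical sketch, not a recommended pipeline: the function name, directory layout, and flags are assumptions, and in practice you would want plan review between steps (as discussed below).

```shell
# Hypothetical sketch: walk an environment directory and run
# `terraform apply` once per .tfvars file found. Assumes each .tfvars
# sits next to the Terraform configuration it belongs to.
apply_all_tfvars() {
  find "$1" -name '*.tfvars' | sort | while read -r varfile; do
    echo "applying $varfile"
    # Apply in a subshell so the working directory is restored after each run.
    (cd "$(dirname "$varfile")" \
      && terraform init -input=false >/dev/null \
      && terraform apply -input=false -auto-approve \
           -var-file="$(basename "$varfile")")
  done
}
```

Note that `-auto-approve` skips the plan review entirely, which is exactly the risk the answer below calls out.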
First, a minor note on vocabulary: usually, continuous delivery means you keep your code in a state where it could be deployed at any time, whereas continuous deployment means that every commit that passes tests and the build process gets deployed automatically.
We typically set up continuous deployment for apps, subject to certain rules, and continuous delivery for other types of infrastructure. That is:
- Every commit to an app repo (e.g., a Rails, PHP, or Java app) gets built and tested. Commits to certain branches, or with tags of a certain format, get automatically deployed to certain environments (e.g., everything in the `master` branch gets automatically deployed to staging).
- Every commit to Terraform repos gets built and tested. However, it is only deployed when someone manually requests it.
The reason for this is that app deployments are more or less always the same and generally safe: e.g., you do a blue-green deployment of a Docker container, and roll back if there is any issue with the health checks during deployment.
On the other hand, arbitrary infrastructure changes are all quite different and much less safe. You could accidentally destroy a VPC or delete a database, and there’s no rollback from that. Therefore, you want someone to always review the `plan` output before applying the changes and to be ready to fix things if you hit a problem.
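A gated apply along those lines can be sketched as follows: save the plan to a file, have a human confirm, then apply exactly that saved plan. The function name, prompt, and `tfplan` filename are assumptions for illustration; in a real CI system the confirmation would typically be a manual approval step rather than an interactive prompt.

```shell
# Sketch of a review-gated apply: the saved plan file guarantees that
# what gets applied is exactly what was reviewed.
plan_and_apply() {
  (cd "$1" \
    && terraform plan -input=false -out=tfplan \
    && read -r -p "Apply this plan? (yes/no) " answer \
    && [ "$answer" = "yes" ] \
    && terraform apply -input=false tfplan)
}
```

Using `-out=tfplan` also protects against the infrastructure drifting between the review and the apply: Terraform will refuse to apply a stale plan.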
Ideally, this shouldn’t be much different from deploying ordinary application code: if Terraform had a good test framework, it would be entirely possible to use continuous (or at least automatic) deployment for infrastructure, too.