Terragrunt IAM Module Versioning Strategies Across Environments

We have a central folder that contains our IAM modules. Each service has its own .tf file. For example:

ls service-roles/

roleA.tf      roleB.tf      roleC.tf

Then for each environment (dev, staging, prod), we have a terragrunt.hcl file that calls this module with inputs:

terraform {
  source = run_cmd(
    ...
    "service-roles/"
  )
}

...

inputs = {
  my_service_roles = {
    roleA = ["xyz"]
    roleB = ["abcd"]
    roleC = ["mno"]
  }
}

The challenge is that when a developer adds roleD, the inputs for dev, staging, and prod all need to include roleD as well. In theory we could cut a new version of the module for every role that gets added and have each environment pin a specific version. But with many developers adding many roles, some roles get promoted on a different cadence than others, so we'd still end up with missing inputs in some cases. For example, if roleD, roleE, roleF, and roleG are added to dev around the same time, we'd cut a new module version for dev. If only roleD and roleE are then promoted, staging can't use the version of the module that also includes roleF and roleG.
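(By versioning I mean pinning each environment's source to a tagged release, something like the following; the repo URL and tag here are placeholders:)

terraform {
  source = "git::git@github.com:acme/infrastructure.git//service-roles?ref=v1.2.0"
}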

TL;DR: It's unclear how to approach this problem: we have a roles module shared by many users, those roles get promoted on different cadences from each other, and so the module and the per-environment inputs drift out of sync, breaking applies with missing-input errors such as:

This object does not have an attribute named "roleF".
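For context, each role's .tf file reads its own key straight out of the shared map, roughly like this (the policy wiring below is simplified, but the direct attribute access is the part that fails):

resource "aws_iam_role" "roleF" {
  name = "roleF"

  # Simplified placeholder trust policy.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# var.my_service_roles.roleF is a plain attribute access, so if an
# environment's inputs don't define the roleF key, the plan fails with
# the error above.
resource "aws_iam_role_policy_attachment" "roleF" {
  for_each   = toset(var.my_service_roles.roleF)
  role       = aws_iam_role.roleF.name
  policy_arn = each.value
}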

One thing you can do is add an output map containing the name and ARN of each of your roles, then get the role ARNs in your dependent modules via a dependency output from the IAM module. Example:

output "roles" {
  value = {
    codebuild = {
      name = aws_iam_role.codebuild.name
      arn  = aws_iam_role.codebuild.arn
    }
    cloudwatch = {
      name = aws_iam_role.cloudwatch.name
      arn  = aws_iam_role.cloudwatch.arn
    }
...

Then just pass the whole map through as an input:

dependency "iam" {
  config_path = "../iam"
}
inputs = {
  iam_roles_list = dependency.iam.outputs.roles
}
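Side note: if the iam unit hasn't been applied yet (e.g. on a fresh run-all plan), the dependency has no outputs to read, and Terragrunt's mock_outputs can stand in; the mock values below are made up:

dependency "iam" {
  config_path = "../iam"

  # Placeholder values used only while the real outputs don't exist yet.
  mock_outputs = {
    roles = {
      codepipeline = {
        name = "mock-codepipeline"
        arn  = "arn:aws:iam::111111111111:role/mock-codepipeline"
      }
    }
  }
}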

Then in the consuming module itself:

resource "aws_codepipeline" "pipeline" {
  name     = "blah"
  role_arn = var.iam_roles_list.codepipeline.arn
...
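The consuming module just needs a variable whose type matches the output shape above, something like:

variable "iam_roles_list" {
  # Map of role key -> { name, arn }, mirroring the IAM module's "roles" output.
  type = map(object({
    name = string
    arn  = string
  }))
}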

This introduces a new issue though: if you add a role but forget to add it to the outputs map, it won't show up in the map for consumers. The easiest way to fix this would be to write a pre-commit hook that checks that all your roles are in the map; a rough sketch follows.
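A minimal version of that hook, assuming the service-roles/ layout above (it's regex-based and naive, but it catches the common case):

#!/usr/bin/env python3
# Naive pre-commit check: every aws_iam_role declared in service-roles/
# should be referenced in the "roles" output map in outputs.tf.
import pathlib
import re
import sys

module_dir = pathlib.Path("service-roles")

# Collect the resource names of all aws_iam_role blocks in the module.
role_names = set()
for tf_file in module_dir.glob("*.tf"):
    if tf_file.name == "outputs.tf":
        continue
    role_names |= set(
        re.findall(r'resource\s+"aws_iam_role"\s+"([^"]+)"', tf_file.read_text())
    )

# Each role should be referenced (e.g. aws_iam_role.codebuild.arn) in outputs.tf.
outputs = (module_dir / "outputs.tf").read_text()
missing = sorted(name for name in role_names if f"aws_iam_role.{name}." not in outputs)

if missing:
    print("roles missing from the outputs map: " + ", ".join(missing))
    sys.exit(1)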

More complicated solutions to the ‘stuff missing from the map’ problem might be to parse the Terraform code, collect all the roles, and then generate the outputs.tf file from within a before_hook, or some similar kind of hack. A pre-commit hook is probably the best bet, though.
