ECS ALB host-header

#1

Hi,

In the “ecs-service-with-alb” module, the only condition defined is “path-pattern”. We need to be able to use “host-header” as well, so that the “service_name” + “tls_domain_name” (from alb-public) can be passed into the “alb_listener_rule_config”.

Thanks
Shmuel

#2

The ecs-service-with-alb module in the infrastructure-modules repo of the Reference Architecture is intended as a generic starting point. It shows a bunch of useful patterns for how you can deploy Docker containers as ECS services, but we fully expect you to customize it to your needs!

The most common way to customize it is to use it as the “base” module for all of your ECS services and to add custom “wrapper” modules in infrastructure-modules that are specific to each app you’re actually deploying. For example, to deploy services foo and bar, each of which has lots of custom routing rules, you could:

  1. Remove all the routing rules from ecs-service-with-alb, but leave everything else in the module.

  2. Create new modules foo and bar in infrastructure-modules that use ecs-service-with-alb as the base, and add their custom routing rules on top of it:

    # An example of what infrastructure-modules/foo/main.tf may contain
    
    module "service" {
      source = "../ecs-service-with-alb"
    
      # Pass through most params...
      aws_account_id = "${var.aws_account_id}"
      aws_region = "${var.aws_region}"
    
      # You could enforce a naming convention for this service
      service_name = "foo-service-${var.vpc_name}"
    
      # ... (set all the other params for the ecs-service-with-alb module) ...
    }
    
    # Add custom routing rules for the foo service
    resource "aws_alb_listener_rule" "http_host_rule" {
      listener_arn = "${lookup(data.terraform_remote_state.alb.listener_arns, 80)}"
      priority     = 100
    
      action {
        type             = "forward"
        target_group_arn = "${module.service.target_group_arn}"
      }
    
      # Note how I'm using host-based routing here. You may want to make the domain name a variable so you can customize it for each environment
      condition {
        field  = "host-header"
        values = ["foo.acme.com"]
      }
    }
    
  3. In infrastructure-live, instead of deploying the ecs-service-with-alb module directly, you deploy your new foo and bar modules, which now have the custom routing logic you need.
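For step 3, the terraform.tfvars file in infrastructure-live points Terragrunt at the new wrapper module instead of ecs-service-with-alb directly. Here's a rough sketch using the 2018-era Terragrunt terraform.tfvars syntax; the repo URL, version tag, and input names are placeholders, not the actual values from your repos:

    # Hypothetical infrastructure-live/<env>/services/foo/terraform.tfvars
    terragrunt = {
      # Inherit remote state settings from a parent terraform.tfvars
      include {
        path = "${find_in_parent_folders()}"
      }
    
      # Deploy the foo wrapper module rather than ecs-service-with-alb directly
      terraform {
        source = "git::ssh://git@github.com/your-org/infrastructure-modules.git//foo?ref=v0.0.2"
      }
    }
    
    # Inputs for the foo module (names assumed)
    desired_number_of_tasks = 2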

#3

Hi Jim,

  1. I assume it should be source = "../services/ecs-service-with-alb" and
    not source = "../ecs-service-with-alb" (if the new module file is located in the root dir).

  2. I attached the terraform files of the new module (main & vars). I don't know how
    to handle all the data.*-dependent vars.

Please advise.

Shmuel

#4

Yup, it should be a relative path from the new module to the ecs-service-with-alb module. If that new module is in the services folder, ../ecs-service-with-alb is all you need; if it’s elsewhere, then ../services/ecs-service-with-alb may be necessary.

Not sure I understand the question? Do you have a pull request we could look at?

#5

Hi Jim,

What I am asking is how to set all the other params:

… (set all the other params for the ecs-service-with-alb module) …

environment_name = "${var.vpc_name}"
vpc_id           = "${data.terraform_remote_state.vpc.vpc_id}"

ecs_cluster_arn  = "${data.terraform_remote_state.ecs_cluster.ecs_cluster_arn}"
ecs_cluster_name = "${data.terraform_remote_state.ecs_cluster.ecs_cluster_name}"
ecs_task_container_definitions = "${data.template_file.ecs_task_container_definitions.rendered}"
desired_number_of_tasks = "${var.desired_number_of_tasks}"

What do I have to do with all of these data.*-related vars?

Also, the listener rule depends on data.terraform_remote_state:

resource "aws_alb_listener_rule" "http_host_rule" {
  listener_arn = "${lookup(data.terraform_remote_state.alb.listener_arns, 80)}"

Thanks
Shmuel

#6

It depends on the parameter!

Many of the parameters you will expose as “pass through” parameters in your own module. For example, most modules expose aws_account_id and aws_region parameters, as those are set differently in every environment. If you are adding a “wrapper” module called foo, then in infrastructure-modules/foo/vars.tf, you will add these as input variables:

variable "aws_account_id" {
  description = "The ID of the AWS Account in which to create resources."
}

variable "aws_region" {
  description = "The AWS region in which the ECS Service will be created."
}

And pass them through to the ecs-service-with-alb module:

module "service" {
  source = "../services/ecs-service-with-alb"

  aws_account_id = "${var.aws_account_id}"
  aws_region     = "${var.aws_region}"
}

Some of the parameters you may want to hard-code for your module. For example, perhaps for the foo service, the image is always acme/foo and the command is always ./bin/run-app.sh:

module "service" {
  source = "../services/ecs-service-with-alb"

  aws_account_id = "${var.aws_account_id}"
  aws_region     = "${var.aws_region}"

  image   = "acme/foo"
  command = ["./bin/run-app.sh"]
}

Finally, some of the parameters will be fetched from the terraform_remote_state data source. For example, to create the custom aws_alb_listener_rule, you need the ARNs of the ALB listeners. Since the ALB is shared by multiple services, it is deployed in a separate module, so you have to fetch its remote state data:

data "terraform_remote_state" "alb" {
  backend = "s3"

  config {
    region = "${var.terraform_state_aws_region}"
    bucket = "${var.terraform_state_s3_bucket}"
    key    = "${var.aws_region}/${var.vpc_name}/networking/${var.is_internal_alb ? "alb-internal" : "alb-public"}/terraform.tfstate"
  }
}

Now you can use that data in your aws_alb_listener_rule:

resource "aws_alb_listener_rule" "http_host_rule" {
  listener_arn = "${lookup(data.terraform_remote_state.alb.listener_arns, 80)}"
  priority     = 100

  action {
    type             = "forward"
    target_group_arn = "${module.service.target_group_arn}"
  }

  # Note how I'm using host-based routing here. You may want to make the domain name a variable so you can customize it for each environment
  condition {
    field  = "host-header"
    values = ["foo.acme.com"]
  }
}

The existing modules in infrastructure-modules, including ecs-service-with-alb, have tons of examples of using terraform_remote_state data sources.

#7

Hi Jim,

I copied the remote states and they work OK:

ecs_cluster_arn  = "${data.terraform_remote_state.ecs_cluster.ecs_cluster_arn}"
ecs_cluster_name = "${data.terraform_remote_state.ecs_cluster.ecs_cluster_name}"

But how do I handle template_file?

ecs_task_container_definitions = "${data.template_file.ecs_task_container_definitions.rendered}"

Thanks
Shmuel

#8

The ecs-service-with-alb module sets all those parameters already, so you shouldn’t need to set them again in your new foo module. You should only need to set the parameters that are exposed in ecs-service-with-alb/vars.tf. Does that make sense?

#9

Hi Jim,

I added infrastructure-modules/internus (a new service with a host-header rule)
and
infrastructure-live\dev\eu-west-1\dev\services\internus

When I run terragrunt plan I get:

terragrunt plan
[terragrunt] [c:\terraform\infrastructure-live\dev\eu-west-1\dev\services\internus] 2018/01/07 20:42:43 Running command: terraform --version
[terragrunt] 2018/01/07 20:42:43 Reading Terragrunt config file at c:/terraform/infrastructure-live/dev/eu-west-1/dev/services/internus/terraform.tfvars
[terragrunt] 2018/01/07 20:42:43 Terraform files in C:/Windows/Temp/terragrunt/QnVFtls4XQCGGPy-ZaiWQQDVyjo/EOuGIiHhugqiTOWLT6811eMPZsc/internus are up to date. Will not download again.
[terragrunt] 2018/01/07 20:42:43 Copying files from c:/terraform/infrastructure-live/dev/eu-west-1/dev/services/internus into C:/Windows/Temp/terragrunt/QnVFtls4XQCGGPy-ZaiWQQDVyjo/EOuGIiHhugqiTOWLT6811eMPZsc/internus
[terragrunt] 2018/01/07 20:42:43 Setting working directory to C:/Windows/Temp/terragrunt/QnVFtls4XQCGGPy-ZaiWQQDVyjo/EOuGIiHhugqiTOWLT6811eMPZsc/internus
[terragrunt] 2018/01/07 20:42:43 Found remote_state settings in c:/terraform/infrastructure-live/dev/eu-west-1/dev/services/internus/terraform.tfvars but no backend block in the Terraform code in C:/Windows/Temp/terragrunt/QnVFtls4XQCGGPy-ZaiWQQDVyjo/EOuGIiHhugqiTOWLT6811eMPZsc/internus. You must define a backend block (it can be empty!) in your Terraform code or your remote state settings will have no effect! It should look something like this:

terraform {
  backend "s3" {}
}

[terragrunt] 2018/01/07 20:42:43 Unable to determine underlying exit code, so Terragrunt will exit with error code 1

Can you check what I did wrong?

Thanks
Shmuel

#10

As the error message indicates, you need to add the following to infrastructure-modules/internus/main.tf:

terraform {
  backend "s3" {}
}

Without this, Terragrunt can’t fill in the remote state configuration details automatically. See https://github.com/gruntwork-io/terragrunt#keep-your-remote-state-configuration-dry for details.
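For context, the empty backend block works together with the remote_state settings Terragrunt reads from your terraform.tfvars. A rough sketch of what that configuration looks like in the old (2018-era) Terragrunt syntax, with the bucket name as a placeholder:

    # Hypothetical terragrunt block in infrastructure-live/.../terraform.tfvars;
    # Terragrunt injects these values into the empty backend "s3" {} at init time.
    terragrunt = {
      remote_state {
        backend = "s3"
        config {
          bucket  = "your-terraform-state-bucket"  # placeholder
          key     = "${path_relative_to_include()}/terraform.tfstate"
          region  = "eu-west-1"
          encrypt = true
        }
      }
    }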

#11

Hi Jim ,

I added the S3 backend block.

Now, to my understanding, I have to remove the variable "alb_listener_rule_configs" from:
infrastructure-modules\internus\main.tf
and
infrastructure-modules\services\ecs-service-with-alb\vars.tf

Is this the right approach?

BUT now I am getting a prompt asking for the region:
/////////////////////////////////////////////////////////////
[terragrunt] 2018/01/07 22:10:20 Running command: terraform plan -var-file=c:/terraform/infrastructure-live/dev/eu-west-1/dev/services/internus/../../../../account.tfvars -var-file=c:/terraform/infrastructure-live/dev/eu-west-1/dev/services/internus/../../../region.tfvars -var-file=c:/terraform/infrastructure-live/dev/eu-west-1/dev/services/internus/../../env.tfvars -var-file=c:/terraform/infrastructure-live/dev/eu-west-1/dev/services/internus/terraform.tfvars
provider.aws.region
The region where AWS operations will take place. Examples
are us-east-1, us-west-2, etc.

Default: us-east-1
Enter a value:

///////////////////////////////////////////////////////////
Please advise.
Shmuel

#12

Do you have a provider block in infrastructure-modules\internus\main.tf?

provider "aws" {
  # The AWS region in which all resources will be created
  region = "${var.aws_region}"

  # Only these AWS Account IDs may be operated on by this template
  allowed_account_ids = ["${var.aws_account_id}"]
}

You need that in every module too!

#13

Thanks Jim
After running terragrunt apply in the internus directory, I don't see the needed repo in ECR.

And when I run the CircleCI build I get:

Pushing Docker image 075139435924.dkr.ecr.eu-west-1.amazonaws.com/internus:38dcfc0cd30d0d7fe35b1c0c65db4dc4cde11456
The push refers to a repository [075139435924.dkr.ecr.eu-west-1.amazonaws.com/internus] (len: 1)

unknown: User: arn:aws:iam::908969436983:user/circle-ci-machine-user is not authorized to perform: ecr:InitiateLayerUpload on resource: arn:aws:ecr:eu-west-1:075139435924:repository/internus

SERVICE_PATH="dev/eu-west-1/dev/services/internus" IAM_ROLE_ARN="arn:aws:iam::101444535047:role/allow-auto-deploy-from-other-accounts" ./_ci/deploy.sh returned exit code 1

Action failed: SERVICE_PATH="dev/eu-west-1/dev/services/internus" IAM_ROLE_ARN="arn:aws:iam::101444535047:role/allow-auto-deploy-from-other-accounts" ./_ci/deploy.sh

When should the ECR repo be created?

Thanks
Shmuel

#14

OK.
With which user do I have to run terragrunt plan in
C:\terraform\infrastructure-live\shared-services\eu-west-1\_global\ecr-repos?

#15

terragrunt plan
[terragrunt] [C:\terraform\infrastructure-live\shared-services\eu-west-1\_global\ecr-repos] 2018/01/08 13:41:09 Running command: terraform --version
[terragrunt] 2018/01/08 13:41:09 Reading Terragrunt config file at C:/terraform/infrastructure-live/shared-services/eu-west-1/_global/ecr-repos/terraform.tfvars
[terragrunt] 2018/01/08 13:41:09 Cleaning up existing *.tf files in C:/Windows/Temp/terragrunt/agGSahnSr5My3inKuRAukN4ZBHY/EOuGIiHhugqiTOWLT6811eMPZsc
[terragrunt] 2018/01/08 13:41:09 Downloading Terraform configurations from git::ssh://git@github.com/inPact/infrastructure-modules.git?ref=v0.0.1 into C:/Windows/Temp/terragrunt/agGSahnSr5My3inKuRAukN4ZBHY/EOuGIiHhugqiTOWLT6811eMPZsc using
terraform init
[terragrunt] [C:\terraform\infrastructure-live\shared-services\eu-west-1\_global\ecr-repos] 2018/01/08 13:41:09 Initializing remote state for the s3 backend
[terragrunt] [C:\terraform\infrastructure-live\shared-services\eu-west-1\_global\ecr-repos] 2018/01/08 13:41:10 [terragrunt] [C:\terraform\infrastructure-live\shared-services\eu-west-1\_global\ecr-repos] Remote state S3 bucket tabit-shared-services-terraform-state does not exist or you don’t have permissions to access it. Would you like Terragrunt to create it? (y/n)

#16

Finally, I used a user from the shared account and it runs. (Is this the correct approach?)

#17

Hi Jim,

I have the host-header rules set.

The main domain is set to domain_name = “www.tabit-dev.com” (it points to the main ALB). I need the host internus.tabit-dev.com to also point to the main ALB.

Please advise.

Shmuel

#18

The recommended approach is:

  1. You have a single IAM user in the security account.
  2. To access other accounts with that IAM user, you assume IAM roles.

There are several ways to assume IAM roles, but the easiest with Terragrunt is to set the TERRAGRUNT_IAM_ROLE environment variable (docs):

# Authenticate to the security account, with account ID 111111111111 and MFA token 123456
eval "$(aws-auth --serial-number arn:aws:iam::111111111111:mfa/jondoe --token-code 123456)"

# Tell Terragrunt to use an IAM role in dev, with account ID 2222222222
export TERRAGRUNT_IAM_ROLE="arn:aws:iam::2222222222:role/allow-full-access-from-other-accounts"

# Deploy to dev
terragrunt apply

#19

There are two main ways to handle domain names:

  1. If all your services run on one domain name and different paths (e.g., service foo is at www.your-domain.com/foo and service bar is at www.your-domain.com/bar), then the easiest place to manage them is with the ALB itself. This is how the Reference Architecture is initially configured. Go to your infrastructure-live repo and check out one of the terraform.tfvars files in an alb-public folder (e.g., stage/networking/alb-public/terraform.tfvars). You’ll see:

    create_route53_entry = true
    domain_name = "www.your-domain.com"
    

    You can update this domain_name to whatever you want.

  2. If each of your services runs at a different domain name (e.g., service foo is at foo.your-domain.com and service bar is at bar.your-domain.com), then you’ll probably want to disable the domain name management in the ALB (set create_route53_entry = false in alb-public) and manage domain names with each service instead. To do that, go to the terraform.tfvars file for your app in infrastructure-live (e.g., stage/networking/sample-app-frontend/terraform.tfvars) and set the following:

    # You can set a different domain name for each app in each environment this way
    create_route53_entry = true
    domain_name = "foo.your-domain.com"
    

    This assumes you’re using the ecs-service-with-alb module under the hood (or a wrapper for it that exposes and forwards the same create_route53_entry and domain_name variables).
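A minimal sketch of what such a wrapper might look like, in the 0.11-era syntax used elsewhere in this thread (the variable names match the inputs mentioned above, but the defaults and descriptions are assumptions):

    # In the wrapper module's vars.tf (sketch)
    variable "create_route53_entry" {
      description = "Set to true to create a Route 53 entry pointing at the ALB for this service."
      default     = false
    }
    
    variable "domain_name" {
      description = "The domain name for this service, e.g. foo.your-domain.com."
      default     = ""
    }
    
    # In the wrapper module's main.tf, forward them to the base module
    module "service" {
      source = "../services/ecs-service-with-alb"
    
      # ... (set all the other params) ...
    
      create_route53_entry = "${var.create_route53_entry}"
      domain_name          = "${var.domain_name}"
    }

With this in place, each environment's terraform.tfvars in infrastructure-live can set its own create_route53_entry and domain_name values.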

#20

Hi Jim,

After setting create_route53_entry = false in alb-public,

running terragrunt apply on infrastructure-live\dev\eu-west-1\dev\services\internus
results in:

  • module.service.aws_route53_record.dns_record: 1 error(s) occurred:

  • module.service.aws_route53_record.dns_record: Resource 'data.terraform_remote_state.alb' does not have attribute 'alb_dns_name' for variable 'data.terraform_remote_state.alb.alb_dns_name'

  • module.service.module.route53_health_check.var.domain: Resource 'data.terraform_remote_state.alb' does not have attribute 'alb_dns_name' for variable 'data.terraform_remote_state.alb.alb_dns_name'

Please advise.
Shmuel