Allow machine user to access EKS

What would be the best approach to allow a machine-user to access ECR and EKS directly?

I want to be able to run helm commands against the EKS cluster directly from the pipeline and perform operations in a different cluster depending on the branch.
Ex: the develop branch pipeline does the helm upgrade in the staging EKS cluster, but for master we use the prod EKS cluster.

Any recommendations that we should be following?

Hi @MiracleMax,

Apologies for the delay in responding here!

The approach we take in the reference architecture is to leverage the autodeploy cross account IAM role created with the cross-account-iam-roles module. The idea is to:

  • Grant the machine user permissions to assume these roles in the child account.
  • Bind an RBAC role that has the permissions needed to create resources in the Kubernetes cluster to the autodeploy IAM role.
  • During deployment, have the CI user assume the autodeploy role for the appropriate account before making calls to ECR and Kubernetes (the role is mapped to Kubernetes RBAC via the aws-auth ConfigMap).
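The steps above could be sketched as a pipeline script like the one below. The cluster names, account ID, region, chart name, and the `select_cluster` helper are all hypothetical placeholders, not part of the reference architecture:

```shell
#!/usr/bin/env bash
# Sketch of a CI deploy step: pick the target EKS cluster per branch,
# assume the cross-account autodeploy role, then push via ECR and helm.
set -euo pipefail

# Map the CI branch to the target cluster (names are assumptions).
select_cluster() {
  case "$1" in
    develop) echo "eks-stage" ;;
    master)  echo "eks-prod" ;;
    *)       echo "none" ;;
  esac
}

deploy() {
  local branch="$1"
  local cluster
  cluster="$(select_cluster "$branch")"
  if [ "$cluster" = "none" ]; then
    echo "no deploy target for branch $branch"
    return 0
  fi

  # Assume the autodeploy role in the child account (account ID is a placeholder).
  local creds
  creds="$(aws sts assume-role \
    --role-arn "arn:aws:iam::123456789012:role/allow-auto-deploy-from-other-accounts" \
    --role-session-name ci-deploy \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)"
  export AWS_ACCESS_KEY_ID="$(echo "$creds" | cut -f1)"
  export AWS_SECRET_ACCESS_KEY="$(echo "$creds" | cut -f2)"
  export AWS_SESSION_TOKEN="$(echo "$creds" | cut -f3)"

  # Authenticate to ECR and point kubectl/helm at the chosen cluster.
  aws ecr get-login-password \
    | docker login --username AWS --password-stdin "123456789012.dkr.ecr.us-east-1.amazonaws.com"
  aws eks update-kubeconfig --name "$cluster"
  helm upgrade --install my-app ./chart
}
```

The key point is that only the assumed-role credentials (not the machine user's long-lived keys) are used for the ECR and Kubernetes calls.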

Does that make sense?


Thanks @yoriy for your answer.

So far I’ve managed to log in to EKS with the machine user. But I can’t seem to find a way to bind the IAM role to RBAC.

Error from server (Forbidden): pods is forbidden: User "allow-auto-deploy-from-other-accounts" cannot list resource "pods" in API group "" in the namespace "default"

The machine user is part of allow_auto_deploy_from_other_account_arns in iam-cross-account.
And iam-cross-account is a dependency of eks-cluster.

What could I be missing?

I get it now, after taking a deeper look into the role mapping.

    - rolearn: arn:aws:iam::XXX:role/allow-auto-deploy-from-other-accounts
      username: allow-auto-deploy-from-other-accounts
      groups:
        - autodeploy
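For context, that entry (with the dangling `- autodeploy` restored under a `groups:` key) lives in the `mapRoles` field of the aws-auth ConfigMap in kube-system. A sketch of the full ConfigMap, keeping the thread’s placeholder account ID:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::XXX:role/allow-auto-deploy-from-other-accounts
      username: allow-auto-deploy-from-other-accounts
      groups:
        - autodeploy
```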

Where is this autodeploy group set up? I can’t find a reference to it, nor see it in the EKS cluster. (kubectl get clusterrole -n kube-system)

If you are using our reference architecture, then it is still on Helm 2, where a server-side component (aka Tiller) does all the work. In this model, the autodeploy RBAC group gets access to each Tiller instance used for deployments. This is done through the variable grant_autodeploy_access in k8s-namespace-with-tiller, which binds the minimal permissions necessary to run kubergrunt helm grant and talk to Tiller for that namespace. We don’t bind any other permissions to the RBAC group.

If you are using Helm 3, or wish to talk to the Kubernetes API directly using the autodeploy role, then you need to bind additional permissions to the autodeploy RBAC group, or bind an alternative RBAC group. This can be done using any method that creates RoleBinding and ClusterRoleBinding resources: a Kubernetes manifest file, helm, or the Terraform kubernetes provider.
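As a minimal sketch of that binding, assuming you want to grant the autodeploy group the built-in edit ClusterRole cluster-wide (the binding name is arbitrary; narrow this to a per-namespace RoleBinding if you prefer):

```yaml
# Bind the "autodeploy" RBAC group (from the aws-auth mapping) to the
# built-in "edit" ClusterRole across the whole cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: autodeploy-edit
subjects:
  - kind: Group
    name: autodeploy
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

Note that RBAC groups are not objects you can list with kubectl; a group exists only as a subject named in bindings like this one, which is why you didn’t see it in the cluster.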

Hope this makes sense!

