K8s-sample-app-frontend-multi-account-acme create failed | pod status CrashLoopBackOff


Thanks for all the support from the community.

We are in the last phase of deploying our app in EKS using Gruntwork code.
When we try to deploy a webserver using the k8s-sample-app-frontend-multi-account-acme module, it fails: the pod never becomes healthy and instead enters CrashLoopBackOff.

kubectl get pods --namespace applications
NAME                        READY   STATUS             RESTARTS   AGE
plte-dev-7c49f74d47-hblgw   0/1     CrashLoopBackOff   4          2m36s
plte-dev-7c49f74d47-zg264   0/1     Running            1          2m36s

kubectl describe pods plte-dev-7c49f74d47-hblgw --namespace applications
Name:           plte-dev-7c49f74d47-hblgw
Namespace:      applications
Priority:       0
Node:           ip-172-21-86-4.ec2.internal/
Start Time:     Mon, 09 Mar 2020 20:47:25 +0530
Labels:         app.kubernetes.io/instance=plte-dev
Annotations:    kubernetes.io/psp: eks.privileged
Status:         Running
IPs:            <none>
Controlled By:  ReplicaSet/plte-dev-7c49f74d47
    Container ID:   docker://21ef4b4ba67eb36d47bed3dfabb13b5153219a71d055c6ef4d37d2969200ff7e
    Image:          <acct id>.dkr.ecr.us-east-1.amazonaws.com/plte-dev:v1
    Image ID:       docker-pullable://<acct id>.dkr.ecr.us-east-1.amazonaws.com/plte-dev@sha256:d681bdce7c0f3bf4ebc13c4ef44f4c037d1b6562095156dbe4195db5bf10c30a
    Ports:          3000/TCP, 3000/TCP, 3000/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    127
      Started:      Mon, 09 Mar 2020 20:53:15 +0530
      Finished:     Mon, 09 Mar 2020 20:53:15 +0530
    Ready:          False
    Restart Count:  6
    Liveness:       http-get http://:liveness/sample-app-frontend-multi-account-acme/health delay=15s timeout=1s period=30s #success=1 #failure=3
    Readiness:      http-get http://:readiness/sample-app-frontend-multi-account-acme/health delay=15s timeout=1s period=30s #success=1 #failure=3
    Environment:
      AWS_REGION:    us-east-1
      BACKEND_PORT:  80
      BACKEND_URL:   acme-multi-account-sample-app-backend-multi-account-acme-dev.applications.svc.cluster.local
      DB_URL:        mysql-dev.caxvtbabremc.us-east-1.rds.amazonaws.com:3306
      REDIS_URL:     redis-dev.plgqcz.ng.0001.use1.cache.amazonaws.com
      VPC_NAME:      dev
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fg5vn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-fg5vn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fg5vn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From                                  Message
  ----     ------     ----                  ----                                  -------   
  Normal   Scheduled  7m9s                  default-scheduler                     Successfully assigned applications/plte-dev-7c49f74d47-hblgw to ip-172-21-86-4.ec2.internal
  Normal   Pulled     5m32s (x5 over 7m8s)  kubelet, ip-172-21-86-4.ec2.internal  Container image "<acct id>.dkr.ecr.us-east-1.amazonaws.com/plte-dev:v1" already present on machine
  Normal   Created    5m32s (x5 over 7m8s)  kubelet, ip-172-21-86-4.ec2.internal  Created container
  Normal   Started    5m32s (x5 over 7m8s)  kubelet, ip-172-21-86-4.ec2.internal  Started container
  Warning  BackOff    118s (x28 over 7m6s)  kubelet, ip-172-21-86-4.ec2.internal  Back-off restarting failed container

I am sharing my Dockerfile details below. I created a Docker image using the file below, pushed the image to my ECR, and updated these details in the k8s-sample-app-frontend-multi-account-acme code.

-rw-r--r-- 1 sanoop sanoop 272 Mar  9 20:40 server.js
-rw-r--r-- 1 sanoop sanoop  72 Mar  9 20:40 Dockerfile

sanoop@sanoop-Latitude-E7470:~/ms$ cat server.js 
var http = require('http');

var handleRequest = function(request, response) {
      console.log('Received request for URL: ' + request.url);
      response.end('Hello World!');
};
var www = http.createServer(handleRequest);
sanoop@sanoop-Latitude-E7470:~/ms$ cat Dockerfile 
FROM node:6.14.2
COPY server.js .
CMD [ "node", "server.js" ]

Any quick response is highly appreciated.


@yoriy Appreciate if you can take a quick look at this.

Hi Sanoop,

Usually this means the container is failing to start up. This is most likely an issue with the container itself rather than with Kubernetes. Do you see anything in the logs that indicates an issue? You should be able to extract logs from the pod with kubectl logs POD_NAME -n POD_NAMESPACE (add --previous to see logs from the last crashed container).


Yes, it was a problem with our container: it was exiting shortly after it started. We had to add an extra command at the end to keep it alive and running. Note that a pod will also go into CrashLoopBackOff if its container health check fails. We are now able to deploy the application in EKS successfully using the Gruntwork framework. Thanks!

Thanks for closing the loop! Glad to hear you got it working.