With ECS, your ECS Cluster has a total amount of CPU and memory available that can be allocated to run your ECS Tasks. For example, if you have a cluster of 3 t2.medium instances, each t2.medium has 2 CPU cores and 4 GB of RAM, so your total available resources for the cluster are 6 CPU cores and 12 GB of RAM.
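If you want to confirm what capacity your cluster has actually registered, you can query it via the ECS API. Here's a minimal sketch using boto3 that sums registered CPU units and memory across all container instances; the cluster name `my-cluster` is a placeholder, and note ECS counts CPU in "CPU units" (1024 units = 1 core) and memory in MiB:

```python
import boto3

ecs = boto3.client("ecs")
cluster = "my-cluster"  # placeholder: your ECS Cluster name

# Look up every container instance (node) registered to the cluster
arns = ecs.list_container_instances(cluster=cluster)["containerInstanceArns"]
instances = ecs.describe_container_instances(
    cluster=cluster, containerInstances=arns
)["containerInstances"]

total_cpu = total_mem = 0
for inst in instances:
    # registeredResources lists each node's full capacity:
    # CPU is in CPU units (1024 = 1 core), MEMORY is in MiB
    resources = {r["name"]: r.get("integerValue", 0) for r in inst["registeredResources"]}
    total_cpu += resources.get("CPU", 0)
    total_mem += resources.get("MEMORY", 0)

print(f"Cluster capacity: {total_cpu} CPU units, {total_mem} MiB of memory")
```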
Whenever you run an ECS Service, you tell ECS how many ECS Tasks you intend to run. For example, maybe you run 2 ECS Tasks for your service and bump it up to 3 as your service's load increases. For each ECS Task, ECS will attempt to place it on a node in the cluster that has enough resources available. For example, you might have created an ECS Task Definition whose cpu and memory properties are set to 1 CPU core and 1 GB of RAM, respectively (docs).
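For reference, here's a hedged sketch of registering such a Task Definition with boto3; the family name, container name, and image are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# A hypothetical task definition reserving 1 CPU core and 1 GB of RAM.
# ECS measures CPU in "CPU units" (1024 units = 1 core) and memory in MiB.
ecs.register_task_definition(
    family="my-service",          # placeholder family name
    containerDefinitions=[
        {
            "name": "app",          # placeholder container name
            "image": "nginx:latest",  # placeholder image
            "cpu": 1024,            # 1 CPU core
            "memory": 1024,         # 1 GB (1024 MiB) hard limit
            "essential": True,
        }
    ],
)
```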
If ECS can find a node in the ECS Cluster that has at least 1 CPU core and 1 GB of RAM available, it will run the task there. If it can't, it will give you the error message you saw, and even tell you which node was the closest candidate.
To resolve the issue, you need visibility into the ECS Cluster to understand your cluster utilization. ECS exposes this information as CloudWatch Metrics, and the docs describe which metrics are available to you. You can view these metrics in the AWS Web Console by going to your ECS Cluster and opening the "Metrics" tab.
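You can also pull these metrics programmatically. As a sketch, here's how you might fetch the cluster-level CPUReservation and MemoryReservation metrics (the percentage of registered capacity that running tasks have reserved) with boto3; the cluster name is again a placeholder:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

# CPUReservation / MemoryReservation: percentage of the cluster's
# registered capacity currently reserved by running ECS Tasks
for metric in ("CPUReservation", "MemoryReservation"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/ECS",
        MetricName=metric,
        Dimensions=[{"Name": "ClusterName", "Value": "my-cluster"}],  # placeholder
        StartTime=start,
        EndTime=end,
        Period=300,  # 5-minute datapoints
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(metric, point["Timestamp"], f'{point["Average"]:.1f}%')
```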
Clearly, your cluster doesn't have enough available resources to run the desired ECS Task, so your next step will probably be to look at each ECS node and review the ECS Tasks currently running on it (see the sketch after this list). Ultimately, your options are to either:
- Terminate enough ECS Tasks to free up the resources you need
- Add resources to the cluster, either by launching additional ECS nodes or by increasing the instance type in use for the cluster.
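To see how much headroom each node actually has, here's a hedged boto3 sketch (cluster name again a placeholder) that prints each container instance's remaining CPU and memory alongside its running task count:

```python
import boto3

ecs = boto3.client("ecs")
cluster = "my-cluster"  # placeholder: your ECS Cluster name

arns = ecs.list_container_instances(cluster=cluster)["containerInstanceArns"]
for inst in ecs.describe_container_instances(
    cluster=cluster, containerInstances=arns
)["containerInstances"]:
    # remainingResources is what's left after subtracting the
    # reservations of every ECS Task running on the node
    remaining = {r["name"]: r.get("integerValue", 0) for r in inst["remainingResources"]}
    print(
        inst["ec2InstanceId"],
        f'{inst["runningTasksCount"]} tasks,',
        f'{remaining.get("CPU", 0)} CPU units free,',
        f'{remaining.get("MEMORY", 0)} MiB free',
    )
```

Any node whose remaining CPU or memory is below your Task Definition's reservation is one ECS can't place the new task on, which is exactly the situation the error message describes.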