“Are you a Docker person or a Kubernetes person?”
“AWS ECS is for you if you like using Docker!”
“AWS EKS is for you if you love Kubernetes!”
“AWS Fargate is for you if you do not want the grunt work of managing either Docker or Kubernetes!”
While we have heard such statements from cloud engineers on several occasions, these services look surprisingly similar at the top level, yet each has its own characteristics and advantages.
To give you a quick walk-through, here’s a table with a few key curated facts.
**The Good**

| Amazon ECS | Amazon EKS | AWS Fargate |
| --- | --- | --- |
| Popularly known as Amazon's Docker-as-a-service. Some call it Elastic Beanstalk in multi-Docker mode. | Popularly known as Amazon's Kubernetes-as-a-service. | Dev folks dearly call it the Container Manager. |
| Offers support for Docker Compose in its CLI. | Offers all the features of ECS, plus VPC for pod networking and isolation at the cluster level. | Offers the same API actions as ECS, so you can use the ECS console, the ECS CLI, or the AWS CLI (see the sketch after this table). |
| Supports duplicating environments using AWS CLI/SDK calls, which helps in managing hundreds of containers. | Supports upstream Kubernetes and replicates the control plane across three masters in different Availability Zones. | Supports heterogeneous clusters made up of tasks running on both the EC2 and Fargate launch types. Ideal for rapid horizontal scaling. |
| Integrates seamlessly with ECR, which eases the management of custom Docker images. | Gives you the advantage of running the same scheduler in AWS or anywhere else. | Helps you focus on designing and building your applications instead of managing the infrastructure that runs them. |
| Eliminates the need to manage your own registry. | Can replicate a container environment to another live environment in AWS with minimal modification. | Takes care of the bin-packing problem. |
| Has an auto-healing feature, so failed containers are relaunched automatically. | Adds an additional layer of scheduling and clustering to a container environment. | Supports the awsvpc network mode natively, which means each task gets its own elastic network interface (ENI). |
| It's free; you pay only for the underlying resources provisioned to support your applications. | Each cluster costs $0.20 per hour. The major advantage over ECS is that a single Amazon EKS cluster is sufficient to run multiple applications. | You pay for compute time rather than for the underlying EC2 instances. This can work out cheaper, but costs can spiral out of control depending on the use case. |
| | All communication between pods happens via IP addresses within the VPC. | Unlike ECS, Fargate keeps its own fleet of EC2 instances ready for your tasks, so you can provision tens of thousands of containers in seconds. |
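To make the "same API actions" point concrete, here is a minimal boto3 sketch (not from the original article) that runs one task definition on either launch type by flipping a single parameter. The cluster name, task definition, subnet, and security group IDs are placeholders you would replace with your own.

```python
# Minimal sketch: the same ECS RunTask API call covers both launch types.
# All names and IDs below are placeholders, not values from this article.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # region chosen arbitrarily


def run_web_task(launch_type: str):
    """Run one copy of a task definition on the given launch type ("EC2" or "FARGATE")."""
    return ecs.run_task(
        cluster="demo-cluster",          # hypothetical cluster
        taskDefinition="web-app:1",      # hypothetical task definition family:revision
        count=1,
        launchType=launch_type,
        # Required for Fargate, and for any task definition using awsvpc network mode.
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],     # placeholder
                "securityGroups": ["sg-0123456789abcdef0"],  # placeholder
                "assignPublicIp": "ENABLED",
            }
        },
    )


# A heterogeneous cluster can mix both: keep steady-state tasks on EC2
# container instances and burst onto Fargate for rapid horizontal scaling.
run_web_task("EC2")
run_web_task("FARGATE")
```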
**The Bad**

| Amazon ECS | Amazon EKS | AWS Fargate |
| --- | --- | --- |
| Not easy to work with for distributed systems. | Does not offer as deep an integration with AWS as ECS does. | Tasks must be launched into a cluster, even though it abstracts away the VMs. |
| While scaling a service, you have to wait until a new EC2 instance is deployed before a new task can be launched on it. | Charges apply for launching complementary resources, like EBS volumes. | Pricing is based on the memory and CPU required to run a task, as well as the duration the task runs (billed per second, with a one-minute minimum). If you launch complementary resources, like load balancers, you'll be charged for those as well (see the cost sketch after this table). |
| Container instances cannot be relocated to a different cluster, nor can you change the instance type after launching. | Currently, you can spin up only three clusters per region; if need be, you can get more, but only after raising a support ticket. | P.S. AWS announced a price reduction of up to 50% in January 2019. Evaluate your use case with both CPU and memory utilization in mind. |
| | The maximum number of control plane security groups per cluster is five. | |
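As a rough illustration of that pricing model, the sketch below estimates the cost of a single Fargate task. The rates are our assumption of the us-east-1 prices after the January 2019 reduction (about $0.04048 per vCPU-hour and $0.004445 per GB-hour); verify them against the current Fargate price list for your region.

```python
# Back-of-the-envelope Fargate cost estimate. The rates below are assumed
# us-east-1 prices after the January 2019 reduction -- check the current
# AWS price list before relying on them.
VCPU_PER_HOUR = 0.04048   # USD per vCPU-hour (assumption)
GB_PER_HOUR = 0.004445    # USD per GB of memory per hour (assumption)


def fargate_task_cost(vcpu: float, memory_gb: float, seconds: int) -> float:
    """Cost of one task, billed per second with a one-minute minimum."""
    billed_seconds = max(seconds, 60)
    hours = billed_seconds / 3600
    return (vcpu * VCPU_PER_HOUR + memory_gb * GB_PER_HOUR) * hours


# Example: a 0.25 vCPU / 0.5 GB task that runs for 10 minutes.
print(round(fargate_task_cost(0.25, 0.5, 600), 5))  # ~0.00206 USD
```

Multiply this by task count and run time to sanity-check whether Fargate or the EC2 launch type works out cheaper for your workload.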
**The Ugly**

| Amazon ECS | Amazon EKS | AWS Fargate |
| --- | --- | --- |
| Running your own service discovery incurs ELB/ALB costs, even for services that don't need to be exposed externally. | Assigning pod-level IAM permissions is a difficult task. | Customization options are limited. |
| Getting on-demand clusters up is time-consuming. | You are still required to run your own components. | |
| | | Has long startup times. |
| | | No persistent filesystem access. |
If you think we have missed out on any good points, do tweet to us at @totalcloudio.
Thanks to Marc Weaver at Databasable, who helped us curate a few interesting observations he made while working with these services.