I want to confirm if I have this right.
So, most people compare containers with virtual machines. According to internet articles, it's always an "either containers or virtual machines" case.
As far as I understand, ECS is a container orchestration service that runs tasks (one or more containers) inside an EC2 instance (which is a virtual machine).
So are we essentially running containers inside virtual machines?
Please correct my concepts, if they're incorrect.
Your understanding is correct. Amazon ECS is a highly scalable, fast container management service that makes it easy to run, stop, and manage containers on a cluster. Your containers are launched in a cluster of EC2 instances that you manage.
Alternatively, you can choose AWS Fargate to launch your containers without having to provision or manage EC2 instances. AWS Fargate is the easiest way to launch and run containers on AWS.
For more details, please refer to the hands-on ECS workshop and ECS documentation.
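To make the "task = one or more containers" idea concrete, here is a minimal sketch of an ECS task definition for the EC2 launch type. The family name, image, and resource values are assumptions for illustration, not from the original question:

```json
{
  "family": "my-web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "cpu": 256,
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ]
    }
  ]
}
```

With the EC2 launch type, ECS places tasks built from this definition onto the EC2 instances (virtual machines) in your cluster, so yes: you are running containers inside virtual machines. With Fargate, AWS provisions that compute for you.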
Related
I understand the basic management and operation of containers on bare metal running the Docker engine and Kubernetes orchestration. I'm wondering how the management and orchestration of containers on virtual machines (VMs) works. For anyone familiar with running containers on VMs: is it more difficult to manage and orchestrate than containers on bare metal?
Regarding resources, as I understand it a VM instance is already mapped to a specific flavor (e.g. 2 vCPU and 8 GB memory). Does that mean containers on the VM will be limited by the defined VM flavor?
How will K8s manage containers on a VM? Does it see the VM as a VM or as a pod?
Thanks for sharing your comments and input. Please advise and enlighten me.
There is no difference if you want to use VMs as worker nodes of the Kubernetes cluster and manage the pods (containers) on them. Kubernetes considers and manages each VM as a node.
If you want to run standalone containers on top of a VM using plain Docker without any orchestration tool, the following will be hard to manage:
Deployment options
Scaling containers
Resource management
Load balancing the traffic across containers
Handling the routing
Monitoring the health of containers and hosts
If you still want to run containers on top of VMs, there are a few managed services from AWS & GCP:
Cloud Run
ECS
These are managed container orchestration services; using them, you can manage containers on top of VMs.
If you want to run the containers yourself, you can also do it with plain docker or docker-compose. But the very first issue you will face is routing traffic across multiple containers.
How K8s will manage container on VM, does it see the VM as a VM or as
a POD?
It sees the VM as a node: it first runs the necessary services (e.g. kubelet and kube-proxy) on top of the VM and then manages it.
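The routing problem mentioned above can be sketched with a minimal docker-compose file that puts a reverse proxy in front of the application. Service and image names here are assumptions, not from the original post:

```yaml
# Minimal sketch: one public entry point (nginx) routing to an app service.
# nginx would still need an upstream config pointing at the "app" service name.
version: "3.8"
services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"        # the only port exposed on the VM itself
  app:
    image: my-app:latest   # hypothetical image; not exposed directly
```

Without something like this, each container competes for host ports, which is exactly the traffic-routing problem an orchestrator handles for you.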
I have a project to containerize several applications (GitLab, Jenkins, WordPress, a Python Flask app...). Currently each application runs on its own Compute Engine VM on GCP. My goal is to move everything to a cluster (Swarm or Kubernetes).
However I have different questions about Docker Swarm on Google Cloud Platform:
How can I expose my Python application externally (HTTP load balancer) while keeping the other applications available only in my private VPC?
From what I've seen on the internet, I have the impression that Docker Swarm is very little used. Should I go for a Kubernetes cluster instead? (I have good knowledge of Docker/Kubernetes.)
It is difficult to find information about Docker Swarm on cloud providers. What would an architecture with Docker Swarm on GCP look like?
Thanks for your help.
I'd create a template and, from that, an instance group for all the VMs that will host the Docker swarm, plus a separate instance or instance group for the internal applications. That strict separation can then be used to route internal and external traffic accordingly (this would apply in any case). Google Kubernetes Engine is roughly the same as such an instance group, but with Google-managed infrastructure. See the tutorial; there's not much difference, except that GKE integrates better with gcloud & kubectl. Unless you want or need to maintain the underlying infrastructure yourself, GKE is probably less effort.
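On Kubernetes (e.g. GKE), the internal/external split described above is usually expressed through Service types rather than separate instance groups. A hedged sketch, with names and ports assumed:

```yaml
# Public Flask app: type LoadBalancer provisions an external GCP load balancer.
apiVersion: v1
kind: Service
metadata:
  name: flask-public
spec:
  type: LoadBalancer
  selector:
    app: flask
  ports:
    - port: 80
      targetPort: 5000
---
# Internal-only app (e.g. Jenkins): ClusterIP is reachable only inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: jenkins-internal
spec:
  type: ClusterIP
  selector:
    app: jenkins
  ports:
    - port: 8080
```

With Docker Swarm on GCP you would have to build the equivalent yourself with firewall rules and a load balancer pointed at the swarm's published ports.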
What you are basically asking is:
Kubernetes vs. Docker Swarm: What’s the Difference?
Docker Swarm vs Kubernetes: A Helpful Guide for Picking One
Kubernetes vs. Docker: What Does it Really Mean?
Docker Swarm vs. Kubernetes: A Comparison
Kubernetes vs Docker Swarm
All the solutions I know for deploying a Docker image to EC2 involve running it inside a wrapping Ubuntu host.
I want to deploy my Ubuntu-based Docker image to EC2 so that it runs as a standalone EC2 image by itself.
Is that feasible?
You cannot launch an EC2 instance from a Docker image; EC2 uses an AWS AMI to launch instances.
One way is to launch your Docker image directly with Fargate, which means you don't manage any instances: it will run your image as a standalone container.
AWS Fargate is a compute engine for Amazon ECS that allows you to run
containers without having to manage servers or clusters. With AWS
Fargate, you no longer have to provision, configure, and scale
clusters of virtual machines to run containers. This removes the need
to choose server types, decide when to scale your clusters, or
optimize cluster packing. AWS Fargate removes the need for you to
interact with or think about servers or clusters.
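To run an image as a standalone container on Fargate, the task definition declares Fargate compatibility and `awsvpc` networking; the family name, image, and sizes below are assumptions for illustration:

```json
{
  "family": "standalone-ubuntu",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "ubuntu:22.04",
      "essential": true
    }
  ]
}
```

Note that on Fargate, CPU and memory are set at the task level (as strings), and there is no host instance to configure at all.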
I thought a major benefit of Docker was the ability to deploy a single unit of work (a container) that is cheap, lightweight, and boots fast, instead of having to deploy a more expensive and heavy VM that boots slowly. But everywhere I look (e.g. AWS, Docker Cloud, IBM, Azure, Google Cloud, Kubernetes), deploying single containers is not an option. Instead, a single customer must deploy entire VMs that will run instances of the docker engine which will then host clusters of containers.
Is there any CaaS that allows you to deploy only as few containers as you need? I thought many cloud provider companies would offer this service, coordinating the logistics of which containers submitted by which customers to group together and distribute among the companies' docker engines. I see this service is unnecessary for those customers that will be deploying enough containers that a full docker engine instance is necessary. But what about those customers that want the cheap option of only deploying a single container?
If this service is not available, I see Docker containers as neither cheaper nor lighter in weight than full VMs. In both cases, you pay for a heavy VM. The only remaining benefit would be isolation of processes and the ability to quickly change them.
Again, is there any cloud service available to deploy only a single container?
As far as I can see, the problem is the point of view of your approach, not Docker itself.
Any machine that runs a GNU-Linux distro can run the docker daemon and therefore, run your docker containers.
There are solutions like Elastic Beanstalk that allow you to deploy docker containers with a high level of abstraction, making your "ops" part a little bit easier.
Nevertheless, I wonder: how do you actually try to deploy your application? What do you mean by:
"Instead, a single customer must deploy entire VMs that will run
instances of the docker engine which will then host clusters of
containers."
?
For example, Kubernetes is a framework that allows you to deploy containers onto other machines, so yes, you have to have a framework for that or, instead, use a framework-as-a-service such as Elastic Beanstalk.
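As a concrete illustration of the Elastic Beanstalk route, its single-container Docker platform only needs a small `Dockerrun.aws.json`; the image name and port here are assumptions:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "mycompany/my-app:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": 5000 }
  ]
}
```

Beanstalk then provisions the underlying VM, pulls the image, and wires up the load balancer for you, which is about as close to "deploy just my container" as that abstraction level gets.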
I hope my answer helps!
We are currently moving towards microservices with Docker from a monolith application running in JBoss. I want to know the platform/tools/frameworks to be used to test these Docker containers in developer environment. Also what tools should be used to deploy these containers to this developer test environment.
Is it a good option to use some thing like Kubernetes with chef/puppet/vagrant?
I think so. Make sure to get service discovery, logging and virtual networking right. For service discovery you can check out SkyDNS. Docker now has a few logging plugins you can use for log management. For virtual networking you can look at Flannel and Weave.
You want service discovery because Kubernetes will schedule the containers the way it sees fit, and you need some way of telling what IP/port your microservice will be at. Virtual networking gives each container its own subnet, preventing port clashes when two containers expose the same port on the same host (Kubernetes won't let them clash: it will only schedule containers onto hosts with those ports available, and if you try to create more, they just won't run).
Also, you can try the built-in cluster tools in Docker itself, like the docker service and docker network commands and Docker Swarm.
Docker Machine helps in case you already have a VM infrastructure in place.
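Kubernetes itself largely covers the service-discovery point above through cluster DNS (SkyDNS in older clusters, CoreDNS today): a Service gets a stable name and virtual IP no matter where its pods are scheduled. A minimal sketch, with service name, label, and ports assumed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders        # resolvable in-cluster as orders.<namespace>.svc.cluster.local
spec:
  selector:
    app: orders       # matches pods labeled app=orders wherever they land
  ports:
    - port: 80        # stable port clients use
      targetPort: 8080  # port the container actually listens on
```

Other microservices can then simply call http://orders/ without tracking which node or port the pods ended up on.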
We have created and open-sourced a platform to develop and deploy docker based microservices.
It supports service discovery, clustering, load balancing, health checks, configuration management, diagnostics and a mini-DNS.
We are using it in our local development environment and production environment on AWS. We have a Vagrant box with everything prepared so you can give it a try:
http://armada.sh
https://github.com/armadaplatform/armada