How to deploy several docker images to a single virtual machine

I have an application which consists of 2 docker containers. Both are small and need to interact with each other quite often through a REST API.
How can I deploy both of them to a single virtual machine in Google Cloud?
Usually, when creating a virtual machine, I get to choose a container image to deploy: "Deploy a container image to this VM instance".
I can specify one of my images and get it running in the VM. Can I set multiple images?

You cannot deploy multiple containers per VM.
Please consider these limitations when deploying containers on VMs:
1. You can only deploy one container for each VM instance. Consider Google Kubernetes Engine if you need to deploy multiple containers per VM instance.
2. You can only deploy containers from a public repository or from a private repository at Container Registry. Other private repositories are currently not supported.
3. You can't map a VM instance's ports to the container's ports (Docker's -p option).
4. You can only use Container-Optimized OS images with this deployment method.
5. You can only use this feature through the Google Cloud Platform Console or the gcloud command-line tool, not the API.
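For reference, the supported single-container flow looks roughly like this; the instance and image names below are placeholders:

    # Create a Container-Optimized OS VM that runs exactly one container
    gcloud compute instances create-with-container my-vm \
        --container-image gcr.io/my-project/webapp:latest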

You can use docker-compose to deploy multi-container applications.
To achieve this on Google Cloud, you'll need:
ssh access to the VM
docker and docker-compose installed on the VM
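A minimal sketch of both steps, assuming a Debian/Ubuntu-based VM image; the service names, image paths, and ports are placeholders:

    # Install Docker and docker-compose on the VM (package names are
    # for Debian/Ubuntu; adjust for your distribution)
    sudo apt-get update
    sudo apt-get install -y docker.io docker-compose

    # Write a compose file describing both containers; the image
    # names are placeholders for your own images
    cat > docker-compose.yml <<'EOF'
    version: "2"
    services:
      webapp:
        image: gcr.io/my-project/webapp:latest
        ports:
          - "80:8080"
      api:
        image: gcr.io/my-project/api:latest
    EOF

    # Containers on the same compose network reach each other by
    # service name, e.g. the webapp can call http://api:5000
    sudo docker-compose up -d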

Related

Can I deploy Ubuntu Docker directly to EC2 (not inside Ubuntu)?

All the solutions I know for deploying Docker to EC2 involve running it inside a wrapping Ubuntu instance.
I want to deploy my Ubuntu Docker image to EC2 so it will be a standalone EC2 image running by itself.
Is that feasible?
You cannot launch an EC2 instance from a Docker image; EC2 uses an AWS AMI to launch instances.
One way to launch your Docker image directly is with Fargate, which does not make you manage any instance but will run your image as a standalone container.
AWS Fargate is a compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. AWS Fargate removes the need for you to interact with or think about servers or clusters.
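As a rough sketch of that flow with the AWS CLI (cluster, task, image, subnet, and security-group names below are placeholders, and a real task may additionally need an execution role for logs or private registries):

    # Create an ECS cluster to host the Fargate task
    aws ecs create-cluster --cluster-name my-cluster

    # Register a task definition wrapping the Docker image
    aws ecs register-task-definition \
        --family my-task \
        --requires-compatibilities FARGATE \
        --network-mode awsvpc \
        --cpu 256 --memory 512 \
        --container-definitions '[{"name":"app","image":"myrepo/ubuntu-app:latest","essential":true}]'

    # Run the image as a standalone Fargate task (no instances to manage)
    aws ecs run-task \
        --cluster my-cluster \
        --launch-type FARGATE \
        --task-definition my-task \
        --network-configuration 'awsvpcConfiguration={subnets=[subnet-abc123],securityGroups=[sg-abc123],assignPublicIp=ENABLED}'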

What are the correct steps to re-deploy a docker container on compute engine?

I deployed a docker container on Compute Engine.
I want to re-deploy this docker container after I build a new docker image with the same image name and tag, like webapp:latest.
For now, I re-deploy the docker container by restarting the Compute Engine instance.
I think that's not correct.
What is the correct way to re-deploy a docker container?
When you deploy Docker images on Google Compute Engine virtual machine instances there are some limitations: you can only deploy one container per VM instance, and you can only use Container-Optimized OS images with this deployment method.
I believe the best workaround is to uncheck the container option in your instance details, so that no container is deployed to the VM through the Container-Optimized OS mechanism. That option is only useful if you want to deploy a single container on the VM.
Instead, install Docker on the VM yourself, outside of GCP's container deployment feature, and manage the container with it (a minimal re-deploy sketch follows below). Also, consider Kubernetes Engine if you need to deploy multiple containers per VM instance.
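With Docker under your own control on the VM, a re-deploy is then just a pull and a container swap; the registry path, container name, and port mapping below are placeholders:

    # Fetch the rebuilt image (same name and tag, e.g. webapp:latest)
    docker pull gcr.io/my-project/webapp:latest

    # Replace the running container with one based on the new image
    docker stop webapp
    docker rm webapp
    docker run -d --name webapp -p 80:8080 gcr.io/my-project/webapp:latest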

Docker usage in compose/swarm mode

I am quite new to docker and I need some help with distributing my application.
Consider this:
I have a pool of physical machines, each of them running the latest version of docker.
My "Application A" has several containers. To be clear in this definition, an application would be a database running in a container, 4 messaging containers and a master container. All 6 containers need to communicate between each other. The database, the messaging and etc containers would be the "services".
I can also have "Application B", "Application C" and "Application N...", that are slightly different in size and configuration from "Application A". Applications do not communicate between each other and are completely independent.
Requirements:
All applications "A,B,C..N" must use the same pool of physical machines.
Each service of each application must run on a different physical machine, if needed.
I may want to restrict how each service is allocated to each physical machine.
I need to create applications "on the fly"
My first thought would be to use docker-compose to define an application and several Dockerfiles to define the services inside it. But if I do that, each application would be running on the same docker engine and therefore the same physical machine.
I have read that you could deploy a docker-compose file into a docker swarm. In this case, docker swarm would act as a docker engine. However, I could not find any examples of how to do that and I am not sure of the limitations.
My second thought would be to use swarm mode. I would create a swarm and run services on it. However, I would lose the concept of an "application". There would be a bunch of services thrown into the swarm and I could not manage how each of them communicates with the others.
So, given this problem:
Is there any assumption or statement I got wrong?
What is the recommended docker tools usage in the scenario?
It is possible to use Docker Compose with Docker Swarm Mode (Docker 1.12), but the two are currently not completely compatible. Have a look at Docker Stacks and Bundles.
In the next version of Docker (1.13) there will also be a new release of Docker Compose, v3, whose file format Docker itself can deploy without the docker-compose tool. This will make it possible to deploy your Docker Compose file like this:
docker deploy --compose-file docker-compose.yml AppA
This is currently experimental but works quite well with Docker 1.13-rc5. (Docker Releases)
A more detailed explanation of this can be found in this article.
As for your requirement that the services all run on different hosts: this is possible by defining constraints in docker service create (or in Docker Compose v3); see Docker Service Create - Constraints. But why do you need them to run on different hosts?
It is possible to limit the CPU and memory each service is able to use with --limit-cpu and --limit-memory.
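A minimal sketch of both constraints and resource limits, using a made-up node label (role=dbhost) and placeholder service/image names:

    # Label the node that should host the database service
    docker node update --label-add role=dbhost node-1

    # Pin the service to labeled nodes and cap its resources
    docker service create --name appA_db \
        --constraint 'node.labels.role == dbhost' \
        --limit-cpu 0.5 --limit-memory 512M \
        postgres:latest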
If you want to play with Docker Swarm Mode you can create a swarm with Docker Machine on your local host. (Attention: do not use the old standalone Docker Swarm.)

How to deploy docker container to Cloud Foundry?

My application consists of two separate docker containers. One is a Grails-based web application and the second is a RESTful Python Flask application. Both docker containers are sitting on my local computer. They are not hosted on Docker Hub. They are proprietary and I don't want to host them publicly.
I would like to try Cloud Foundry to deploy these docker containers and see how it works. However, from the documentation I get the sense that Cloud Foundry doesn't support deploying docker containers sitting on a local machine.
Question
Is there a way to deploy docker containers sitting on a local computer to Cloud Foundry? If not, what is a way to securely host the containers somewhere from which CF can fetch them?
Is Cloud Foundry capable of running a docker container that is a Python Flask application?
One option you have is to not use Docker images at all and just push your code directly, which is one of the nice features of CF. PCF comes with a Python buildpack which should automatically detect your Flask app.
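A minimal sketch of that flow; the app name is a placeholder, and buildpack detection relies on the project containing a requirements.txt or setup.py:

    # Push the source directly; the Python buildpack detects the Flask app
    cd flask-app/
    cf push my-flask-app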
Another option would be to run your own trusted docker registry, push your images there, and then, when you push your app, tell it to grab the images from your registry. If you google "cloud foundry docker registry" you get the following useful results you should check out:
https://github.com/cloudfoundry-community/docker-registry-boshrelease
http://docs.pivotal.io/pivotalcf/1-8/adminguide/docker.html#caveats
https://docs.pivotal.io/pivotalcf/1-7/opsguide/docker-registry.html
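If you go the registry route, the push itself would then look roughly like this; the registry host, port, and app name are placeholders, and it assumes a cf CLI version that supports the --docker-image (-o) flag:

    cf push my-app --docker-image registry.example.com:5000/my-app:latest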

Deploying docker swarm without using docker machine

Currently I have a bunch of RHEL7 VMs running on RackSpace and want to deploy docker swarm for testing purposes. The Docker Docs only describe the method of deploying docker swarm by using docker machine.
Question:
Since VirtualBox cannot be used in VMs, is there any other way to deploy docker swarm directly on my VMs without using docker machine?
In fact the Docker documentation describes how to set up a swarm cluster 'manually', without using docker-machine: Create a swarm for development
I think this full step-by-step tutorial might also be useful.
It details how to deploy Swarm with a multi-host network, without Docker Machine, by using Consul, and suggests two different means for Swarm agent discovery (a static file and a token).
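For comparison, if your VMs run Docker 1.12 or later you can also bootstrap the newer swarm mode entirely by hand, with no docker-machine and no external discovery backend; a minimal sketch with placeholder IPs and token:

    # On the VM chosen as manager (use its reachable IP)
    docker swarm init --advertise-addr 10.0.0.1

    # 'swarm init' prints a join token; run the join on each worker VM
    docker swarm join --token SWMTKN-1-<token> 10.0.0.1:2377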
