Can I deploy two different services (Java and NodeJs) using Docker? - docker

I have two services, one developed in Java and the other in Node.js. Can I deploy these on Windows? Is it possible to deploy the services in distributed mode?

Yes, you can deploy the two technologies in separate containers. That's the beauty of Docker/containerization.
Build an image for each service on top of a suitable base image from Docker Hub, then push the new images to Docker Hub so you can pull and run them with Docker on any Windows machine.
However, I suggest running the containers on a Linux distribution, since containers share the host kernel and many files/binaries with the host OS, which reduces the resources they consume.
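As a minimal sketch, assuming a prebuilt Java jar and a Node.js entry point named server.js (the file, image and repository names below are illustrative, not taken from the question), the two images could be built and pushed like this:
# Dockerfile.java (hypothetical layout with the jar in target/)
FROM eclipse-temurin:17-jre
COPY target/app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]
# Dockerfile.node (hypothetical layout)
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
# build, tag and push both images (replace "myrepo" with your Docker Hub account)
docker build -f Dockerfile.java -t myrepo/java-service:1.0 .
docker build -f Dockerfile.node -t myrepo/node-service:1.0 .
docker push myrepo/java-service:1.0
docker push myrepo/node-service:1.0
Each image can then be pulled and run on any machine with Docker installed, including Windows with Docker Desktop.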

Related

Deploying Container on GCP

I am trying to deploy this app: https://github.com/taigaio/taiga-docker . This project is a collection of various images and uses docker-compose to create the containers. It is my understanding that this cannot be run as a single Docker image pulled from a GCP Artifact Registry repo. This needs a VM, perhaps?
My question is whether there is a way to deploy this in a serverless fashion on GCP or any other cloud platform. Any pointers/help is much appreciated.
You can either use Cloud Run (the most serverless option) or run it on a VM.
On Cloud Run you deploy a single image as a Service (in Cloud Run terminology); if you have more than one image, you can deploy multiple Services and have them talk to each other.
On a VM, the deployment works much as it would on your personal laptop.
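For the Cloud Run route, a minimal sketch of deploying two of the images as separate Services might look like the following (the project, region and image paths are placeholders, not taken from the taiga-docker setup):
# deploy one image from Artifact Registry as a Cloud Run Service
gcloud run deploy taiga-back \
  --image us-central1-docker.pkg.dev/MY_PROJECT/my-repo/taiga-back:latest \
  --region us-central1
# each additional image becomes its own Service
gcloud run deploy taiga-front \
  --image us-central1-docker.pkg.dev/MY_PROJECT/my-repo/taiga-front:latest \
  --region us-central1
The Services can then call each other over HTTPS using their Cloud Run URLs.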

Docker-machine vs Vagrant? [duplicate]

Every Docker image, as I understand it, is based on a base image - for example, Ubuntu.
So if I want to isolate a process, do I deploy the Ubuntu base image (how is that different from Vagrant?) and then create the sub-image I need on top of it?
If Ubuntu is launched with Vagrant and with Docker, what is the practical difference?
And if I use the Docker provider in Vagrant, what is the difference between Vagrant and Docker then?
Also, in Docker, is it possible to isolate processes on a PC without a base image, i.e. without sharing it with another PC?
Vagrant is a utility that helps you automate setting up VMs. Docker is a utility that helps you use containerization on Linux.
A virtual machine runs a whole system, and emulates hardware. Containers section off processes in a single running kernel without emulating hardware.
Both a VM and a Docker image may be Ubuntu 14.04, but with the Docker image you don't need to run the whole OS.
For example, if I want to run an nginx container based on ubuntu, I'd end up with only the nginx process running. No upstart/systemd/init is needed. A VM would run an init system, manage its own networking, and run other services as well. The container image approach that uses a linux distro base is mostly for convenience.
It is entirely possible to run Docker containers with very minimal images. A statically compiled binary alone in an image is all you'd need to run a container.
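To make the minimal-image point concrete, here is a hedged sketch: first running only nginx in a container, then an image that contains nothing but a single statically compiled binary (the "myapp" binary name is illustrative):
# run only the nginx process in a container - no init system involved
docker run -d -p 8080:80 nginx
# Dockerfile for a truly minimal image: one static binary, no distro at all
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]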
Vagrant: Vagrant is a project that helps with spawning virtual machines. It started as a command-line wrapper around VirtualBox, something like a Gemfile for VMs. You choose the base image to start from, the network, IP and shared folders, and put it all in a file that anyone can reuse to spawn an identically configured machine. Vagrant has various extensions, provisioning options and VM providers: it can drive VirtualBox and VMware, and it is extensible enough to create instances on EC2.
Docker: Docker lets you package an application with all of its dependencies into a standardized unit of software, which reduces friction between development, QA and testing. It lets you change your application dynamically, add new capabilities every day, and scale services out quickly to respond to problem areas. Docker is positioning itself as the interface to PaaS, be it networking, discovery or service discovery, with applications not having to care about the underlying infrastructure. Yes, there are still issues with Docker in production, but hopefully we'll see solutions to those problems as the Docker team and contributors keep working on them. The Docker volume driver interface allows third-party container data management solutions to provide data volumes for containers that operate on data, such as databases, key-value stores and other stateful applications. The latest version is coming with much more flexibility, complete orchestration built in, advanced networking, secrets management, etc. One example, rexray (emccode/rexray), is a volume plugin that provides advanced storage functionality. We're finally starting to agree on more than just images and runtime.

How to deploy several docker images to single virtual machine

I have an application which consists of 2 Docker containers. Both are small and need to interact with each other quite often through a REST API.
How can I deploy both of them to a single virtual machine in Google Cloud?
Usually, when creating a virtual machine, I get to choose a container image to deploy: "Deploy a container image to this VM instance".
I can specify one of my images and get it running in the VM. Can I set multiple images?
You cannot deploy multiple containers per VM with this method.
Please consider these limitations when deploying containers on VMs:
1. You can only deploy one container for each VM instance. Consider Google Kubernetes Engine if you need to deploy multiple containers per VM instance.
2. You can only deploy containers from a public repository or from a private repository at Container Registry. Other private repositories are currently not supported.
3. You can't map a VM instance's ports to the container's ports (Docker's -p option).
4. You can only use Container-Optimized OS images with this deployment method.
5. You can only use this feature through the Google Cloud Platform Console or the gcloud command-line tool, not the API.
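For reference, this single-container-per-VM feature is the one you use with a command like the following (a hedged sketch; the VM name, image path and zone are placeholders):
# create a VM that runs exactly one container image
gcloud compute instances create-with-container my-vm \
  --container-image gcr.io/MY_PROJECT/service-a:latest \
  --zone us-central1-a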
Alternatively, you can use docker-compose to deploy multi-container applications on a single VM.
To achieve this on Google Cloud, you'll need:
SSH access to the VM
docker and docker-compose installed on the VM
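As a rough sketch, and assuming the two containers are published as images you can pull (the image and service names below are placeholders), a docker-compose.yml on the VM could look like this:
# docker-compose.yml (hypothetical image names)
version: "3"
services:
  api-a:
    image: myrepo/service-a:latest
    ports:
      - "8080:8080"
  api-b:
    image: myrepo/service-b:latest
    # api-b can reach api-a at http://api-a:8080 over the default compose network
Then start both containers on the VM with: docker-compose up -d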

Docker usage in compose/swarm mode

I am quite new to Docker and I need some help with distributing my application.
Consider this:
I have a pool of physical machines, each of them running the latest version of docker.
My "Application A" has several containers. To be clear in this definition, an application would be a database running in a container, 4 messaging containers and a master container. All 6 containers need to communicate between each other. The database, the messaging and etc containers would be the "services".
I can also have "Application B", "Application C" and "Application N...", that are slightly different in size and configuration from "Application A". Applications do not communicate between each other and are completely independent.
Requirements:
All applications "A,B,C..N" must use the same pool of physical machines.
Each service of each application must be able to run on a different physical machine, if needed.
I may want to restrict how each service is allocated to each physical machine.
I need to create applications "on the fly".
My first thought was to use docker-compose to define an application and several Dockerfiles to define the services inside it. But if I do that, each application would be running on the same Docker engine and therefore on the same physical machine.
I have read that you can deploy a Docker Compose file onto a Docker swarm. In that case, the swarm would act as a single Docker engine. However, I could not find any examples of how to do that, and I am not sure of the limitations.
My second thought was to use swarm mode. I would create a swarm and run services on it. However, I would lose the concept of an "application": there would be a bunch of services thrown into the swarm, and I could not manage how each of them communicates with the others.
So, given this problem:
Is there any assumption or statement I got wrong?
What is the recommended docker tools usage in the scenario?
It is possible to use Docker Compose with Docker Swarm Mode (Docker 1.12), but it is currently not completely compatible with it. Have a look at Docker Stacks and Bundles.
In the next version of Docker (1.13) there will also be the new Compose file format v3, which the Docker engine can deploy directly, without needing Docker Compose. This will make it possible to deploy your Compose file like this:
docker deploy --compose-file docker-compose.yml AppA
This is currently experimental but works quite well with Docker 1.13-rc5. (Docker Releases)
A more detailed explanation of this can be found in this article.
For your requirement to have the services run on different hosts, this is possible by defining constraints in docker service create (or in Compose file v3) (see Docker Service Create - Constraints). But why do you need them to run on different hosts?
It is also possible to limit the CPU and memory that each service can use with --limit-cpu and --limit-memory.
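As a hedged sketch of what such a constraint and resource limits might look like (the service name, node label and image are illustrative):
# pin a service to nodes carrying a specific label and cap its resources
docker service create \
  --name appA-db \
  --constraint 'node.labels.role == database' \
  --limit-cpu 0.5 \
  --limit-memory 512M \
  postgres:9.6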
If you want to play with Docker Swarm Mode you can create a swarm with Docker Machine on your local host. (Note: do not confuse this with the old standalone Docker Swarm.)

Resources