What is the best way to deliver Docker containers to a remote host?

I'm new to Docker and docker-compose. I'm using a docker-compose file with several services. I have containers and images on my local machine when I work with docker-compose, and my task is to deliver them to a remote host.
I found several solutions:
1. I could build my images, push them to some registry, and pull them on the production server. But for this option I need a private registry, and to my mind a registry is an unnecessary element; I want to run the containers directly.
2. Save the Docker images to tar archives and load them on the remote host. I saw the post Moving docker-compose containersets around between hosts, but in this case I need shell scripts. Or I can use docker directly (Docker image push over SSH (distributed)), but then I lose the benefits of docker-compose.
3. Use docker-machine (https://github.com/docker/machine) with the generic driver. But in this case I can only deploy from one machine, or I need to configure certificates (How to set TLS Certificates for a machine in docker-machine). And again, that isn't a simple solution for me.
4. Use docker-compose with the host parameter (-H). But with this option I need to build the images on the remote host. Is it possible to build an image on the local machine and push it to the remote host?
5. Use docker-compose push (https://docs.docker.com/compose/reference/push/) to the remote host, but for this I need to create a registry on the remote host, and I have to pass the hostname as a parameter to docker-compose every time.
What is the best practice for delivering Docker containers to a remote host?

Via a registry (your first option). All container-oriented tooling supports it, and it's essentially required in cluster environments like Kubernetes. You can use Docker Hub, or an image registry from a public-cloud provider, or a third-party option, or run your own.
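As a concrete sketch of that workflow (the registry address, image name, and tag are placeholders):
# on the build machine: build, tag for the registry, and push (docker login first if the registry needs it)
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
# on the production host: pull the image and start it
docker pull registry.example.com/myapp:1.0
docker run -d --name myapp registry.example.com/myapp:1.0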
If you can't use a registry then docker save/docker load is the next best choice, but I'd only recommend it if you're in something like an air-gapped environment where there's no network connectivity between the build system and the production systems.
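In that case the flow looks roughly like this (image name, paths, and host are placeholders):
docker save myapp:1.0 -o myapp-1.0.tar        # export the image and all of its layers to a tar file
scp myapp-1.0.tar user@prod-host:/tmp/        # or move the file by whatever medium crosses the gap
ssh user@prod-host docker load -i /tmp/myapp-1.0.tar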
There's no way to directly push an image from one system to another. You should avoid enabling the Docker network API for security reasons: anyone who can reach a network-exposed Docker socket can almost trivially root its host.
Independently of the images you will also need to transfer the docker-compose.yml file itself, plus any configuration files you bind-mount into the containers. Ordinary scp or rsync works fine here. There is no way to transfer these within the pure Docker ecosystem.
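A minimal sketch of that step, assuming a project directory of /srv/myapp on a host called prod (both placeholders):
scp docker-compose.yml deploy@prod:/srv/myapp/
rsync -av ./config/ deploy@prod:/srv/myapp/config/    # any files you bind-mount into the containers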

Related

How to deploy a LOCAL docker image on a REMOTE docker host machine?

Assume that there are two docker host machines A and B and there is a docker image, xyz on machine A. I would like to deploy it to machine B from machine A. How can I do it?
I know that it is possible to operate on containers remotely by setting DOCKER_HOST as below, but this seems to require that the docker image xyz already exists on machine B. Also, creating a tar file from the image xyz, copying it to machine B, and then running it on machine B is another way to do it. I wonder if there is a way in which I can do it directly from machine A? Thanks!
docker -H "ssh://my-user#machine-b" run -i - t xyz
As David mentions, the standard way to move images is with a registry. You can self-host one (e.g. Docker's registry), use a SaaS offering like Hub, or use one of the cloud vendors (e.g. ECR, GCR). The advantage of a registry over sending exports is that layers are deduplicated, saving you bandwidth when your image only has minor changes between versions. This functionality is also built into the docker engine with the push and pull commands.
With the save and load method, you could do this in two commands, piping the output, but as mentioned before this will transfer all the layers even if you only have minor changes:
docker save ${image} | docker -H ssh://${user}@${host} load
I use Portainer Agent for this. Set up Portainer locally on Machine A on either port 9000 or 9443. Then add the Machine B environment using Portainer Agent on port 8000. Open the Machine B environment. You'll then be able to create containers, pull images, deploy stacks, whatever you need, from Machine A to Machine B. It's a great way to migrate containers.

Update Docker Images via dockerized Jenkins Job

I run some docker containers on my Synology NAS. Now I also run Jenkins via Docker on the NAS and want to create a job that does the following steps:
Stop all Docker Containers
Delete all unnecessary stuff (-> docker system prune)
Rebuild all Docker images
Run the new Docker image
But I don't know how to access the host system from dockerized Jenkins. SSH to the host doesn't seem to be a good idea.
Do you have any tips?
The whole point of your Docker containers is to run in an isolated sandbox, so it's by design that they don't have access to the host system. SSH is one approach, but risky, as you point out.
A better approach is to set the DOCKER_HOST environment variable to point to the IP of the NAS (which might need to be the virtual network NAS address). You will probably need to experiment a bit with getting the correct address and making sure the hosted docker command has permissions to drive the host's Docker service.
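For example, a shell build step in the Jenkins job could point its docker CLI at the NAS's daemon like this (the address and port are placeholders; they depend on how the Synology Docker daemon is exposed, and a TLS-protected endpoint is preferable):
export DOCKER_HOST=tcp://192.168.1.10:2375   # hypothetical NAS address and Docker API port
docker ps                                    # now talks to the NAS's daemon, not the Jenkins container's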
This post in the Synology Forums may get you on the right track.

Proper way to deploy Docker services via GitLab CI/CD to my own server

My application is built using 3 Docker services:
backend (React)
frontend (Node.js)
nginx (routing traffic)
Up until now I was manually logging into my own Digital Ocean server, cloning the repository and launching the services with docker-compose build && docker-compose up.
I want to automate the process from now on.
Given GitLab CI/CD pipelines and the runners, what would be the best approach to automatically deploy the code to the Digital Ocean server?
[WHAT I WAS THINKING OF, might seem very "beginner"]
Idea 1: Once a commit is pushed to master, the GitLab runner builds the services and then copies them over to the DO server via scp. Problem: how do you launch the services? Do you connect to the DO server via ssh from the runner and then run the start script there?
Idea 2: Register a runner on the DO server itself, so that when it pulls the code from GitLab the code is already on the DO server; it then just has to build the services and run them. But this approach is not scalable and seems hacky.
I am looking for some thinking guidelines or a step-by-step approach.
One of the benefits of using Docker in a production-deployment scenario is that you don't separately scp your application code; everything you need is built into the image.
If you're using an automation system like Ansible that can directly run containers on remote hosts then this is straightforward. Your CI system builds Docker images, tags them with some unique version stamp, and pushes them to a repository (Docker Hub, something provided by your cloud provider, one you run yourself). It then triggers the automation system to tell it to start containers with the image you built. (In the case of Ansible, it runs over ssh, so this is more or less equivalent to the other ssh-based options; tools like Chef or Salt Stack require a dedicated agent on the target system.)
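The CI-side half of that might look roughly like this (the registry and image name are placeholders; a short git commit hash is one common choice of unique version stamp):
TAG=$(git rev-parse --short HEAD)                 # unique version stamp for this build
docker build -t registry.example.com/myapp:$TAG .
docker push registry.example.com/myapp:$TAG
# then hand $TAG to the automation system (e.g. an Ansible playbook run) to roll it out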
If you don't have an automation system like that but you do have ssh and Docker Compose installed on the target system, then you can copy only the docker-compose.yml file to the target host, and then launch it.
TAG=...                                   # the same unique tag you used when building the image
docker push myname/myimage:$TAG
scp docker-compose.yml root@remote:
ssh root@remote env TAG=$TAG docker-compose up -d
A further option is to use a dedicated cluster manager like Kubernetes, and talk to its API; then the cluster will pull the updated containers itself, and you don't have to ssh anything. At the scale you're discussing this is probably much heavier weight than you need.

use docker's remote API in a secure manner

I am trying to find an effective way to use the docker remote API in a secure way.
I have a docker daemon running on a remote host, and a docker client on a different machine. I need my solution not to be client/server OS dependent, so that it is relevant to any machine with a docker client/daemon.
So far, the only way I found to do such a thing is to create certs on a Linux machine with openssl and copy the certs to the client/server manually, as in this example:
https://docs.docker.com/engine/security/https/
and then configure docker on both sides to use the certificates for encryption and authentication.
This method is rather clunky in my opinion, because sometimes it's a problem to copy files and put them on each machine I want to use the remote API from.
I am looking for something more elegant.
Another solution I've found is using a proxy for basic HTTP authentication, but with this method the traffic is not encrypted and it is not really secure.
Does anyone have a suggestion for a different solution or for a way to improve one of the above?
Your favorite system automation tool (Chef, SaltStack, Ansible) can probably directly manage the running Docker containers on a remote host, without opening another root-equivalent network path. There are Docker-oriented clustering tools (Docker Swarm, Nomad, Kubernetes, AWS ECS) that can run a container locally or remotely, but you have less control over where exactly (you frequently don't actually care) and they tend to take over the machines they're running on.
If I really had to manage systems this way I'd probably use some sort of centralized storage to keep the TLS client keys, most likely Vault, which has the property of storing the keys encrypted, requiring some level of authentication to retrieve them, and being able to access-control them. You could write a shell function like this (untested):
dockerHost() {
  mkdir -p "$HOME/.docker/$1"
  # KV v2 secrets come back under .data.data in the JSON output
  JSON=$(vault kv get -format=json "secret/docker/$1")
  for f in ca.pem cert.pem key.pem; do
    # -r writes the raw PEM contents rather than a JSON-quoted string
    echo "$JSON" | jq -r ".data.data[\"$f\"]" > "$HOME/.docker/$1/$f"
  done
  export DOCKER_HOST="tcp://$1:2376"
  export DOCKER_CERT_PATH="$HOME/.docker/$1"
  export DOCKER_TLS_VERIFY=1   # actually enable TLS verification with the certs above
}
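Usage would then look something like this (the host name is a placeholder):
dockerHost docker-host-1.example.com   # fetch that host's client certs from Vault and point the CLI at it
docker ps                              # subsequent docker commands talk to that host over TLS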
While your question makes clear you understand this, it bears repeating: do not enable unauthenticated remote access to the Docker daemon, since it is trivial to take over a host with unrestricted root access if you can access the socket at all.
Based on your comments, I would suggest you go with Ansible if you don't need the swarm functionality and require only single host support. Ansible only requires SSH access which you probably already have available.
It's very easy to use an existing service that's defined in Docker Compose or you can just invoke your shell scripts in Ansible. No need to expose the Docker daemon to the external world.
A very simple example file (playbook.yml)
- hosts: all
  tasks:
    - name: setup container
      docker_container:
        name: helloworld
        image: hello-world
Running the playbook
ansible-playbook -i username@mysshhost.com, playbook.yml
Ansible provides pretty much all of the functionality you need to interact with Docker via its module system:
docker_service
Use your existing Docker compose files to orchestrate containers on a single Docker daemon or on Swarm. Supports compose versions 1 and 2.
docker_container
Manages the container lifecycle by providing the ability to create, update, stop, start and destroy a container.
docker_image
Provides full control over images, including: build, pull, push, tag and remove.
docker_image_facts
Inspects one or more images in the Docker host's image cache, providing the information as facts for making decisions or assertions in a playbook.
docker_login
Authenticates with Docker Hub or any Docker registry and updates the Docker Engine config file, which in turn provides password-free pushing and pulling of images to and from the registry.
docker (dynamic inventory)
Dynamically builds an inventory of all the available containers from a set of one or more Docker hosts.

Deploy docker windows container from CI to Windows Server 2016

I'm trying to wrap my head around Docker containers, specifically how to deploy them to a Docker container host. I know there are lots of options here and ultimately we'll switch to a more common deployment approach (e.g. to Azure, AWS), but this is a temporary requirement. We're using Windows containers.
I have a container image that I've created and that will be recreated on each build as part of a Jenkins job (our Jenkins instance is hosted on a container-ready Windows Server 2016 box). I also have a separate container-ready Windows Server 2016 box, which is where we intend to run the containers.
However, I'm not sure how I can have the containers that our Jenkins box produces automatically pushed to our separate 2016 host. Ideally, I'd like to avoid using a container registry, unless there is a low-friction, on-premise option available.
Container registries are the way to distribute Docker images. Tooling is built around registries, so it would be counterproductive to work against the concept.
But docker image save and docker image load could get you started, as they save the image as a tar file that you can transfer between the hosts. Once you've copied the image to the other box, you can start it up with the usual docker run command, or docker-compose up.
If your case is not trivial though and you start having multiple Docker hosts to run the containers, container orchestrators like Docker Swarm, Kubernetes are the way to go - or the managed versions of those, like Azure ACS. That rabbit hole is deeper though than I can answer in a single SO answer :)
