Deploy Docker services to a remote machine via ssh (using Docker Compose with DOCKER_HOST var) - docker

I'm trying to deploy some Docker services from a Compose file to a Vagrant box. The Vagrant box does not have a static IP. I'm using the DOCKER_HOST environment variable to set the target engine.
This is the command I use: DOCKER_HOST="ssh://$BOX_USER@$BOX_IP" docker-compose up -d. The BOX_IP and BOX_USER variables contain the correct IP address and username (obtained at runtime from the Vagrant box).
I can connect and deploy services this way, but the SSH connection always asks whether I want to trust the machine. Since the VM gets a dynamic IP, my known_hosts file gets polluted with entries I only used once, which might cause trouble in the future if the IP is reused.
Assigning a static IP results in error messages stating that the machine does not match my known_hosts entry.
Setting StrictHostKeyChecking=no is not an option either, because it opens the door to a lot of security issues.
So my question is: how can I deploy containers to a remote Vagrant box without the mentioned issues? Ideally I would start a Docker container that handles the deployments, but I'm open to any other idea as well.
The reason why I don't just use a bash script while provisioning the VM is that this VM acts as a testing ground for a physical machine. The scripts I use are the same as for the real machine; I test them regularly and in an automated fashion inside a Vagrant box.
UPDATE: I'm using Linux
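One possible way around this (just a sketch, not from the original thread) is to give the box its own SSH alias with a dedicated known_hosts file, so nothing lands in ~/.ssh/known_hosts. The alias, IP, user and key path below are placeholders, and the HostName still has to be refreshed (for example from vagrant ssh-config) whenever the box gets a new address:

# One-off setup: a dedicated alias and a throwaway known_hosts file for the box
# (alias "vagrant-box", IP, user and key path are placeholders)
cat >> ~/.ssh/config <<'EOF'
Host vagrant-box
    HostName 192.168.56.10
    User vagrant
    IdentityFile ~/project/.vagrant/machines/default/virtualbox/private_key
    UserKnownHostsFile ~/.ssh/known_hosts.vagrant
EOF

# Deploy through the alias; the Docker CLI picks up the SSH settings above
DOCKER_HOST="ssh://vagrant-box" docker-compose up -d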

Related

Gitlab Runner, docker executor, deploy from linux container to CIFS share

I have a GitLab runner that runs all kinds of jobs using Docker executors (the host is Ubuntu 20, the guests are various Linux images). The runner runs its containers unprivileged.
I am stumped on an apparently simple requirement - I need to deploy some artifacts on a Windows machine that exposes the target path as an authenticated share (\\myserver\myapp). Nothing more than replacing files on the target with the ones on the source - a simple rsync would be fine.
Gitlab Runner does not allow specifying mounts in the CI config (see https://gitlab.com/gitlab-org/gitlab-runner/-/issues/28121), so I tried using mount.cifs, but I discovered that by default Docker does not allow mounting anything inside the container unless running privileged, which I would like to avoid.
I also tried the suggestion to use --cap-add as described in Mount SMB/CIFS share within a Docker container, but the suggested capabilities do not seem to be enough on my host; there are probably other required capabilities and I have no idea how to identify them. Also, this looks only slightly less ugly than running privileged.
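For reference, the --cap-add variant from that linked question usually looks roughly like the sketch below (share path, credentials and image name are placeholders, and the image needs cifs-utils installed); as noted above, it may still fail without further capabilities or a relaxed seccomp/AppArmor profile, depending on the host:

# SYS_ADMIN and DAC_READ_SEARCH are the capabilities usually suggested for mount.cifs
docker run --rm \
  --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH \
  my-deploy-image \
  sh -c 'mount -t cifs //myserver/myapp /mnt -o username=deployuser,password=secret,vers=3.0 && cp -r /artifacts/. /mnt/'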
Now, I do not strictly need to mount the remote folder - if there were an SMB-aware rsync command for example I would be more than happy to use that. Unfortunately I cannot install anything on the Windows machine (no SSH, no SCP, no FTP).
Do you have any idea on how to achieve this?
Unfortunately I cannot install anything on the Windows machine (no SSH, no SCP, no FTP).
You could simply copy over an executable (that you can build anywhere else) written in Go, listening on a port, ready to receive a file.
See this implementation for instance: file-receive.go. It listens on port 8080 (which can be changed) and copies the file content to a local folder.
No installation or setup required: just copy the exe to the target machine and run it.
From your GitLab runner, you can use curl to send a file to the remote Windows machine on port 8080.
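As an illustration, the upload from the CI job could then be a single curl call; the hostname and URL path are placeholders, and whether the receiver expects a raw body or a multipart form depends on how that Go program is written:

# Raw-body upload to the listener on the Windows machine;
# use "curl -F file=@..." instead if the receiver expects a multipart form
curl --fail --data-binary @build/artifact.zip http://mywindowsserver:8080/upload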

Is it possible to manage macOS Docker Desktop with Docker Machine?

I have Docker Desktop installed on my Mac (not Docker Toolbox) and I installed docker-machine according to the official documentation.
I'm trying to add my localhost Docker engine as a node under docker-machine, with no success.
The steps that I made were:
Enable sshd on localhost (ssh localhost works)
Add localhost Docker to Docker Machine:
docker-machine create --driver generic --generic-ip-address 127.0.0.1 --generic-ssh-user <"ssh_username"> <node_name>
Running pre-create checks...
Creating machine...
(localhost) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Password:
Detecting the provisioner...
Password:
Error creating machine: Error detecting OS: Error getting SSH command: ssh command error:
command : cat /etc/os-release
err : exit status 1
output : cat: /etc/os-release: No such file or directory
Output of docker-machine ls
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
localhost - generic Running tcp://127.0.0.1:2376 Unknown Unable to query docker version: Cannot connect to the docker engine endpoint
Sorry for my English, I'm not native.
docker-machine is dangerous. I wouldn't recommend it for managing production servers, as it requires passwordless sudo and makes it very easy to damage your Docker installation. I managed to completely remove all containers and images from a server, not realizing the command I ran was not merely connecting to the server, but initializing it from scratch.
If you want to control multiple Docker daemons from a single CLI, try Docker Contexts.
Edit:
docker-machine's purpose is provisioning and managing machines with a Docker daemon.
It can be used both with local VMs and with various cloud providers. With a single command it can create and start a VM, then install and configure Docker on that new VM (including generating TLS certificates).
It can create an entire Docker Swarm cluster.
It can also install Docker on a physical machine, given SSH access with passwordless sudo (that is what the generic driver you tried to use is for).
Once a machine is fully provisioned with Docker, it can also set environment variables that configure the Docker CLI to send commands to the remote Docker daemon installed on that machine - see here for details.
Finally, one can also add machines with Docker manually configured by not using any driver - as described here. The only purpose of that is to allow for a unified workflow when switching between various remote machines.
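For example, the usual provision-and-point-the-CLI workflow with docker-machine looks roughly like this (the machine name and driver are just examples):

# Create and provision a new Docker-enabled VM
docker-machine create --driver virtualbox dev
# Print the DOCKER_HOST / DOCKER_TLS_VERIFY / DOCKER_CERT_PATH exports for it
docker-machine env dev
# Apply them to the current shell so the local docker CLI talks to that VM
eval "$(docker-machine env dev)"
docker ps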
However, as I stated before, docker-machine is dangerous - it can also remove existing VMs and, in the case of physical machines, reprovision them, thereby removing all existing images, containers, etc. A simple mistake can wipe a server clean. Not to mention it requires both key-based SSH and passwordless sudo, so if an unauthorized person gets their hands on an SSH key for a production server, then that's it - they have full root access to everything.
It is possible to use docker-machine with preexisting Docker installations safely - you need to add them without using any driver, as described here. In this scenario, however, most docker-machine commands won't work, so the only benefit is the easy generation of those environment variables for the Docker CLI that I mentioned before.
Docker Contexts are a new way of telling the Docker CLI which Docker daemon it's supposed to communicate with. They are essentially meant to replace all those environment variables docker-machine generates.
Since the Docker CLI only communicates with the Docker daemon, there is no risk of accidentally deleting a VM or reprovisioning an already configured physical machine. And since contexts are part of the Docker CLI, there is no need to install additional software.
On the other hand, Docker contexts cannot be used to create or provision new machines - one needs to either do that manually or use some other mechanism or tool (like Vagrant or some kind of template provided by the cloud provider).
So if you really need a tool that'll let you easily create, provision and remove Docker-enabled machines, then use docker-machine. If, however, all you want is to have a list of all your Docker-enabled machines in one place and a way to easily choose which one your local Docker CLI is supposed to talk to, Docker Contexts are a much safer alternative.
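A minimal sketch of the context-based workflow against an existing remote daemon reachable over SSH (the host and context names are placeholders):

# Register the remote daemon once
docker context create my-remote --docker "host=ssh://user@my-server"
# Switch the CLI to it (or target it per command with --context)
docker context use my-remote
docker ps                      # now runs against the remote daemon
docker --context default ps    # back to the local daemon for a single command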

Consistent hostname for Docker containers/VM across platforms?

I have a docker-compose.yml file that starts my application server. I then use a few different tools to run integration/stress tests against it.
However, on my Mac the application server starts on localhost, and on Windows it starts on the VirtualBox VM at 192.168.99.100.
I've heard that docker-machine ip is a cross-platform way to "find" your Docker containers, and it works for Windows:
Craig@Work-PC MINGW64 ~
$ docker-machine ip
192.168.99.100
But on my Mac:
craig$ docker-machine ip
Error: No machine name(s) specified and no "default" machine exists
Is there a way to use docker-machine or some other already established method for finding the container hostname in a cross-platform way?
My goal would be to have this run out-of-the-box without any config on both a new Mac, and new Windows, environment. I can update all my scripts to check for Windows vs Mac (vs Unix), but then I have to duplicate the logic in a bunch of different places.
Docker Machine provides a wrapper around setting $DOCKER_HOST in your environment. So you can check this variable: if it's defined, strip off the protocol and port number and that's your target IP; if it's not defined, you can use localhost.
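A rough sketch of that check in shell, assuming DOCKER_HOST (when set) looks like tcp://192.168.99.100:2376:

# Use the docker-machine VM's IP if DOCKER_HOST is set, otherwise assume a local daemon
if [ -n "$DOCKER_HOST" ]; then
  # strip the protocol prefix and the trailing port
  DOCKER_IP=$(echo "$DOCKER_HOST" | sed -e 's|^tcp://||' -e 's|:[0-9]*$||')
else
  DOCKER_IP=localhost
fi
echo "application server is reachable at $DOCKER_IP"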

Docker container communication on the dev machine

I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch. I am confused as to how I can create a container that can be used in production and on my local machine (mac). How are people providing configuration like this these days?
So far I have come up with having my process take environment variables as arguments, which I can pass to the container with docker run -e. It seems unlikely that I would be doing this type of thing in production.
I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch
If elasticsearch is running in its own container on the same host (managed by the same docker daemon), then you can link it to your own container (at the docker run stage) with the --link option (which sets environment variables)
docker run --link elasticsearch:elasticsearch --name <yourContainer> <yourImage>
See "Linking containers together"
In that case, your container config can be static and known/written in advance, as it will refer to the search machine as 'elasticsearch'.
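For example, combining the link with the environment-variable approach from the question could look like this (the image and variable names are made up):

# "elasticsearch" resolves inside the container thanks to the link,
# so the same value works on the dev machine and in production
docker run -d --link elasticsearch:elasticsearch \
  -e ELASTICSEARCH_URL=http://elasticsearch:9200 \
  --name myservice myservice-image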
How about writing it into the configuration file of your application and mounting the configuration directory into your container with -v?
To make it more organized, I use Ansible for orchestration. This way you can have a template of the configuration file for your application, while the actual parameters live in the variable file of the corresponding Ansible playbook, at a centralized location. Ansible is in charge of copying the template over to the desired location and doing the variable substitution for you. It also recently enhanced its Docker support.
Environment variables are absolutely fine (we use them all the time for this sort of thing) as long as you're using service names, not IP addresses. Even with IP addresses you'd have no problem as long as you only have one ES and you're willing to restart your service every time the ES IP address changes.
You should really ask someone who knows for sure how you resolve these things in your production environments, because you're unlikely to be the only person in your org who has had this problem -- connecting to a database poses the same problem.
If you have no constraints at all, then you should check out something like Consul from HashiCorp. It'll help you a lot with this problem, if you are allowed to use it.

Is dockerized Jenkins a good choice?

As mentioned in the title, I'm thinking about a dockerized Jenkins. I have a running container that runs all tests, but now I want to run some deployment jobs.
The files (.py, .conf, .sh) will be copied into folders which are mounted by another container (the app container). I have also seen some people recommend not using Docker for this.
Now I'm wondering if I should continue to use Jenkins in a container (in which case I must find a way to run the deployment script) or install it directly on the server?
If you are running dockerized Jenkins in production, it is good practice to have its data volume mounted on the Docker host.
I personally do not prefer dockerized Jenkins for production, due to the non-static IP for Jenkins and reliability issues with Docker networking. For non-production use, I dockerize Jenkins.
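For example, a host-mounted Jenkins home keeps jobs and configuration outside the container (the host path and image tag are illustrative):

# Persist Jenkins state on the host so the container itself stays disposable
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v /srv/jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts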
We're experimenting with containerizing Jenkins in production - the flexibility of being able to easily set up or move instances offsets the learning pain, and that pain is:
1 - Some build jobs are themselves containerized, requiring that you run Docker-in-Docker. This is possible by passing the host's docker.sock into the Jenkins container (more reading: https://getintodevops.com/blog/the-simple-way-to-run-docker-in-docker-for-ci); a sketch follows at the end of this answer. It requires that the host and the Jenkins container run identical versions of Docker, but I can live with that.
2 - SSH keys are a bigger issue. SSH agent forwarding in Docker is notorious for its unreliability, and we've always copied keys into containers (ignoring security questions for the context of this question). In an on-the-host Jenkins instance we put our SSH keys in Jenkins' home folder and everything works seamlessly. But dockerized Jenkins has its home folder inside a Docker volume, which is owned by the host system, so the keys end up with permissions that are too open. We got around this by copying the keys to a folder outside Jenkins' home, chown/chmod'ing those keys to the Jenkins container user, then adding the key path to the container's /etc/ssh/ssh_config (see the second sketch below).
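A sketch of the docker.sock approach from point 1 (the image tag and paths are illustrative, and the docker CLI must also be present inside the Jenkins image):

# Let the Jenkins container drive the host's Docker daemon instead of nesting a second one
docker run -d --name jenkins \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /srv/jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts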
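And a sketch of the key handling from point 2, assuming the official jenkins/jenkins image where the in-container user has uid/gid 1000; the paths and key name are placeholders:

# On the host: keep deploy keys outside the volume-backed Jenkins home
mkdir -p /srv/jenkins_keys
cp ~/.ssh/deploy_key /srv/jenkins_keys/
chown -R 1000:1000 /srv/jenkins_keys      # match the in-container jenkins user
chmod 600 /srv/jenkins_keys/deploy_key
# Mount that folder into the container, e.g. add: -v /srv/jenkins_keys:/srv/jenkins_keys

# Inside the image (e.g. a Dockerfile RUN step): point the ssh client at the key globally
echo "IdentityFile /srv/jenkins_keys/deploy_key" >> /etc/ssh/ssh_config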
