Issue commands from a gitlab-runner inside a Docker container

I have a machine with multiple Docker containers for a project I am developing, and I have just set up a new container running GitLab Runner.
I need to run a few commands on all the other containers whenever a commit is pushed. Is there any way for the runner inside the GitLab Runner container to reach the other containers and tell them to execute commands, or even restart them?
We currently don't use SSH keys to access the server that hosts all the Docker containers; we use a username and password.

The safe way (and easier than using passwords) is to start using SSH keys and access the containers over the network, or at least to issue commands to the host over SSH from the gitlab-runner container.
Also, a Stack Overflow search turned up this: manage containers from another container, docker
Looks legit.
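For illustration, here is a minimal sketch of that approach from inside a CI job, assuming a deploy user on the Docker host and placeholder container names (app-web, app-worker):

# one-time setup: generate a key pair on the runner and install the public key on the host
ssh-keygen -t ed25519 -f ~/.ssh/ci_key -N ""
ssh-copy-id -i ~/.ssh/ci_key.pub deploy@docker-host

# from the CI job, run commands in (or restart) the other containers over SSH
ssh -i ~/.ssh/ci_key deploy@docker-host "docker exec app-web ./reload-config.sh"
ssh -i ~/.ssh/ci_key deploy@docker-host "docker restart app-web app-worker"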

Related

Use docker socket during image building [duplicate]

I would like to run integration test while I'm building docker image. Those tests need to instantiate docker containers.
Is there a possibility to access docker inside such multi stage docker build?
No, you can't do this.
You need access to your host's Docker socket somehow. In a standalone docker run command you'd do something like docker run -v /var/run/docker.sock:/var/run/docker.sock, but there's no way to pass that option (or any other volume mount) into docker build.
For running unit-type tests (that don't have external dependencies) I'd just run them in your development or core CI build environment, outside of Docker, and only run docker build once they pass. For integration-type tests (that do) you need to set up those dependencies, maybe with a Docker Compose file, which again will be easier to do outside of Docker. This also avoids needing to build your test code and its additional dependencies into your image.
(Technically there are two ways around this. The easier of the two is the massive security disaster that is opening up a TCP-based Docker socket; then your Dockerfile could connect to that ["remote"] Docker daemon and launch containers, stop them, kill itself off, impersonate the host for inbound SSH connections, launch a bitcoin miner that lives beyond the container build, etc...actually it allows any process on the host to do any of these things. The much harder, as @RaynalGobel suggests in a comment, is to try to launch a separate Docker daemon inside the container; the DinD image link there points out that it requires a --privileged container, which again you can't have at build time.)
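As a rough sketch of the difference (the docker:cli image tag, the postgres/redis services in a hypothetical Compose file, the test script, and the myapp image name are all placeholders): the socket mount works for docker run, but there is no equivalent for docker build, so integration dependencies are brought up before the build.

# a container launched this way can talk to the host's Docker daemon
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps

# docker build accepts no volume mounts, so run integration tests outside the build
docker compose up -d postgres redis     # start test dependencies
./run-integration-tests.sh              # run tests against them
docker compose down
docker build -t myapp:latest .          # build the image only after tests pass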

Is it possible to manage macOS Docker Desktop with Docker Machine?

I have Docker Desktop installed on my Mac (not Docker Toolbox), and I installed docker-machine according to the official documentation.
I'm trying to add my localhost Docker engine as a node under docker-machine, with no success.
The steps I took were:
Enable sshd in localhost (ssh localhost works)
Add localhost Docker to Docker Machine:
docker-machine create --driver generic --generic-ip-address 127.0.0.1 --generic-ssh-user <"ssh_username"> <node_name>
Running pre-create checks...
Creating machine...
(localhost) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Password:
Detecting the provisioner...
Password:
Error creating machine: Error detecting OS: Error getting SSH command: ssh command error:
command : cat /etc/os-release
err : exit status 1
output : cat: /etc/os-release: No such file or directory
Output of docker-machine ls
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
localhost - generic Running tcp://127.0.0.1:2376 Unknown Unable to query docker version: Cannot connect to the docker engine endpoint
Sorry for my English, I'm not native.
docker-machine is dangerous. I wouldn't recommend it for managing production servers, as it requires passwordless sudo and makes it very easy to damage your Docker installation. I once managed to completely remove all containers and images from a server, not realizing the command I ran was not merely connecting to the server but initializing it from scratch.
If you want to control multiple Docker daemons from single CLI try Docker Contexts.
Edit:
docker-machine's purpose is provisioning and managing machines with a Docker daemon.
It can be used both with local VMs and with various cloud providers. With a single command it can create and start a VM, then install and configure Docker on that new VM (including generating TLS certificates).
It can create an entire Docker Swarm cluster.
It can also install Docker on a physical machine, given SSH access with passwordless sudo (that is what generic driver you tried to use is for).
Once a machine is fully provisioned with Docker it also can set environment variables that configure Docker CLI to send commands to a remote Docker daemon installed on that machine - see here for details.
Finally, one can also add machines with Docker manually configured by not using any driver - as described here. The only purpose of that is to allow for a unified workflow when switching between various remote machines.
However, as I stated before, docker-machine is dangerous - it can also remove existing VMs and, in the case of physical machines, reprovision them, thereby removing all existing images, containers, etc. A simple mistake can wipe a server clean. Not to mention it requires both key-based SSH and passwordless sudo, so if an unauthorized person gets their hands on an SSH key for a production server, then that's it - they have full root access to everything.
It is possible to use docker-machine with preexisting Docker installations safely - you need to add them without using any driver as described here. In this scenario, however, most docker-machine commands won't work, so the only benefit is easy generation of those environment variables for Docker CLI I mentioned before.
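For reference, that environment-variable workflow looks roughly like this (my-server is a placeholder machine name):

# print the variables docker-machine would export for that machine
docker-machine env my-server

# apply them so the local docker CLI talks to the remote daemon
eval $(docker-machine env my-server)
docker ps                         # lists containers running on my-server

# unset them to go back to the local daemon
eval $(docker-machine env -u)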
Docker Contexts are a new way of telling Docker CLI which Docker daemon it's supposed to communicate with. They essentially are meant to replace all those environment variables docker-machine generates.
Since Docker CLI only communicates with Docker daemon, there is no risk of accidentally deleting a VM or reprovisioning already configured physical machine. And since they are a part of Docker CLI, there is no need to install additional software.
On the other hand, Docker contexts cannot be used to create or provision new machines - one needs to either do that manually or use some other mechanism or tool (like Vagrant or some kind of template provided by the cloud provider).
So if you really need a tool that'll let you easily create, provision and remove docker-enabled machines, then use docker-machine. If, however, all you want is to have a list of all your Docker-enabled machines in one place and a way to easily set which one your local Docker CLI is supposed to talk to, Docker Contexts are a much safer alternative.
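A sketch of the same remote-daemon setup done with contexts instead, assuming key-based SSH access (user name and address are placeholders):

# create a context pointing at a remote daemon over SSH
docker context create my-server --docker "host=ssh://deploy@203.0.113.10"

# make it the target of every subsequent docker command
docker context use my-server
docker ps

# return to the local daemon, or target a context per command
docker context use default
docker --context my-server ps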


What is the purpose of creating a volume between two docker.sock files in Docker?

I see this in many Docker apps that somehow rely heavily on networking, but I can't understand it.
Got it.
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
Let’s take a step back here. Do you really want Docker-in-Docker? Or do you just want to be able to run Docker (specifically: build, run, sometimes push containers and images) from your CI system, while this CI system itself is in a container?

I’m going to bet that most people want the latter. All you want is a solution so that your CI system like Jenkins can start containers. And the simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.

Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:

docker run -v /var/run/docker.sock:/var/run/docker.sock

Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting “child” containers, it will start “sibling” containers.
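A hedged sketch of that sibling-container pattern, using the official docker:cli image as a stand-in for a CI container (container and image names are placeholders):

# start the "CI" container with the host's Docker socket bind-mounted
docker run -d --name ci -v /var/run/docker.sock:/var/run/docker.sock docker:cli sleep 3600

# docker commands inside it go straight to the host daemon
docker exec ci docker ps

# containers it launches appear as siblings on the host, not children inside it
docker exec ci docker run -d --name hello nginx:alpine
docker ps --filter name=hello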

Is there a Better way to run a command or shell on Docker swarm

Lets say I want to edit a config file for an NGINX Docker service that is replicated across 3 nodes.
Currently I list the services using docker service ls.
Then get the details to find a node running a container for that service using docker service ps servicename.
Then ssh to a node where one of the containers is running.
Finally, docker exec -it containername bash. Then I edit the config file.
Two questions:
Is there a better way to do this rather than ssh to a node running a container? Maybe there is a swarm or service command to do so?
If I were to edit that config file on one container would that change be replicated to the other 2 containers in the swarm?
The purpose of this exercise would be to edit configuration without shutting down a service.
You should not be exec'ing into containers to change their configuration, and so docker has not created an easy way to do this within Swarm Mode. You could use classic swarm to avoid the need to ssh into the other host, but I still don't recommend this.
The correct way to do this is to migrate your configuration file into a docker config entry. Version your config name. Then when you want to update it, you create a new version with the desired changes, and do a rolling update of your service to use that new configuration.
Unless the config is mounted from an external source like NFS, changes to the config in one container will not apply to containers running on other nodes. If that config is stored locally inside your container as part of its internal copy-on-write filesystem, then no changes from one container will be visible in any other container.
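A minimal sketch of that versioned-config rollout (the config and service names are placeholders):

# create a new, versioned config entry from the edited file
docker config create nginx_conf_v2 ./nginx.conf

# roll the service over to it; Swarm replaces tasks one at a time
docker service update \
  --config-rm nginx_conf_v1 \
  --config-add source=nginx_conf_v2,target=/etc/nginx/nginx.conf \
  my_nginx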
