How can I remotely connect to docker swarm? - docker

Is it possible to execute commands on a docker swarm cluster hosted in cloud from my local mac? If yes, how?
I want to execute command such as following on docker swarm from my local:
docker secret create my-secret <address to local file>
docker service create --name x --secret my-secret image

The answer to the question can be found here.
What one needs to do on an Ubuntu machine is define a daemon.json file at /etc/docker with the following content:
{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
}
The above configuration is unsecured and shouldn't be used if the server is publicly reachable.
For a secured connection, use the following config:
{
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "hosts": ["tcp://x.x.x.y:2376", "unix:///var/run/docker.sock"]
}
Details for generating the certificates can be found here, as mentioned by @BMitch.
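Once the daemon is listening on 2376 with TLS, you can point the docker CLI on your Mac at it. A minimal sketch, assuming the CA, client cert, and key from that guide live in ~/.docker and x.x.x.y is the address from the config above:
# run from your local mac
docker --tlsverify \
  --tlscacert=$HOME/.docker/ca.pem \
  --tlscert=$HOME/.docker/cert.pem \
  --tlskey=$HOME/.docker/key.pem \
  -H tcp://x.x.x.y:2376 info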

One option is to provide direct access to the docker daemon as suggested in the previous answers, but that requires setting up TLS certificates and keys, which can itself be tricky and time-consuming. Docker Machine can automate that process when it has been used to create the nodes.
I had the same problem, in that I wanted to create secrets on the swarm without uploading the file containing the secret to the swarm manager. Also, I wanted to be able to run the deploy stackfile (e.g. docker-compose.yml) without the hassle of first uploading the stackfile.
I wanted to be able to create the few servers I needed on e.g. DigitalOcean, not necessarily using docker machine, and be able to reproducibly create the secrets and run the stackfile. In environments like DigitalOcean and AWS, a separate set of TLS certificates is not used, but rather the ssh key on the local machine is used to access the remote node over ssh.
The solution that worked for me was to run the docker commands using individual commands over ssh, which allows me to pipe the secret and/or stackfile using stdin.
To do this, you first need to create the DigitalOcean droplets and get docker installed on them, possibly from a custom image or snapshot, or simply running the commands to install docker on each droplet. Then, join the droplets into a swarm: ssh into the one that will be the manager node, type docker swarm init (possibly with the --advertise-addr option if there is more than one IP on that node, such as when you want to keep intra-swarm traffic on the private network) and get back the join command for the swarm. Then ssh into each of the other nodes and issue the join command, and your swarm is created.
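A rough sketch of those steps (the IP addresses are placeholders, and the real join token is printed by docker swarm init):
# on the droplet that will be the manager (10.0.0.2 is an assumed private IP)
ssh root@manager-public-ip 'docker swarm init --advertise-addr 10.0.0.2'
# swarm init prints a join command; run it on each of the other droplets
ssh root@worker-public-ip 'docker swarm join --token <token-from-init-output> 10.0.0.2:2377'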
Then, export the ssh command you will need to issue commands on the manager node, like
export SSH_CMD='ssh root@159.89.98.121'
Now, you have a couple of options. You can issue individual docker commands like:
$SSH_CMD docker service ls
You can create a secret on your swarm without copying the secret file to the swarm manager:
$SSH_CMD docker secret create my-secret - < /path/to/local/file
$SSH_CMD docker service create --name x --secret my-secret image
(Using - to indicate that docker secret create should accept the secret on stdin, and then piping the file to stdin using ssh)
You can also create a script to be able to reproducibly run commands to create your secrets and bring up your stack with secret files and stackfiles only on your local machine. Such a script might be:
$SSH_CMD docker secret create rabbitmq.config.01 - < rabbitmq/rabbitmq.config
$SSH_CMD docker secret create enabled_plugins.01 - < rabbitmq/enabled_plugins
$SSH_CMD docker secret create rmq_cacert.pem.01 - < rabbitmq/cacert.pem
$SSH_CMD docker secret create rmq_cert.pem.01 - < rabbitmq/cert.pem
$SSH_CMD docker secret create rmq_key.pem.01 - < rabbitmq/key.pem
$SSH_CMD docker stack up -c - rabbitmq_stack < rabbitmq.yml
where secrets are used for the certs and keys, and also for the configuration files rabbitmq.config and enabled_plugins, and the stackfile is rabbitmq.yml, which could be:
version: '3.1'
services:
  rabbitmq:
    image: rabbitmq
    secrets:
      - source: rabbitmq.config.01
        target: /etc/rabbitmq/rabbitmq.config
      - source: enabled_plugins.01
        target: /etc/rabbitmq/enabled_plugins
      - source: rmq_cacert.pem.01
        target: /run/secrets/rmq_cacert.pem
      - source: rmq_cert.pem.01
        target: /run/secrets/rmq_cert.pem
      - source: rmq_key.pem.01
        target: /run/secrets/rmq_key.pem
    ports:
      # stomp, ssl:
      - 61614:61614
      # amqp, ssl:
      - 5671:5671
      # monitoring, ssl:
      - 15671:15671
      # monitoring, non ssl:
      - 15672:15672
  # nginx here is only to show another service in the stackfile
  nginx:
    image: nginx
    ports:
      - 80:80
secrets:
  rabbitmq.config.01:
    external: true
  rmq_cacert.pem.01:
    external: true
  rmq_cert.pem.01:
    external: true
  rmq_key.pem.01:
    external: true
  enabled_plugins.01:
    external: true
(Here, the rabbitmq.config file sets up the ssl listening ports for stomp, amqp, and the monitoring interface, and tells rabbitmq to look for the certs and key within /run/secrets. Another alternative for this specific image would be to use the environment variables provided by the image to point to the secrets files, but I wanted a more generic solution that did not require configuration within the image)
Now, if you want to bring up another swarm, your script will work with that swarm once you have set the SSH_CMD environment variable, and you need neither set up TLS nor copy your secret or stackfiles to the swarm filesystem.
So, this doesn't solve the problem of creating the swarm (whose existence was presupposed by your question), but once it is created, using an environment variable (exported if you want to use it in scripts) will allow you to use almost exactly the commands you listed, prefixed with that environment variable.

This is the easiest way of running commands on a remote docker engine:
docker context create --docker host=ssh://myuser@myremote myremote
docker --context myremote ps -a
docker --context myremote secret create my-secret <address to local file>
docker --context myremote service create --name x --secret my-secret image
or
docker --host ssh://myuser@myremote ps -a
You can even set the remote context as default and issue commands as if it is local:
docker context use myremote
docker ps # lists remote running containers
In this case you don't even need to have the docker engine installed locally, just docker-ce-cli.
You need to use key-based authentication for this to work (you should already be using it). Other options include setting up a TLS cert-based socket, or SSH tunnels.
Also, consider setting up an SSH control socket to avoid re-authenticating on each command, so your commands will run faster, almost as if they were local.
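For example, a sketch of an ~/.ssh/config entry that multiplexes connections to the manager (the host alias and control path are assumptions):
Host swarm-manager
    HostName 159.89.98.121
    User root
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
With that in place, docker --host ssh://swarm-manager ... reuses one authenticated connection instead of opening a new one per command.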

To connect to a remote docker node, you should set up TLS on both the docker host and the client, signed from the same CA. Take care to limit what keys you sign with this CA, since it is used to control access to the docker host.
Docker has documented the steps to setup a CA and create/install the keys here: https://docs.docker.com/engine/security/https/
Once configured, you can connect to the newer swarm mode environments using the same docker commands you run locally on the docker host just by changing the value of $DOCKER_HOST in your shell.
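For example, a sketch assuming the client certs from that guide are in ~/.docker and the manager is reachable at swarm.example.com:
export DOCKER_HOST=tcp://swarm.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker   # must contain ca.pem, cert.pem and key.pem
docker node ls                          # now runs against the remote swarm manager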

If you start from scratch, you can create the manager node using the generic docker-machine driver. Afterwards you will be able to connect to that docker engine from your local machine with the help of the docker-machine env command.
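A sketch of what that looks like (the IP, SSH user, and key path are assumptions):
docker-machine create --driver generic \
  --generic-ip-address=203.0.113.10 \
  --generic-ssh-user=root \
  --generic-ssh-key=$HOME/.ssh/id_rsa \
  swarm-manager
eval $(docker-machine env swarm-manager)   # point the local docker CLI at that engine
docker swarm init                          # engine commands now run remotely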

Related

Why do I need to be in Swarm mode to use Docker secrets?

I am playing around with a single-container docker image. I would like to store my db password as a secret without using compose (I'm having problems with that and Gradle for now). I thought I could still use secrets even without compose, but when I try I get...
$ echo "helloSecret" | docker secret create helloS -
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
Why do I need to use swarm mode just to use secrets? Why can't I use them without a cluster?
You need to run swarm mode for secrets because that's how docker implemented secrets. The value of secrets is that workers never write the secret to disk, the secret is on a need-to-know basis (other workers do not receive the secret until a task is scheduled there), and managers encrypt the secret on disk. The storage of the secret on the manager uses the raft database.
You can easily deploy a single node swarm cluster with the command docker swarm init. From there, docker-compose up gets changed to docker stack deploy -c docker-compose.yml $stack_name.
Secrets and configs in swarm mode provide a replacement for mounting single file volumes into containers for configuration. So without swarm mode on a single node, you can always make the following definition:
version: '2'
services:
  app:
    image: myapp:latest
    volumes:
      - ./secrets:/run/secrets:ro
Or you can separate the secrets from your app slightly by loading those secrets into a named volume. For that, you could do something like:
tar -cC ./secrets . | docker run -i -v secrets:/secrets busybox tar -xC /secrets
And then mount that named volume:
version: '2'
volumes:
  secrets:
    external: true
services:
  app:
    image: myapp:latest
    volumes:
      - secrets:/run/secrets:ro
Check out this answer, as provided by user sel-en-ium: https://serverfault.com/a/936262
You can use secrets if you use a compose file. (You don't need to run a swarm.)
You use a compose file with docker-compose: there is documentation for "secrets" in a docker-compose.yml file.
I switched to docker-compose because I wanted to use secrets. I am happy I did, it seems much more clean. Each service maps to a container. And if you ever want to switch to running a swarm instead, you are basically already there.
Unfortunately the secrets are not loaded into the container's environment, they are mounted to /run/secrets/
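For reference, a minimal docker-compose.yml sketch of that approach, using a file-based secret (the file name and service are assumptions, not taken from the quoted answer):
version: '3.1'
services:
  app:
    image: myapp:latest
    secrets:
      - db_password          # mounted at /run/secrets/db_password inside the container
secrets:
  db_password:
    file: ./db_password.txt  # a local file, no swarm required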

Calling docker stack deploy on a docker host from within a Jenkins container

On my OS X host, I'm using Docker CE (18.06.1-ce-mac73 (26764)) with Kubernetes enabled and using Kubernetes orchestration. From this host, I can run a stack deploy to deploy a container to Kubernetes using this simple docker-compose file (kube-compose.yml):
version: '3.3'
services:
  web:
    image: dockerdemos/lab-web
    volumes:
      - "./web/static:/static"
    ports:
      - "9999:80"
and this command-line run from the directory containing the compose file:
docker stack deploy --compose-file ./kube-compose.yml simple_test
However, when I attempt to run the same command from my Jenkins container, Jenkins returns:
this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again
I do not want the docker client in the Jenkins container to be initialized for a swarm since I'm not using Docker swarm on the host.
The Jenkins container is defined in a docker-compose file that includes a volume mount of the docker host socket endpoint:
version: '3.3'
services:
  jenkins:
    # contains embedded docker client & blueocean plugin
    image: jenkinsci/blueocean:latest
    user: root
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - ./jenkins_home:/var/jenkins_home
      # run Docker from the host system when the container calls it.
      - /var/run/docker.sock:/var/run/docker.sock
      # root of simple project
      - .:/home/project
    container_name: jenkins
I have also followed this guide to proxy requests to the docker host with socat: https://github.com/docker/for-mac/issues/770 and here: Docker-compose: deploying service in multiple hosts.
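(For reference, the socat proxy from that issue is roughly a one-liner along these lines; the image name and port here are assumptions, not my exact setup:)
docker run -d --name docker-api-proxy -p 1234:1234 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  alpine/socat tcp-listen:1234,fork,reuseaddr unix-connect:/var/run/docker.sock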
Finally, I'm using the following Jenkins definition (Jenkinsfile) to call stack to deploy on my host. Jenkins has the Jenkins docker plug-in installed:
node {
  checkout scm
  stage ('Deploy To Kube') {
    docker.withServer('tcp://docker.for.mac.localhost:1234') {
      sh 'docker stack deploy app --compose-file /home/project/kube-compose.yml'
    }
  }
}
I've also tried changing the withServer signature to:
docker.withServer('unix:///var/run/docker.sock')
and I get the same error response. I am, however, able to telnet to the docker host from the Jenkins container so I know it's reachable. Also, as I mentioned earlier, I know the message is saying to run swarm init, but I am not deploying to swarm.
I checked the version of the docker client in the Jenkins container and it is the same version (Linux variant, however) as I'm using on my host:
Docker version 18.06.1-ce, build d72f525745
Here's the code I've described: https://github.com/ewilansky/localstackdeploy.git
Please let me know if it's possible to do what I'm hoping to do from the Jenkins container. The purpose for all of this is to provide a simple, portable demonstration of a pipeline and deploying to Kubernetes is the last step. I understand that this is not the approach that would be taken anywhere outside of a local development environment.
Here is an approach that's working well for me until the Jenkins Docker plug-in or the Kubernetes Docker Stack Deploy command can support the remote deployment scenario I described.
I'm now using the Kubernetes client kubectl from the Jenkins container. To minimize the size increase of the Jenkins container, I added just the Kubernetes client to the jenkinsci/blueocean image that was built on Alpine Linux. This Dockerfile shows the addition:
FROM jenkinsci/blueocean
USER root
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
RUN mkdir /root/.kube
COPY kube-config /root/.kube/config
I took this approach, which added ~100 MB to the image size, rather than installing the Alpine Linux Kubernetes package, which almost doubled the size of the image in my testing. Granted, the Kubernetes package has all the Kubernetes components, but all I needed was the Kubernetes client. This is similar to the requirement that the docker client be present in the Jenkins container in order to run Docker commands on the host.
Notice in the Dockerfile that there is a reference to the Kubernetes config file:
kube-config /root/.kube/config
I started with the Kubernetes configuration file on my host machine (the computer running Docker for Mac). I believe that if you enable Kubernetes in Docker for Mac, the Kubernetes client configuration will be present at ~/.kube/config. If not, install the Kubernetes client tools separately. In the Kubernetes configuration file that you will copy over to the Jenkins container via the Dockerfile, just change the server value so that the Jenkins container is pointing at the Docker for Mac host:
server: https://docker.for.mac.localhost:6443
If you're using a Windows machine, I think you can use docker.for.win.localhost. There's a discussion about this here: https://github.com/docker/for-mac/issues/2705 and other approaches described here: https://github.com/docker/for-linux/issues/264.
After recomposing the Jenkins container, I was then able to use kubectl to create a deployment and service for my app that's now running in the Kubernetes Docker for Mac host. In my case, here are the two commands I added to my Jenkins file:
stage ('Deploy To Kube') {
  sh 'kubectl create -f /kube/deploy/app_set/sb-demo-deployment.yaml'
}
stage('Configure Kube Load Balancer') {
  sh 'kubectl create -f /kube/deploy/app_set/sb-demo-service.yaml'
}
There are loads of options for Kubernetes container deployments. In my case, I simply needed to deploy my web app (with replicas) behind a load balancer. All of that is defined in the two yaml files called by kubectl. This is a bit more involved than docker stack deploy, but achieves the same end result.
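For completeness, a minimal sketch of what such a pair of files could contain; the image, labels, and ports are placeholders, not the actual contents of the repository above:
# sb-demo-deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sb-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sb-demo
  template:
    metadata:
      labels:
        app: sb-demo
    spec:
      containers:
        - name: sb-demo
          image: myorg/sb-demo:latest   # placeholder image
          ports:
            - containerPort: 8080

# sb-demo-service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: sb-demo
spec:
  type: LoadBalancer
  selector:
    app: sb-demo
  ports:
    - port: 80
      targetPort: 8080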

Docker on Windows 10 Home - connect to the docker engine from inside a docker container

When creating a Jenkins Docker container, it is very useful to be able to connect to the Docker daemon. That way, I can start docker commands inside the Jenkins container.
For example, after starting the Jenkins Docker container, I would like to 'docker exec -it container-id bash' and start 'docker ps'.
On Linux you can bind-mount /var/run/docker.sock. On Windows this seems not to be possible. The solution is to use 'named pipes'. So, in my docker-compose.yml file I tried to create a named pipe.
version: '2'
services:
  jenkins:
    image: jenkins-docker
    build:
      context: ./
      dockerfile: Dockerfile_docker
    ports:
      - "8080:8080"
      - "50000:50000"
    networks:
      - jenkins
    volumes:
      - jenkins_home:/var/jenkins_home
      - \\.\pipe\docker_engine:\\.\pipe\docker_engine
      # - /var/run/docker.sock:/var/run/docker.sock
      # - /path/to/postgresql/data:/var/run/postgresql/data
      # - etc.
Starting docker-compose with this file, I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
How can I setup the docker-compose file so that I can use the docker.sock (or Docker) inside the started container?
On Linux you can use something like volumes: /var/run/docker.sock:/var/run/docker.sock. This does not work in a Windows environment. When you add this folder (/var) to Oracle VM VirtualBox, it never gets an IP, as noted in many posts.
You can expose the daemon on tcp://localhost:2375 without TLS in the settings. This way you can configure Jenkins to use the Docker API instead of the socket. I encourage you to read this article by Nick Janetakis about "Understanding how the Docker Daemon and the Docker CLI work together".
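A sketch of how the Jenkins container can then reach that API (host.docker.internal is the Docker Desktop alias for the host; on Docker Toolbox you would use the VM's address instead):
services:
  jenkins:
    image: jenkins-docker
    environment:
      # point the docker CLI inside the container at the daemon exposed on the host
      - DOCKER_HOST=tcp://host.docker.internal:2375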
And then there are several Docker plugins for Jenkins that allow this connection. Also, you can find additional information in the Docker plugin documentation on wiki.jenkins.io:
def dockerCloudParameters = [
  connectTimeout: 3,
  containerCapStr: '4',
  credentialsId: '',
  dockerHostname: '',
  name: 'docker.local',
  readTimeout: 60,
  serverUrl: 'unix:///var/run/docker.sock', // <-- replace with the tcp address
  version: ''
]
EDIT 1:
I don't know if it is useful, but the Docker daemon on Windows is located at C:\ProgramData\docker according to the Docker daemon configuration doc.
EDIT 2:
You need to explicitly tell the container to use the host network, because you want to expose both Jenkins and the Docker API.
Following this documentation, you only have to add --network=host (or network_mode: 'host' in docker-compose) to your container/service. For further information, you can read this article to understand the purpose of this network mode.
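In docker-compose terms that is just the following (a sketch; note that published ports are ignored in host network mode):
services:
  jenkins:
    image: jenkins-docker
    network_mode: host   # share the host's network stack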
My first try was to start a Docker environment using the "Docker Quickstart terminal". This is a good solution when running Docker commands within that environment.
However, installing a complete CI/CD Jenkins environment via Docker means that WITHIN the Jenkins Docker container you need to access the Docker daemon. After trying many solutions and reading many posts, this did not work. @Paul Rey, thank you very much for trying all kinds of routes.
A good solution is to get an Ubuntu virtual machine via Oracle VM VirtualBox. It is then VERY IMPORTANT to install Docker via this official description.
Before installing Docker, of course you need to install Curl, Git, etc.

How to read external secrets when using docker-compose

I wonder how I can pass external secrets into services spawned by docker-compose. I do the following:
I create a new secret:
printf "some secret value goes here" | docker secret create wallet_password -
My docker-compose.yml:
version: "3.4"
services:
test:
image: alpine
command: 'cat /run/secrets/wallet_password'
secrets:
- wallet_password
secrets:
wallet_password:
external: true
Then I run:
docker-compose -f services/debug/docker-compose.yml up -d --build
and
docker-compose -f services/debug/docker-compose.yml up
I get the following response:
WARNING: Service "test" uses secret "wallet_password" which is external. External secrets are not available to containers created by docker-compose.
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Starting debug_test_1 ...
Starting debug_test_1 ... done
Attaching to debug_test_1
test_1 | cat: can't open '/run/secrets/wallet_password': No such file or directory
Sooo.... is there any way of passing an external secret into a container spawned by docker-compose?
Nope.
External secrets are not available to containers created by docker-compose.
The error message sums it up pretty nicely. Secrets are a swarm mode feature, the secret is stored inside of the swarm manager engine. That manager does not expose those secrets to externally launched containers. Only swarm services with the secret can run containers with the secret loaded.
You can run a service in swarm mode that extracts the secret since it's just a file inside the container and the application inside the container can simply cat out the file contents. You can also replicate the functionality of secrets in containers started with compose by mounting a file as a volume in the location of the secret. For that, you'd want to have a separate compose file since the volume mount and secret mount would conflict with each other.
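For example, a sketch of that compose-only workaround for the question above (the local file name is an assumption):
version: "3.4"
services:
  test:
    image: alpine
    command: 'cat /run/secrets/wallet_password'
    volumes:
      # bind-mount a local file where the secret would otherwise appear
      - ./wallet_password.txt:/run/secrets/wallet_password:ro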
You need to run a swarm. This is how it goes:
Create a swarm:
docker swarm init
Create your secrets (as many as you need); the value is read from a file, or from stdin when you pass -:
docker secret create <secret_name> <file_with_secret>
Check all the available secrets with:
docker secret ls
Now, use the docker-compose file to deploy the stack:
docker stack deploy --compose-file <path_to_compose> <stack_name>
Be aware that you'll find your secrets as plain-text files located at /run/secrets/<secret_name> inside the containers.

What is the difference between docker and docker-compose

docker and docker-compose seem to interact with the same Dockerfile; what is the difference between the two tools?
The docker cli is used when managing individual containers on a docker engine. It is the client command line to access the docker daemon api.
The docker-compose cli can be used to manage a multi-container application. It also moves many of the options you would enter on the docker run cli into the docker-compose.yml file for easier reuse. It works as a front end "script" on top of the same docker api used by docker, so you can do everything docker-compose does with docker commands and a lot of shell scripting. See this documentation on docker-compose for more details.
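As a small illustration of that equivalence, these two are roughly the same thing (the image and options are just examples):
# plain docker cli
docker run -d --name web -p 8080:80 -v "$PWD/html:/usr/share/nginx/html:ro" nginx

# the same service expressed in docker-compose.yml and started with: docker-compose up -d
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro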
Update for Swarm Mode
Since this answer was posted, docker has added a second use of docker-compose.yml files. Starting with the version 3 yml format and docker 1.13, you can use the yml with docker-compose and also to define a stack in docker's swarm mode. To do the latter you need to use docker stack deploy -c docker-compose.yml $stack_name instead of docker-compose up, and then manage the stack with docker commands instead of docker-compose commands. The mapping is one-for-one between the two uses:
Compose Project -> Swarm Stack: A group of services for a specific purpose
Compose Service -> Swarm Service: One image and its configuration, possibly scaled up.
Compose Container -> Swarm Task: A single container in a service
For more details on swarm mode, see docker's swarm mode documentation.
docker manages single containers
docker-compose manages multiple container applications
Usage of docker-compose requires 3 steps:
Define the app environment with a Dockerfile
Define the app services in docker-compose.yml
Run docker-compose up to start and run the app
Below is a docker-compose.yml example taken from the docker docs:
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
A Dockerfile is a text document that contains all the commands/instructions a user could call on the command line to assemble an image.
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. By default, docker-compose expects the Compose file to be named docker-compose.yml or docker-compose.yaml. If the compose file has a different name, we can specify it with the -f flag.
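For example (the file name is just an illustration):
# compose file with a non-default name
docker-compose -f my-stack.yml up -d
docker-compose -f my-stack.yml down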
Check here for more details
docker, or more specifically the docker engine, is used when we want to handle a single container, whereas docker-compose is used when we have multiple containers to handle. We need multiple containers when we have more than one service to take care of, for example an application with a client-server model: one container for the server and another for the client. Docker Compose usually requires each container to have its own Dockerfile, plus a yml file that ties all the containers together.
