Connect to Docker Swarm for continuous deploy

Any suggestions on how best to connect to a swarm for continuous deploy (within CI)? I'm using Docker Cloud and CircleCI 2.
I tried dockercloud/client, e.g.:
docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock -e DOCKER_HOST dockercloud/client -u ${DOCKER_LOGIN} -p ${DOCKER_PASSWORD} myapp/app
However, since I'm using CircleCI 2, I hit the following issue when I switch to the other Docker host:
Cannot connect to the Docker daemon at tcp://XXX:XXX. Is the docker daemon running?
From what I understand, this is due to the remote Docker environment CircleCI sets up for security reasons, so I don't think it's possible.
What I would like to achieve is simply to connect to the swarm and call docker stack deploy ...
Any help would be appreciated.
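For context, the end state I'm after would look roughly like this (the manager endpoint and compose file name are placeholders; reaching that endpoint from CI is exactly the part in question):
export DOCKER_HOST=tcp://swarm-manager.example.com:2376   # hypothetical manager endpoint
docker stack deploy -c docker-compose.yml myapp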

Related

Can I Run Docker Exec from an external VM?

I have a group of Docker containers running on a host (172.16.0.1). Because of size restrictions on the host running the containers, I'm trying to set up an auto-test framework on a different host (172.16.0.2). I need my auto-test framework to be able to access the Docker containers. I've looked over the Docker documentation and I don't see anything that explains how to do this.
Is it possible to run a docker exec and point it at the Docker host? I was hoping to do something like the following, but there isn't an option to specify the host:
docker exec -h 172.16.0.1 -it my_container bash
Should I be using a different command?
Thank you!
I'm not sure why you need to run docker exec remotely, but it is achievable.
You need to make sure the Docker daemon on the host where your containers are running is listening on a network socket.
Something like this:
# Run the Docker daemon listening on both the local unix socket and a TCP socket
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
Now interact with the Docker daemon remotely from the external VM using:
$ docker -H tcp://<machine-ip>:2375 exec -it my-container bash
OR
$ export DOCKER_HOST="tcp://<machine-ip>:2375"
$ docker exec -it my-container bash
Note: exposing the Docker socket publicly on your network carries serious security risks, although there are ways to expose it over an encrypted HTTPS socket or over the SSH protocol.
Please go through these docs carefully before attempting anything:
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
https://docs.docker.com/engine/security/https/
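As a sketch, the same setup can be done through the daemon's config file instead of -H flags (note that on systemd distros you must first remove any -H flag from the service unit, or the daemon will refuse to start with conflicting options):
# /etc/docker/daemon.json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}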
If you have SSH access to both machines, you can easily execute commands on the remote daemon like this:
docker -H "ssh://username@remote_host" <your normal docker command>
# for example:
docker -H "ssh://username@remote_host" exec ...
docker -H "ssh://username@remote_host" ps
# and so on
Another way to do the same is to store the -H value in the DOCKER_HOST environment variable:
export DOCKER_HOST=ssh://username@remote_host
# now you can talk to the remote daemon with your regular commands;
# these will be executed on the remote host:
docker ps
docker exec ...
Without SSH you can make Docker listen on TCP instead. This requires some preparation to keep things secure; the HTTPS guide linked above walks through creating certificates and basic usage. After that, usage looks something like:
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H=172.16.0.1:2376 ps
Finally, you can use docker context to save remote hosts and their configuration. Contexts let you talk to various remote hosts with ease via the --context <name> option; see the docker context documentation for details.
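For example (the context name and SSH address are placeholders):
# save the remote host once...
docker context create my-remote --docker "host=ssh://username@remote_host"
# ...then target it per command
docker --context my-remote ps
# or make it the default for all commands
docker context use my-remote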

Running containers over an Ubuntu container

I need to separate the environments so my team can work without port conflicts. My idea was to use an Ubuntu container to run a lot of other containers and map just the ports we would use, without conflicts.
Unfortunately, after installing Docker inside the Ubuntu container, it gives the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Is it possible to run Docker inside containers? Does this idea work?
Also, if this is not the best way to solve the original problem, could you please suggest a better solution?
First question:
I think you have to bind-mount the host's Docker socket into your Ubuntu container:
-v /var/run/docker.sock:/var/run/docker.sock
Or, optionally, use the official docker image with the dind tag (Docker in Docker), which is based on Docker 18.09:
docker run --privileged --name some-docker -v /my/own/var-lib-docker:/var/lib/docker -d docker:dind
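A minimal usage sketch, assuming the some-docker container from above (the inner image is just an example): the dind image ships the docker CLI, so you can drive the inner daemon directly. To reach an inner container's ports from outside, you would also have to publish them on the outer docker run.
# run a nested container on the inner daemon and list it
docker exec some-docker docker run -d --name inner-app nginx
docker exec some-docker docker ps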
Second question:
Instead of the Ubuntu container running Docker, you could put a reverse proxy in front of your other service containers, for example Traefik or nginx.
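A rough sketch of the Traefik variant (image tag, hostnames, and labels are assumptions, not a tested setup; it also assumes each app image exposes a single port, which Traefik auto-detects): only the proxy publishes a host port, and every team service is reached by hostname instead of a dedicated port.
# one proxy owns port 80 and discovers containers via the Docker socket
docker run -d --name proxy -p 80:80 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  traefik:v2.10 --entrypoints.web.address=:80 --providers.docker=true
# each service declares its routing rule as a label
docker run -d --label 'traefik.http.routers.app1.rule=Host(`app1.localhost`)' myteam/app1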
You can also use Kubernetes: create a namespace per developer, then use nginx with a dynamic server_name to map URLs to the different namespaces.

How to convert a Docker run command into a Swarm command?

I have a setup which runs my Docker container like this:
run-docker.sh
docker build -t wordpress-gcloud .
container=$(docker run -d wordpress-gcloud)
ipOfContainer=$(docker inspect "$container" | jq -r '.[0].NetworkSettings.IPAddress')
But now I have setup a Docker Swarm (1 manager + 2 workers).
How should I convert the above bash script to run the container on the swarm?
Typically, you access your Swarm cluster via the Swarm API, which is similar to the Docker API. To do so, pass the -H parameter to your docker commands. For example, if you have a swarm manager running on your local machine on port 3376, you can get your swarm cluster info with:
docker -H 127.0.0.1:3376 info
You can also inspect the swarm cluster's containers with:
docker -H 127.0.0.1:3376 inspect <container ID>
More details about communicating with a Swarm cluster can be found here: https://docs.docker.com/swarm/install-manual/#/step-6-communicate-with-the-swarm
But in your case, I think the docker build command could be a problem. In my understanding, Swarm will pick a random node from your cluster to execute the docker build, so if the Dockerfile doesn't exist on the node where the build lands, you will get an error. My suggestion is to build your image in one place, push it to an image registry, then pull and run it anywhere you want.
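A sketch of that flow (the registry hostname is a placeholder; the Swarm endpoint is the one from the examples above):
# build and push where the Dockerfile lives
docker build -t registry.example.com/wordpress-gcloud .
docker push registry.example.com/wordpress-gcloud
# any node the Swarm manager schedules onto can now pull the image
docker -H 127.0.0.1:3376 run -d registry.example.com/wordpress-gcloud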

Docker Swarm - Map ports and Scaling

I am currently using Docker Engine 1.11, and I am investigating whether I can move to Docker 1.12 and use Swarm. I currently use Docker to run 50+ Bamboo agents, each of which needs a port mapped to a port on the server. For instance, each container needs port 4000 available, so I do:
docker run -p 10000:4000 myimg
docker run -p 10001:4000 myimg
docker run -p 10002:4000 myimg
docker run -p 10003:4000 myimg
In Docker Swarm, from what I understand, I would run the following command to scale my service to 50 containers:
docker service scale helloworld=50
But, if I did this, then they would all be trying to map to the same port. How can I accomplish this? Is it possible?
No, you can't.
A key feature of docker service is precisely that a single published port maps to multiple containers (service discovery).
Another is that when a container fails, Swarm starts a new one (self-healing).
I know nothing about Bamboo, so I can't tell you if there's a way to run the Bamboo service with swarm mode.
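For what it's worth, here is a swarm-mode sketch of that single-port behavior (the service name is arbitrary; whether Bamboo agents tolerate being load-balanced this way is a separate question):
# one published port is load-balanced across all replicas by the routing mesh
docker service create --name agents --publish 10000:4000 --replicas 50 myimg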

Running a container from a private registry with Docker Swarm

I'm trying to run an image from a private registry with docker swarm.
I have an image I've tagged and pushed to a private registry. If I run this locally:
docker run -p 8000:8000 -d registry.mydomain.com:8080/myimage
it runs fine.
If I activate my swarm and try to run it from there:
$(docker-machine env --swarm swarm-master)
docker login registry.mydomain.com:8080
docker run -p 8000:8000 -d registry.mydomain.com:8080/myimage
I get "Authentication is required".
I'm actually trying to do this via the docker remote API, but first I figure I should get it running on the command line.
Is this possible?
Thanks!
Just curious: you are using authentication, but no SSL? I think Docker only supports basic authentication over SSL. You could start the daemon with the insecure-registry flag to at least try out the capabilities of Swarm:
dockerd --insecure-registry registry.mydomain.com:8080
The error you are getting is probably the Docker Swarm host trying to pull the image from your registry first, since run is effectively shorthand for "pull this image and run it".
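As a sketch, assuming you configure the daemons through /etc/docker/daemon.json: every engine in the swarm needs the registry allowed, not just the one you run the command from.
# /etc/docker/daemon.json on every Swarm node
{
  "insecure-registries": ["registry.mydomain.com:8080"]
}
# then restart each daemon, e.g. sudo systemctl restart docker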
