Docker cannot connect locally after swarm is set up

I am following part 3 of the Docker tutorial. Because my computer runs Windows, I use Docker Toolbox. Before part 3, I ran the command docker run -p 8080:80 test, and I could connect to 192.168.99.100:8080 successfully.
But when I create a swarm and deploy the docker-compose.yml, the deployment succeeds; docker service ls shows:
ID NAME MODE REPLICAS IMAGE PORTS
uskmy4zkflhf testswarm_web replicated 5/5 ***/get-started:test *:6666->80/tcp
However, when I try to open 192.168.99.100:6666, the page cannot be displayed, even though ping shows that 192.168.99.100 is reachable.
Even when I uninstall the Toolbox, reinstall it, and deploy only once, so that the port is published exactly once and no other container occupies it, it still does not work.
What could be the problem?

The port publishing mechanism works differently in standalone and swarm mode. If you're using a compose file in swarm mode, you should not be using docker-compose up but docker stack deploy instead.
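For reference, deploying a compose file to a swarm looks roughly like this (a sketch, assuming your file is named docker-compose.yml and the stack is called testswarm, as your service name testswarm_web suggests):
docker swarm init
docker stack deploy -c docker-compose.yml testswarm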
I would suggest taking it step by step: instead of using the stack deploy or compose approach, first learn to use the docker service create command, one service at a time.
Try docker service create --name proxy --publish 8080:80 nginx and see if you can reach NGINX at 192.168.99.100:8080. Once you're there, try scaling it with docker service update --replicas=5 proxy.
Once you feel comfortable with this, you should be able to tell what's going on with more precision.
If you want to delve deeper into how port publishing works in swarm mode, I suggest this docs article.

Related

Nginx: setting up multiple websites with SSL/Docker

I need to set up a server that does multiple things:
Hosts a Grafana instance on docker (3000 is the default port)
Hosts a flask service for printing Grafana reports (the default Grafana printing sucks so I built a Selenium robot to grab the objects on the screen, create a PDF and download the results)
Hosts a Docker app built with Wappler (PHP-based app builder)
I'd like to use free certs (Let's Encrypt).
I'm new to docker and new to linux server administration. What's the best resource for learning how to set this up?
It's super easy to set up reverse proxies using the LinuxServer LetsEncrypt container (it's an Nginx container that auto-manages free certs). The initial setup might seem a little intimidating if you're completely new to Docker, but it's easier than it looks, and once you get the hang of it, it's cake.
Other than that, you just need to make sure to place all 3 in the same Docker network so they can talk to each other, and (if you want) also expose their ports to the host in your docker run command or docker compose file.
E.g. (pseudo-code):
docker run -d --name grafana -p 3000:3000 grafana/grafana
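Extending that pseudo-code, a sketch of the shared-network idea (the network name and the two custom image names are placeholders for whatever you actually build):
docker network create web
docker run -d --name grafana --network web -p 3000:3000 grafana/grafana
# hypothetical images for the Flask report service and the Wappler app;
# containers on the same user-defined network can reach each other by name
docker run -d --name reports --network web my-flask-reports
docker run -d --name wappler --network web my-wappler-app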
For anyone who ends up finding this via a search, I ended up using Traefik for routing/SSL setup. The best article I found on how to set this up is here.
(Note many articles reference Traefik 1.7, however, they changed a lot between 1.7 and version 2. The article above uses Traefik 2.0)
Basically, the way Traefik works is that it watches the other Docker containers on the same network; if a container carries specific labels in its Docker configuration, Traefik automatically generates Let's Encrypt SSL certs (see the docs) and routes traffic to that container.
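To make that concrete, here is a hedged sketch of the label-based setup with Traefik 2.x (the hostname is a placeholder, and it assumes a certificate resolver named letsencrypt and an entrypoint named websecure are configured on the Traefik side):
docker run -d --name grafana --network web \
  -l 'traefik.enable=true' \
  -l 'traefik.http.routers.grafana.rule=Host(`grafana.example.com`)' \
  -l 'traefik.http.routers.grafana.entrypoints=websecure' \
  -l 'traefik.http.routers.grafana.tls.certresolver=letsencrypt' \
  grafana/grafana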

Docker Swarm - Unable to connect to other containers because IP lookup fails

Say we provision an overlay network using docker swarm and create various containers with the following names:
Alice
Bob
Larry
John
Now if we try to ping any container from another, it fails because name resolution does not work: alice does not know bob's IP, and so on. We have been taking care of this by manually editing /etc/hosts on every container and entering the name/IP key-value pairs in that file, but this is becoming very tedious with every restart of our network. There ought to be a better way of handling this.
Services created using docker stack, for example, do not suffer from this problem. For various reasons we are stuck with creating containers using the vanilla docker create. How can we make containers discover each other on the overlay network without the manual labor of editing /etc/hosts?
Below is the detailed workflow we currently have to follow (a sketch follows this list):
We first provision a docker swarm and an overlay network.
Then for each container, we create it using the docker create command and start it using the docker start command. We use the --network flag to attach the container to the overlay network at creation time.
We then use docker container inspect to get the IP address of each container. This involves running n commands and noting down each IP address.
Then we log into each container and edit the /etc/hosts file by hand, entering the (name, IP) key-value pairs of the other containers. This means entering n*(n-1) records by hand when summed across containers.
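A minimal sketch of that workflow (network and container names are placeholders; note that a standalone container can only join an overlay network created with --attachable):
docker network create --driver overlay --attachable app-net
docker create -it --name alice --network app-net alpine
docker start alice
# repeat for bob, larry, john, then note down each IP by hand:
docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' alice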
Not sure why docker create does not do all this automatically - docker already knows (or can know) all the IP addresses. Containers provisioned using docker stack, for example, do not have to go through this manual process to "discover" each other. The reasons we cannot use docker stack are:
it does not allow us to specify the container name
we run various commands (mostly docker cp) before starting the container, and this is not possible using stack
You might have seen this already: DNS on user-defined networks
Have you created your services as in the section "Attach a service to an overlay" in this doc?
It seems that the only thing needed is to refer to the containers by {name}.{network} instead of just {name}. No need to edit /etc/hosts, use the --add-host flag, or run an additional DNS server. See https://forums.docker.com/t/need-help-connecting-containers-in-swarm-mode/77944/6
Further details: the official Docker documentation does not mention anywhere the necessity of adding the .{network} suffix to the {containername}. Indeed, at this link, step #7 under the walk-through, no .{network} suffix is used. So I am not sure why we need to do that. The version of Docker we are using is 18.06.1-ce for Linux.
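Assuming the alice/bob containers are attached to an overlay network named app-net as in the sketch above, the lookup that works would then be:
docker exec alice ping -c 2 bob.app-net   # resolves, while plain 'bob' may not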
I had a similar issue: I was following this official tutorial to create a docker swarm overlay network on two Raspberry Pi 3 boards, and pinging was impossible until I found the answer on GitHub. As I understand it, the latest version of alpine (for a reason unknown to me) is not suitable for the Raspberry Pi 3, so the solution was to use version 3.12.3, like this: sudo docker run -dit --name alpine1 --network test1 alpine:3.12.3
Hope this might help someone :)

Docker for Windows swarm overlay networking, connecting to the swarm from outside or localhost

I cannot connect to the published port on the swarm that uses overlay networking. I am using Docker for Windows with Windows containers. Both Windows and Docker are fully upgraded. After Windows' 1709 update, I was hoping this issue would be resolved. I looked for information on the Internet to see if I was doing something wrong, to no avail. I would like to know if anyone has been able to get this working.
On a side note, when I publish the port on my machine with docker run -p 80:80 without using swarm, "localhost" does not work either. I think this is a known limitation though. Both issues go away when I switch to Linux containers.
Expected behavior
I am running a dotnet kestrel web server service. I should be able to connect to my service using the published port.
Actual behavior
Firefox gives me a timeout; Opera straight away returns connection refused. I cannot telnet into it either. The container IPs assigned by the overlay network do not work either.
Information
docker service ls gives me this:
Ports cannot be seen there; is it because the publish mode is host? The port information is available in the output of docker service ps.
And when I change the publish mode, I can scale it as well, and the port information is shown in docker service ls, albeit I still cannot connect. The one below is without the publish mode=host parameter:
For more info, this is the output of docker network ls. I wonder if I need some sort of bridge network like on Linux.
Steps to reproduce the behavior
Initialise the swarm.
Start the service, in my case a simple web service built using the aspnetcore:latest image. I tried different parameters and even used a docker-stack.yml:
docker service create --name=web --publish mode=host,published=80,target=80 web:aspnetcorelatest (in this case I was unable to scale it on the same machine, which is normal, I guess)
docker service create --name=web --publish published=85,target=80 web:aspnetcorelatest
Try to connect using http://localhost or another IP. I tried connecting over VPN, from another machine, and via the Internet IP.
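As a debugging aid, inspecting the service shows how the port is actually published, independently of what docker service ls displays (a sketch, using the service name web from the commands above):
docker service inspect --format '{{json .Spec.EndpointSpec.Ports}}' web   # what was requested
docker service inspect --format '{{json .Endpoint.Ports}}' web            # what the swarm actually published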

Docker in docker and docker compose block one port for no reason

Right now I am setting up an application that has a deployment based upon docker images.
I use gitlab ci to:
Test each service
Build each service
Dockerize each service (build a docker image)
Run integration tests (start docker compose that starts all services on special ports, run integration tests)
Stop the prod containers and run the new images
I did this for each service, but I ran into an issue.
When I start my docker containers for the integration tests, it happens within a GitLab CI task. A docker-based runner is used for each task. I also mount my host's docker socket to be able to use docker-in-docker.
So my gradle docker image is started by the GitLab runner. Then docker is installed and all images are started using docker compose.
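For context, the socket-mount flavor of docker-in-docker looks roughly like this (a sketch; the image tag and the mounted workspace path are placeholders for whatever the runner actually executes):
# reuse the host daemon instead of nesting a full docker engine
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD:/build" -w /build \
  docker/compose:1.24.0 up -d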
One microservice listens on port 10004. Within the docker compose file there is an 11004:10004 port mapping.
My integration tests try to connect to port 11004, but this does not work right now.
When I attach to the container that runs docker compose while it tries to execute the integration tests, I am not able to connect manually either, by calling
wget ip:port
I just get the message that it connected and is waiting for a response. Nor can my tests connect successfully. My service does not log any message about a new connection.
When I execute this wget command from my host shell, it works.
It's a public IP, and from within my container I can also connect to other ports using telnet and wget. Just one port of one service is broken when I try to connect from my docker-in-docker instance.
When I do not use docker compose, it works. Docker compose seems to set up a special default network that does something weird.
Setting the network to host also works...
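A quick way to confirm the host-network observation (a sketch; port 11004 as in the mapping above):
docker run --rm --network host busybox wget -qO- http://127.0.0.1:11004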
So has anyone had a similar experience when using docker compose?
The same setup works flawlessly in Docker for Mac, but my server runs Debian 8.
My solution for now is to use a shell runner to avoid docker in docker issues. It works there as well.
So docker in docker combined with docker compose seems to have an ugly bug.
I'm writing this while sitting on the subway, but I hope the description of my issue is sufficient to talk about experiences. I don't think we need source code to find bad configurations, because it works without docker-in-docker and on Mac.
I figured out that docker-in-docker still has some weird behaviors. I fixed my issue by adding a new GitLab CI runner that is a shell runner. Therefore docker-compose runs on my host and everything works flawlessly.
I can reuse the same runner for starting docker images in production as I do for integration testing, so the easy fix has another benefit for me.
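Registering such a shell runner is a one-liner (a sketch; the URL and token are placeholders for your GitLab instance):
sudo gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token YOUR_TOKEN \
  --executor shell \
  --description "shell runner for docker-compose"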
The result is a best practice to avoid pitfalls:
Only use docker in docker when there is a real need.
For example, to ensure fast I/O communication between your host docker image and the docker image of interest.
Have fun using docker (in docker (in docker)) :]

Deploy Cloudsuite benchmark using Docker swarm mode

I want to deploy a benchmark called CloudSuite using swarm mode, to leverage its benefits in distributing work between hosts. The case (explained here) I am trying to use has 4 components:
memcached server
web server
db server
faban client
The way of deploying the benchmark explained in the documentation is by using docker run. For instance, this command is used to deploy the web server:
$ docker run -dt --net=host --name=web_server cloudsuite/web-serving:web_server \
/etc/bootstrap.sh ${DATABASE_SERVER_IP} ${MEMCACHED_SERVER_IP} ${MAX_PM_CHILDREN}
As you can see, it has a custom entry point and also some additional parameters; it is the same for the other components. I have two questions regarding this situation:
1- Can I use services in swarm mode to deploy these containers? How am I supposed to give the entry point and parameters in the command for creating the service?
2- As I have understood so far, services are for containers that provide long-term services, like nginx or a mysql server. But my last component, the faban client, is not a long-term thing: it just starts, sends some requests to the other components, and gathers some results. And I also need to get those results out of this container. Can it be a service as well?
I've read the documentation of docker, docker swarm, and a lot of other posts about it, but I'm still not sure whether I am understanding docker swarm correctly.
Well, I could not find a way to use swarm mode to deploy the benchmark, so the first question may still be open.
However, I found out about the second question. Since the client component of the benchmark is not a long-running service, it should not be implemented as one. Using docker swarm (the standalone product, not swarm mode) it is easy to deploy an overlay network and run all components communicating with each other. You can check my repository on GitHub for the bash scripts of such a deployment. To show how the client component should be run, here is my line of code for it:
sudo docker -H :4000 run \
--network web-serving-network \
--name faban_client \
cloudsuite/web-serving:faban_client ${WEB_SERVER_IP}
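As a side note on the first question: docker service create does accept an --entrypoint override and trailing arguments after the image name, so a sketch along these lines might be worth trying (untested; it assumes an overlay network named web-serving-network, and the --net=host from the docker run instructions may behave differently for services):
docker network create --driver overlay web-serving-network
docker service create --name web_server --network web-serving-network \
  --entrypoint /etc/bootstrap.sh \
  cloudsuite/web-serving:web_server \
  "$DATABASE_SERVER_IP" "$MEMCACHED_SERVER_IP" "$MAX_PM_CHILDREN"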
