Containers started with docker-compose inside another container are unreachable

I'm using a dedicated container for running generic project-related shell scripts, in order to avoid having to test scripts on multiple environments (Mac, Windows, Ubuntu, Debian, ...) and to minimize software requirements on the host OS. Even docker-compose commands are run from the console container. /var/run/docker.sock is bind-mounted from the host.
Everything else seems to be working fine, but if, for example, I run docker-compose up traefik inside the console container, Traefik starts normally but is unreachable both from the host and from another container on the same network. If docker-compose up traefik is run from the host OS (Windows 10), Traefik becomes reachable as expected. I suspect this has something to do with how Docker or docker-compose handles networking, but I'm not completely sure. I did check that, regardless of how I start the Traefik container, the same ports appear instantly in NirSoft CurrPorts (a sort of GUI for netstat).
Is there any way (and how) to fix this?
EDIT
I realised that this must somehow be an error on my part, since dockerized Docker GUIs exist and they presumably have no problem bringing up containers that are accessible from the host and the outside world.
Now I'm wondering whether this might be a simple configuration error, either in my Docker (or docker-compose) settings or somewhere else on my host machine, or whether GUIs like Portainer go through some extra steps to expose the started containers to the host?
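For reference, the console container is started roughly like this (the image name and mount paths here are just examples, not my exact setup):
docker run -it --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":/project -w /project \
  console-image sh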

For development purposes Traefik's port is commonly mapped to 80, so I will assume the same in your case. Let's assume you are running the Traefik container on port 80, mapped to port 80 of its host. But from the Traefik container's point of view, the "host machine" is the container used for running the scripts, and port 80 of that shell script container is not in turn mapped to the actual host machine. This is where the two levels of port mapping get confusing.
[Image: diagram of the nested containers and their port mappings]
To make your setup work, you should deploy your containers as shown above, with port mapping at both levels.
To simplify the answer:
docker run -t -d -p 80:80 shellScriptImage
# then, inside the shell script container:
docker run -t -d -p 80:80 traefik
By doing this you can access the traefik container from the outside.
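(If you use Compose for the inner container, the equivalent port mapping would look roughly like this; the image name is just an example:)
services:
  traefik:
    image: traefik
    ports:
      - "80:80"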

Related

Docker networks: How to get container1 to communicate with server in container2

I have 2 containers on a Docker bridge network. One of them has an Apache server that I am using as a reverse proxy to forward users to the server in the other container. The other container has a server that is listening on port 8081. I have verified both containers are on the same network, and when I log into an interactive shell on each container I tested successfully that I am able to ping the other container.
The problem is that when I am logged into the container with the Apache server, I am not able to ping the actual server in the other container.
The IP address of the container with the server is 172.17.0.2.
How I create the Docker network:
docker network create -d bridge jakeypoo
How I start the containers:
docker container run -p 8080:8080 --network="jakeypoo" \
  --name="idpproxy" idpproxy:latest
docker run -p 8081:8080 --name geoserver --network="jakeypoo" geoserver:1.1.0
Wouldn't the URI to reach the server be http://172.17.0.2:8081/?
PS: I am sure more information will be needed; I am new to Stack Overflow and will happily answer any other questions I can.
Since you started the two containers on the same --network, you can use their --name as hostnames to talk to each other. If the service inside the second container is listening on port 8080, use that port number. Remappings with docker run -p options are ignored, and you don't need a -p option to communicate between containers.
In your Apache config, you'd set up something like
ProxyPass "/" "http://geoserver:8080/"
ProxyPassReverse "/" "http://geoserver:8080/"
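(For completeness: those directives have to live inside a VirtualHost, and the proxy modules must be loaded. A minimal sketch; module paths depend on your Apache build:)
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
<VirtualHost *:8080>
    ProxyPass "/" "http://geoserver:8080/"
    ProxyPassReverse "/" "http://geoserver:8080/"
</VirtualHost>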
It's not usually useful to look up the container-private IP addresses: they will change whenever you recreate the container, and in most environments they can't be used outside of Docker (and inside of Docker the name-based lookup is easier).
(Were you to run this under Docker Compose, it automatically creates a network for you, and each service is accessible under its Compose service name. You do not need to manually set networks: or container_name: options, and like the docker run -p option, Compose ports: are not required and are ignored if present. Networking in Compose in the Docker documentation describes this further.)
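As a sketch of that Compose setup (image names taken from your docker run commands; Compose puts both services on its default network automatically):
services:
  idpproxy:
    image: idpproxy:latest
    ports:
      - "8080:8080"   # only needed for access from the host
  geoserver:
    image: geoserver:1.1.0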
Most probably this is the reason:
When you log into one of the containers, that container does not know anything about the other container's network. When you ping, the container thinks you are trying to ping a service inside itself.
Try to use Docker Compose if you can use it in your context. See this link:
https://docs.docker.com/compose/

Accessing a service on the PROPER IP running in docker container on a Linux host

My problem is specific to k6 and InfluxDB, but I think the root cause is more general.
I'm using the official k6 distribution and its docker-compose.yml to run Grafana and InfluxDB, which I start with the docker-compose up -d influxdb grafana command.
The Grafana dashboard is accessible at localhost:3000, but running k6 with the recommended command $ docker run -i loadimpact/k6 run --out influxdb=http://localhost:8086/myk6db - <script.js (following this guide) throws the following error (on both Linux and macOS):
level=error msg="InfluxDB: Couldn't write stats" error="Post \"http://localhost:8086/write?consistency=&db=myk6db&precision=ns&rp=\": dial tcp 127.0.0.1:8086: connect: connection refused"
I tried the command with both localhost and 127.0.0.1 for InfluxDB, and also with the IP addresses returned by docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' k6_influxdb_1. It either failed with the above error or didn't work, meaning k6 didn't complain but no data appeared in InfluxDB.
However, if I use the "internal IP" address that the host's network interface is using (found with the ifconfig command, 192.168.1.66 in my case), everything works fine:
docker run -i loadimpact/k6 run --out influxdb=http://192.168.1.66:8086/k6db - <test.js
So my questions are:
Why does Grafana work fine on localhost:3000 while InfluxDB on localhost:8086 doesn't?
Why does only the "internal IP" work and no other IP?
I know there is a similar question, but that doesn't answer mine.
Docker containers run in an isolated network space. Docker can maintain internal networks, and there is Compose syntax to create them.
If you're making a call to a Docker container from outside Docker space but on the same host, you can usually connect to it as localhost, and the first port number listed in the Compose ports: section. If you look at the docker-compose.yml file you link to, it lists ports: [3000:3000], so port 3000 on the host forwards to port 3000 in the container; and if you're calling http://localhost:3000 from a browser on the same host, that will reach that forwarded port.
Otherwise, calls from one container to another can generally use the container's name (as in docker run --name) or the Compose service name; but, they must be on the same Docker network. That docker-compose.yml file also lists
services:
  influxdb:
    networks:
      - k6
      - grafana
so you can reach http://influxdb:8086 using the service's normal port number, provided the calling container is on one of those two networks. If the service has ports:, they're not considered for inter-container calls.
In the Docker documentation, Networking in Compose has more details on this setup.
There's one final trick that can help you run the specific command you're trying to run. docker-compose run will run a one-off command, using the setup for some container in the docker-compose.yml, except without its ports: and replacing its command:. The docker-compose.yml file you reference includes a k6 container, on the k6 network, running the loadimpact/k6 image. So you can probably run
docker-compose run k6 \
  run --out influxdb=http://influxdb:8086/myk6db - \
  <script.js
(And probably the K6_OUT environment variable in the docker-compose.yml can supply that --out option for you.)
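(A sketch of what that k6 service might look like; the database name myk6db is taken from your command, the rest follows the referenced file:)
services:
  k6:
    image: loadimpact/k6
    networks:
      - k6
    environment:
      - K6_OUT=influxdb=http://influxdb:8086/myk6db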
You shouldn't ever need to look up the container-private IP addresses. They aren't usable in a variety of common scenarios, and in between Docker networking for calls between containers and published ports for calls from outside Docker, there are better ways to make calls to containers.

How can I get a docker container to talk to the host machine over HTTP, without docker compose?

I want to run nginx inside a docker container as a reverse proxy to talk to an instance of gunicorn running on the host machine (possibly inside another container, but not necessarily). I've seen solutions involving Docker Compose, but I'd like to learn how to do it "manually" first, without learning a new tool right now.
The simplified version of the problem is this:
Say I run a docker container on my machine.
Outside the container, I run gunicorn on port 5000.
From within the container, I want to run ping ??? and have it reach the gunicorn instance running in step 2.
How can I do this in a simple, portable way?
The easiest way is to run gunicorn in its own container and expose port 5000 (not map it, just expose it).
It is important to create a network first and run both your containers on the same network so that they can see each other: docker network create xxxx
Then, when you run your two containers, attach them to this network: docker run ... --network xxxx ...
Give names to your containers; it is good practice (e.g. docker run ... --name gunicorn ...).
Now from the other container you can ping your gunicorn container: ping gunicorn, or even telnet to it on port 5000.
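Putting those steps together (image names here are just examples):
docker network create xxxx
docker run -d --name gunicorn --network xxxx --expose 5000 my-gunicorn-image
docker run -it --rm --network xxxx busybox ping gunicorn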
If you need more information just drop me a comment.

Share localhost:port loadbalancer with kubernetes

Do you know if it is possible to share localhost:port with Kubernetes?
I am running Kubernetes in Docker for Mac, and when creating a load balancer, everything works great for containers running in Kubernetes via localhost.
Sometimes I like to test some code in a container started with a plain docker run, where I open ports with something like -p 8080:80.
Now the question is: will it conflict with the localhost-bound k8s load balancer if I run on ports not used by the Kubernetes load balancer?
My guess is that it does not work, as I am experiencing some problems reaching ports published with docker run.
If it does not work, how do you docker run alongside Kubernetes?
If you’re using the Kubernetes built into Docker (Edge) for Mac, it is the same Docker daemon, and docker run -p will publish ports on your host as normal. This should share a port space with services running outside Docker/Kubernetes and also with exposed Kubernetes services.
You need to pick a different host port with your docker run -p option if you need to run a second copy of a service, whether the first one is another plain Docker container or a Kubernetes Service or a host process or something else.
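For example, two copies of the same service must publish to different host ports:
docker run -d -p 8080:80 nginx   # first copy, reachable at host port 8080
docker run -d -p 8081:80 nginx   # second copy, reachable at host port 8081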
Remember that “localhost” is extremely context sensitive; I’d avoid using it in questions like this. If you docker run -p 8080:80 ... as you suggest, the host can make outbound calls to the container at localhost:8080; the container can make outbound calls to itself at localhost:80; and nothing in any Kubernetes pod or any other container can see the service at localhost on any port.

How to make a container visible to the outside network, and handle IP addresses in production

I have:
a Windows server on bare metal with Hyper-V
Ubuntu server running in Hyper-V
a Docker container with an NGINX web application running in Ubuntu server
Every time I run a Docker image it gets a new IP address on the docker0 network interface. For production, I don't know how to make the Docker container visible to the external network. I also don't know how to handle the fact that the IP address changes every time the image is run.
What's the correct way to:
make a Docker container visible to the external network?
handle Docker container IP addresses in a repeatable way in production?
When you run your Docker container with docker run, you should use the -p switch to forward ports, for example:
docker run -p 80:80 nginx
This would route port 80 from the Ubuntu server to port 80 within the Nginx container.
You should check the Docker documentation on this at https://docs.docker.com/reference/run/#expose-incoming-ports.
When you have multiple containers and links, you should use EXPOSE in the Dockerfile as documented here: https://docs.docker.com/reference/builder/#expose.
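A minimal Dockerfile using it would look like this, for example:
FROM nginx
EXPOSE 80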
