Docker compose can't access ports of other containers

I have 2 containers running with docker compose. One of the containers executes a shell script which should check whether the other container has already started and is listening on port 9990.
Even though the container is starting, the shell script echoes nothing.
keycloak:
  image: jboss/keycloak:latest
  volumes:
    - ./imports/cache_reload/disable-theme-cache.cli:/opt/jboss/startup-scripts/disable-theme-cache.cli
    - ./imports/themes/custom/:/opt/jboss/keycloak/themes/custom-theme/
    - ./imports/realm/realm-export.json:/opt/jboss/realms/custom-import.json
  environment:
    DB_VENDOR: MYSQL
    DB_ADDR: mysql
    DB_DATABASE: keycloak
    DB_USER: keycloak
    DB_PASSWORD: password
    KEYCLOAK_USER: admin
    KEYCLOAK_PASSWORD: Pa55w0rd
  ports:
    - 8080:8080
  depends_on:
    - mysql
keycloak_installer:
  image: solr:6.6-alpine
  volumes:
    - ./imports/scripts/import-realm.sh:/docker-entrypoint-initdb.d/init.sh
  depends_on:
    - keycloak
The shell script is the following:
echo "MOIN LEUDE TRYMACS HIER!"
while ! nc -z localhost 9990; do
sleep 1
echo "Waiting for keycloak server startup 9990..."
echo "$(nc -z localhost 9990)"
done
The first echo is printed, but after that nothing else appears.
The keycloak container is listening on port 9990.
Please help, thanks.

You need to understand a little more about how networking works in Docker Compose. Inside the keycloak_installer container, localhost refers to that container itself, not to keycloak, so the nc check can never succeed. To solve your issue, you need to:
Add a network in your docker-compose file for each container (there is a default network, but defining one explicitly makes the mechanism easier to see). For the first container (named keycloak), it should look like this (under ports, for example):
ports:
  - 8080:8080
networks:
  - keycloak_network
On the second container (named keycloak_installer), join the same network (the port you want to reach must be exposed by the first container):
depends_on:
  - keycloak
networks:
  - keycloak_network
In your script, explicitly address the first container by its service name, which is now resolvable over the shared network. Change your check to:
nc -z keycloak 9990
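For completeness, the network itself must also be declared at the top level of the compose file. A minimal sketch, assuming the default bridge driver:
networks:
  keycloak_network:
    driver: bridge
One caveat worth adding as an assumption: 9990 is the WildFly management port, and it is only reachable from other containers if Keycloak binds the management interface to something other than 127.0.0.1; probing the HTTP port 8080 instead is often the more reliable readiness check.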

Related

Docker container can't talk to another container

I have a docker compose file set up with 3 separate containers (Flask, Nginx and Solr)
All 3 start up successfully, but my Flask application can't connect to my Solr instance; when I run:
wget -S http://localhost:8983/solr/CORE_NAME/select
I get the error "Connecting to localhost (localhost)|127.0.0.1|:8983... failed: Connection refused."
I am fairly new to Docker and have looked through a few different forums for this issue, but nothing has worked so far. I have also tried creating a network, but I run into the same issue.
Here is my docker-compose.yml.
version: "2.7"
services:
nginx:
build:
context: .
dockerfile: Dockerfile-nginx
container_name: nginx
ports:
- "80:80"
- "8181:8181"
volumes:
- ./:/opt/ee1
- ee1-logs-volume:/var/log/ee1
- ./:/usr/local/websites/ee1
- sockets-volume:/tmp
depends_on:
- flask
flask:
build:
context: .
dockerfile: Dockerfile-flask
entrypoint: ["/bin/bash", "./system/start-uwsgi-docker.bash"]
container_name: flask
user: root
restart: always
volumes:
- ./:/opt/ee1
- ./ee1config.ini:/opt/ee1config.ini
- ee1jobs-logs-volume:/var/log/ee1
- ./:/usr/local/websites/ee1
- sockets-volume:/tmp
links:
- solr
solr:
build:
context: .
dockerfile: Dockerfile-solr
container_name: solr
volumes:
- data:/var/solr
entrypoint:
- bash
- "-c"
- "precreate-core ee1_1; precreate-core ee1_2; exec solr -f"
ports:
- "8983:8983"
volumes:
sockets-volume: {}
ee1-logs-volume: {}
data:
Every docker container is, network-wise, a separate host with its own IP.
Traffic to localhost or 127.0.0.1 will never leave that container.
So what you need to find out is the IP of the server container (solr) you actually want to talk to, then configure the client container (flask) accordingly. You can find it with, e.g., docker inspect. Be aware that IPs can change when containers restart, so you will want to use something like DNS rather than raw IPs.
Since you use docker compose, each container for a service joins the same network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
For more details check out
https://docs.docker.com/compose/networking/
https://docs.docker.com/network/
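Concretely, with the compose file above, the original check should work from inside the flask container once it targets the solr service by name instead of localhost:
wget -S http://solr:8983/solr/CORE_NAME/select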

Docker client container couldn't connect to a docker-compose server

I have a docker-compose setup that brings together 3 images (mariadb, tomcat and a backup service).
In the end, this exposes port 8080, to which any user can connect with a browser.
This docker-compose setup seems to work nicely, as I can open a browser (from the host) and browse http://localhost:8080/my service path
I have not yet tried from a different machine (I do not have another one where I am currently), but since the default network type is bridge it should work too.
My docker-compose.yml looks like this:
version: "3.0"
networks:
my-network:
services:
mariadb-service:
image: *****
ports:
- "3306:3306"
networks:
- my-network
tomcat-service:
image: *****
ports:
- "8080:8080"
networks:
- my-network
depends_on:
- mariadb-service
backup-service:
image: *****
depends_on:
- mariadb-service
networks:
- my-network
(I removed all the irrelevant parts.)
Now I also have a 'client' docker image allowing to connect to such a server (very similarly to the user with its browser). I'm running this docker image this way:
docker run --name xxx -it -e SERVER_NAME=<ip address of the server> <image name/tag> bash
The strange thing is that this client docker can connect to an external server (running on a production machine) but cannot connect to the server docker running locally on the same host.
My understanding is that with the default network type (bridge), all docker containers can communicate with each other on the docker host and can also be accessed from outside.
What am I missing?
Thanks,
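One thing the question does not show, so this is an assumption about the setup: a container started with plain docker run joins Docker's default bridge network, not the Compose project network, so the services above are neither resolvable by name nor reachable by their internal IPs from that client. Attaching the client to the project network (Compose names it <project>_my-network, e.g. for a project directory called myproject) would look like this:
docker run --name xxx -it --network myproject_my-network -e SERVER_NAME=tomcat-service <image name/tag> bash
Alternatively, since tomcat-service publishes 8080:8080, a client on the default bridge can reach it via the host's LAN IP instead of localhost.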

Expose container DNS to another container?

Using Docker Compose and Traefik, I am trying to have the application container communicate to the solr container and vice versa in a local environment.
Currently, I can access both the application and the solr URL in the browser just fine, but they cannot 'see' or talk to one another internally.
I am new to Docker. Here is a section of my docker compose file with the relevant containers:
php:
  image: wodby/drupal-php:$PHP_TAG
  container_name: "${PROJECT_NAME}_php"
  environment:
    PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
    DB_HOST: $DB_HOST
    DB_USER: $DB_USER
    DB_PASSWORD: $DB_PASSWORD
    DB_NAME: $DB_NAME
    DB_DRIVER: $DB_DRIVER
    PHP_FPM_USER: wodby
    PHP_FPM_GROUP: wodby
    COLUMNS: 80
  volumes:
    - ./:/var/www/html:cached
solr:
  image: wodby/solr:$SOLR_TAG
  container_name: "${PROJECT_NAME}_solr"
  environment:
    SOLR_DEFAULT_CONFIG_SET: $SOLR_CONFIG_SET
    SOLR_HEAP: 1024m
  labels:
    - 'traefik.backend=${PROJECT_NAME}_solr'
    - 'traefik.port=8983'
    - 'traefik.frontend.rule=Host:solr.${PROJECT_BASE_URL}'
traefik:
  image: traefik
  container_name: "${PROJECT_NAME}_traefik"
  command: -c /dev/null --web --docker --logLevel=INFO
  ports:
    - '80:80'
    - '8983:8983'
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
I can access Solr at the given URL, but the application cannot see it at the same URL. I need to be able to do this so it can talk to Solr and have it crawl/etc.
Is there a way to expose them so they can see each other by their hostname?
docker-compose has container DNS resolution built in. You can expose a specific port on a container to make it reachable within the Docker network, or publish it with ports (as you have done for your traefik container) to make it reachable both within the Docker network and externally. In either case, you will be able to reach another container by its service name (e.g. php, solr, or traefik in this case) on the exposed port.
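For example, with the file above, the php container should be able to reach Solr directly by service name rather than going through Traefik. A quick check from inside the php container, using Solr's standard cores endpoint:
curl 'http://solr:8983/solr/admin/cores?action=STATUS'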

Running Ngrok in a container using docker

https://github.com/gtriggiano/ngrok-tunnel runs ngrok inside a container. Ngrok is required to run in the container to avert security risks, but I am facing problems after running the scripts which generate the URL:
$ docker pull gtriggiano/ngrok-tunnel
$ docker run -it -e "TARGET_HOST=localhost" -e "TARGET_PORT=3000" -p 4040 gtriggiano/ngrok-tunnel
I am running my Rails app on localhost:3000.
Is this a problem on my end, or can it be fixed by altering the scripts (inside the repo)?
I couldn't get this working but switched to https://github.com/shkoliar/docker-ngrok and it works brilliantly.
In my case I added it to my docker-compose.yml file:
ngrok:
  image: shkoliar/ngrok:latest
  ports:
    - 4551:4551
  links:
    - web
  environment:
    - PARAMS=http -region=eu -authtoken=${NGROK_AUTH_TOKEN} localdev.docker:80
  networks:
    dev_net:
      ipv4_address: 10.5.0.10
And it's started with everything else when I do docker-compose up -d
Then there's a web UI at http://localhost:4551/ for you to see the status, requests, the ngrok URLs, etc.
The GitHub page does have examples of running it manually from the command line too, rather than via docker-compose:
Command-line example: The example below assumes that you have a running web server docker container named dev_web_1 with exposed port 80.
docker run --rm -it --link dev_web_1 shkoliar/ngrok ngrok http dev_web_1:80
With command-line usage, the ngrok session stays active until it is terminated with Ctrl+C.
No. If you pass -p a single number, it is the container port; the host port is assigned randomly.
Using -p, --publish ip:[hostPort]:containerPort with docker run, you can specify the host port along with the container port.
As it stands, port 4040 of the container is exposed. I am not sure your service listens on it by default.
To find the host port, execute
docker ps
and you'll see the actual port it is listening on.
CONTAINER ID   IMAGE                     COMMAND       CREATED              STATUS              PORTS                     NAMES
1aaaeffe789d   gtriggiano/ngrok-tunnel   "npm start"   About a minute ago   Up About a minute   0.0.0.0:32768->4040/tcp   wizardly_poincare
here it's listening on localhost:32768
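If you want a fixed host port instead of a random one, publish both sides explicitly; the command from the question would then become:
docker run -it -e "TARGET_HOST=localhost" -e "TARGET_PORT=3000" -p 4040:4040 gtriggiano/ngrok-tunnel
Note, as an aside, that TARGET_HOST=localhost points at the ngrok container itself, not the host, so a Rails app running on the host will not be reachable that way.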
This compose file works for me. Note that in the command for the ngrok service you have to reference the other service by name:
version: '3'
services:
  yourwebserver:
    build:
      context: ./
      dockerfile: ...
      target: ...
    container_name: yourwebserver
    volumes:
      - ...
    ports:
      - ...
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    depends_on:
      - ngrok
  ngrok:
    image: ngrok/ngrok:alpine
    environment:
      NGROK_AUTHTOKEN: '...'
    command: 'http yourwebserver:80'
    ports:
      - '4040:4040'
    expose:
      - '4040'
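Once it is up, the public tunnel URL can be read from ngrok's local inspection API on the published port 4040:
curl http://localhost:4040/api/tunnels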
I'm not sure if you have already solved this, but when I was getting this error I could only solve it like this:
# docker-compose.yml
networks:
  - development
I also needed to expose port 3000 of my web container, because it wasn't exposed yet.
# docker-compose.yml
web:
  expose:
    - "3000"
My server container running in development is also on the development network. The only parameters, I believe, you should need are image, ports, environment with DOMAIN and PORT for the server container, a link, and an expose on your web container:
# docker-compose.yml
ngrok:
  image: shkoliar/ngrok
  ports:
    - 4551:4551
  links:
    - web
  networks:
    - development
  environment:
    - DOMAIN=squad_web
    - PORT=3000
Actually, to make ngrok work with your docker container you can install it outside of your project, just as the manual on their website says, and then add:
nginx:
  labels:
    - "traefik.http.routers.${PROJECT_NAME}_nginx.rule=Host(`${PROJECT_BASE_URL}`, `aaa-abc-xxx-140-177.eu.ngrok.io`)"
This particular example is for the docker4drupal docker-compose file, with traefik mapped as 80:80.
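In that setup the tunnel itself runs on the host, outside Docker; assuming traefik publishes port 80 as described, a plain
ngrok http 80
yields a hostname like the aaa-abc-xxx-140-177.eu.ngrok.io one used in the label above.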

Access endpoint in one docker container from another

I have a docker-compose file with two services: app and httpd
app
app:
  image: primus852/machinelearning:latest
  ports:
    - 5001:5000
  expose:
    - "5001"
  restart: always
  networks:
    - default
  volumes:
    - ./api:/app
  environment:
    - FLASK_APP=app/source/__init__.py
    - FLASK_ENV=development
httpd
httpd:
  image: primus852/mitswiki:latest
  ports:
    - 80:80
  restart: always
  networks:
    - default
  volumes:
    - ./project:/var/www/html
Flask app
The app container has an endpoint like this:
@app.route('/predict', methods=['GET'])
def predict():
    ...DO STH....
I can open http://localhost:5001/predict in my browser, works...
I can curl from my cmd: curl localhost:5001/predict, works...
But when I am inside my httpd container this does not work from the console: curl localhost:5001/predict
curl: (7) Failed to connect to localhost port 5001: Connection refused
So I thought I could address the app container the same way I address my mysql from inside my httpd container: curl app:5001/predict, but it gives the same result.
Can anyone see what I am doing wrong?
According to your yaml:
ports:
  - 5001:5000
Inside the Docker network you have to use port 5000.
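So from inside the httpd container the working request would combine the service name with the container-side port:
curl http://app:5000/predict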
Inside the httpd container, localhost refers to just that httpd container. It cannot reach other containers that way.
Another thing which might be occurring is that your app is not open to 'remote' access; a connection from one container to another is a remote connection.
Within your docker-compose files you can link containers to each other.
While the containers are linked, you can then fetch the /predict page with curl app:5000/predict (5000 being the container-side port).
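A minimal sketch of that linking, assuming the two services from the question (on a shared Compose network the service name already resolves, so links mainly adds backwards-compatible aliases):
httpd:
  links:
    - app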
