I'm trying to connect my API server to a MinIO container. I'm using docker-compose like this:
minio:
  image: "minio/minio"
  # volumes:
  #   - ./docker-data/minio/data:/data
  command: minio server /data
  networks:
    - backend
  environment:
    MINIO_ACCESS_KEY: 7PDZZCOFGYUASDCBWW9L
    MINIO_SECRET_KEY: cSqaXmYpTk91asduFJ7ZKsZ+8e2pSLOXfc6ycogq
  ports:
    - "9000:9000"
api:
  image: "applications-api"
  networks:
    - backend
  environment:
    NODE_ENV: test
  ports:
    - 3000:3000
  # command: npm run dockertest
networks:
  backend:
But I keep getting this error. The credentials are the same, so I can't understand what I'm doing wrong; I'd appreciate any kind of help.
Error: connect ECONNREFUSED 172.22.0.2:443
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1141:16) {
errno: 'ECONNREFUSED',
code: 'NetworkingError',
syscall: 'connect',
address: '172.22.0.2',
port: 443,
region: 'eu-central-1',
hostname: 'minio',
retryable: true,
time: 2020-08-28T13:24:39.702Z
}
I suspect the problem is that your API tries to connect to its own container's localhost.
If so, in your API code, try using host.docker.internal instead of localhost as the MinIO client endpoint.
This will connect your API to the host's localhost.
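For example, assuming the API uses the AWS SDK for JavaScript v2 (the NetworkingError code in the trace suggests it), the client configuration might look like this sketch; passing the MinIO credentials to the api container is an assumption on my part:

// Sketch, assuming aws-sdk v2. Without an explicit endpoint the SDK
// defaults to HTTPS on port 443, which matches the error above.
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  endpoint: 'http://host.docker.internal:9000', // instead of localhost
  accessKeyId: process.env.MINIO_ACCESS_KEY,     // assumes these vars are
  secretAccessKey: process.env.MINIO_SECRET_KEY, // also set on the api service
  s3ForcePathStyle: true,  // MinIO expects path-style bucket addressing
  signatureVersion: 'v4',
});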
I need to resolve a container name to its IP address from the Docker host.
The reason is that I need a container to run on the host network, but it must also be able to resolve the container "backend", which it also connects to. (The container must send & receive multicast packets.)
docker-compose.yml
version: "3"
services:
database:
image: mongo
container_name: database
hostname: database
ports:
- "27017:27017"
backend:
image: "project/backend:latest"
container_name: backend
hostname: backend
environment:
- NODE_ENV=production
- DATABASE_HOST=database
- UUID=5025f846-7587-11ed-9ca7-8b992b5e7dd3
ports:
- "8080:8080"
depends_on:
- database
tty: true
frontend:
image: "project/frontend:latest"
container_name: frontend
hostname: frontend
ports:
- "80:80"
- "443:443"
depends_on:
- backend
environment:
- BACKEND_HOST=backend
connector:
image: "project/connector:latest"
container_name: connector
hostname: connector
ports:
- "1900:1900/udp"
#expose:
# - "1900/udp"
environment:
- NODE_ENV=production
- BACKEND_HOST=backend
- STARTUP_DELAY=1500
depends_on:
- backend
network_mode: host
tty: true
How can I resolve the hostname "backend" via Docker from the Docker host?
dig backend @127.0.0.11 and dig backend @172.17.0.1 did not work.
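For what it's worth, the container's address can at least be read from the Docker API on the host; this is a lookup workaround, not real DNS resolution:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' backend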
A test with a Docker ubuntu image & socat proves that I can receive SSDP multicast packets:
docker run --net host -it --rm ubuntu
socat UDP4-RECVFROM:1900,ip-add-membership=239.255.255.250:0.0.0.0,fork -
The only problem I now have is the DNS/container name resolution from the host (network).
TL;DR
The container "connector" must be on the host network,but also be able to resolve the container name "backend" to the docker internal IP Address.
NOTE: Perhaps this is better suited on superuser or similar?
I have a problem when trying to make the connection between containers using Flask, Nginx, and Postgres. The following error appears:
flask | sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
flask | Is the server running on host "localhost" (127.0.0.1) and accepting
flask | TCP/IP connections on port 5454?
flask | could not connect to server: Cannot assign requested address
flask | Is the server running on host "localhost" (::1) and accepting
flask | TCP/IP connections on port 5454?
Flask connection:
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres:admin@localhost:5454/plataforma_testes'
docker-compose:
version: "3.3"
services:
flask:
build: ./flask
container_name: flask
restart: always
environment:
- APP_NAME=PlataformDeTestes
- DB_USERNAME=postgres
expose:
- 8080
links:
- database
depends_on:
- database
nginx:
build: ./nginx
container_name: nginx
restart: always
ports:
- "80:80"
database:
image: postgres:10
env_file: postgres/.env
ports:
- "5454:5432"
volumes:
- /docker/volumes/postgres:/var/lib/postgresql/
Does anyone have any suggestions?
Change:
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres:admin@localhost:5454/plataforma_testes'
to:
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres:admin@database:5432/plataforma_testes'
That is, change localhost to database and the port to 5432, then run docker-compose up again. Inside the Compose network, containers reach each other by service name on the container port; the 5454:5432 mapping only applies from the host.
Thank you! It worked using:
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres:admin@database:5432/plataforma_testes'
and
database:
  image: postgres:10
  env_file: postgres/.env
  volumes:
    - /docker/volumes/postgres:/var/lib/postgresql/data
Hello!
I am having trouble finding out the reason for this error; I've tried googling it.
It seems to be an issue with DNS lookup from the container.
Error in traefik log:
time="2020-01-30T12:12:12+01:00" level=error msg="Unable to obtain ACME certificate for domains \"traefik.xyz.se\": cannot get ACME client get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: lookup acme-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:54773->127.0.0.11:53: i/o timeout" providerName=cloudflare.acme routerName=traefik-secure#docker rule="Host(`traefik.xyz.se`)"
time="2020-01-30T12:12:32+01:00" level=error msg="Unable to obtain ACME certificate for domains \"hivemq.xyz.se\": cannot get ACME client get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: lookup acme-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:53671->127.0.0.11:53: i/o timeout" rule="Host(`hivemq.xyz.se`)" providerName=cloudflare.acme routerName=hivemq-secure#docker
I'm unable to look up google.se from within the Traefik container. I don't know whether this is working as intended:
/o/a/traefik> docker exec -it traefik /bin/sh
/ # nslookup google.se
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'google.se': Try again
/ #
Traefik docker-compose.yaml
version: '3'
services:
  traefik:
    image: traefik:v2.1
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - proxy
    ports:
      - 80:80
      - 443:443
    environment:
      - CF_API_EMAIL=redacted
      - CF_API_KEY=redacted
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/traefik.yml:/traefik.yml:ro
      - ./data/acme.json:/acme.json
      - ./data/config.yml:/config.yml:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`traefik.xyz.se`)"
      - "traefik.http.middlewares.traefik-auth.basicauth.users=redacted"
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`traefik.xyz.se`)"
      - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=cloudflare"
      - "traefik.http.routers.traefik-secure.tls.domains[0].main=xyz.se"
      - "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.xyz.se"
      - "traefik.http.routers.traefik-secure.service=api@internal"
networks:
  proxy:
    external: true
data/traefik.yml:
api:
  dashboard: true
  debug: true
entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  file:
    filename: /config.yml
certificatesResolvers:
  cloudflare:
    acme:
      email: redacted
      storage: acme.json
      dnsChallenge:
        provider: cloudflare
        delayBeforeCheck: 0
        resolvers:
          - "1.1.1.1:53"
          - "8.8.8.8:53"
Service example (hivemq) docker-compose.yml:
version: "3"
services:
hivemq:
image: hivemq/hivemq4
container_name: hivemq
restart: unless-stopped
security_opt:
- no-new-privileges:true
ports:
- 1883:1883
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
labels:
- "traefik.enable=true"
- "traefik.http.routers.hivemq.entrypoints=http"
- "traefik.http.routers.hivemq.rule=Host(`hivemq.xyz.se`)"
- "traefik.http.routers.hivemq.middlewares=https-redirect#file"
- "traefik.http.routers.hivemq-secure.middlewares=secured#file"
- "traefik.http.routers.hivemq-secure.entrypoints=https"
- "traefik.http.routers.hivemq-secure.rule=Host(`hivemq.xyz.se`)"
- "traefik.http.routers.hivemq-secure.tls=true"
- "traefik.http.routers.hivemq-secure.service=hivemq"
- "traefik.http.services.hivemq.loadbalancer.server.port=8080"
- "traefik.docker.network=proxy"
networks:
- internal
- proxy
networks:
proxy:
external: true
internal:
external: false
I have also tried reinstalling docker-ce; it didn't help.
I had a similar issue, and it was due to a Docker bug: all my containers had lost their connection to the internet, but they had all already been removed for maintenance purposes, so I couldn't see it.
In the logs, cannot get ACME client get directory means that Traefik cannot connect to the Let's Encrypt URL.
I fixed it by:
Removing the Traefik stack
Pruning networks so traefik-public was removed
Restarting the Docker service
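For reference, on the command line those steps would look roughly like this (a sketch, assuming a Swarm stack named traefik and that traefik-public is otherwise unused):

docker stack rm traefik          # remove the Traefik stack
docker network prune             # removes unused networks, traefik-public among them
sudo systemctl restart docker    # restart the Docker service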
If it's not enough, you can try these:
Try to restart the Docker Engine, which will reset any iptables rules (assuming you are using Docker on Linux)
Try to restart your whole machine
Try to temporarily disable the firewall of your machine to verify that it fixes the issue
As mentioned here: https://community.containo.us/t/cannot-create-renew-acme-certificate-cannot-get-acme-client-get-directory/2469/2
I took a quick look around at Docker bugs about losing connection, and it seems to have been a mess for years: https://github.com/moby/moby/issues/15172
Not a Docker specialist, but I had a similar issue and fixed it by enabling IPv6 on the Docker daemon:
% grep ipv6 /etc/docker/daemon.json
"ipv6": true
You then need to reload the Docker daemon:
% sudo systemctl reload docker
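For context, a minimal /etc/docker/daemon.json with that setting might look like the sketch below; note that, depending on the Docker version, enabling IPv6 may also require a fixed-cidr-v6 subnet (the subnet here is only an example value):

{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}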
I'm working with a docker-compose file from an open-source repo. Notably, it's missing the version and services keys, but it still works (until now, I had not seen a compose file without these keys).
redis:
  image: redis
  ports:
    - '6379'
app:
  build: .
  environment:
    - LOG_LEVEL='debug'
  links:
    - redis
docker-compose up starts everything up and the app is able to talk to redis via 127.0.0.1:6379.
However, when I add the version and services keys back in, connections to redis are refused:
version: '3'
services:
  redis:
    image: redis
    ports:
      - '6379'
  app:
    build: .
    environment:
      - LOG_LEVEL='debug'
    links:
      - redis
Which results in:
[Wed Jan 03 2018 20:51:58 GMT+0000 (UTC)] ERROR { Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
at Object.exports._errnoException (util.js:896:11)
at exports._exceptionWithHostPort (util.js:919:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1073:14)
code: 'ECONNREFUSED',
errno: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 6379 }
Why does adding version: '3' and services: lead to failure to connect?
You don't need to specify the ports or the links for services in the same network (compose file). You can use:
version: '3'
services:
  redis:
    image: redis
  app:
    build: .
    environment:
      - LOG_LEVEL='debug'
Then, in your app code, refer to Redis simply as 'redis:6379'. If you look at the Dockerfile for the redis image, you can see the port is already exposed at the end.
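For instance, with the Node redis client (a sketch using the v2/v3-style API):

const redis = require('redis');

// 'redis' is the Compose service name; 6379 is the container port
const client = redis.createClient({ host: 'redis', port: 6379 });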
When you want to expose the service to a specific host port, in Docker Compose version 3 you should use this syntax:
ports:
  - '6379:6379'
Check the docs here:
Either specify both ports (HOST:CONTAINER), or just the container port
(a random host port will be chosen).
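In other words, the two forms differ only in which host port gets used:

# HOST:CONTAINER - publishes on a fixed host port:
ports:
  - '6379:6379'

# Container port only - a random host port is chosen:
ports:
  - '6379'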
This is what worked for me after having the same issue:
docker-compose.yml
version: "3"
services:
server:
...
depends_on:
- redis
redis:
image: redis
My redis config file:
const redis = require('redis');

// 'redis' is the Compose service name, which Docker's internal DNS
// resolves to the container's address; 6379 is the container port.
const redisHost = 'redis';
const redisPort = '6379';

let client = redis.createClient(redisPort, redisHost);

client.on('connect', () => {
  console.log(`Redis connected to ${redisHost}:${redisPort}`);
});

client.on('error', (err) => {
  console.log(`Redis could not connect to ${redisHost}:${redisPort}: ${err}`);
});

module.exports = client;
The port might be in use. Either kill the container using it, or restart Docker to release the port.
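For example, to find what is holding the port (assuming a Linux host):

sudo lsof -i :6379                 # what is listening on the port
docker ps --filter publish=6379    # which container publishes it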
I am trying to build my Airflow setup using Docker and RabbitMQ. I am using the rabbitmq:3-management image, and I am able to access the RabbitMQ UI and API.
For Airflow I am building the airflow webserver, airflow scheduler, airflow worker, and airflow flower. The airflow.cfg file is used to configure Airflow.
I am using broker_url = amqp://user:password@127.0.0.1:5672/ and celery_result_backend = amqp://user:password@127.0.0.1:5672/
My docker-compose file is as follows:
version: '3'
services:
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    environment:
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
      RABBITMQ_DEFAULT_USER: "user"
      RABBITMQ_DEFAULT_PASS: "password"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "5672:5672"
      - "15672:15672"
    labels:
      NAME: "rabbitmq1"
  webserver:
    build: "airflow/"
    hostname: "webserver"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "8080:8080"
    depends_on:
      - rabbit1
    command: webserver
  scheduler:
    build: "airflow/"
    hostname: "scheduler"
    restart: always
    environment:
      - EXECUTOR=Celery
    depends_on:
      - webserver
      - flower
      - worker
    command: scheduler
  worker:
    build: "airflow/"
    hostname: "worker"
    restart: always
    depends_on:
      - webserver
    environment:
      - EXECUTOR=Celery
    command: worker
  flower:
    build: "airflow/"
    hostname: "flower"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "5555:5555"
    depends_on:
      - rabbit1
      - webserver
      - worker
    command: flower
I am able to build the images using docker-compose. However, I am not able to connect my Airflow scheduler to RabbitMQ. I am getting the following error:
consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused.
I have tried using both 127.0.0.1 and localhost.
What am I doing wrong?
From within your Airflow containers, you should be able to reach the service rabbit1. So all you need to do is change amqp://user:**@localhost:5672// to amqp://user:**@rabbit1:5672// and it should work.
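With the compose file above, the relevant airflow.cfg lines would then read:

broker_url = amqp://user:password@rabbit1:5672/
celery_result_backend = amqp://user:password@rabbit1:5672/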
Docker compose creates a default network and attaches services that do not explicitly define a network to it.
You do not need to expose the 5672 & 15672 ports on rabbit1 unless you want to be able to access it from outside the application.
Also, it is generally not recommended to build images inside docker-compose.
I solved this issue by installing the RabbitMQ server on my system with the command sudo apt install rabbitmq-server.