I am trying to run a local Mosquitto broker, publisher, and subscriber setup via Docker and docker-compose, but the publisher cannot connect to the broker. However, connecting to the local broker via the CLI works fine.
I get the following error when running the setup below.
{ Error: connect ECONNREFUSED 127.0.0.1:1883
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1088:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 1883 }
Local dockerized setup:
docker-compose.yml:
version: "3.5"
services:
publisher:
hostname: publisher
container_name: publisher
build:
context: ./
dockerfile: dev.Dockerfile
command: npm start
networks:
- default
depends_on:
- broker
broker:
image: eclipse-mosquitto
hostname: mosquitto-broker
container_name: mosquitto-broker
networks:
- default
ports:
- "1883:1883"
networks:
default:
dev.Dockerfile:
FROM node:11-alpine
RUN mkdir app
WORKDIR app
COPY package*.json ./
RUN npm ci
COPY ./src ./src
CMD npm start
src/index.js:
const mqtt = require("mqtt");
const client = mqtt.connect("mqtt://localhost:1883");
client.on("connect", () => {
console.log("Start publishing...");
client.publish("testTopic", "test");
});
client.on("error", (error) => {
console.error(error);
});
However, if I connect to the Mosquitto broker via the MQTT.js CLI, it works as expected, e.g.
mqtt sub -t 'testTopic' -h 'localhost' and mqtt pub -t 'testTopic' -h 'localhost' -m 'from MQTT.js'.
What am I missing?
Your publisher and broker are running in two different containers, which effectively means two different machines, each with its own IP.
You can't reach the broker service from your publisher container via localhost:1883, and likewise you can't reach the publisher from the broker that way.
To reach the broker container you have to use its container IP, container name, or service name.
In your case, change mqtt.connect("mqtt://localhost:1883"); to mqtt.connect("mqtt://broker:1883"); and give it a try.
The publisher and broker run in different containers, meaning they have different IPs.
When the publisher tries to reach the broker at localhost:1883, ECONNREFUSED is expected, since the broker is not in the same container.
You should replace 127.0.0.1 or localhost with the service name of the broker (broker in this case). The service name will be resolved to the correct IP of the broker container.
In your index.js you should change "localhost" to "broker". Inside a container, "localhost" resolves to that container itself, so you should always use the service name instead; Docker will take care of routing to that service. Also, by default all services in the same compose file are attached to the same network, so there is no need to specify one.
So basically change this: const client = mqtt.connect("mqtt://localhost:1883");
To this: const client = mqtt.connect("mqtt://broker:1883");
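For reference, a minimal sketch of the corrected src/index.js. The MQTT_HOST environment variable is hypothetical (not part of the original setup); it simply keeps the broker host configurable and falls back to the compose service name:

// src/index.js — sketch only; MQTT_HOST is a hypothetical env var you could
// set in docker-compose, defaulting to the compose service name "broker".
const mqtt = require("mqtt");

const host = process.env.MQTT_HOST || "broker";
const client = mqtt.connect(`mqtt://${host}:1883`);

client.on("connect", () => {
  console.log("Start publishing...");
  client.publish("testTopic", "test");
});

client.on("error", (error) => {
  console.error(error);
});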
Related
I am having trouble with microservice health checks in my Consul Docker setup, which I believe is a symptom of a failure in service discovery, as I only have one server in my registry.
Below is the Consul member list from inside the Docker container.
/ # consul members
Node          Address          Status  Type    Build  Protocol  DC   Segment
7b1edb14a647  172.19.0.6:8301  alive   server  1.7.4  2         dc1  <all>
/ #
Consul container logs repeat the same error below for all the microservices:
consul | 2020-06-16T12:19:11.087Z [WARN] agent: Check socket connection failed: check=service:ffa44b66c4869601c04abdbea6dc5be5 error="dial tcp 172.19.0.6:50044: connect: connection refused"
I am using a docker-compose file (version 3.2) to create a network for the containers.
This is the Consul service definition:
consul:
  container_name: consul
  ports:
    - '8400:8400'
    - '8500:8500'
    - '8600:53/udp'
  image: consul
  command: ['agent', '-server', '-bootstrap', '-ui', '-client', '0.0.0.0']
Microservice definition
service-notification:
  build:
    context: .
    dockerfile: apps/service-notification/Dockerfile
    args:
      NODE_ENV: development
  depends_on:
    - consul
  image: 'service-notification:latest'
  restart: always
  environment:
    - CONSUL_HOST=consul
  ports:
    - '50044:50044'
I am using the CONSUL_HOST env variable to pass in the correct host URL.
This is the Consul config for the microservice:
consul:
  host: ${{CONSUL_HOST}}
  port: 8500
service:
  discoveryHost: ${{CONSUL_HOST}}
  healthCheck:
    timeout: 1s
    interval: 10s
    tcp: ${{ service.discoveryHost }}:${{ service.port }}
  maxRetry: 5
  retryInterval: 5000
  tags: ["v1.0.0", "microservice"]
  name: io.ultimatebackend.srv.notification
  port: 50044
My conclusion so far is that the Consul server container somehow fails to reach the agents, but I don't know why, and I feel like I am missing some obvious piece of Consul's structure. Please advise.
I was configuring my service incorrectly. The discoveryHost should be the IP and port of the microservice itself, reachable inside the Docker network.
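A sketch of what that might look like, reusing the keys from the config above and using the compose service name as one address that resolves inside the Docker network (the exact values depend on your setup):

consul:
  host: ${{CONSUL_HOST}}
  port: 8500
service:
  # Point discovery at the microservice itself, not at the Consul host or
  # localhost, so the Consul server's TCP health check can reach it.
  discoveryHost: service-notification
  healthCheck:
    timeout: 1s
    interval: 10s
    tcp: ${{ service.discoveryHost }}:${{ service.port }}
  port: 50044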
Running Docker 18.09.7ce with Docker API v1.39 on Ubuntu 18.04 LTS.
I'm trying to set up Traefik 2.2 as a reverse proxy for some swarm services but for some reason Traefik can't connect to the Docker daemon via the TCP port given in the Traefik documentation. These three error messages keep repeating.
level=debug msg="FIXME: Got an status-code for which error does not match any expected type!!!: -1" status_code=-1 module=api
level=error msg="Failed to retrieve information of the docker client and server host: Cannot connect to the Docker daemon at tcp://127.0.0.1:2377. Is the docker daemon running?" providerName=docker
level=error msg="Provider connection error Cannot connect to the Docker daemon at tcp://127.0.0.1:2377. Is the docker daemon running?, retrying in 1.461723532s" providerName=docker
It's running on a manager node (I only have one node) and the swarm is working fine, with the API exposed via that TCP port, as shown by the output of the following command.
$ sudo ss --tcp --listening --processes --numeric | grep ":2377"
LISTEN 0 128 *:2377 *:* users:(("dockerd",pid=30747,fd=23))
My architecture is based on this blog post, with a shared overlay network called proxy created with docker network create --driver=overlay proxy.
I tried this but it didn't work, and I can't really find any other related questions. Here are my configuration files:
traefik.toml
[providers.docker]
endpoint = "tcp://127.0.0.1:2377"
swarmMode = true
network = "proxy"
[entryPoints]
[entryPoints.web]
address = ":80"
[entryPoints.web-secure]
address = ":443"
[certificatesResolvers.le.acme]
email = "my-email#email.com"
storage = "/letsencrypt/acme.json"
caserver = "https://acme-staging-v02.api.letsencrypt.org/directory" # For testing
[certificatesResolvers.le.acme.httpChallenge]
entryPoint = "web"
[log]
level = "DEBUG"
traefik.yml
version: "3.7"
services:
reverse-proxy:
deploy:
placement:
constraints:
- node.role == manager
image: "traefik:v2.2"
ports:
- 80:80
- 443:443
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/path/to/traefik.toml:/etc/traefik/traefik.toml"
- "letsencrypt:/letsencrypt"
networks:
- "proxy"
networks:
proxy:
external: true
volumes:
letsencrypt:
The only difference I can see is that the blog post does not explicitly define an endpoint for the Docker provider. Maybe I should try removing that?
I have this docker-compose.yml that builds my project for e2e tests. It's composed of a Postgres DB, a backend Node app, a frontend Node app, and a spec app that runs the e2e tests using Cypress.
version: '3'
services:
  database:
    image: 'postgres'
  backend:
    build: ./backend
    command: /bin/bash -c "sleep 3; yarn backpack dev"
    depends_on:
      - database
  frontend:
    build: ./frontend
    command: /bin/bash -c "sleep 15; yarn nuxt"
    depends_on:
      - backend
  spec:
    build:
      context: ./frontend
      dockerfile: Dockerfile.e2e
    command: /bin/bash -c "sleep 30; yarn cypress run"
    depends_on:
      - frontend
      - backend
The Dockerfiles are simple Dockerfiles based off node:8 that copy the project files and run yarn install. In the spec Dockerfile, I pass http://frontend:3000 as FRONTEND_URL.
But this setup fails at the spec step, when the Cypress runner can't connect to frontend, with this error:
spec_1 | > Error: connect ECONNREFUSED 172.20.0.4:3000
As you can see, it resolves the hostname frontend to the correct IP, but it's not able to connect. I'm scratching my head over why I can't connect to the frontend with the service name. If I switch the spec command to sleep 30; ping frontend, it successfully pings the container. I've tried deleting the network and letting docker-compose recreate it, and I've tried specifying expose and links on the respective services. All to no avail.
I've set up a sample repo here if you wanna try replicating the issue:
https://github.com/afifsohaili/demo-dockercompose-network
Any help is greatly appreciated! Thank you!
Your application is listening on loopback:
$ docker run --rm --net container:demo-dockercompose-network_frontend_1 nicolaka/netshoot ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.11:35233 *:*
LISTEN 0 128 127.0.0.1:3000 *:*
From outside of the container, you cannot connect to ports that are only listening on loopback (127.0.0.1). You need to reconfigure your application to listen on all interfaces (0.0.0.0).
For your app, in the package.json, you can add (according to the nuxt faq):
"config": {
"nuxt": {
"host": "0.0.0.0",
"port": "3000"
}
},
Then you should see:
$ docker run --rm --net container:demo-dockercompose-network_frontend_1 nicolaka/netshoot ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:3000 *:*
LISTEN 0 128 127.0.0.11:39195 *:*
And instead of an unreachable error, you'll now get a 500:
...
frontend_1 | response: undefined,
frontend_1 | statusCode: 500,
frontend_1 | name: 'NuxtServerError' }
...
spec_1 | The response we received from your web server was:
spec_1 |
spec_1 | > 500: Server Error
I have the following docker-compose.yml:
version: '3'
services:
  web:
    build: .
    image: webapp
    env_file: .env.docker
    ports:
      - "3000:3000"
    links:
      - redis
      - mongo
  redis:
    image: "redis:alpine"
  mongo:
    image: "mongo"
And I'm using the following env variables to connect to Mongo and Redis
REDIS_URL=redis://redis:6379
DATABASE_URL=mongodb://mongo:27017/webapp
With this configuration, when the app starts it can connect to the Mongo container, but it fails to connect to Redis with the following error:
Error: connect ECONNREFUSED 127.0.0.1:6379
I tried exposing and mapping the ports:
expose:
  - "6379"
ports:
  - "6379:6379"
but it still doesn't solve the issue. With the ports mapped I can use redis-cli from my machine to connect to Redis, so I know the container is running.
Any clues?
EDIT: Running the webapp on my machine without Docker works normally. I tried both native Redis and Mongo, as well as running the docker-compose file above with the web section commented out and the ports mapped.
EDIT 2: Output of lsof
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 3136 JayC 20u IPv4 0x7dac4a08aadc94c9 0t0 TCP *:6379 (LISTEN)
com.docke 3136 JayC 21u IPv6 0x7dac4a08bbf50781 0t0 TCP localhost:6379 (LISTEN)
EDIT 3: Adding where the app connects to Redis:
const { RedisPubSub } = require('graphql-redis-subscriptions');

console.log(`-------------- REDIS_URL: ${process.env.REDIS_URL} --------------`);

const engine = new RedisPubSub({
  connection: {
    url: process.env.REDIS_URL,
  },
  connectionListener: err => {
    if (err) {
      Logger.sys.error(
        `redis connection failed at ${process.env.REDIS_URL}`,
      );
      Logger.sys.error(err);
    } else {
      Logger.sys.info(
        `pubsub connected to redis at ${process.env.REDIS_URL}`,
      );
    }
  },
});
The log output:
-------------- REDIS_URL: redis://redis:6379 --------------
2018-06-05T13:21:37.658Z - error: redis connection failed at redis://redis:6379
2018-06-05T13:21:37.659Z - error: { Error: connect ECONNREFUSED 127.0.0.1:6379
at Object._errnoException (util.js:1022:11)
at _exceptionWithHostPort (util.js:1044:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1182:14)
code: 'ECONNREFUSED',
errno: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 6379 }
The configuration you described is not being used by your application:
REDIS_URL=redis://redis:6379
When the connection attempt runs, it's trying to connect to 127.0.0.1 instead of the container's address:
Error: connect ECONNREFUSED 127.0.0.1:6379
To solve this, you'll need to reconfigure your app so that it uses the redis DNS name instead of 127.0.0.1. Each container has its own private loopback interface, so connecting to this inside a container will connect to the container itself, not your host or any other container running on the host.
As an aside, do not use links; they have been deprecated. The built-in DNS will resolve the service name. If you have dependencies between containers, it's best to handle this in the application or entrypoint. You can also use depends_on to list service dependencies, but this only works with docker-compose and does not verify the health of the dependent services.
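For example, a minimal sketch of one way to do that, assuming the connection options shown in the question are passed through to the underlying Redis client (which accepts host and port). REDIS_HOST and REDIS_PORT are hypothetical variables you would set to redis and 6379 in the compose file:

const { RedisPubSub } = require('graphql-redis-subscriptions');

// Sketch only: REDIS_HOST / REDIS_PORT are hypothetical env vars; the point is
// to hand the client the compose service name ("redis") instead of 127.0.0.1.
const engine = new RedisPubSub({
  connection: {
    host: process.env.REDIS_HOST || 'redis',
    port: Number(process.env.REDIS_PORT) || 6379,
  },
});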
It worked for me when I changed REDIS_URL to:
REDIS_URL=redis://redis
Using Docker, I was able to use eclipse-mosquitto to set up an MQTT broker with my app, which subscribes to messages. I'm learning Docker right now, so I wanted to try adding two brokers to docker-compose with different ports mapped, like this:
version: '3'
services:
  myapp:
    ...
    links:
      - mqtt
      - mqtt2
    depends_on:
      - mqtt
      - mqtt2
  mqtt:
    image: eclipse-mosquitto:latest
    container_name: mqtt-iot
    ports:
      - 1883:1883
  mqtt2:
    image: eclipse-mosquitto:latest
    container_name: mqtt2-iot
    ports:
      - 1884:1883
From outside of the myapp container (i.e. from my OS X terminal), both mqtt and mqtt2 are working; I can publish and subscribe to messages as expected.
const mqtt = require('mqtt')
mqtt.connect('mqtt://mqtt', {port: 1883}) // Success
mqtt.connect('mqtt://mqtt2', {port: 1884}) // Success
However, when I'm inside the myapp container, I can only connect to mqtt. The mqtt2 connection fires the offline event right away and never connects. What do I need to do for myapp to use both of those brokers properly?
Two issues here
links:
  - mqtt
  - mqtt2
links is deprecated now and is not even required in your compose file. The second issue: the following only works from outside:
const mqtt = require('mqtt')
mqtt.connect('mqtt://mqtt', {port: 1883}) // Success
mqtt.connect('mqtt://mqtt2', {port: 1884}) // Success
That is because it goes through the ports published on the host. When you connect from the app container, you should do it like below:
const mqtt = require('mqtt')
mqtt.connect('mqtt://mqtt', {port: 1883}) // Success
mqtt.connect('mqtt://mqtt2', {port: 1883}) // Success
The container cannot see the ports mapped on the host; it only sees what is inside the network, and on that network both brokers listen on 1883.
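As a sketch, the compose file could also drop links entirely (an assumption following from the note above, not the original setup); the service names alone let myapp reach both brokers on 1883, while the host keeps using the published ports 1883 and 1884:

# Sketch only: links removed; myapp reaches the brokers at mqtt:1883 and
# mqtt2:1883 over the default compose network.
version: '3'
services:
  myapp:
    ...
    depends_on:
      - mqtt
      - mqtt2
  mqtt:
    image: eclipse-mosquitto:latest
    container_name: mqtt-iot
    ports:
      - 1883:1883
  mqtt2:
    image: eclipse-mosquitto:latest
    container_name: mqtt2-iot
    ports:
      - 1884:1883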