Setting up more than one MQTT broker with Docker

Using Docker, I was able to use eclipse-mosquitto to set up an MQTT broker for my app, which subscribes to messages. I'm learning Docker right now, so I wanted to try adding two brokers to docker-compose with different ports mapped, like this:
version: '3'
services:
  myapp:
    ...
    links:
      - mqtt
      - mqtt2
    depends_on:
      - mqtt
      - mqtt2
  mqtt:
    image: eclipse-mosquitto:latest
    container_name: mqtt-iot
    ports:
      - 1883:1883
  mqtt2:
    image: eclipse-mosquitto:latest
    container_name: mqtt2-iot
    ports:
      - 1884:1883
From outside of the myapp container (i.e. from my OS X terminal), both mqtt and mqtt2 are working; I can publish and subscribe to messages as expected.
const mqtt = require('mqtt')
mqtt.connect('mqtt://mqtt', {port: 1883}) // Success
mqtt.connect('mqtt://mqtt2', {port: 1884}) // Success
However, when I'm inside the myapp container, I can only connect to mqtt. The mqtt2 connection fires the offline event right away and never connects. What do I need to do for myapp to use both of those brokers properly?

There are two issues here. First:
links:
  - mqtt
  - mqtt2
The links option is deprecated now and is not even required in your compose file. Next, consider how you connect:
const mqtt = require('mqtt')
mqtt.connect('mqtt://mqtt', {port: 1883}) // Success
mqtt.connect('mqtt://mqtt2', {port: 1884}) // Success
From outside, this works because it is based on the ports published on the host. When you connect from inside the app container, you should do it like below:
const mqtt = require('mqtt')
mqtt.connect('mqtt://mqtt', {port: 1883}) // Success
mqtt.connect('mqtt://mqtt2', {port: 1883}) // Success
The container cannot see ports mapped on the host; it only sees what is inside the Docker network, and on that network both brokers listen on 1883.
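For reference, a minimal sketch of the corrected compose file with links removed (the service definitions are otherwise unchanged; the comments are just annotations):
version: '3'
services:
  myapp:
    ...
    depends_on:
      - mqtt
      - mqtt2
  mqtt:
    image: eclipse-mosquitto:latest
    container_name: mqtt-iot
    ports:
      - 1883:1883   # host 1883 -> container 1883
  mqtt2:
    image: eclipse-mosquitto:latest
    container_name: mqtt2-iot
    ports:
      - 1884:1883   # host 1884 -> container 1883; inside the network it is still 1883
From the host you connect to localhost:1883 and localhost:1884; from inside myapp you connect to mqtt:1883 and mqtt2:1883.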

Related

Telegraf docker cannot connect to Mosquitto broker docker [duplicate]

I am trying to run a local mosquitto broker, publisher, and subscriber setup via Docker and docker-compose, but the publisher cannot connect to the broker. However, connecting to the local broker via the CLI works fine.
I get the following error when running the setup below.
{ Error: connect ECONNREFUSED 127.0.0.1:1883
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1088:14)
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 1883 }
Local dockerized setup:
docker-compose.yml:
version: "3.5"
services:
publisher:
hostname: publisher
container_name: publisher
build:
context: ./
dockerfile: dev.Dockerfile
command: npm start
networks:
- default
depends_on:
- broker
broker:
image: eclipse-mosquitto
hostname: mosquitto-broker
container_name: mosquitto-broker
networks:
- default
ports:
- "1883:1883"
networks:
default:
dev.Dockerfile:
FROM node:11-alpine
RUN mkdir app
WORKDIR app
COPY package*.json ./
RUN npm ci
COPY ./src ./src
CMD npm start
src/index.js:
const mqtt = require("mqtt");
const client = mqtt.connect("mqtt://localhost:1883");

client.on("connect", () => {
  console.log("Start publishing...");
  client.publish("testTopic", "test");
});

client.on("error", (error) => {
  console.error(error);
});
However, if I connect to the mosquitto broker via mqtt-js cli, it works as expected. E.g.
mqtt sub -t 'testTopic' -h 'localhost' and mqtt pub -t 'testTopic' -h 'localhost' -m 'from MQTT.js'.
What am I missing?
Your publisher and broker are running in two different containers, which means they are two different machines, and each machine has its own IP.
You can't call the broker service from your publisher container using localhost:1883, and vice versa from the broker to the publisher container.
To reach the broker container you have to use the container IP, the container name, or the service name.
In your case, change mqtt.connect("mqtt://localhost:1883") to mqtt.connect("mqtt://broker:1883") and give it a try.
The publisher and broker run in different containers, meaning they have different IPs.
When the publisher tries to reach the broker at localhost:1883, it is normal to receive ECONNREFUSED, since the broker is not in the same container.
You should replace the 127.0.0.1 or localhost with the service name of the broker (broker in this case). The service name will be resolved to the correct IP of the broker container.
In your index.js you should change "localhost" to "broker". Inside a container, "localhost" resolves to that specific container, so you should always use the service name instead; Docker will take care of the routing to that specific service. Also, by default all services in the same compose file are added to the same network, so there is no need to specify it.
So basically change this: const client = mqtt.connect("mqtt://localhost:1883");
To this: const client = mqtt.connect("mqtt://broker:1883");
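For reference, a minimal sketch of src/index.js with that change applied (the BROKER_URL environment variable is a hypothetical convenience for running the same script outside compose, not part of the original setup):
const mqtt = require("mqtt");

// "broker" is the compose service name; Docker's embedded DNS resolves it
// to the broker container on the shared network. BROKER_URL is hypothetical,
// useful only when running this script outside docker-compose.
const client = mqtt.connect(process.env.BROKER_URL || "mqtt://broker:1883");

client.on("connect", () => {
  console.log("Start publishing...");
  client.publish("testTopic", "test");
});

client.on("error", (error) => {
  console.error(error);
});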

Docker container can't reach another container using container name

I have two Docker containers running in the same network, and I want one of them to call the other via Spring WebClient.
I'm sure they are all in the same network -> docker network inspect <network_ID> proves this.
AFAIK I can ping one container from another to check if they can talk to each other with docker exec -ti attachment-loader-prim ping attachment-loader-sec.
If I run this, I see responses from attachment-loader-sec like 64 bytes from 172.21.0.5: seq=0 ttl=64 time=0.220 ms, which means they can communicate.
When I send a Postman request to attachment-loader-prim via its exposed port (localhost:8085), I expect that after some business logic it calls attachment-loader-sec via WebClient, but at that step I get a 500 error with this message:
"finishConnect(..) failed: Connection refused:
attachment-loader-sec/172.21.0.5:80; nested exception is
io.netty.channel.AbstractChannel$AnnotatedConnectException:
finishConnect(..) failed: Connection refused:
attachment-loader-sec/172.21.0.5:80"
Both attachment-loader-prim and attachment-loader-sec can be accessed separately via Postman, and both send a response, no problem.
This is my docker-compose:
version: '3'
services:
  attachment-loader-prim:
    container_name: attachment-loader-prim
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SERVER_PORT: 8085
    networks:
      - loader_network
    expose:
      - 8085
    ports:
      - 8005:8005
      - 8085:8085
  attachment-loader-sec:
    container_name: attachment-loader-sec
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SERVER_PORT: 8086
    networks:
      - loader_network
    expose:
      - 8086
    ports:
      - 8006:8005
      - 8086:8086
networks:
  loader_network:
    driver: bridge
And this is the WebClient that makes the call:
class RemoteServiceCaller(private val fetcherWebClientBuilder: WebClient.Builder) {

    suspend fun getAttachmentsFromRemote(id: String, params: List<Param>, username: String): Result? {
        val client = fetcherWebClientBuilder.build()
        val awaitExchange = client.post()
            .uri("/{id}/attachment", id)
            .contentType(MediaType.APPLICATION_JSON)
            .bodyValue(params)
            .header(usernameHeader, username)
            .accept(MediaType.APPLICATION_OCTET_STREAM)
            .awaitExchange {
                if (it.statusCode().is2xxSuccessful) {
                    handleSucessCode(it)
                } else it.createExceptionAndAwait().run {
                    LOG.error(this.responseBodyAsString, this)
                    throw ProcessingException(this)
                }
            }
        return awaitExchange
    }

    private suspend fun handleSucessCode(response: ClientResponse) {
        // some not important logic
    }
}
P.S. The base URI for the WebClient is defined as a config bean, like http://attachment-loader-sec/list.
All my investigation pointed me to problems such as:
Calling the container using localhost instead of the container name
Containers not being in the same network
Neither of these seems relevant in my case.
Any ideas will be really appreciated.
The problem was calling the service without its port. The URL is now http://attachment-loader-sec:8086/list, and it is correct. In my case I now get a 404, which means my URL path is not quite right, but that is outside the scope of the current question.

How does a Docker HTTP client container send an HTTP request on start

I have a docker-compose file which defines an HTTP server and client as follows
version: '3'
services:
  serverA:
    container_name: serverA
    image: xxx/apiserver
    restart: unless-stopped
    ports:
      - '9090:9090'
    networks:
      - apinet
  clientA:
    container_name: clientA
    image: xxx/apiclient
    restart: unless-stopped
    ports:
      - '20005:20005'
    depends_on:
      - serverA
    networks:
      - apinet
networks:
  apinet:
    driver: bridge
The server image contains a simple Go HTTP server ready to handle requests:
package main

import "net/http"

func main() {
    http.HandleFunc("/itemhandler", itemhandler)
    http.ListenAndServe(":9090", nil)
}

func itemhandler(w http.ResponseWriter, req *http.Request) {
    // handle http request
}
while the client image contains a function SendhttpPOSTRequest with an HTTP client that sends an HTTP request to the server, retrying until it is able to get through. When run as a container, the client image contains a configuration file, config.toml, which holds the IP address and port of the server.
func main() {
    SendhttpPOSTRequest() // sends HTTP POST request with retry
}
The problem is that when I start the docker-compose file, both containers get created and run; however, the client keeps sending the HTTP request but cannot reach the server, even though I changed the server information in config.toml as shown below and restarted the client container. I have also tried the server IP address obtained from the apinet network and restarted the client container, but the request still cannot be sent.
config.toml:
serverIP = "serverA" or serverIP = "172.x.0.x"
serverPort = "9090"
A ping between serverA and clientA using their names works, e.g. ping serverA from clientA and vice versa. An HTTP request from the host machine to the server works, but the HTTP request from the client does not reach the server. An HTTP request to the server from inside the client container works, but that is not what I want. What I want is for the client to send the HTTP request (execute SendhttpPOSTRequest()) to the server when its container starts, retrying until the server is up and running.
I have searched Stack Overflow but could not find a similar problem. Can anyone help me? I'm new to Docker.

rsyslog not connecting to elasticsearch in docker

I am trying to capture syslog messages sent over the network using rsyslog, and then have rsyslog capture, transform and send these messages to elasticsearch.
I found a nice article on the configuration on https://www.reddit.com/r/devops/comments/9g1nts/rsyslog_elasticsearch_logging/
The problem is that rsyslog keeps reporting an error at startup saying it cannot connect to Elasticsearch on the same machine on port 9200. The error I get is:
Failed to connect to localhost port 9200: Connection refused
2020-03-20T12:57:51.610444+00:00 53fd9e2560d9 rsyslogd: [origin software="rsyslogd" swVersion="8.36.0" x-pid="1" x-info="http://www.rsyslog.com"] start
rsyslogd: omelasticsearch: we are suspending ourselfs due to server failure 7: Failed to connect to localhost port 9200: Connection refused [v8.36.0 try http://www.rsyslog.com/e/2007 ]
Can anyone help with this?
Everything is running in Docker on a single machine. I use the docker-compose file below to start the stack.
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
environment:
- discovery.type=single-node
- xpack.security.enabled=false
ports:
- 9200:9200
networks:
- logging-network
kibana:
image: docker.elastic.co/kibana/kibana:7.6.1
depends_on:
- logstash
ports:
- 5601:5601
networks:
- logging-network
rsyslog:
image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
environment:
- TZ=UTC
- xpack.security.enabled=false
ports:
- 514:514/tcp
- 514:514/udp
volumes:
- ./rsyslog.conf:/etc/rsyslog.conf:ro
- rsyslog-work:/work
- rsyslog-logs:/logs
volumes:
rsyslog-work:
rsyslog-logs:
networks:
logging-network:
driver: bridge
rsyslog.conf file below:
global(processInternalMessages="on")
#module(load="imtcp" StreamDriver.AuthMode="anon" StreamDriver.Mode="1")
module(load="impstats") # config.enabled=`echo $ENABLE_STATISTICS`)
module(load="imrelp")
module(load="imptcp")
module(load="imudp" TimeRequery="500")
module(load="omstdout")
module(load="omelasticsearch")
module(load="mmjsonparse")
module(load="mmutf8fix")
input(type="imptcp" port="514")
input(type="imudp" port="514")
input(type="imrelp" port="1601")
# includes done explicitly
include(file="/etc/rsyslog.conf.d/log_to_logsene.conf" config.enabled=`echo $ENABLE_LOGSENE`)
include(file="/etc/rsyslog.conf.d/log_to_files.conf" config.enabled=`echo $ENABLE_LOGFILES`)
#try to parse a structured log
action(type="mmjsonparse")
# this is for index names to be like: rsyslog-YYYY.MM.DD
template(name="rsyslog-index" type="string" string="rsyslog-%$YEAR%.%$MONTH%.%$DAY%")
# this is for formatting our syslog in JSON with #timestamp
template(name="json-syslog" type="list") {
constant(value="{")
constant(value="\"#timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
constant(value="\",\"host\":\"") property(name="hostname")
constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
constant(value="\",\"program\":\"") property(name="programname")
constant(value="\",\"tag\":\"") property(name="syslogtag" format="json")
constant(value="\",") property(name="$!all-json" position.from="2")
# closing brace is in all-json
}
# this is where we actually send the logs to Elasticsearch (localhost:9200 by default)
action(type="omelasticsearch" template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")
#################### default ruleset begins ####################
# we emit our own messages to docker console:
syslog.* :omstdout:
include(file="/config/droprules.conf" mode="optional") # this permits the user to easily drop unwanted messages
action(name="main_utf8fix" type="mmutf8fix" replacementChar="?")
include(text=`echo $CNF_CALL_LOG_TO_LOGFILES`)
include(text=`echo $CNF_CALL_LOG_TO_LOGSENE`)
First of all, you need to run all the containers on the same Docker network, which in this case they are not: the rsyslog service has no networks entry, so it is not attached to logging-network. Second, after running the containers on the same network, log in to the rsyslog container and check whether port 9200 on elasticsearch is reachable.
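Both fixes as a minimal sketch, assuming the service names above. In docker-compose.yml, attach the rsyslog service to the network:
  rsyslog:
    image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
    networks:
      - logging-network
And in rsyslog.conf, point omelasticsearch at the elasticsearch service name instead of the default localhost (server and serverport are standard omelasticsearch parameters):
# "elasticsearch" is the compose service name, resolved by Docker's embedded DNS
action(type="omelasticsearch" server="elasticsearch" serverport="9200"
       template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")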

ClusterJ cannot connect to dockerized Mysql cluster from outside the container

I have set up a MySQL cluster on my PC using the mysql/mysql-cluster image from Docker Hub, and it starts up fine. However, when I try to connect to the cluster from outside Docker (via the host machine) using ClusterJ, it doesn't connect.
Initially I was getting the following error: Could not alloc node id at 127.0.0.1 port 1186: No free node id found for mysqld(API)
So I created a custom mysql-cluster.cnf, very similar to the one distributed with the docker image, but with a new api endpoint:
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
[ndb_mgmd]
NodeId=1
hostname=192.168.0.2
datadir=/var/lib/mysql
[ndbd]
NodeId=2
hostname=192.168.0.3
datadir=/var/lib/mysql
[ndbd]
NodeId=3
hostname=192.168.0.4
datadir=/var/lib/mysql
[mysqld]
NodeId=4
hostname=192.168.0.10
[api]
This is the configuration used for the ClusterJ setup:
com.mysql.clusterj.connect:
  host: 127.0.0.1:1186
  database: my_db
Here is the docker-compose config:
version: '3'
services:
  # Sets up the MySQL cluster ndb_mgmd process
  database-manager:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.2
    command: ndb_mgmd
    ports:
      - "1186:1186"
    volumes:
      - /c/Users/myuser/conf/mysql-cluster.cnf:/etc/mysql-cluster.cnf
  # Sets up the first MySQL cluster data node
  database-node-1:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.3
    command: ndbd
    depends_on:
      - database-manager
  # Sets up the second MySQL cluster data node
  database-node-2:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.4
    command: ndbd
    depends_on:
      - database-manager
  # Sets up the first MySQL server process
  database-server:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.10
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_DATABASE=my_db
      - MYSQL_USER=my_user
    command: mysqld
networks:
  database_net:
    ipam:
      config:
        - subnet: 192.168.0.0/16
When I try to connect to the cluster I get the following error: '127.0.0.1:1186' nodeId 0; Return code: -1 error code: 0 message: .
I can see that the app running ClusterJ is registered with the cluster, but then it disconnects. Here is an excerpt from the Docker MySQL manager logs:
database-manager_1 | 2018-05-10 11:18:43 [MgmtSrvr] INFO -- Node 3: Communication to Node 4 opened
database-manager_1 | 2018-05-10 11:22:16 [MgmtSrvr] INFO -- Alloc node id 6 succeeded
database-manager_1 | 2018-05-10 11:22:16 [MgmtSrvr] INFO -- Nodeid 6 allocated for API at 10.0.2.2
Any help solving this issue would be much appreciated.
Here is how ndb_mgmd handles the request when the ClusterJ application starts.
You connect to the MGM server on port 1186. Over this connection you receive the configuration, which contains the IP addresses of the data nodes. To connect to the data nodes, ClusterJ will try to connect to 192.168.0.3 and 192.168.0.4. Since ClusterJ is outside Docker, I presume those addresses point to some different place.
The management server will also provide a dynamic port to use when connecting to the NDB data nodes. It is a lot easier to manage this by setting ServerPort for the NDB data nodes. I usually use 11860 as ServerPort; 2202 is also popular.
I am not sure how you mix a Docker environment with an external environment. I assume it is possible to solve somehow by setting up proper IP translation tables in the correct places.
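A minimal sketch of the ServerPort suggestion, applied to the mysql-cluster.cnf and docker-compose files above (the specific port numbers are assumptions; any fixed, published ports would do). In mysql-cluster.cnf, pin each data node to a fixed port:
[ndbd]
NodeId=2
hostname=192.168.0.3
datadir=/var/lib/mysql
ServerPort=11860
[ndbd]
NodeId=3
hostname=192.168.0.4
datadir=/var/lib/mysql
ServerPort=11861
Then publish those ports from the data node services in docker-compose.yml so an external ClusterJ process can reach them:
  database-node-1:
    ports:
      - "11860:11860"
  database-node-2:
    ports:
      - "11861:11861"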
