Connecting Node-RED to MariaDB within Docker
Evening all,
Setting up a fairly straightforward Mosquitto -> Node-RED -> MariaDB deployment from docker-compose. Compose file as below:
version: '3.8'
services:
  mqtt:
    container_name: mosquitto
    image: eclipse-mosquitto:latest
    restart: always
    ports:
      - "1883:1883"
    volumes:
      - ./mosquitto.conf:/mosquitto/config/mosquitto.conf
      - /mosquitto/data
      - /mosquitto/log
  nodered:
    container_name: node-red
    image: nodered/node-red:latest
    restart: always
    ports:
      - "1880:1880"
    volumes:
      - node_red_user_data:/data
    links:
      - "mariadb:mariadb"
  mariadb:
    container_name: mariadb
    image: mariadb:latest
    restart: always
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - "3306:3306"
      - "33060:33060"
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=yes
      - MYSQL_DATABASE=test
      - MYSQL_USER=testuser
      - MYSQL_PASSWORD=password
    volumes:
      - mariadb_data_container:/var/lib/mysql
volumes:
  mariadb_data_container:
  node_red_user_data:
  mosquitto_persistence:
networks:
  default:
    name: primary
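One thing worth noting in the file above: the named volume mosquitto_persistence is declared but never mounted, so the mqtt service currently writes /mosquitto/data and /mosquitto/log to anonymous volumes that are lost on re-creation. A sketch of the service actually using the named volume (the choice to persist only /mosquitto/data is my assumption):

```yaml
mqtt:
  container_name: mosquitto
  image: eclipse-mosquitto:latest
  volumes:
    - ./mosquitto.conf:/mosquitto/config/mosquitto.conf
    - mosquitto_persistence:/mosquitto/data  # named volume survives container re-creation
    - /mosquitto/log                         # anonymous volume, recreated each time
```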
Mosquitto to Node-RED is working well enough, and I've set up the database and table in MariaDB, but I'm not having any luck getting Node-RED to talk to MariaDB. It keeps throwing this error back at me:
Error: connect ECONNREFUSED 127.0.0.1:3306
The Node-RED node in question is as follows:
[{"id":"d93d7d2b.ee27f","type":"mysql","z":"ec0540ab.8b4e2","mydb":"68416de0.8f91a4","name":"XDK Environmental Data","x":750,"y":260,"wires":[["d0d7439f.9b88d"]]},{"id":"68416de0.8f91a4","type":"MySQLdatabase","z":"","name":"Write to mariadb","host":"localhost","port":"3306","db":"XDK_FEM","tz":""}
Full Node-RED flow here, in case that's in some way useful.
[{"id":"ec0540ab.8b4e2","type":"tab","label":"MQTT_MYSQL_write","disabled":false,"info":""},{"id":"772011e7.51dd4","type":"mqtt in","z":"ec0540ab.8b4e2","name":"XDK1_Output","topic":"BCDS/XDK110/example/out","qos":"2","datatype":"utf8","broker":"ac9b691.6c35998","x":90,"y":260,"wires":[["f7d34d6b.63919","8336338b.e648c"]]},{"id":"d0d7439f.9b88d","type":"debug","z":"ec0540ab.8b4e2","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":990,"y":260,"wires":[]},{"id":"d79d4ae0.fcf958","type":"function","z":"ec0540ab.8b4e2","name":"Create query in topic","func":"var out = \"INSERT INTO XDK1_raw (timestamp,message)\"\nout = out + \"VALUES ('\" + new Date().toISOString() + \"','\" \nout = out + msg.payload + \"');\"\nmsg.topic=out;\n\nreturn msg;","outputs":1,"noerr":0,"initialize":"","finalize":"","x":500,"y":260,"wires":[["d93d7d2b.ee27f"]]},{"id":"9b43a338.781","type":"comment","z":"ec0540ab.8b4e2","name":"Log everything","info":"","x":100,"y":200,"wires":[]},{"id":"d93d7d2b.ee27f","type":"mysql","z":"ec0540ab.8b4e2","mydb":"68416de0.8f91a4","name":"XDK Environmental 
Data","x":750,"y":260,"wires":[["d0d7439f.9b88d"]]},{"id":"f7d34d6b.63919","type":"debug","z":"ec0540ab.8b4e2","name":"","active":false,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":310,"y":180,"wires":[]},{"id":"8336338b.e648c","type":"json","z":"ec0540ab.8b4e2","name":"","property":"payload","action":"str","pretty":true,"x":270,"y":300,"wires":[["cbee55ed.b7a668","d79d4ae0.fcf958"]]},{"id":"cbee55ed.b7a668","type":"debug","z":"ec0540ab.8b4e2","name":"","active":false,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","statusVal":"","statusType":"auto","x":480,"y":380,"wires":[]},{"id":"ac9b691.6c35998","type":"mqtt-broker","z":"","name":"XDK_Mosquitto","broker":"192.168.1.115","port":"1883","clientid":"","usetls":false,"compatmode":false,"keepalive":"60","cleansession":true,"birthTopic":"","birthQos":"0","birthPayload":"","closeTopic":"","closeQos":"0","closePayload":"","willTopic":"","willQos":"0","willPayload":""},{"id":"68416de0.8f91a4","type":"MySQLdatabase","z":"","name":"Write to mariadb","host":"localhost","port":"3306","db":"XDK_FEM","tz":""}]
I've tried various combinations of ports and configs, and after digging myself deeper into a hole I've just reset everything to start from scratch.
Any insight gratefully received!
EDIT
I realise this isn't a very helpful update for anyone facing the same issue, but in the end I deleted the container, started again from the docker-compose, and it worked. Lord only knows why. Working theory is that before hitting on the right answer (using 'mariadb' instead of 'localhost') I broke something trying a wrong one.
The important thing to remember is that each container has its own loopback device (lo, 127.0.0.1), and the host running the containers has its own as well; they are all totally separate.
So you cannot reference the MariaDB container as 127.0.0.1 from the Node-RED container, because that address points back to the Node-RED container itself.
You need to use the hostname mariadb, not 127.0.0.1, when entering the details in the mariadb config node.
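Concretely, the only change needed in the flow JSON from the question is the host field of the MySQLdatabase config node (same id, port and database name as above):

```json
{"id":"68416de0.8f91a4","type":"MySQLdatabase","z":"","name":"Write to mariadb","host":"mariadb","port":"3306","db":"XDK_FEM","tz":""}
```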
Related
Connecting to Docker from external network: modifying YML file
I am trying to set up a Learning Locker server within Docker (on Windows 10, Docker using WSL for emulation) using the repo from michzimney. This service is composed of several Docker containers (Mongo, Redis, NGINX, etc.) networked together. Using the provided docker-compose.yml file I have been able to set up the service and access it from localhost, but I cannot access the server from any machine on the rest of my home network. This is a specific case, but some guidance will be valuable as I am very new to Docker and will need to build many such environments in the future, for now in Windows but later in Docker on Synology, where the services can be accessed from the network and the internet. My research has led me to user-defined bridging using docker -p [hostip]:80:80 but this didn't work for me. I have also turned off the Windows firewall, since that seems to cause a host of issues for some, but still no effect. I tried to bridge my virtual switch manager for WSL using the Windows 10 Hyper-V manager, but that didn't work, and I tried bridging the WSL connector to the LAN using basic Windows 10 networking, but that didn't work and I had to reset my network. So the first question is: is this a Windows networking issue or a Docker configuration issue?
The second question, assuming it's a Docker configuration issue, is how can I modify the following YML file to make the service accessible to the outside network:

version: '2'
services:
  mongo:
    image: mongo:3.4
    restart: unless-stopped
    volumes:
      - "${DATA_LOCATION}/mongo:/data/db"
  redis:
    image: redis:4-alpine
    restart: unless-stopped
  xapi:
    image: learninglocker/xapi-service:2.1.10
    restart: unless-stopped
    environment:
      - MONGO_URL=mongodb://mongo:27017/learninglocker_v2
      - MONGO_DB=learninglocker_v2
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - mongo
      - redis
    volumes:
      - "${DATA_LOCATION}/xapi-storage:/usr/src/app/storage"
  api:
    image: michzimny/learninglocker2-app:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
      - APP_SECRET
      - SMTP_HOST
      - SMTP_PORT
      - SMTP_SECURED
      - SMTP_USER
      - SMTP_PASS
    command: "node api/dist/server"
    restart: unless-stopped
    depends_on:
      - mongo
      - redis
    volumes:
      - "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
  ui:
    image: michzimny/learninglocker2-app:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
      - APP_SECRET
      - SMTP_HOST
      - SMTP_PORT
      - SMTP_SECURED
      - SMTP_USER
      - SMTP_PASS
    command: "./entrypoint-ui.sh"
    restart: unless-stopped
    depends_on:
      - mongo
      - redis
      - api
    volumes:
      - "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
      - "${DATA_LOCATION}/ui-logs:/opt/learninglocker/logs"
  worker:
    image: michzimny/learninglocker2-app:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
      - APP_SECRET
      - SMTP_HOST
      - SMTP_PORT
      - SMTP_SECURED
      - SMTP_USER
      - SMTP_PASS
    command: "node worker/dist/server"
    restart: unless-stopped
    depends_on:
      - mongo
      - redis
    volumes:
      - "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
  nginx:
    image: michzimny/learninglocker2-nginx:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
    restart: unless-stopped
    depends_on:
      - ui
      - xapi
    ports:
      - "443:443"
      - "80:80"

So far I have attempted to change the ports option to the following:

    ports:
      - "192.168.1.102:443:443"
      - "192.168.1.102:80:80"

But then the container wasn't even accessible from the host machine anymore.
I also tried adding network-mode=host under the nginx service but the build failed saying it was not compatible with port mapping. Do I need to set network-mode=host for every service or is the problem something else entirely? Any help is appreciated.
By the looks of your docker-compose.yml, you are exposing ports 80 & 443 to your host (Windows machine). So, if your Windows IP is 192.168.1.102, you should be able to reach http://192.168.1.102 and https://192.168.1.102 on your LAN if there is nothing blocking it (firewall etc.). You can confirm that you are indeed listening on those ports by running 'netstat -a' and checking whether you are LISTENING on those ports.
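For reference, the plain "80:80" form already binds to all host interfaces (0.0.0.0), so prefixing an IP only narrows the listener; that's why the 192.168.1.102-prefixed mapping from the question made things worse rather than better. A sketch of the two equivalent all-interfaces forms for the nginx service above:

```yaml
nginx:
  ports:
    - "80:80"            # shorthand: binds 0.0.0.0:80 on the host
    - "0.0.0.0:443:443"  # explicit all-interfaces form of the same thing
```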
How to change target of the Spring Cloud Stream Kafka binder?
Using Spring Cloud Stream 2.1.4 with Spring Boot 2.1.10, I'm trying to target a local instance of Kafka. This is an extract of my project configuration so far:

spring.kafka.bootstrap-servers=PLAINTEXT://localhost:9092
spring.kafka.streams.bootstrap-servers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.binder.brokers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.binder.zkNodes=localhost:2181
spring.cloud.stream.kafka.streams.binder.brokers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.streams.binder.zkNodes=localhost:2181

But the binder keeps on calling a wrong target:

java.io.IOException: Can't resolve address: kafka.example.com:9092

How can I specify the target if those properties won't do the trick? Moreover, I deploy the Kafka instance through a Docker Bitnami image and I'd prefer not to use SSL configuration (see PLAINTEXT protocol), but I can't find properties for basic credentials login. Does anyone know if this is hopeless? This is my docker-compose.yml:

version: '3'
services:
  zookeeper:
    image: bitnami/zookeeper:latest
    container_name: zookeeper
    environment:
      - ZOO_ENABLE_AUTH=yes
      - ZOO_SERVER_USERS=kafka
      - ZOO_SERVER_PASSWORDS=kafka_password
    networks:
      - kafka-net
  kafka:
    image: bitnami/kafka:latest
    container_name: kafka
    hostname: kafka.example.com
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ZOOKEEPER_USER=kafka
      - KAFKA_ZOOKEEPER_PASSWORD=kafka_password
    networks:
      - kafka-net
networks:
  kafka-net:
    driver: bridge

Thanks in advance
The hostname isn't the issue; rather, it's the advertised listeners (protocol://host:port mapping) that cause the hostname to be advertised to clients by default. You should change that, rather than the hostname.

kafka:
  image: bitnami/kafka:latest
  container_name: kafka
  hostname: kafka.example.com  # <--- Here's what you are getting in the request
  ...
  environment:
    - ALLOW_PLAINTEXT_LISTENER=yes
    - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
    - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092  # <--- This returns the hostname to the clients

If you plan on running your code outside of another container, you should advertise localhost in addition to, or instead of, the container hostname. One year later, my comment still hasn't been merged into the Bitnami README, but I was able to get it working with the following vars (changed to match your deployment):

KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_CFG_LISTENERS=PLAINTEXT://:29092,PLAINTEXT_HOST://:9092
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka.example.com:29092,PLAINTEXT_HOST://localhost:9092
All right: got this to work by looking twice at the docker-compose file (thanks to cricket_007):

kafka:
  ...
  hostname: localhost

For the record: I could get rid of all the properties above, the default for Kafka being localhost:9092.
Using docker to setup tendermint testnet and establishing communication between abci and tendermint core
I am trying to integrate my own ABCI application with the localnet. The docker-compose looks as follows:

version: '3'
services:
  node0:
    container_name: node0
    image: "tendermint/localnode"
    ports:
      - "26656-26657:26656-26657"
    environment:
      - ID=0
      - LOG=${LOG:-tendermint.log}
    volumes:
      - ./build:/tendermint:Z
    command: node --proxy_app=tcp://abci0:26658
    networks:
      localnet:
        ipv4_address: 192.167.10.2
  abci0:
    container_name: abci0
    image: "abci-image"
    volumes:
      - $GOPATH/src/samplePOC:/go/src/samplePOC
    ports:
      - "26658:26658"
    build:
      context: .
      dockerfile: $GOPATH/src/samplePOC/Dockerfile
    command: /go/src/samplePOC/samplePOC
    networks:
      localnet:
        ipv4_address: 192.167.10.6

Both the node and the ABCI containers are built successfully. The ABCI server starts successfully and the nodes try to make connections. However, the main problem is that the two are not able to communicate with each other. I get the following error:

node0 |E[2019-10-29|15:14:28.525] abci.socketClient failed to connect to tcp://abci0:26658. Retrying... module=abci-client connection=query err="dial tcp 192.167.10.6:26658: connect: connection refused"

Can someone please help me here?
My first thought is that you may need to add a depends_on: ["abci0"] to node0, as the ABCI application must be listening before Tendermint will try to connect. Of course, TM should continue to retry, so this may not be the issue. Another thing you can try is to run Tendermint on your host machine and attempt to connect to the exposed ABCI port on abci0 (26658) to isolate the problem to the Docker configuration. If you're not able to run tendermint node --proxy_app=tcp://localhost:26658, the problem likely lies in your ABCI application. I assume you've initialized a directory in the volume you mount into node0?
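The depends_on: ["abci0"] suggestion above would slot into the question's compose file like this (service names as in the question; note that depends_on only orders container startup — it does not wait for the ABCI socket to actually be listening):

```yaml
node0:
  container_name: node0
  image: "tendermint/localnode"
  depends_on:
    - abci0
  command: node --proxy_app=tcp://abci0:26658
```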
I got this working with the kvstore example from Tendermint:

version: "3.4"
services:
  kvstore-app:
    image: alpine
    expose:
      - "26658"
    volumes:
      - ./kvstore-example:/home/dev/kvstore-example
    command: "/home/dev/kvstore-example --socket-addr tcp://kvstore-app:26658"
  tendermint-node:
    image: tendermint/tendermint
    depends_on:
      - kvstore-app
    ports:
      - "26657:26657"
    environment:
      - TMHOME=/tmp/tendermint
    volumes:
      - ./tmp/tendermint:/tmp/tendermint
    command: node --proxy_app=tcp://kvstore-app:26658

I'm not exactly sure why your docker-compose.yml isn't working, but it's likely that you are not binding the socket of your ABCI application in a way that is accessible to the node. I'm explicitly telling the ABCI application to do so with the argument --socket-addr tcp://kvstore-app:26658. Additionally, I'm just exposing the port of the ABCI application on the Docker network, though I think mapping the port should do this implicitly. Also, I would get rid of all the network stuff; personally, I use network configuration only if I have some very specific network goals in mind.
Mapping ports in docker-compose file doesn't work. Network unreachable
I'm trying to map a port from my container to a port on the host, following the docs, but it doesn't appear to be working. After I run docker-compose -f development.yml up --force-recreate I get no errors. But if I try to reach the frontend service using localhost:8081, the network is unreachable. I used docker inspect to view the IP and tried to ping that, and still nothing. Here is the docker-compose file I am using. Am I doing anything wrong?

development.yml:

version: '3'
services:
  frontend:
    image: nginx:latest
    ports:
      - "8081:80"
    volumes:
      - ./frontend/public:/var/www/html
  api:
    image: richarvey/nginx-php-fpm:latest
    ports:
      - "8080:80"
    restart: always
    volumes:
      - ./api:/var/www/html
    environment:
      APPLICATION_ENV: development
      ERRORS: 1
      REMOVE_FILES: 0
    links:
      - db
      - mq
  db:
    image: mariadb
    restart: always
    volumes:
      - ./data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: dEvE10pMeNtMoDeBr0
  mq:
    image: rabbitmq:latest
    restart: always
    environment:
      RABBITMQ_DEFAULT_USER: developer
      RABBITMQ_DEFAULT_PASS: dEvE10pMeNtMoDeBr0
You are using Docker Toolbox. Docker Toolbox uses Docker Machine. In Windows with Docker Toolbox, you are running under a VirtualBox VM with its own IP, so localhost is not where your containers live. You will need to go to 192.168.99.100:8081 to find your frontend. As per the documentation on Docker Machine (https://docs.docker.com/machine/get-started/#run-containers-and-experiment-with-machine-commands):

$ docker-machine ip default
192.168.99.100
How to use IP addresses instead of container names in docker compose networking
I'm using docker compose for a web application that I'm creating with asp.net core, postgres and redis. I have everything set up in compose to connect to postgres using the service name I've specified in the docker-compose.yml file. When trying to do the same with redis, I get an exception. After doing research it turns out this exception is a known issue, and the workaround is using the IP address of the machine instead of a host name. However, I cannot figure out how to get the IP address of the redis service from the compose file. Is there a way to do that?

Edit

Here is the compose file:

version: "3"
services:
  postgres:
    image: 'postgres:9.5'
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5433:5432'
  redis:
    image: 'redis:3.0-alpine'
    command: redis-server --requirepass devpassword
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6378:6379'
  web:
    build: .
    env_file:
      - '.env'
    ports:
      - "8000:80"
    volumes:
      - './src/edb/Controllers:/app/Controllers'
      - './src/edb/Views:/app/Views'
      - './src/edb/wwwroot:/app/wwwroot'
      - './src/edb/Lib:/app/Lib'
volumes:
  postgres:
  redis:
Ok, I found the answer. It was something I was trying, but I didn't realize the address may change every time you restart the containers. Run docker ps to get a list of running containers, then copy the ID of your container and run docker inspect {container_id}; that will output the IP address you can use to access it from within the other running containers. The reason I was confused was that the address may change when the containers are started, so I had to guess what the IP address was going to be before I started the containers. Luckily, after 5 tries I guessed correctly.
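If you do go this route, the IP can be pulled out in one step with a Go-template format string instead of reading the full inspect output (the container name here is a placeholder):

```shell
# Print just the container's IP address on its network(s)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my_container
```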