Eclipse Mosquitto - Client <unknown> disconnected due to malformed packet

Testing out Eclipse Mosquitto (2.0.14)
mosquitto.conf
listener 8883
allow_anonymous true
docker-compose.yml
services:
  mosquitto:
    image: eclipse-mosquitto
    container_name: mosquitto
    ports:
      - 8883:8883
    volumes:
      - ./mosquitto.conf:/mosquitto/config/mosquitto.conf
      - ./data:/mosquitto/data
Testing like this (from inside the container):
mosquitto_pub -h 127.0.0.1 -p 8883 -m "test" -t test
results in
Client <unknown> disconnected due to malformed packet.
Any ideas?

I was able to connect to the broker from outside the container, so I assume something about the way the broker binds to the IP address prevents this from working from inside the container (?)
mosquitto_pub -h <server> -p 8883 -m "test" -t test
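Not an answer, but a data point: 8883 is conventionally the MQTT-over-TLS port, and a "malformed packet" disconnect usually means the listener received something that isn't plain MQTT (for example a TLS handshake, or an HTTP/websocket request). One thing worth testing is a plain listener on the conventional non-TLS port with an explicit bind address; this is a sketch of an assumption to try, not a confirmed fix:

```
# mosquitto.conf - plain (non-TLS) listener on the conventional port, all interfaces
listener 1883 0.0.0.0
allow_anonymous true
```

Also check the broker's startup log to confirm it actually loaded the mounted config file rather than the image's default one.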

Related

How to connect outside command to rabbitmq docker container?

I'm doing self-study using Docker. I'm trying an experiment where my RabbitMQ is inside Docker and my eclipse-mosquitto client is outside. I know there is an image for eclipse-mosquitto, but I just want to know if I can connect to the broker from outside Docker.
This is the relevant part of my docker-compose.yml:
version: "3.2"
networks:
  test_network:
services:
  rabbitmq:
    container_name: rabbitmq-test
    build:
      context: ./docker/rabbitmq
    ports:
      - 5672:5672
      - 15672:15672
    volumes:
      - ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
      - ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
      - ./docker/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    networks:
      - test_network
and then I installed mosquitto using Homebrew on my machine. I try this command:
mosquitto_pub -h 172.20.0.5 -t test.hello -m "hello world" -u guest -P guest -p 1883 -d
I always get Error: Operation timed out. I tried using the container name of rabbitmq as the host, but that doesn't work either. Is this even possible?
I also tried this command
mosquitto_pub -h 0.0.0.0 -t rh.mumbai -m "miyatx" -u guest -P guest -p 1883 -d
I get an error
Client null sending CONNECT
Error: The connection was lost.
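From the host, a container's published ports are reached via localhost (or the machine's IP), not via the container IP 172.20.0.5, and the compose file above never publishes an MQTT port at all. A sketch of a fix, assuming the rabbitmq_mqtt plugin is enabled (its default listener port is 1883):

```yaml
# hypothetical addition to the rabbitmq service: publish the MQTT port too
ports:
  - 5672:5672
  - 15672:15672
  - 1883:1883
```

After that, mosquitto_pub -h 127.0.0.1 -p 1883 ... should be able to reach the broker from the host.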

Cannot connect to docker container (redis) in host mode

This is probably just related to WSL in general, but Redis is my use case.
This works fine and I can connect like:
docker exec -it redis-1 redis-cli -c -p 7001 -a Password123
But I cannot make any connections from my local Windows PC to the container. I get
Could not connect: Error 10061 connecting to host.docker.internal:7001. No connection could be made because the target machine actively refused it.
This is the same error as when the container isn't running, so I'm not sure if it's a Docker issue or a WSL one.
version: '3.9'
services:
  redis-cluster:
    image: redis:latest
    container_name: redis-cluster
    command: redis-cli -a Password123 -p 7001 --cluster create 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006 --cluster-replicas 1 --cluster-yes
    depends_on:
      - redis-1
      - redis-2
      - redis-3
      - redis-4
      - redis-5
      - redis-6
    network_mode: host
  redis-1:
    image: "redis:latest"
    container_name: redis-1
    network_mode: host
    entrypoint: >
      redis-server
      --port 7001
      --appendonly yes
      --cluster-enabled yes
      --cluster-config-file nodes.conf
      --cluster-node-timeout 5000
      --masterauth Password123
      --requirepass Password123
      --bind 0.0.0.0
      --protected-mode no
  # Five more the same as the above
According to the provided docker-compose.yml file, the container ports are not published, so they are unreachable from the outside (your Windows/WSL host). See the Compose file reference on ports and the Docker networking documentation for details.
As an example, for the redis-1 service you should add the following to its definition:
...
redis-1:
  ports:
    - 7001:7001
...
The docker exec ... command works because the port is reachable from inside the container.
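Independently of the compose file, a quick way to see whether a port answers on the WSL side at all is a bash TCP probe (no redis-cli needed; the host and port below are just the values from the question):

```shell
# Probe a TCP port: prints "open" if something accepts the connection, else "closed".
probe() { timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null && echo open || echo closed; }
probe 127.0.0.1 7001
```

If this prints closed while the container is up, the port isn't reachable from where you are probing; note that with network_mode: host under Docker Desktop/WSL 2, the listener may live inside the WSL VM rather than on the Windows side.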

Can't enable ssl by docker-letsencrypt-nginx-proxy-companion

I want to enable SSL with docker-letsencrypt-nginx-proxy-companion.
This is the docker-compose.yml
version: "3.3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs:ro
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
  db:
    # ---
  wordpress:
    # ---
    environment:
      # ---
      VIRTUAL_HOST: blog.ironsand.net
      LETSENCRYPT_HOST: blog.ironsand.net
      LETSENCRYPT_EMAIL: mymail@example.com
    restart: always
  letsencrypt-nginx-proxy-companion:
    container_name: letsencrypt
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - certs:/etc/nginx/certs
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_PROXY_CONTAINER: nginx-proxy
    restart: always
networks:
  default:
    external:
      name: nginx-proxy
volumes:
  certs:
  vhostd:
  html:
docker logs letsencrypt shows that a certificate exists already.
/etc/nginx/certs/blog.ironsand.net /app
Creating/renewal blog.ironsand.net certificates... (blog.ironsand.net)
2020-04-09 00:03:23,711:INFO:simp_le:1581: Certificates already exist and renewal is not necessary, exiting with status code 1.
/app
But the ACME challenge returns nothing. (Is that a failure?)
$ docker exec letsencrypt bash -c 'echo "Hello world!" > /usr/share/nginx/html/.well-known/acme-challenge/hello-world'
$
Port 443 is listening, but it is closed from outside.
// in remote server
$ sudo lsof -i:443
[sudo] password for ubuntu:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
docker-pr 10910 root 4u IPv6 633694 0t0 TCP *:https (LISTEN)
// from local pc
❯ nmap -p 443 blog.ironsand.net
Starting Nmap 7.80 ( https://nmap.org ) at 2020-04-09 09:44 JST
Nmap scan report for blog.ironsand.net (153.127.40.107)
Host is up (0.035s latency).
rDNS record for 153.127.40.107: ik1-418-41103.vs.sakura.ne.jp
PORT STATE SERVICE
443/tcp closed https
Nmap done: 1 IP address (1 host up) scanned in 0.21 seconds
I'm using packet filtering, but ports 80 and 443 are open, and I'm not using a firewall.
How can I investigate further to find where the problem lies?
I can't solve your problem directly, but I can offer some hints that may help you solve it yourself.
Your command returns nothing.
bash -c 'echo "Hello world!" > /usr/share/nginx/html/.well-known/acme-challenge/hello-world'
This command only writes "Hello world!" to that location and normally returns nothing. See https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html#Redirections
Look inside the certs folder.
Have a look into the certs folder and maybe clean it up. Check that the folder is mounted correctly in your nginx container. Open a shell inside the container and check the SSL folder.
Check that the firewall isn't breaking anything.
No connection is possible from the outside? What about from the inside? Log in to your Docker host and check the connection from there (openssl and curl are your friends).
Don't use SSL inside the container.
I often see problems when somebody tries to use SSL with ACME images and "wild mounting and shared volumes", but I have never heard of such problems when the same people use a normal reverse proxy. I describe a good setup below.
So just remove the whole letsencrypt code from your container and close your container's port 443.
(Additionally, you can switch to a non-root image and expose only ports that don't need root privileges.)
Then install nginx on your host and set up a reverse proxy (something like proxy_pass 127.0.0.1:8080). Now install certbot and run it; it is straightforward.
Certbot can also maintain (renew) your certificates.
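The host-side reverse proxy suggested above might look like this (a sketch only; the upstream port 8080 is hypothetical and stands for whatever plain-HTTP port your container publishes):

```
# /etc/nginx/sites-available/blog.ironsand.net (sketch)
server {
    listen 80;
    server_name blog.ironsand.net;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Running certbot --nginx would then add the 443 server block and keep the certificate renewed.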

Docker process opens TCP port, but connection is refused

How can I run a simple server listening on a port, inside a Docker container?
(In case it matters, this is a MacOS X 10.13.6 host.)
When I run a simple HTTP Server:
python3 -m http.server 8001
the process starts correctly, and it listens correctly on that port (confirmed with telnet localhost 8001).
When I run a Docker container, it also runs the process correctly, but now the connection is refused.
web-api/Dockerfile:
FROM python:3.7
CMD python3 -m http.server 8001
docker-compose.yaml:
version: '3'
services:
web-api:
hostname: web-api
build:
context: ./web-api/
dockerfile: Dockerfile
expose:
- "8001"
ports:
- "8001:8001"
networks:
- api_network
networks:
api_network:
driver: bridge
When I start the container(s), the HTTP server is running:
$ docker-compose up --detach --build
Building web-api
[…]
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1db4cd1daaa4 ipsum_web-api "/bin/sh -c 'python3…" 8 minutes ago Up 8 minutes 0.0.0.0:8001->8001/tcp ipsum_web-api_1
$ docker-machine ssh lorem
docker@lorem:~$ ps -ef | grep http.server
root 12883 12829 0 04:40 ? 00:00:00 python3 -m http.server 8001
docker@lorem:~$ telnet localhost 8001
[connects successfully]
^D
docker@lorem:~$ exit
$ telnet localhost 8001
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host
What is configured incorrectly here, such that the server works inside the Docker container, but I get Connection refused on port 8001 when connecting from outside the container on its exposed port?
Try telnet lorem 8001.
Your Python container's port 8001 is published on its Docker host, which is the one you SSH into (lorem).
When you run docker, docker-compose or docker-machine commands, they use the value of DOCKER_HOST to figure out which host to connect to. I presume your DOCKER_HOST points at the same IP as lorem.
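The server side of the question checks out on its own: python3 -m http.server binds to all interfaces by default, which can be sanity-checked locally before blaming Docker (a sketch, making the bind address explicit):

```shell
# Start the same server with an explicit bind address, confirm it answers, then stop it.
python3 -m http.server 8001 --bind 0.0.0.0 >/dev/null 2>&1 &
SRV=$!
sleep 1
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8001/   # prints 200
kill $SRV
```

Since the server itself accepts connections, the refusal from the Mac points at the network path: with docker-machine, published ports live on the VM's IP (see docker-machine ip lorem), not on the Mac's localhost.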

Facing issues while testing connectivity from docker container to validator. curl http://rest-api:8008/blocks

Background: setting up the environment for Hyperledger Sawtooth.
I am running the command curl http://rest-api:8008/blocks to test connectivity to the validator from the client container.
I am getting the error:
could not resolve host rest-api
If you are using the sawtooth-local-installed.yaml from the Sawtooth master branch, then the REST API service is exposed on port 8008 on the rest-api container, and also forwarded to port 8008 on the host:
rest-api:
  image: sawtooth-rest-api:latest
  container_name: sawtooth-rest-api-default
  expose:
    - 8008
  ports:
    - "8008:8008"
  depends_on:
    - validator
  entrypoint: sawtooth-rest-api --connect tcp://validator:4004 --bind rest-api:8008
The service should therefore be accessible from another Docker container as http://rest-api:8008/blocks or from the host as http://127.0.0.1:8008/blocks via a web browser or curl. If you are still having problems, try changing the entrypoint command to use --bind 0.0.0.0:8008 as the last argument.
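The suggested change would look like this in the compose file (a sketch; only the entrypoint's last argument differs from the block above):

```yaml
rest-api:
  entrypoint: sawtooth-rest-api --connect tcp://validator:4004 --bind 0.0.0.0:8008
```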
Just use this command and it will work:
curl http://rest-api-0:8008/blocks
This is because rest-api-0 is the name used in the Docker file.
