Accessing Mosquitto via port 443 and Apache - Docker

I am running an MQTT Mosquitto server listening on port 8883 using TLS in a Docker container named 'mosquitto'.
In another Docker container on the same network I am running an Apache webserver with a webpage at my_domain (on port 443).
Apache should forward all requests to my_domain/mosquitto to the Mosquitto broker. Thus I add
ProxyPreserveHost On
ProxyPass /mosquitto ws://mosquitto:8883
ProxyPassReverse /mosquitto ws://mosquitto:8883
to my httpd.conf, which redirects HTTPS browser calls to my_domain/mosquitto on to Mosquitto.
This of course results in an OpenSSL error at Mosquitto.
But using the MQTT client (Python) results in Name or service not known.
What am I doing wrong?
P.S.:
The SSL keys / certificates for Apache and Mosquitto are different.
When I disable the webserver and map Mosquitto to port 443 directly via Docker, the connection works.

To use an HTTP reverse proxy (Apache) in front of an MQTT broker you must use MQTT over WebSockets (because WebSocket connections are bootstrapped over HTTP).
A native MQTT connection will just not work, as Apache has no way of understanding the native protocol format.
You will need to enable a WebSocket listener in Mosquitto and tell the client to make a WebSocket connection.
You should also probably be using /mqtt rather than /mosquitto as the proxied path, as this is the default for WebSocket connections.
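As a minimal sketch, assuming Mosquitto has a WebSocket listener enabled (a listener with protocol websockets) and Apache proxies my_domain/mqtt to it, a Python paho-mqtt client (1.x API) could connect like this; the host, path and topic are placeholders:

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("connected, rc =", rc)
    client.publish("test/topic", "hello over websockets")

client = mqtt.Client(transport="websockets")   # speak MQTT over WebSockets instead of raw TCP
client.on_connect = on_connect
client.ws_set_options(path="/mqtt")            # the path Apache proxies to the broker
client.tls_set()                               # TLS is negotiated against Apache's certificate
client.connect("my_domain", 443)
client.loop_forever()

In this setup TLS would terminate at Apache, which then proxies plain ws:// to Mosquitto's WebSocket listener.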

Related

How do server programs work on Docker when *only* the listening port is mapped to the Docker host?

This is just a conceptual question that I have been thinking about recently.
Say I'm running an Nginx container on Docker on a host. Normally, for this to work, we have to map ports like 80 and 443 from the host to the container. This is because these are listening ports, and connections from the outside world to port 80 should be forwarded to port 80 of the container. So far so good.
But also: port 80 is just the listening socket, right? The listening socket only accepts the connection; after this any communication done between a client and the Nginx server is supposedly done on a different socket with a random port number (on the server side). This is to allow multiple connections, and to keep the listening port free to establish more connections, etc. This is where my issue comes in.
Say I'm a client and I connect to this Nginx server. As far as I understand, I first send TCP packets to port 80 of the host that is hosting this Nginx Docker container. But during the establishment of the connection, the server changes its port to another number, say 45670. (Not sure how, but I am guessing the packets that are sent back suddenly mention this port, and our client will continue the rest of the exchange with this port number instead.)
But now as I send packets (e.g. HTTP requests) to the host on port 45670, how will the Nginx docker container see those packets?
I am struggling to understand how server processes can run on Docker with only one port exposed / published for mapping.
Thanks!
But also: port 80 is just the listening socket, right? The listening socket only accepts the connection; after this any communication done between a client and the Nginx server is supposedly done on a different socket with a random port number (on the server side).
Nope. When a connection is established, the client side is a random port number (usually) and the server side is the same port that the server listens on.
In TCP there aren't actually listening sockets - they're an operating system thing - and a connection is identified by the combination of both the port numbers and both the IP addresses. The client sends a SYN ("new connection please") from its port 49621 (for example) to port 80 on the server; the server sends a SYN/ACK ("okay") from its port 80 to port 49621 on the client.
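A tiny Python sketch (localhost only, with port 8080 standing in for port 80) shows that the accepted connection keeps the server's listening port and only the client side gets a random port:

import socket, threading, time

def serve():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 8080))      # stand-in for the container's port 80
    srv.listen(1)
    conn, peer = srv.accept()
    print("server side of the connection:", conn.getsockname())  # ('127.0.0.1', 8080)
    print("client side of the connection:", peer)                # ('127.0.0.1', <random port>)
    conn.close()
    srv.close()

threading.Thread(target=serve, daemon=True).start()
time.sleep(0.5)                         # give the listener a moment to start
cli = socket.create_connection(("127.0.0.1", 8080))
time.sleep(0.5)                         # let the server print before exiting
cli.close()

So the Docker port mapping only ever needs to publish port 80: replies on an established connection are matched by the full (source IP, source port, destination IP, destination port) tuple, not by opening additional ports.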

SSL and port forwarding in Jelastic: Deploying Strapi

I'm just discovering Jelastic, and I am having difficulty running Strapi.
So far I just have one node, a Docker Strapi image, behind the SLB (no specific load balancer).
This node is accessed via the SLB, and both public IPv4 and IPv6 are available.
I redirect a subdomain to these public IPs.
I can launch Strapi in the container. However, it does not work well because of two issues:
SSL is not available. I can't install Let's Encrypt Free SSL: "the add-on cannot be installed on this node"...
Port is not redirected, and I have to explicitly indicate the port in the browser url to access the app homepage.
With these two issues, Strapi cannot work properly.
DOCKER_EXPOSED_PORT 1337 and MASTER_IP are set up for the Docker container.
How can I solve these two issues?
SSL is not available. I can't install Let's Encrypt Free SSL: "the add-on cannot be installed on this node"...
The Jelastic Let's Encrypt add-on can easily be installed on top of any container with Custom SSL support enabled, namely the following servers (the list is constantly extended):
Load Balancers - NGINX, Apache LB, HAProxy, Varnish
Java application servers - Tomcat, TomEE, GlassFish, Payara, Jetty
PHP application servers - Apache PHP, NGINX PHP
Ruby application servers - Apache Ruby, NGINX Ruby
If you require Let's Encrypt SSL for any other stack, just add a load balancer in front of your application servers and install the add-on. SSL termination at load balancing level is used by default in clustered topologies. Docker containers are not on the list of supported nodes.
Port is not redirected, and I have to explicitly indicate the port in the browser url to access the app homepage.
When using an external IP address, for correct forwarding you can add two redirection rules to iptables and redirect all requests from port 80 or 443 to 1337, for example:
*nat
# redirect incoming HTTP traffic on port 80 to Strapi's port 1337
-A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
*filter
# accept new connections to port 1337
-A INPUT -p tcp -m tcp --dport 1337 -m state --state NEW -j ACCEPT
If you do not use an external IP address, you can apply the solution indicated here or find additional information at this link:

Trying to figure out hostname and port for Azure Service Bus Queue

I need to be able to read/write to an Azure Service Bus Queue and for that, the hostname and ports need to be white-listed by my IT team.
The connection string is: "Endpoint=sb://[myappname].servicebus.windows.net;...".
I have tried the hostname with port 443 (assuming here), but that hasn't worked after white-listing. So then I tried writing to the queue while capturing the traffic in Wireshark, but I am getting lost in all the network packet details there.
Can anyone please help me with this?
Thank you
A TCP port is used by default for transport operations. Please try opening ports 5671 and 5672. More details are in the AMQP 1.0 in Azure Service Bus and Event Hubs protocol guide.
Azure Service Bus requires the use of TLS at all times. It supports connections over TCP port 5671, whereby the TCP connection is first overlaid with TLS before entering the AMQP protocol handshake, and also supports connections over TCP port 5672 whereby the server immediately offers a mandatory upgrade of connection to TLS using the AMQP-prescribed model. The AMQP WebSockets binding creates a tunnel over TCP port 443 that is then equivalent to AMQP 5671 connections.
If you use a library, try setting the ConnectivityMode to Https (port 443):
ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Https
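As a rough way to confirm which of those ports the firewall actually lets through, a short Python check can attempt a TLS handshake on each candidate port; the hostname below keeps the placeholder namespace from the connection string:

import socket, ssl

host = "[myappname].servicebus.windows.net"   # placeholder namespace from the connection string
for port in (5671, 443):
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ssl.create_default_context().wrap_socket(sock, server_hostname=host):
                print("port", port, "- TLS handshake OK")
    except OSError as exc:
        print("port", port, "- blocked or unreachable:", exc)

This only verifies that the TCP/TLS handshake gets through the firewall, not that the AMQP or WebSocket layer above it works.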

Re-resolve of backend in nginx SNI docker swarm

I am using nginx to do TCP forwarding based on hostname as discussed here: Nginx TCP forwarding based on hostname
When the upstream containers are taken down for a short period of time (5 or so minutes) and then brought back up, nginx doesn't seem to re-resolve them (I continue to get a "111: Connection refused" error).
I've attempted to put a resolver in the server block of the nginx config:
server {   # inside the stream {} block
    listen 443;
    resolver x.x.x.x valid=30s;
    proxy_pass $name;   # $name comes from a map on $ssl_preread_server_name
    ssl_preread on;
}
I still get the same behaviour with this in place.
Like BMitch says, you can scale the service to 0 instead of removing it, so its DNS entry remains available to Nginx.
But really, if you're using nginx in Swarm, I recommend using a Swarm-aware proxy solution that dynamically updates the nginx/haproxy config based on services that carry the proper labels. In those cases, when a service is removed, its config is also removed from the proxy. Ones I've used include:
Traefik
Docker Flow Proxy

Setting up Mosquitto on home server

I'm struggling with exposing Mosquitto, which I set up on my CentOS 7 home server, to the outside internet through my router.
Mosquitto runs fine on localhost, port 1883, on the home server. I am able to pub/sub, and it is listening on 127.0.0.1:1883 (tcp).
My home router has a dynamic IP (for now), say 76.43.150.206. On the router I port-forwarded 1883 (same internal and external port) to my home server, say 192.168.1.100.
In the mosquitto.conf file, I have one simple line: "listener 1883 76.43.150.206".
When I then attempt to pub/sub using a python client on an external computer as mqttc.connect("76.43.150.206", 1883), it says connection refused.
Any hints on what I'm doing wrong or how to get it working? BTW, my understanding of this setup is very basic and I've pretty much been going off blogs.
Here's how it will work:
1.) Set up mosquitto.conf as
listener 1883 0.0.0.0
#cafile <path to ca file>
#certfile <path to server cert>
#keyfile <path to server key>
#require_certificate false
0.0.0.0 binds the server to all interfaces present.
You can uncomment those lines to enable TLS for better security, but you'll have to configure the client to use it as well.
2.) Port forward the router's port 1883 to port 1883 of the IP of the machine running the broker.
3.) Start the broker and test your client!
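For example, a quick test from an external machine with the Python paho-mqtt client (1.x API; the IP and topic are the example values from the question) might look like:

import paho.mqtt.client as mqtt

client = mqtt.Client()
# client.tls_set("ca.crt")   # only if you enabled the TLS options in mosquitto.conf
client.connect("76.43.150.206", 1883)
client.loop_start()
client.publish("test/topic", "hello from outside").wait_for_publish()
client.loop_stop()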
You should not put the external address into the mosquitto config file.
You should probably not even have a listener line at all, as Mosquitto will bind to all available IP addresses on the machine it's running on, with the default port (1883).
If you really must use the listener directive (e.g. in order to set up SSL), then it should be configured with the internal IP address of the machine running the broker, in this case 192.168.1.100, and with a different port number so it does not clash with the default:
listener 1884 192.168.1.100
