I am currently planning to use the MQTT protocol for a pub/sub app on iOS. My backend server is NGINX and I want to connect to it using WebSockets. I haven't used MQTT before, so my question is: can I configure my NGINX server to be the message broker for MQTT, or do I have to use NGINX as a proxy in front of a message broker such as Mosquitto running on a different instance?
Nginx is not an MQTT broker; at best it can proxy for a broker, but it cannot act as one.
Nginx can proxy both native MQTT and MQTT over WebSockets when configured correctly.
The broker doesn't have to run on a separate machine, and you can use Nginx as a load balancer across a cluster of brokers if needed.
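Both proxy modes can be sketched roughly as below. This is a fragment, not a complete nginx.conf (the `events {}` block and TLS directives are omitted), and it assumes Mosquitto on the same host with its default MQTT listener on 1883 and a websockets listener on 9001 (both configurable in mosquitto.conf); the exposed ports 8883/8080 and the `/mqtt` path are also assumptions.

```nginx
# stream {} proxies native MQTT (raw TCP) straight through to the broker
stream {
    upstream mqtt {
        server 127.0.0.1:1883;   # Mosquitto's default MQTT listener
    }
    server {
        listen 8883;             # port exposed to MQTT clients
        proxy_pass mqtt;
    }
}

# http {} proxies MQTT over WebSockets; the Upgrade headers are required
http {
    server {
        listen 8080;
        location /mqtt {
            proxy_pass http://127.0.0.1:9001;       # Mosquitto websockets listener
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
```

The stream block is what handles "native" MQTT, since MQTT is not HTTP; the http block only applies to the websockets transport.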
I'm trying to proxy WebSocket requests from one Docker container to another. I was able to set up an HTTP/HTTPS proxy in Docker's config.json through "httpProxy" and "httpsProxy", but I couldn't find a way to do this for WebSockets. How should I do that for this protocol?
Is it possible to make a serverless Icecast server?
I'm trying to make an internet radio with Icecast on Google's serverless Cloud Run platform. I've put this Docker image in Container Registry and then created a Cloud Run service with the default Icecast port 8000. It all seems to work when visiting Cloud Run's provided URL: I can reach the default Icecast status and admin pages.
The problem is connecting to the server with a source client (I tried mixxx and butt). I think the problem is with ports, since setting the port to 8000 in mixxx gives a "Socket is busy" error, while butt simply doesn't connect. Setting the port to 443 in mixxx gives a "Socket error", while butt reports "connect: server answered with 411!"
I tried the same thing with Compute Engine, installing Icecast directly rather than via a Docker image, and everything works as intended. As I understand it, Cloud Run provides a URL for the container (https://example.app) with the port given at setup (8000 for Icecast), but the source client tries to connect to that URL with its own configured port (http://example.app:SOURCE_CLIENT_PORT). So I'm not sure if there's a problem with HTTPS or if I just need to configure the ports differently.
With Cloud Run you can expose only one port externally. By default it's port 8080, but you can override this when you deploy your revision.
This port is wrapped behind a front layer of Google Cloud infrastructure, named Google Front End, and exposed with a DNS name (*.run.app) on port 443 (HTTPS).
Thus, you can reach your service only on the exposed port, via that port-443 wrapping. Any other port will fail.
With Compute Engine you don't have this limitation, and that's why you had no issues there. Simply open the correct port with firewall rules and enjoy.
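Overriding the container port looks roughly like this (the service name, project ID, and region here are hypothetical placeholders):

```shell
# tell Cloud Run that the container listens on 8000 instead of 8080
gcloud run deploy icecast \
  --image gcr.io/PROJECT_ID/icecast \
  --port 8000 \
  --region europe-west1 \
  --allow-unauthenticated
```

Source clients must then connect to the provided https://*.run.app URL on port 443; connecting to port 8000 directly will not work.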
I'm running a TCP server (Docker instance / Go) on Kubernetes. It's working and clients can connect and do the intended work. I would like to secure the TCP connection with an SSL certificate. I already got SSL working with an HTTP REST API service running on the same Kubernetes cluster by using ingress controllers, but I'm not sure how to set it up for a plain TCP connection. Can anyone point me in the right direction?
As you can read in the documentation:
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
Depending on the platform you are using, different kinds of load balancers are available to terminate your SSL traffic. If you have an on-premises cluster, you can set up an additional nginx or HAProxy server in front of your Kubernetes cluster to handle SSL traffic.
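A LoadBalancer Service for a raw TCP port might look like the sketch below (the names and port numbers are assumptions). TLS itself then has to be terminated either in the Go app (e.g. with crypto/tls and `tls.Listen`) or at the cloud load balancer via provider-specific annotations:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tcp-server          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: tcp-server         # must match your pod labels
  ports:
    - name: tcp
      protocol: TCP
      port: 443             # port exposed by the cloud load balancer
      targetPort: 5000      # port your Go server listens on (assumed)
```

Unlike an Ingress, this forwards raw TCP, so any protocol your server speaks will pass through unchanged.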
I have a gRPC service which listens on a port using a tcp listener. This service is Dockerized and eventually I want to run it in a Kubernetes cluster.
I was wondering what is the best way to implement liveness and readiness probes for checking the health of my service?
Should I run a separate http server in another goroutine and respond to /health and /ready paths?
Or should I also expose gRPC calls for liveness and readiness and use a gRPC client to query those endpoints?
Previously I've run a separate HTTP server inside the app just for health checks (this was because AWS application load balancers only support HTTP checks; I don't know about kube).
If you run the HTTP server in a separate goroutine and the gRPC server on the main goroutine, you should avoid the situation where the gRPC server goes down while HTTP still returns 200 OK (assuming you don't yet have a way for the HTTP handler to health-check your gRPC server): when the gRPC server exits, so does the process.
You could also use a heartbeat pattern of goroutines, controlled by the HTTP server and accepting heartbeats from the gRPC server, to make sure that everything is OK.
If you run two servers, they will need to listen on different ports, which can be an issue for some schedulers (like ECS) that expect one port per service. There are examples and packages that let you multiplex multiple protocols onto the same port. I believe kube supports multi-port services, so this might not be a problem there.
Link to example of multiplexing:
https://github.com/gdm85/grpc-go-multiplex/blob/master/greeter_multiplex_server/greeter_multiplex_server.go
I have installed the Mosquitto broker on an EC2 instance, but my MQTT client cannot access the broker.
What's the solution?
Most likely you need to open the MQTT port 1883 (or 8883 for TLS) in your instance's security group to allow access from external clients.
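With the AWS CLI that looks roughly like this (the security-group ID here is a hypothetical placeholder; `0.0.0.0/0` opens the port to everyone, so restrict the CIDR in production and prefer 8883 with TLS):

```shell
# allow inbound MQTT traffic on port 1883
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 1883 \
  --cidr 0.0.0.0/0
```

The same rule can also be added from the EC2 console under the instance's security group's inbound rules.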