I am trying to create a bridge between one Solace router (partner organization) and another Solace router (my organization).
My preference is to do this through Solace router configuration rather than an application-based bridge, which makes sense since the routers on both ends are from Solace, possibly Solace 3560 appliances.
I have already developed an application-based bridge, but I would like to achieve the same through router configuration.
In other words, I am looking for something that provides a flexible and scalable interconnect between two independently managed Solace routers, so they can exchange agreed messages on specific queues and topics with guaranteed delivery.
I could not find details on how to achieve this through router configuration on the Solace developer portal or in the documents, so I am interested in implementation ideas.
Rationale:
No middleware application dependency
Better control
Ease of maintenance, etc.
Thank you in advance!
The feature you are looking for is a VPN bridge.
The VPN bridge will allow you to link the routers together and exchange guaranteed messages.
More details can be found in the documentation:
http://docs.solace.com/Features/Working-With-Message-VPN-Bridges.htm
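For illustration, a Message VPN bridge is configured on each router (via the CLI, SolAdmin, or SEMP). Roughly, and from memory, the CLI steps on one side might look like the sketch below; the bridge name, VPN names, and address are placeholders, and the exact command tree should be verified against the linked documentation:

    solace(configure)# create bridge "partner-bridge" message-vpn "my-vpn" primary
    solace(configure/bridge)# remote message-vpn "partner-vpn" connect-via 203.0.113.10:55555
    solace(configure/bridge/remote/message-vpn)# no shutdown
    solace(configure/bridge/remote/message-vpn)# exit
    solace(configure/bridge)# no shutdown

The partner side will also need to authorize the bridge's client username, and for guaranteed delivery the bridge is typically bound to a queue on the remote VPN; the documentation above covers both steps.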
I want to create a mediator that subscribes and publishes to two separate brokers that have no access to each other's topics. The aim is to apply some logic to the messages published by broker 1 and send them to broker 2 according to a set of rules.
Do I need two separate ports, since the topic levels might be different in the two brokers?
Any help is much appreciated!!!
There are no MQTT-standard-defined features (note: I can only speak for 3.1.1) that would allow a client to maintain two concurrent connections. Therefore, this is entirely broker-implementation-dependent and necessitates a bridge.
For example, the Eclipse Mosquitto broker can be configured as a bridge to another broker, and can even remap its topics to a different topic structure on the other broker. Please refer to the Configuring Bridges section of the Mosquitto man page for the specifics.
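As an illustration, a minimal bridge stanza in mosquitto.conf might look like the following; the connection name, broker address, and topic remapping are all made-up placeholders:

    # mosquitto.conf on Broker A: bridge out to Broker B
    connection bridge-to-b
    address broker-b.example.com:1883
    # Forward anything under sensors/# on A to brokerA/sensors/# on B, QoS 1
    topic sensors/# out 1 "" brokerA/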
As far as creating a bespoke application goes, you can always write a simple Python program that runs two instances of an MQTT client (Eclipse Paho, for example, which has a lightweight asyncio wrapper to facilitate concurrency), each connected to a different broker. The glue logic between them just has to re-publish an incoming subscribed message from Broker A, with or without a remapping step, to some topic on Broker B.
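A minimal synchronous sketch of that glue logic with the paho-mqtt 1.x API could look like this (host names, topics, and the remapping rule are placeholders):

    import paho.mqtt.client as mqtt

    # Client connected to Broker B; used only for re-publishing.
    client_b = mqtt.Client(client_id="mediator-out")
    client_b.connect("broker-b.example.com", 1883)
    client_b.loop_start()  # run Broker B's network loop in the background

    def on_message(client, userdata, msg):
        # Glue logic: remap the incoming topic (placeholder rule) and forward.
        client_b.publish("brokerA/" + msg.topic, msg.payload, qos=1)

    # Client connected to Broker A; used only for subscribing.
    client_a = mqtt.Client(client_id="mediator-in")
    client_a.on_message = on_message
    client_a.connect("broker-a.example.com", 1883)
    client_a.subscribe("sensors/#", qos=1)
    client_a.loop_forever()  # blocks and dispatches on_message callbacks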
If the two brokers are both running locally on a single NIC, then you would need to use different ports.
I love using Prometheus for monitoring and alerting. Until now, all my targets (nodes and containers) lived on the same network as the monitoring server.
But now I'm facing a scenario where we will deploy our application stack (as a bunch of Docker containers) to several client machines in their networks. Nearly all of the client networks are behind a firewall or NAT, so scraping becomes quite difficult.
As we're still accountable for our stack, I'd like to have a central monitoring server with alerting and dashboards.
I was wondering what the best architecture could be if I want to implement this with Prometheus, but I couldn't find any convincing approaches. My ideas so far:
Use a Pushgateway on our side and push all data out of the client networks. As the docs state, it's not intended to be used that way: https://prometheus.io/docs/practices/pushing/
Use a federation setup (https://prometheus.io/docs/prometheus/latest/federation/): Place a Prometheus server in every client network behind a reverse proxy (to enable SSL and authentication) and aggregate the relevant metrics there. Open/forward just a single port for federation scraping.
Other, more experimental setups, such as SSH tunneling (e.g. https://miek.nl/2016/february/24/monitoring-with-ssh-and-prometheus/) or a VPN?
Thank you in advance for your help!
Nobody has posted an answer, so I will give my opinion on the second choice, because that's what I think I would do in your situation.
The second setup seems the most flexible: you have access to the data and only need to open one port for the federating server, so it should still be secure.
Another bonus of this type of setup is that even if the firewall stops working for one reason or another, you will still have a Prometheus instance scraping locally. You will get an alert because you won't be able to reach the server(s), but when the connection comes back you will have all the data, so there won't be a hole in the Grafana dashboards, apart from during the incident itself.
The issue with this setup is that you need to maintain as many servers as there are networks. A solution for this would be to have a Packer image or maybe an Ansible playbook to deploy them; a sketch of the central server's federation scrape job is below.
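For what it's worth, the central server's federation scrape job for one client network might look something like this; host names, credentials, and the match[] selector are placeholders to adapt:

    # prometheus.yml on the central monitoring server
    scrape_configs:
      - job_name: 'federate-client-a'
        scrape_interval: 30s
        honor_labels: true
        metrics_path: '/federate'
        scheme: https          # TLS terminated at the client-side reverse proxy
        basic_auth:
          username: prometheus
          password: changeme
        params:
          'match[]':
            - '{job=~".+"}'    # pull all aggregated series; narrow this down
        static_configs:
          - targets: ['client-a.example.com:443']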
We are implementing an MQ/IIB architecture where we will have one queue manager and one broker each on two RHEL servers, load-balanced with each other to divide incoming traffic.
We have consumer applications which connect to our servers through a JMS bindings file. We also have IIB applications running on both of them.
Now, since one bindings file can hold only one queue manager name when creating a connection factory, it's not recommended to keep different QM/broker names on each server. Since this bindings file will be shared with consumers, it has to contain a single, unique QM name.
But if we have the same QM/broker names on each server, all logs in the IIB record and replay tool will show one broker name (from both servers), which again makes it difficult to identify which server actually served the incoming request.
Could you please suggest the best possible approach in such a scenario?
Or suggest how the above approach can be modified to achieve our goal.
In general it is not good practice to have two queue managers with the same name. The same is true for IIB brokers, for the reasons you stated.
In the Binding file you can leave QMANAGER blank (null). This will allow the application to connect to any queue manager listening on the HOSTNAME and PORT that you specify.
If the queue managers on the 2 RHEL servers use the same port you could even set hostname to localhost and use the same binding file on both servers.
An example is below for the case where both queue managers listen on the same port:

    DEFINE CF(CF_NAME) QMANAGER() TRANSPORT(CLIENT) CHANNEL(MY.SVRCONN) HOSTNAME(localhost) PORT(1414)
I have started reading some details about the MQTT protocol and its implementations, and I came across the term 'cluster' a lot. Can anyone help me understand what 'cluster' means for the MQTT protocol?
In this comparison of various MQTT implementations, there is a column for the term 'cluster'.
Forwarding messages over topic bridges will not result in a true MQTT broker cluster, and leads to the drawbacks outlined below.
A true MQTT broker cluster is a distributed system that represents one logical MQTT broker. A cluster consists of multiple individual MQTT broker nodes that are typically installed on separate physical or virtual machines and are connected over a network.
Typical advantages of MQTT broker clusters include:
Elimination of the single point of failure
Load distribution across multiple cluster nodes
The ability for clients to resume sessions on any broker in the cluster
Scalability
Resilience and fault tolerance - especially useful in cloud environments
I recommend this blog post if you're looking for a more detailed explanation.
A cluster is a collection of MQTT brokers set up to bridge all topics between each other, so that a client can connect to any one of the cluster members and still publish messages to, and receive messages from, all other clients, no matter which cluster member they are connected to.
A few things to be aware of:
Topic bridge loops, where a message is published to one cluster member, which forwards it to another cluster member, then another, and finally back to the original. If this happens, the original broker has no way to know it was the one that first pushed the message to the other cluster members, so the message can end up in a loop. A shared message-state database or a single bridging replication broker can fix this.
Persistent subscriptions/sessions: unless the brokers have a pooled session cache, clients will not retain session or subscription state if they connect to a different cluster member when reconnecting.
I have a strange scenario:
A webserver/appserver (Java) sends requests to many different satellite systems (on customer sites). Only the satellite systems can initiate connections, due to firewall rules.
The model I think should be something like REQ/REP, but here the REQuester would have to bind and the REPlier would have to connect.
Is this possible, and is it a stable architecture?
Are there better solutions? (We first had WebSockets in mind...)
Remark: we don't have to use Java on both ends. To be precise, on the customer site we have Delphi, but we could bridge it somehow.
The model I think should be something like REQ/REP, but here the REQuester would have to bind and the REPlier would have to connect.
This will be problematic. When the server initiates the connection, it must be aware of all peers and their bind addresses. Not a big deal for a handful of peers, but for many peers changing constantly, it's a mess.
Only the satellite systems can initiate connections, due to firewall rules.
If that's the case, your mileage will vary with WebSockets; google around, lots of info on this.
Are there better solutions?
Well, with ZeroMQ, one solution that comes to mind to support server-initiated requests over client-initiated connections is this:
The server binds with a ROUTER socket.
Clients connect with DEALER sockets.
This approach offers bi-directional request/reply, does not block (it is asynchronous), and eliminates the client-side bind problem mentioned in your question. Here, the server binds, and either side can initiate the conversation; a minimal sketch is below.
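A minimal pyzmq sketch of the idea, with both sides squeezed into one process for brevity (the address and the identity are placeholders):

    import zmq

    ctx = zmq.Context()

    # Server side (your site): ROUTER binds and can address any known peer.
    server = ctx.socket(zmq.ROUTER)
    server.bind("tcp://*:5555")

    # Satellite side (customer site): DEALER connects out through the firewall.
    client = ctx.socket(zmq.DEALER)
    client.setsockopt(zmq.IDENTITY, b"satellite-1")  # stable routing identity
    client.connect("tcp://localhost:5555")
    client.send(b"HELLO")  # announce, so the ROUTER learns this identity

    # The server receives [identity, payload] and can now initiate requests.
    identity, _ = server.recv_multipart()
    server.send_multipart([identity, b"do-work"])  # server-initiated request

    print(client.recv())            # satellite receives b"do-work"
    client.send(b"work-done")       # satellite replies
    print(server.recv_multipart())  # server sees [identity, b"work-done"]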
I recommend reading this section in the guide; it covers extended async request/reply and message enveloping, which are important when using ROUTER/DEALER sockets.