IBM MQ: Multiple QMs in JMS bindings file

We are implementing an MQ/IIB architecture with one queue manager (QM) and one Broker on each of 2 RHEL servers, load-balanced with each other to divide incoming traffic.
We have consumer applications which connect to our servers through a JMS bindings file. We also have IIB applications running on both servers.
Now, since one bindings file can hold only one QMGR name per connection factory, it's not recommended to keep different QM/Broker names on the two servers. Since this bindings file will be shared with consumers, it has to contain a single QM name.
But if we use the same QM/Broker names on both servers, all logs in the IIB record and replay tool will show one Broker name (from both servers), which makes it difficult to identify which server actually served the incoming request.
Could you please suggest the best possible approach in such a scenario?
Or else suggest how the above approach can be modified to achieve our goal.

In general it is not good practice to have two queue managers with the same name. The same is true for IIB brokers, for the reasons you stated.
In the bindings file you can leave QMANAGER blank (null). This allows the application to connect to any queue manager listening on the HOSTNAME and PORT that you specify.
If the queue managers on the 2 RHEL servers use the same port, you could even set HOSTNAME to localhost and use the same bindings file on both servers.
An example, assuming both queue managers listen on the same port:
DEFINE CF(CF_NAME) QMANAGER() TRANSPORT(CLIENT) CHANNEL(MY.SVRCONN) HOSTNAME(localhost) PORT(1414)
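On the consumer side, the application then looks this connection factory up from the bindings file via JNDI. A minimal sketch in Java, assuming the .bindings file lives in /opt/mq/jndi and the connection factory is named CF_NAME as above (both are assumptions for illustration):

    // Minimal JNDI lookup against a file-based .bindings file.
    // The provider URL and CF name below are assumptions.
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import java.util.Hashtable;

    public class BindingsLookup {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "com.sun.jndi.fscontext.RefFSContextFactory");
            // Directory that holds the .bindings file (assumed path)
            env.put(Context.PROVIDER_URL, "file:///opt/mq/jndi");
            Context ctx = new InitialContext(env);

            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("CF_NAME");
            // Because QMANAGER is blank, this connects to whichever queue
            // manager is listening on the HOSTNAME/PORT defined in the CF.
            Connection conn = cf.createConnection();
            conn.start();
            conn.close();
            ctx.close();
        }
    }

Since the bindings file carries no queue manager name, the same file works unchanged against either server.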

Related

Stomp over SockJS ActiveMQ relay to multiple servers

I'm trying to create a reasonable setup for client-to-client communication for our existing infrastructure. I've been reading the docs for Spring, WebSocket, STOMP, SockJS and ActiveMQ for over a week now, and I'm not sure whether what I am trying to do is feasible or sensible. The Spring server and the JavaScript client were up and running relatively quickly, and sending messages between clients just works (direct connection from the JS client to the Spring server). This setup won't suffice for our needs, so we decided to put dedicated brokers in between. Configuring ActiveMQ is a nightmare, probably because I don't really know where to start. I have not worked with a dedicated broker so far.
Environment
170 independent servers (Tomcat, Spring, SockJS, STOMP)
2 ActiveMQ (Artemis) brokers (load balance, failure safety)
a few thousand clients (JavaScript/.NET, SockJS, STOMP)
Requirement
I need every client to be able to talk to every other client. Every message has to be curated by one of the servers. I'd like the clients to connect to one of the ActiveMQ brokers, and the brokers would hold a single connection to every single server. The point is to avoid every client having to open 170 WebSocket connections to all the servers. The servers do not need to talk to each other (yet/necessarily), since they are independent and have different responsibilities.
Question
Is ActiveMQ or any other dedicated broker viable as transparent proxy/relay i.e. can it handle this situation and are there ways to dynamically decide the correct recipients or should I go another route like rolling my own Spring-based relay?
In a traditional messaging use-case (e.g. using ActiveMQ Artemis with STOMP) the broker manages "destinations" and any messages sent to those destinations. Messages are only dispatched to clients if the client specifically creates a consumer on a destination.
In your use-case all of your 170 "servers" would actually be messaging clients. They would need to create a consumer on the broker in order to receive messages. To be clear, once the consumer is created, the broker will dispatch messages to it as soon as they arrive.
I'm not sure what exactly you mean by "transparent," but if it means that the process(es) receiving the message(s) don't have to do anything then no message broker will handle your use-case.
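To make the consumer point concrete, here is a minimal sketch of one of the 170 "servers" registering as a messaging client on the broker, using the Artemis JMS client. The broker URL and queue name are assumptions for illustration:

    // One "server" creating a consumer so the broker dispatches messages
    // to it as they arrive. URL and queue name are assumed.
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class ServerConsumer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory cf =
                    new ActiveMQConnectionFactory("tcp://broker1.example.com:61616");
            Connection conn = cf.createConnection();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("server42.inbound");

            MessageConsumer consumer = session.createConsumer(queue);
            consumer.setMessageListener(msg -> {
                try {
                    System.out.println("Received: " + ((TextMessage) msg).getText());
                } catch (javax.jms.JMSException e) {
                    e.printStackTrace();
                }
            });
            conn.start(); // dispatch begins only once the consumer exists
        }
    }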

Can I connect to 2 separate MQTT brokers without a bridge between them and subscribe/publish accordingly?

I want to create a mediator which subscribes and publishes to 2 separate brokers that have no access to each other's topics. The aim is to apply some logic to messages published on broker 1 and send the result to broker 2 according to a set of rules.
Do I need 2 separate ports? The topic levels might be different in the two brokers.
Any help is much appreciated!
There are no features defined by the MQTT standard (note: I can only speak for 3.1.1) that would allow a client to maintain two concurrent connections. Therefore, this is entirely broker-implementation-dependent and necessitates a bridge.
For example, the Eclipse Mosquitto broker can be configured as a bridge to another broker, and can even remap its topics to a different topic structure on the other broker. Please refer to the Configuring Bridges section of the Mosquitto man page for the specifics.
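As a sketch, a bridge section in mosquitto.conf might look like the following (the broker address, topic pattern and prefixes are assumptions; the man page section mentioned above documents the exact options):

    # Bridge from this Mosquitto instance to a second broker, remapping
    # local/# on this side to remote/# on the other side.
    connection bridge-to-b
    address broker-b.example.com:1883
    topic # out 0 local/ remote/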
As far as creating a bespoke application goes, you can always write a simple Python program that runs two instances of an MQTT client (Eclipse Paho, for example, which has a lightweight asyncio wrapper to facilitate concurrency), each connected to a different broker. The glue logic between them just has to re-publish an incoming subscribed message from broker A, with or without a topic-remapping step, to broker B.
If the two brokers are both running locally on a single NIC, then you would need to use different ports.
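A minimal sketch of that two-client mediator, written here in Java with the Eclipse Paho client (the same pattern the answer describes for Python). The ports, topics and remapping rule are assumptions; the two brokers listen on different local ports as described above:

    // Mediator holding two independent MQTT connections: subscribe on
    // broker A, remap the topic, re-publish on broker B. Names assumed.
    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class Mediator {
        public static void main(String[] args) throws Exception {
            MqttClient brokerA = new MqttClient("tcp://localhost:1883", "mediator-a");
            MqttClient brokerB = new MqttClient("tcp://localhost:1884", "mediator-b");
            brokerA.connect();
            brokerB.connect();

            // For every message under plant/# on broker A, apply the
            // (assumed) remapping rule and forward it to broker B.
            brokerA.subscribe("plant/#", (topic, message) -> {
                String remapped = topic.replaceFirst("^plant/", "site2/");
                brokerB.publish(remapped, new MqttMessage(message.getPayload()));
            });
        }
    }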

Docker Swarm - Route a request to ALL containers

Is there any sort of way to broadcast an incoming request to all containers in a swarm?
EDIT: More info
I have a distributed application with many Docker containers. The client can send requests to the swarm and have it respond. However, in some cases the client needs to change a state on all server instances, so I would either need to be able to broadcast a message or have all the Docker containers talk to each other, similar to MPI, which I'm trying to avoid.
There is no built-in way to turn a unicast packet into a multicast packet, nor any common 3rd-party way of doing so (that I've seen or heard of).
I'm not sure what "change a state on all server instances" means. Are we talking about the running state of all containers in a single service?
Or the actual underlying OS? All containers in all services? etc.
Without knowing more about your use case, I'd say it's likely better to design something where the request is received by one Swarm service and then stored in a queue system, from which a backend worker picks it up and "changes the state on all server instances" for you.
It depends on your specific use case. One way to do it is to run docker service update --force, which will cause all of the service's containers to restart. If your containers fetch the changed information at startup, this has the required effect.
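For example (the service name is assumed):

    # Force a rolling restart of every task in the service so that each
    # container re-reads the changed information at startup.
    docker service update --force my_service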

microservices & service discovery with random ports

My question is related to microservices and service discovery for a service which is spread across several hosts.
The setup is as follows:
2 docker hosts (host A & host B)
a Consul server (service discovery)
Let’s say that I have 2 services:
service A
service B
Service B is deployed 10 times (with random ports): 5 times on host A and 5 times on host B.
When service A communicates with service B, for example, it sends a request to serviceB.example.com (hard coded).
In order to get an IP and a port, service A should query the Consul server for an SRV record.
It will get back 10 ip:port pairs, across which the client should apply some load-balancing logic.
Is there a simpler way to handle this without me developing a client-side resolver (+ load-balancing) library?
Is there anything like that already implemented somewhere ?
Am I doing it all wrong ?
There are a few options:
Load balance on the client, as you suggest, for which you'll need to find a ready-built service-discovery library that works with SRV records and handles load balancing and circuit breaking. Another answer suggested Netflix's Ribbon, which I have not used and which will only be interesting if you are on the JVM. Note that if you are building your own, you might find it simpler to use Consul's HTTP API for discovering services rather than DNS SRV records. That way you can also "watch" for changes rather than caching the list and letting it go stale.
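For instance, Consul's HTTP health endpoint returns only the passing instances of a service, and supports blocking queries for the "watch" behaviour (the service name serviceB is an assumption):

    # List healthy instances of serviceB with their addresses and ports.
    # Blocking queries (the ?index= parameter) can be used to watch for changes.
    curl http://127.0.0.1:8500/v1/health/service/serviceB?passing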
If you don't want to reinvent that particular wheel, another popular and simple option is to use an HAProxy instance as the load balancer. You can integrate it with Consul via consul-template, which will automatically watch for new/failed instances of your services and update the LB config. HAProxy then provides robust load balancing and health checking with a lot of options (HTTP/TCP, different balancing algorithms, etc.). One possible setup is to have a local HAProxy instance on each Docker host and a fixed port statically assigned to each logical service (you can store it in Consul KV), so that you connect to localhost:1234 for service A, for example, and localhost:2345 for service B. A local instance means you don't pay for an extra round trip to a load-balancer instance and then to the actual service instance, but this might not be an issue for you.
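A sketch of what the consul-template input for such an HAProxy setup might look like (the service name, bind port and file name are assumptions):

    # haproxy.cfg.ctmpl -- consul-template re-renders this whenever the set
    # of healthy serviceB instances changes, then reloads HAProxy.
    listen serviceB
        bind 127.0.0.1:2345
        balance roundrobin{{ range service "serviceB" }}
        server {{ .Node }}_{{ .Port }} {{ .Address }}:{{ .Port }} check{{ end }}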
I suggest you check out Kontena. It solves this kind of problem out of the box: every service gets an internal DNS name that you can use for communication between services, and Kontena also has a built-in load balancer that is very easy to use, making it simple to create and scale microservices.
There are also lots of built-in features that help with developing containerized applications, like a private image registry, VPN access to running services, secrets management, stateful services, etc.
Kontena is an open-source project and the code is available on GitHub.
If you are looking for a minimal setup, you can wrap the values you receive from Consul via Ribbon, Netflix's client-side load balancer.
You will find it as a module for Spring Cloud.
I didn't find an up-to-date standalone example, only this link to chrisgray's dropwizard-consul implementation, which uses it in a Dropwizard context. But it might serve as a starting point for you.

Can multiple ClientSocket components be placed on a Form?

I am looking to write a program that will connect to many computers from a single computer. Sort of like a "Command Center" where you can monitor all the remote systems on a single PC.
My plan is to have multiple client sockets on a form. They will connect to individual PCs remotely so they can request information from them to display in the window. The remote PCs will be the hosts. Is this possible?
Direct answer to your question: Yes, you can do that.
Long answer: Yes, you can do that, but are you sure your design is correct? Are you sure you want to create parallel connections, one to each client? Probably you don't! If you do, then you probably want to run them in separate threads.
If you only want to send some commands from time to time (and you are not doing some kind of constant video monitoring), why don't you just use one connection and 'switch' between clients?
I can't tell you more about the design because your question is not clear about what you want to build (what exactly you are 'monitoring').
VERY IMPORTANT!
Two important notices to take into account before designing your app (both relevant only if the remote computers are not on your LAN, i.e., you connect to them over the Internet):
If the remote computers are running as servers, you will have a lot of trouble explaining to your customers (if they are connected to the Internet via a router, and they probably are) how to set up the router and the software firewall. For example, if a remote computer is listening for commands from you on port 1234, the router's firewall will BY DEFAULT block any connection attempt from a 'foreign' computer (from you) to that port.
If your remote computers are running as clients, how will they know the master's IP (your IP)? Do you have a static IP?
What you actually need is one ServerSocket in the module running on your machine, to which all your remote PCs will connect through their individual ClientSockets.
You can also design it the other way round, by putting a ClientSocket in the module running on your machine and a ServerSocket in the module running on each remote machine.
But then you will end up creating one ClientSocket for each ServerSocket, which becomes a problem as the number of remote servers increases.
Now, if you still want to have multiple ClientSockets on your machine, then, as Altar said, you will need a multithreaded application where each thread is responsible for one ClientSocket.
I would recommend Internet Direct (Indy), as its components work well in threads, and you can specify a connect timeout per connection, so your monitoring app will be able to get a 'negative' test result faster than with the default OS connect timeout.
Instead of placing them on the form, I would wrap each client in a class which runs an internal monitoring thread. More work initially, but easier to keep them independent from each other.
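As a sketch of that design, here is the thread-per-client pattern shown in Java rather than Delphi (host names, port, timeout and poll interval are assumptions); note the explicit connect timeout, which gives the fast 'negative' result mentioned above:

    // One monitor object per remote host, each with its own thread,
    // independent of the others. All names and values here are assumed.
    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class HostMonitor implements Runnable {
        private static final int CONNECT_TIMEOUT_MS = 3000; // not the OS default
        private final String host;
        private final int port;

        public HostMonitor(String host, int port) {
            this.host = host;
            this.port = port;
        }

        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try (Socket socket = new Socket()) {
                    // Fail fast instead of waiting for the OS default timeout
                    socket.connect(new InetSocketAddress(host, port), CONNECT_TIMEOUT_MS);
                    System.out.println(host + ": reachable");
                    // ... request status information here ...
                } catch (IOException e) {
                    System.out.println(host + ": unreachable (" + e.getMessage() + ")");
                }
                try {
                    Thread.sleep(10_000); // poll interval
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

        public static void main(String[] args) {
            for (String host : new String[]{"pc1.example.com", "pc2.example.com"}) {
                new Thread(new HostMonitor(host, 4321), "monitor-" + host).start();
            }
        }
    }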
