We need to send a JWT alongside each JMS message. Other than packing it into the payload itself, I am unsure whether attachments or headers are possible within the JMS and/or ActiveMQ message structure.
I looked into ActiveMQMessage but did not find support for attachments or custom headers.
I recommend you take a look at the JMS documentation, specifically the section on messages, which says:
JMS messages are composed of the following parts:
Header - All messages support the same set of header fields. Header fields contain values used by both clients and providers to identify and route messages.
Properties - Each message contains a built-in facility for supporting application-defined property values. Properties provide an efficient mechanism for supporting application-defined message filtering.
Body - The JMS API defines several types of message body, which cover the majority of messaging styles currently in use.
In short, use a property. Attachments don't exist in JMS.
The aforementioned documentation details the necessary API call(s) to set a property.
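To make that concrete, setting and reading the token might look like the sketch below. This is written against the standard javax.jms API; the property name "jwt" is an arbitrary choice of mine, not a JMS or ActiveMQ convention.

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

// Sketch: carry a JWT as a JMS message property instead of in the payload.
public class JwtPropertyExample {

    // Producer side: attach the token as a string property so it travels
    // alongside, not inside, the message body.
    public static void send(Session session, MessageProducer producer,
                            String jwt, String body) throws JMSException {
        TextMessage message = session.createTextMessage(body);
        message.setStringProperty("jwt", jwt);
        producer.send(message);
    }

    // Consumer side: read the property back off the received message.
    public static String extractJwt(Message received) throws JMSException {
        return received.getStringProperty("jwt");
    }
}
```

A side benefit of properties is that they can drive message selectors, e.g. `session.createConsumer(queue, "jwt IS NOT NULL")` would deliver only messages that carry a token.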
I'm trying to run an MQTT broker and I want to store the published data, but I need to know which user sent each message so I can store the payload per user and study it later. The problem is that when two different users publish messages on the same topic, I cannot tell whose data is whose. Is there a way to figure out the publisher of a message? I'm using Mosquitto, btw.
Short answer: you can't.
MQTT messages do not contain any information about the user or client that sent them, unless you choose to encode it in the message yourself (as part of the payload for v3.x, or in the user properties for v5.0).
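For the v3.x case, "encode it in the message" just means tagging the payload yourself on the publishing side. A minimal sketch of the idea, assuming you control both publisher and subscriber (the `|` delimiter and the method names are my own illustrative choices, not part of any MQTT library; a real application might wrap the payload in JSON instead):

```java
// Sketch: MQTT v3.x gives the subscriber no sender identity, so the
// publisher prepends its client id to the payload itself.
// Caveat: this assumes the client id never contains the '|' delimiter.
public class TaggedPayload {

    // Publisher side: prefix the payload with the client id.
    static String encode(String clientId, String payload) {
        return clientId + "|" + payload;
    }

    // Subscriber side: split the id back off before storing the data.
    static String[] decode(String message) {
        int sep = message.indexOf('|');
        return new String[] { message.substring(0, sep), message.substring(sep + 1) };
    }

    public static void main(String[] args) {
        String wire = encode("sensor-42", "{\"temp\":21.5}");
        String[] parts = decode(wire);
        System.out.println(parts[0] + " published " + parts[1]);
    }
}
```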
Longer answer:
Some MQTT brokers have plugin APIs that may give you access to more metadata for a message. You may be able to write a plugin that takes the message plus the metadata and stores them together. Last time I looked, Mosquitto's plugin API was only for writing authentication plugins and did not give access to the messages themselves, but a different broker may allow this.
I have been successfully evaluating Spring Cloud DataFlow with a typically simple flow: source | processor | sink.
For deployment there will be multiple sources feeding into this pipeline which I can do using data flow labels. All well and good.
Each source is a different RabbitMQ instance, and because the processor needs to know where a message came from (it has to call back to the source system to get further information), the strategy I'd thought of was to enrich each message with header details about the source system, which are then transparently passed along to the processor.
Now, I'm well versed in Spring, Spring Boot and Spring Integration, but I cannot work out how to enrich each message in a Data Flow source component.
The source component is bound to an org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration. The source uses the default Source.OUTPUT channel. How do I get hold of each message in the source to enrich it?
My processor component uses some Spring Integration DSL to do some of what it needs to do but then this processor component has both an INPUT and OUTPUT channel by definition. Not so with the RabbitSourceConfiguration source.
So, can this be done?
I think you need a custom MessageListener on the MessageListenerContainer in RabbitSourceConfiguration.
In the RabbitSourceConfiguration you can set a custom ChannelAwareMessageListener (possibly extending MessagingMessageListenerAdapter) on the MessageListenerContainer that does what you intend to do.
In the end what worked was subclassing org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration to:
override public SimpleMessageListenerContainer container() so that I could insert a custom health check before calling super.container(). My business logic enriches each message (see next bullet) with details of where the message came from (note, this is the publisher of the messages and not the rabbit queue). There's a health check needed to validate the additional enriching information (which is provided via configuration) to ensure that messages aren't consumed from the queue and enriched with the wrong information. If the validation fails, the source component fails to start and hence no messages are consumed.
override the creation of the AmqpInboundChannelAdapter bean so that a custom subclass of DefaultAmqpHeaderMapper can be set on the adapter. This custom mapper adds the enriched headers in public Map&lt;String, Object&gt; toHeadersFromRequest(final MessageProperties source).
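The header-mapper override in that second bullet might look roughly like the following. This is a sketch against Spring Integration's DefaultAmqpHeaderMapper; the header name "x-source-system" and the configured value are placeholders, and depending on your Spring Integration version the mapper's construction details may differ.

```java
import java.util.Map;

import org.springframework.amqp.core.MessageProperties;
import org.springframework.integration.amqp.support.DefaultAmqpHeaderMapper;

// Sketch: enrich every inbound message with details of the source system.
// The header name and sourceSystem value are illustrative placeholders
// that would come from the source component's configuration.
public class EnrichingHeaderMapper extends DefaultAmqpHeaderMapper {

    private final String sourceSystem;

    public EnrichingHeaderMapper(String sourceSystem) {
        this.sourceSystem = sourceSystem;
    }

    @Override
    public Map<String, Object> toHeadersFromRequest(MessageProperties source) {
        // Let the default mapper do its normal work first...
        Map<String, Object> headers = super.toHeadersFromRequest(source);
        // ...then add the enrichment header the processor will read.
        headers.put("x-source-system", sourceSystem);
        return headers;
    }
}
```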
For me, the inability of Stream/Data Flow to intercept and modify messages in Source components is problematic. I really shouldn't have to fiddle around with the underlying message broker API in the ways I did; I should be able to do it with, e.g., Spring Integration. Indeed, I can register a global message interceptor, but I cannot change the headers of the message.
This ability would go on my WIBNI (wouldn't it be nice if) list. Perhaps I'll raise a request for this.
I'm setting up an Asp.Net WebHook receiver for an internal webhook from a different server in the same application. I'm therefore using the built-in CustomWebHookReceiver. The webhook needs to send several different payload types as JSON, which need to be de-serialized into different strong types by the receiver and be processed differently.
Since the only difference between invocations of the webhook are the payload, a single webhook receiver will be configured for a single id, following the pattern of placing the shared secret in the web.config AppSetting as:
MS_WebHookReceiverSecret_<receiver>
What is the best way to implement different webhook handling behavior for the different payload types? Creating separate receivers or separate ids for the different payload types does not seem appropriate, since the security models are identical and writing a new receiver seems like overkill.
Creating different handlers seems like the right direction, but built-in settings appear to only allow a handler to specify the receiver and the priority. This leaves the option of inspecting the payload inside the handler's ExecuteAsync method and deciding how to process the message.
Is this correct, or is there a better way?
Tom, you can use the {id} part of the URI to receive as many WebHooks as you like using the same handler, and then differentiate them based on the {id}. You can find more details about how to set this up here.
Hope this helps!
Henrik
Suppose I have an existing Java service implementing a JSON HTTP API, and I want to add a Swagger schema and automatically validate requests and responses against it without retooling the service to use the Swagger framework / code generation. Is there anything providing a Java API that I can tie into and pass info about the requests / responses to validate?
(Just using a JSON schema validator would mean manually implementing a lot of the additional features in Swagger.)
I don't think there's anything ready to do this alone, but you can easily do this by the following:
Grab the SchemaValidator from the Swagger Inflector project. You can use this to validate inbound and outbound payloads.
Assign a schema portion to your request/response definitions. That means you'll need to assign a specific section of the JSON schema to your operations.
Create a filter for your API to grab the payloads and run them through the schema validator.
That will let you easily see if the payloads match the expected structure.
Of course, this is all done for you automatically with Inflector, but there should be enough of the raw components to help you do this inside your own implementation.
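The shape of steps 2 and 3 might be something like the sketch below. To be clear about what is assumed: the class, the operation-id registry, and the crude "required keys" check are all hypothetical stand-ins of mine; in real code the `isValid` body would hand the payload and the relevant schema section to Inflector's SchemaValidator instead.

```java
import java.util.List;
import java.util.Map;

// Stand-in sketch: pick the schema section assigned to an operation and
// check a payload against it. The toy containment check below only marks
// where a real JSON-schema validation call would sit.
public class OperationValidator {

    // Hypothetical registry: operation id -> required top-level JSON keys,
    // standing in for "a specific section of the JSON schema per operation".
    private final Map<String, List<String>> requiredKeysByOperation;

    public OperationValidator(Map<String, List<String>> requiredKeysByOperation) {
        this.requiredKeysByOperation = requiredKeysByOperation;
    }

    // A filter around your API would call this with the captured payload.
    public boolean isValid(String operationId, String jsonPayload) {
        List<String> required =
                requiredKeysByOperation.getOrDefault(operationId, List.of());
        for (String key : required) {
            if (!jsonPayload.contains("\"" + key + "\"")) {
                return false;
            }
        }
        return true;
    }
}
```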
The problem is that the Inbound Message Template expects a different type than the one I want to specify and work with. The requirement is to have a SINGLE channel convert HL7 v2 to v3, call a web service, convert the resulting SOAP XML to HL7 v2.x, and send it back to the original caller. This must be done asynchronously.
Setup:
Consider the situation in a Mirth channel:
Source is LLP listener. Type is HL7 v2.x. The sender is the HCIS (Health Care Information System).
Source Transformer, not relevant to problem at hand.
4 Destinations (in order):
Javascript Writer - calling into Code Templates to do some database work.
SOAP Sender - calling a web service which returns HL7 v3.
Javascript Writer - containing a handful of transformers and a DB writer calling into Code Templates. The problem lies here.
Javascript Writer - again calling into Code Templates.
The PostProcessor generates a custom acknowledgement to send back to the HCIS.
Problem:
The Inbound Message Template expects HL7 v2.x because it inherits the data type from the Source. I need to use an HL7 v3 template as the Inbound Message Template instead. The Outbound Message Template is working fine, as it's not bound to anything.
tmp['PID']['PID.5']['PID.5.1'] = msg['controlActProcess']['subject']['target']['identifiedPerson']['name']['family'].toString();
I have tested this setup in another channel with HL7 v3 as the incoming datatype, and it works perfectly.
Question:
How can I force Mirth to recognize my Inbound Message Template as HL7 v3 instead of inheriting the channel's incoming data type?
A little late, I know, but could you break it into two channels: an HL7 v2.x channel ending in a channel writer, and then another set up as a channel reader handling the HL7 v3.x side?
If you've solved this, I'd be curious to know how.
Okay, I'm writing this two and a half years after you posted the question, so by now you've dealt with it somehow. But, for the sake of making the information available, here's a reply.
You have an output connector whose input is HL7 v2.x. You need the input data in XML format (HL7 v3 is XML) so that you can manipulate it with E4X.
Solution: Mirth Connect handles this automatically. Whenever the connector has a filter or a transformer, Mirth transforms the input message to XML. You said that this connector has transformers, so the XML representation of the HL7 input message should be available to you.
If you are using a channel with no filters and no transformers, you can force the transformation by adding a filter where the condition is always true.