Has anyone used spring-kafka 2.0.0.RELEASE and created a consumer that uses the Confluent schema registry as a source for deserializing messages? If so, can you point me to an example?
The problem I'm trying to solve: I have a Debezium CDC connector on my Kafka Connect platform that streams events from MongoDB as they happen. I have to intercept those events, transform them, and re-stream them. To understand an event I have to deserialize the payload, and this is the step I'm currently stuck at.
Sastry
This issue appears to have been addressed a while back, so I would like to point out the following test code and the specific part of it where you can configure the Kafka schema registry.
Please take a look and see whether it is clear or you need more help.
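For reference, the essential part is just pointing a consumer factory at Confluent's Avro deserializer and the registry URL. A minimal sketch, not the exact test code; it assumes io.confluent:kafka-avro-serializer is on the classpath, and the bootstrap servers, group ID, and registry URL are placeholders:

import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

import java.util.HashMap;
import java.util.Map;

public class ConsumerFactoryConfig {

    public ConsumerFactory<Object, Object> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "cdc-processor");
        // Confluent's Avro deserializer looks up the writer schema in the
        // registry using the schema ID embedded in each message.
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
        props.put("schema.registry.url", "http://localhost:8081");
        return new DefaultKafkaConsumerFactory<>(props);
    }
}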
I need to implement RabbitMQ as a sort of middleware between Azure Service Bus and my Rails app.
The messages being sent use headers, which I also need to capture.
So, is there a way for the shovel plugin to also include headers with the shoveled payload?
Whenever I get a message from the shovel, all the original headers are gone.
Also, I am using Bunny as the AMQP client in the app.
Thanks!
EDIT:
Found this PR thanks to the RabbitMQ Slack:
https://github.com/rabbitmq/rabbitmq-server/issues/2745
It seems that they are aware of the problem.
In the end I just created a Python middleware that appends the headers to the payload and then passes the data on to the RabbitMQ queue, since it's literally 30 lines of code.
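The original middleware was Python, but the idea is small enough to sketch in any client library. Here it is in Java with the RabbitMQ Java client and Jackson; the queue name and envelope layout are made up for illustration:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.rabbitmq.client.Channel;

import java.util.HashMap;
import java.util.Map;

public class HeaderForwarder {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Bundles the original body and its headers into one JSON document and
    // publishes that to RabbitMQ, so the headers survive the transfer even
    // though the transport would otherwise drop them.
    public static void forward(Channel channel, String payload,
                               Map<String, Object> headers) throws Exception {
        Map<String, Object> envelope = new HashMap<>();
        envelope.put("headers", headers);
        envelope.put("payload", payload);
        channel.basicPublish("", "shoveled-messages", null,
                MAPPER.writeValueAsBytes(envelope));
    }
}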
I have a two-part question.
First, what Dart command should I use to start the VM service listening for requests, possibly giving it the host and port number to use? I'm using Windows, and I don't need the Observatory possibly interfering.
I'm currently trying to use this, after I cd into the project's directory:
dart --pause_isolates_on_start bicycle
And the second part of the question is: is it possible to verify that the VM service is there and listening on whatever port? I want to be able to send a request to the VM service from a WebSocket client and get back a response.
After I give the above command, if I do a netstat it doesn't look like there is anything listening, and any attempt to connect to the VM service gets a connection-refused exception, the same as if I hadn't even tried to start the VM service.
UPDATE:
I was looking at the IntelliJ plugin code to see how they did their connect, and saw that they used "ws://localhost:8181/ws", while I was trying to use "ws://localhost:8181". Now it's finally getting past the handshake; the server was returning "200 OK" instead of "101" before.
I'm assuming that I'm talking to the Observatory at this point and not the VM service. I'm not sure, but at least I'm further along.
When it worked, I was using:
dart --enable-vm-service --pause_isolates_on_start bicycle.dart
Thanks!!
dart --help -v prints
--observe[=<port>[/<bind-address>]]
The observe flag is a convenience flag used to run a program with a
set of options which are often useful for debugging under Observatory.
These options are currently:
--enable-vm-service[=<port>[/<bind-address>]]
--pause-isolates-on-exit
--pause-isolates-on-unhandled-exceptions
--warn-on-pause-with-no-debugger
This set is subject to change.
Please see these options for further documentation.
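So, to pick the port and bind address explicitly, something like this should work (8181 is the port from your question):
dart --enable-vm-service=8181/127.0.0.1 --pause_isolates_on_start bicycle.dart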
It depends on what exactly you want to do, but as far as I know the Observatory just uses this service and if you don't access any of its features, it won't add additional load to the process.
There is a Dart client API, https://pub.dartlang.org/packages/vm_service_client, and documentation about the protocol at https://github.com/dart-lang/sdk/blob/master/runtime/vm/service/service.md
Perhaps this is what you are looking for:
enum EventKind {
  // Notification that VM identifying information has changed. Currently used
  // to notify of changes to the VM debugging name via setVMName.
  VMUpdate,

  // Notification that a new isolate has started.
  IsolateStart,
  // ...
}
used with events: https://github.com/dart-lang/sdk/blob/master/runtime/vm/service/service.md#events
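To answer the "how do I verify it's listening" part: the VM service speaks JSON-RPC 2.0 over a WebSocket at the /ws path (the bare port serves the Observatory web UI, which is why you got a "200 OK" instead of a "101"). Any WebSocket client will do; here is a minimal sketch in Java 11+, with the port taken from your question and getVersion chosen as one of the simplest methods in service.md:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class VmServiceProbe {

    public static void main(String[] args) throws Exception {
        // Note the /ws path: the root path serves the Observatory HTML page.
        URI uri = URI.create("ws://localhost:8181/ws");

        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(uri, new WebSocket.Listener() {
                    @Override
                    public CompletionStage<?> onText(WebSocket webSocket,
                                                     CharSequence data, boolean last) {
                        System.out.println("VM service replied: " + data);
                        return WebSocket.Listener.super.onText(webSocket, data, last);
                    }
                })
                .join();

        // A JSON-RPC 2.0 request; a JSON response proves the service is up.
        ws.sendText("{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"getVersion\"}", true);
        Thread.sleep(2000); // crude: give the async reply time to arrive
    }
}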
I have been successfully evaluating Spring Cloud Data Flow with a typical, simple flow: source | processor | sink.
For deployment there will be multiple sources feeding into this pipeline, which I can do using Data Flow labels. All well and good.
Each source is a different RabbitMQ instance, and because the processor needs to know where a message came from (it has to call back to the source system to get further information), the strategy I thought of was to enrich each message with header details about the source system, which are then transparently passed along to the processor.
Now, I'm well versed in Spring, Spring Boot and Spring Integration, but I cannot find out how to enrich each message in a Data Flow source component.
The source component is bound to an org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration. The source uses the default Source.OUTPUT channel. How do I get hold of each message in the source to enrich it?
My processor component uses some Spring Integration DSL to do some of what it needs to do, but a processor component has both an INPUT and an OUTPUT channel by definition. Not so with the RabbitSourceConfiguration source.
So, can this be done?
I think you need a custom MessageListener on the MessageListenerContainer in RabbitSourceConfiguration.
In RabbitSourceConfiguration you can set a custom ChannelAwareMessageListener (you could possibly extend MessagingMessageListenerAdapter as well) on the MessageListenerContainer that does what you intend to do.
In the end, what worked was subclassing org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration to:
override public SimpleMessageListenerContainer container() so that I could insert a custom health check before calling super.container(). My business logic enriches each message (see the next point) with details of where the message came from (note, this is the publisher of the messages, not the rabbit queue). The health check is needed to validate the additional enriching information (which is provided via configuration) to ensure that messages aren't consumed from the queue and enriched with the wrong information. If the validation fails, the source component fails to start and hence no messages are consumed.
override the creation of the AmqpInboundChannelAdapter bean so that a custom subclass of DefaultAmqpHeaderMapper can be set on the adapter. This custom mapper adds the enriched headers in public Map<String, Object> toHeadersFromRequest(final MessageProperties source); a sketch of this mapper follows below.
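A minimal sketch of that mapper; the header name and value are made up for illustration (in practice they came from configuration):

import java.util.Map;

import org.springframework.amqp.core.MessageProperties;
import org.springframework.integration.amqp.support.DefaultAmqpHeaderMapper;

public class SourceSystemHeaderMapper extends DefaultAmqpHeaderMapper {

    @Override
    public Map<String, Object> toHeadersFromRequest(MessageProperties source) {
        // Start with the standard AMQP header mapping, then enrich.
        Map<String, Object> headers = super.toHeadersFromRequest(source);
        headers.put("sourceSystem", "crm-instance-1"); // hypothetical header
        return headers;
    }
}

The overridden adapter bean then just applies it with adapter.setHeaderMapper(new SourceSystemHeaderMapper()).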
For me, the inability of Stream/Data Flow to intercept and modify messages in source components is problematic. I really shouldn't have to fiddle around with the underlying message broker API in the way I did; I should be able to do it with e.g. Spring Integration. Indeed, I can register a global message interceptor, but I cannot change the headers of the message there.
This ability would go on my WIBNI (wouldn't it be nice if) list. Perhaps I'll raise a request for it.
My question is regarding the configuration of the RSMB using MQTT topic names and MQTT-SN topic IDs over an MQTT-SN gateway.
The "Getting started with the Really Small Message Broker" information is very useful for figuring out how to configure topic-name mapping in the case of connecting two Really Small Message Brokers together.
According to the MQTT-SN specification v1.2, section "6.10 Gateway's Publish Procedure", the gateway (in my case a gateway included in the RSMB, using the broker_mqtts implementation) may send a REGISTER message to inform the client about the topic name and its assigned topic ID value. Now, I would like to configure the mapping of MQTT topic names to pre-defined MQTT-SN topic IDs.
Is it possible to configure a mapping in the RSMB broker.cfg configuration to tell an MQTT-SN client the pre-defined topic ID after a successful connection to the RSMB?
Unfortunately, no.
RSMB does not support pre-defined topics at the moment.
However, you can register topics from the client side.
Or you can subscribe using real topic names.
I found RSMB nowhere near production-ready. You can experiment with it, but it has a lot of bugs.
I was facing the same problem with RSMB. Then I decided to fork the original Git project on GitHub and add this feature myself. It is available at https://github.com/MichalFoksa/rsmb. The feature is documented in Getting started.
It supports:
Dynamic pre-defined topic names, where the placeholder [ClientId] is replaced by the actual value of the client ID. For example, a message published by a client called "Sensorduino" to a pre-defined topic name sensor/[ClientId]/meter will be published on the topic sensor/Sensorduino/meter.
Client-specific configuration, that is, topic name to topic ID mapping specific to a particular client.
Hope it helps and it is not too late.
Michal
A more advanced fork of Michal Foksa's RSMB supports pre-defined topics in a config file.
https://github.com/tonnenpinguin/rsmb
Is there a container in Spring AMQP that supports a reply-to feature?
I want to make RPCs like in https://www.rabbitmq.com/tutorials/tutorial-six-java.html, but using Spring AMQP.
Yes. Documentation here.
On the server side, the message listener container, when used with a message listener adapter, will automatically handle the replies. You can also use the template's ...receiveAndReply methods on the server side.
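For a concrete shape of this (a sketch, not the tutorial's exact code; the queue name and payload type are placeholders): when a @RabbitListener method returns a value, the container sends it to the request's replyTo automatically, and on the client side RabbitTemplate's convertSendAndReceive() blocks for the reply.

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class RpcExample {

    // Server side: the returned value is sent back to the request's
    // replyTo address by the listener container.
    @RabbitListener(queues = "rpc.requests")
    public int fib(int n) {
        return n <= 1 ? n : fib(n - 1) + fib(n - 2);
    }

    // Client side: convertSendAndReceive() handles the correlation ID
    // and reply queue, then waits for the response.
    public Integer callFib(RabbitTemplate template, int n) {
        return (Integer) template.convertSendAndReceive("rpc.requests", n);
    }
}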
EDIT
Note that we now have Spring Boot implementations of all 6 tutorials.