Recommended way to process different payload types in Asp.Net WebHooks for same sender - asp.net-webhooks

I'm setting up an ASP.NET WebHook receiver for an internal webhook sent from a different server within the same application, so I'm using the built-in CustomWebHookReceiver. The webhook needs to send several different payload types as JSON, which need to be deserialized into different strong types by the receiver and processed differently.
Since the only difference between invocations of the webhook is the payload, a single webhook receiver will be configured for a single id, following the pattern of placing the shared secret in a web.config AppSetting named:
MS_WebHookReceiverSecret_<receiver>
What is the best way to implement different webhook handling behavior for the different payload types? Creating separate receivers or separate ids for the different payload types does not seem appropriate, since the security models are identical and writing a new receiver seems like overkill.
Creating different handlers seems like the right direction, but built-in settings appear to only allow a handler to specify the receiver and the priority. This leaves the option of inspecting the payload inside the handler's ExecuteAsync method and deciding how to process the message.
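In rough pseudocode (written here as Python purely for illustration; the "type" field and the handler names are placeholders, not part of the WebHooks API), that option would look something like:
def handle_order_created(payload):
    ...  # deserialize the JSON into the OrderCreated type and process it

def handle_order_shipped(payload):
    ...  # deserialize the JSON into the OrderShipped type and process it

def process_webhook(payload):
    # Inspect a discriminator field in the payload and dispatch to a
    # type-specific handler, all from within the single registered handler.
    handlers = {
        "order_created": handle_order_created,
        "order_shipped": handle_order_shipped,
    }
    kind = payload.get("type")
    if kind not in handlers:
        raise ValueError(f"unknown payload type: {kind}")
    return handlers[kind](payload)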
Is this correct, or is there a better way?

Tom, you can use the {id} part of the URI to receive as many WebHooks as you like using the same handler, and then differentiate them based on the {id}. You can find more details about how to set this up here.
Hope this helps!
Henrik

Related

ROS2 Synchronizer with ApproxTime policy: why is there no registerDropCallback to use a callback method with non-synced messages?

I have a class that does some processing on the information coming from two different messages, a point cloud and an image. I know how to use the message_filters::Synchronizer to grab the two messages when they are considered synchronized. However, I would like to do some processing on the point cloud only, when the two incoming messages are not in sync.
With an ExactTime policy based Synchronizer, it is possible to assign a callback for this case of dropped messages, by registering the callback function through registerDropCallback(). Unfortunately, this method does not exist when creating a Synchronizer with the ApproximateTime policy.
Do you know if I can achieve the same result with the ApproximateTime policy?
Thank you

Using Parameters of One Request to Dynamically Change the Response of Another

I have been using response templating to give dynamic responses, given that all the request and query parameters are associated with that request itself. However, I want to make a POST request with several parameters, and later use those parameters in a stubbed GET method's body response via response templating. Is this possible to do in WireMock? Any input is greatly appreciated, thank you!
Storing state between requests is not a default feature of WireMock outside of mocking the behavior through Stateful Behaviour, which is different from being actually stateful.
Without a custom plugin, sharing information between several requests is therefore not possible. The WireMock documentation has a section on how to create such a plugin yourself. With a little development experience this is certainly doable.
On GitHub there are several plugins that provide a storage mechanism for this kind of information:
WireMockCsv: store and retrieve information using HSQL Database.
wiremock-redis-extension does something similar using Redis.
An alternative to these approaches is to create the mappings/data just before the test starts. For example, generating all the responses beforehand and then using the templated bodyFileName field to retrieve the just-in-time created file. Another way of achieving this result is to use the Admin API to create the mappings themselves directly.
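For instance, the test could register the GET stub through the Admin API using the very parameters it is about to POST. A rough sketch, assuming WireMock's Admin API is reachable on localhost:8080 and Python's requests library (the order fields are made-up example data):
import requests

WIREMOCK = "http://localhost:8080"

def stub_get_for_order(order_id, customer):
    # Build a one-off GET stub whose body echoes the parameters the test
    # already knows because it sends them in its own POST request.
    mapping = {
        "request": {"method": "GET", "urlPath": f"/orders/{order_id}"},
        "response": {
            "status": 200,
            "headers": {"Content-Type": "application/json"},
            "jsonBody": {"id": order_id, "customer": customer},
        },
    }
    # Register the stub via WireMock's Admin API
    requests.post(f"{WIREMOCK}/__admin/mappings", json=mapping).raise_for_status()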

Has anyone built a custom RabbitMQ exchange which creates queues for unbound routing keys?

I'd like to set up an alternate exchange which catches all messages with routing keys that aren't bound to any queues. I'd also like the exchange to then create a new queue bound to the new routing key and republish the message there. For example, if two messages arrive with routing keys 'a' and 'b' at exchange ex_1, and it has no matching queues, the messages will automatically be forwarded to the alternate exchange ex_1_ae. I'd like two queues to then be dynamically created for those messages (and any future ones with those routing keys) and bound to exchange ex_1. If another message comes along later with a routing key of 'c', I'd like yet another queue automatically created and bound to ex_1. Has anyone built such a custom exchange and / or could point me in the direction of the source?
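For context, the basic alternate-exchange wiring I'm describing is standard RabbitMQ and roughly looks like this (sketched here with the Python pika client, names matching my example; the dynamic per-routing-key queue creation is the missing piece):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Messages published to ex_1 with a routing key that matches no binding
# are forwarded to the alternate exchange ex_1_ae.
channel.exchange_declare(exchange="ex_1_ae", exchange_type="fanout")
channel.exchange_declare(exchange="ex_1", exchange_type="direct",
                         arguments={"alternate-exchange": "ex_1_ae"})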
I've started trying to write a custom exchange myself, but I don't want to re-invent the wheel. I'd also upvote an answer that provides a simple custom exchange that I could use as a base for the custom exchange I want. I've looked at the community exchanges in RabbitMQ's GitHub account, but these all seem very complicated. I tried using https://github.com/thecederick/rabbitmq-arguments-to-headers-exchange as a starting point, but at this stage I'm having trouble getting it to work, so I'm trying to find another example or, in the best case, a custom exchange that does exactly what I want.
I'm trying to implement this because I need an isolated subscriber agent for each individual routing key. I realise that when the agent subscribes with a particular routing key a queue can be dynamically created, but in my scenario the messages themselves will trigger the creation of the subscriber, and by the time it starts up multiple messages with its routing key may have already been encountered. This is why I need the queues dynamically created when the first novel message arrives at the exchange, not when the subscriber agent starts running. When the subscriber does start running, there will already be a queue waiting for it with one or more pending messages.

iOS REST design pattern advice

I’d like some input on whether there is a better design pattern to use for my iOS app, which uses a REST model to communicate asynchronously with a Django back end.
The server can presently return three types of responses to requests:
a JSON object
a server status code integer
a long Django error message
When an action is performed in the iOS app that requires data from the server, my design pattern looks like this:
An observer is added to the notification center, specifying a method that can process the server response
The method puts together and sends an NSURLConnection
An NSURLConnection delegate method receives the response, does some interpretation to check what kind of server response it is, and then posts the appropriate notification to the notification center
This triggers the response method to run, processing the response
My issue with this pattern is that there are a large number of methods written to send and receive individual request and response types. For instance, if I am requesting a user list, I need to add several observers to the notification center: one to process a user list, one to process an empty user list, and one to process errors. Then I need to write custom methods for each of those three to perform the appropriate actions and remove the observers, based on what kind of response the server sends.
Furthermore, the NSURLConnection delegate ends up being fairly complex, because I’m trying to interpret what type of a response was received (what types of items were in the list received?) without much context of what was requested, to make sure I don’t call the wrong response method when a server message comes back.
I am fairly new to both iOS programming and to REST programming, so I may be missing something obvious. Any advice or links to resources is appreciated.
I'd initially look at using RestKit to abstract your code away from the network comms so you can worry more about the data model and high-level requests. Secondly, I wouldn't use notifications for this, as it will likely get messy and make multiple simultaneous requests very hard to manage - delegation or block callbacks will be much better for this.
Your REST implementation is mostly server side, and empirically you'd be passing and receiving binary data. There are factors to consider, including whether you are utilizing HTTP.
Working with JSON via the NSJSONSerialization class, together with NSURLConnection, keeps your program lean and mean.

Client Server API pattern in REST (unreliable network use case)

Let's assume we have a client/server interaction happening over an unreliable network (packet drops). A client is calling the server's RESTful API (over HTTP over TCP):
issuing a POST to http://server.com/products
the server creates a "product" resource (persists it to a database, etc.)
the server returns 201 Created with a Location header of "http://server.com/products/12345"
! the TCP packet containing the HTTP response gets dropped, which eventually leads to a TCP connection reset
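For illustration, the happy-path version of this exchange in client code (sketched with Python's requests library; the payload is made up):
import requests

# Happy path: POST the new product, then read its URI from the Location header.
resp = requests.post("http://server.com/products", json={"name": "Widget"})
assert resp.status_code == 201
new_product_uri = resp.headers["Location"]   # e.g. http://server.com/products/12345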
I see the following problem: the client will never get the ID of the newly created resource, yet the server will have created the resource.
Questions: Is this application-level behavior or should the framework take care of it? How should a web framework (Rails in particular) handle a situation like this? Are there any articles/whitepapers on REST that cover this topic?
The client will receive an error when the server does not respond to the POST. The client would then normally re-issue the request, assuming it has failed. Off the top of my head I can think of two approaches to this problem.
One is that the client can generate some kind of request identifier, such as a GUID, which it includes in the request. If the server receives a POST request with a duplicate GUID then it can refuse it.
The other approach is to PUT instead of POST to create. If you cannot get the client to generate the URI then you can ask the server to provide a new URI with a GET and then do a PUT to that URI.
If you search for something like "make POST idempotent" you will probably find a bunch of other suggestions on how to do this.
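A minimal sketch of the first approach (the in-memory store, field names, and the 409 response for duplicates are illustrative assumptions):
import uuid

seen = {}  # request GUID -> previously created resource

def create_product(payload):
    # stand-in for "persist the product and return the new resource"
    return {"id": str(uuid.uuid4()), **payload}

def handle_post(request_guid, payload):
    if request_guid in seen:
        return 409, seen[request_guid]   # duplicate GUID: refuse, point at the existing resource
    product = create_product(payload)
    seen[request_guid] = product
    return 201, product

# Client side: generate the GUID once and reuse it on every retry of the same request.
guid = str(uuid.uuid4())
print(handle_post(guid, {"name": "Widget"}))   # (201, ...)
print(handle_post(guid, {"name": "Widget"}))   # (409, ...) the retry is refused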
If it isn't reasonable for duplicate resources to be created (e.g. products with identical titles, descriptions, etc.), then unique identifiers can be generated on the server which can be tracked against created resources to prevent duplicate requests from being processed. Unlike Darrel's suggestion of generating unique IDs on the client, this would also prevent separate users from creating duplicate resources (which you may or may not find desirable). Clients will be able to distinguish between "created" responses and "duplicate" responses by their response codes (201 and 303 respectively, in my example below).
A sketch (expressed here in Python) for generating such an identifier — in this case, a hash of a canonical representation of the request:
import hashlib

products = {}  # identifier -> previously created product

def create_product_from(request):
    # stand-in for "persist the product and return the stored resource"
    return dict(request)

def product_POST(request):
    # the canonical representation need not contain every field in
    # the request, just those which contribute to its "identity"
    tags = ",".join(sorted(request["tags"]))
    canonical = "|".join([request["name"], request["maker"], tags, request["desc"]])
    id = hashlib.sha256(canonical.encode()).hexdigest()
    if id in products:
        return 303, products[id]   # See Other: point at the already-created resource
    products[id] = create_product_from(request)
    return 201, products[id]       # Created
This ID may or may not be part of the created resources' URIs. Personally, I'd be inclined to track them separately — at the cost of an extra lookup table — if the URIs were going to be exposed to users, as hashes tend to be ugly and difficult for humans to remember.
In many cases, it also makes sense to "expire" these unique hashes after some time. For example, if you were to make a money transfer API, a user transferring the same amount of money to the same person a few minutes apart probably indicates that the client never received the "success" response. If a user transfers the same amount of money to the same person once a month, on the other hand, they're probably paying their rent. ;-)
The problem as you describe it boils down to avoiding what are called double-adds. As mentioned by others, you need to make your POSTs idempotent.
This can be easily implemented at the framework level. The framework can keep a cache of completed responses. Requests have to carry a unique request identifier so that any retries are treated as such, and not as new requests.
If the successful response gets lost on its way to the client, the client will retry with the same request identifier, and the server will then respond with its cached response.
You are left with the durability of the cache, how long to keep responses, etc. One approach is to remove responses from the server cache after a given period of time; this will depend on your app domain and traffic, and can be left as a configurable setting in the framework piece. Another approach is to force the client to send acknowledgements. The acks can be sent either as separate requests (note that these could be lost too), or as extra data piggybacked on real requests.
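As a toy sketch of that cache (the TTL value, storage, and names here are illustrative, not taken from any particular framework):
import time

CACHE_TTL = 300  # seconds to keep a completed response around (configurable)
_cache = {}      # unique request identifier -> (timestamp, response)

def handle_once(request_id, handler, request):
    now = time.time()
    # drop entries that have outlived the configured retention period
    for key, (ts, _) in list(_cache.items()):
        if now - ts > CACHE_TTL:
            del _cache[key]
    if request_id in _cache:
        return _cache[request_id][1]       # retry: replay the cached response
    response = handler(request)            # first attempt: run the real handler
    _cache[request_id] = (now, response)
    return response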
Although what I suggest is similar to what others suggest, I strongly encourage you to keep this layer of network resiliency doing only that: dealing with dropped requests/responses. Don't let it deal with duplicate resources from separate requests, which is an application-level task. Merging both pieces will mash all the functionality together and will not leave you with a clear separation of responsibilities.
Not an easy problem, but if you keep it clean you can make your app much more resilient to bad networks without introducing too much complexity.
And for some related experiences by others go here.
Good luck.
As the other responders have pointed out, the basic problem here is that the standard HTTP POST method is not idempotent like the other methods. There is an effort underway to establish a standard for an idempotent POST method known as Post-Once-Exactly, or POE.
Now I'm not saying that this is a perfect solution for everybody in the situation you describe, but if it is the case that you are writing both the server and the client, you may be able to leverage some of the ideas from POE. The draft is here: https://datatracker.ietf.org/doc/html/draft-nottingham-http-poe-00
It isn't a perfect solution, which is probably why it hasn't really taken off in the six years since the draft was submitted. Some of the problems, and some clever alternate options are discussed here:
http://tech.groups.yahoo.com/group/rest-discuss/message/7646
HTTP is a stateless, client-initiated protocol: the server can't open an HTTP connection to the client, as all connections are initiated by the client. So you can't solve such an error on the server side.
The only solution I can think of: if you know which client created the product, you can supply it with the products it created the next time it pulls that information. If the client never contacts you again, you won't be able to transmit information about the new product.
