Amazon EventBridge - how to measure non-matched events? - aws-event-bridge

As I understand it, all unmatched events in Amazon EventBridge expire after 24 hours. So how can we count them to make sure that my events are not lost and are all matched?
For example, if a producer changes its schema and its events stop matching the rule pattern as a result, how can I set up some sort of alarm or DLQ for such events?
Hope that makes sense,
Regards,
Max

You can configure a DLQ on an EventBridge target. It is a standard SQS queue which you can use to track and count undelivered messages. You can receive notifications when events are moved to the DLQ by creating CloudWatch Alarms on it. There are already some discussions here you can try, e.g. "Configure SQS Dead letter Queue to raise a cloud watch alarm on receiving a message".
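As a rough illustration, here is a minimal boto3 sketch of such an alarm, assuming the DLQ is a standard SQS queue named my-eventbridge-dlq and that alerts go to an SNS topic (the queue name, alarm name, and SNS ARN are all placeholders):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm whenever the DLQ holds at least one visible message, i.e. at least
# one event could not be delivered to its target.
cloudwatch.put_metric_alarm(
    AlarmName="eventbridge-dlq-has-messages",  # hypothetical alarm name
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "my-eventbridge-dlq"}],  # hypothetical queue
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical SNS topic
)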

Related

MQTT - Getting the notification when message retention period is over

I have a requirement where I want to get some kind of notification when the message retention period is over and the message is about to be discarded from the MQTT topic.
So the actual requirement is: we have Bluetooth bands, which send their presence through a centralized agent and an MQTT broker. Now we have a requirement to upgrade the band firmware. For doing so, we will send a message to the topic with a specific retention period. The infrastructure will receive the message notification and look for the band. If the band is found, then it's fine; otherwise it will wait for new bands to become available. Once the retention period is over, in some cases we have to retry, so to implement the retry mechanism I want to receive a notification from the MQTT broker when a message's retention period is over.
Please help me understand if this is even possible in MQTT?
The broker won't tell you when it drops messages, but since you know when you sent the message and what expiry time you set, there is nothing to stop you implementing this yourself.
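If it helps, here is a minimal sketch of that idea in Python with the paho-mqtt 1.x client API; the broker address, topic, payload, and retention period are assumptions. You publish with a known expiry, then schedule your own local callback for when the retention window has passed.

import threading
import paho.mqtt.client as mqtt

RETENTION_SECONDS = 3600  # the retention period you chose when publishing (assumed value)

def on_expiry(topic):
    # The broker stays silent when it discards the message, so this local
    # timer acts as the "retention period is over" notification.
    print(f"Retention window over for {topic}; trigger the retry logic here.")

client = mqtt.Client()                    # paho-mqtt 1.x style constructor
client.connect("broker.example.com")      # hypothetical broker address
client.loop_start()

client.publish("bands/firmware", b"firmware-update-v2", qos=1, retain=True)

# Fire our own notification once the retention period we picked has elapsed.
threading.Timer(RETENTION_SECONDS, on_expiry, args=("bands/firmware",)).start()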

Duplicate Lifecycle Events

I am building a web app that uses AWS IoT lifecycle events and logs device connection/disconnection.
Using AWS IoT rules, I am sending all events to a Lambda, and after some validation I'm saving all lifecycle events to a DynamoDB table. I'm aware that messages may be delayed or arrive out of order, and that duplicates may happen.
I am validating for all these scenarios, so my connection log is as accurate as possible.
My question is: Is it possible for duplicate messages to come with a distinct timestamp? Such as a disconnection being sent twice with the same sessionIdentifier but a different timestamp?
Just some guesses
MQTT QoS 1 implies the "You might receive duplicate messages." caveat. The message could be resent by one side if no ack is received from the other side. Thus, it is the same old message, and the timestamp would not change.
The timestamp field refers to the time the event occurred, not the time the message is sent. Thus, it should remain the same value.
Reference: http://docs.aws.amazon.com/iot/latest/developerguide/life-cycle-events.html
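For what it's worth, a minimal sketch of that kind of deduplication in a Lambda handler, assuming a DynamoDB table (the table name and key schema here are hypothetical) keyed on sessionIdentifier, eventType and timestamp, so that a resent copy of the same event is rejected:

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("lifecycle-events")  # hypothetical table

def handler(event, context):
    item = {
        "pk": event["clientId"],
        "sk": f'{event["sessionIdentifier"]}#{event["eventType"]}#{event["timestamp"]}',
        "payload": event,
    }
    try:
        # Conditional put: a QoS 1 redelivery carries the same timestamp,
        # so it maps to the same key and is rejected here.
        table.put_item(Item=item, ConditionExpression="attribute_not_exists(sk)")
    except ClientError as err:
        if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
            raise  # a real failure; duplicates are silently ignored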

get entire past mqtt message queue?

When using QoS 1 & 2, it replays all past messages. Is there a way in standard implementations to receive the entire past queue (as an array) when coming back online? (Of course, only subscribed ones.)
When a client has subscribed to a topic at QoS 1 or 2 and then disconnects, and that client later reconnects with the same client id and with the clean session flag set to false, the broker should replay any missed messages.
The broker should not replay any messages that had already been sent during the first connected period (with the possible exception of any QoS 1 messages that may have been in flight at the time of the disconnect).
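A minimal sketch of that persistent-session setup with the paho-mqtt 1.x API (the broker address, client id, and topic are placeholders):

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # The QoS 1 subscription becomes part of the stored session on the broker.
    client.subscribe("devices/#", qos=1)

def on_message(client, userdata, msg):
    # Missed messages arrive here one by one after reconnecting; standard MQTT
    # has no call that hands you the whole backlog as an array.
    print(msg.topic, msg.payload)

client = mqtt.Client(client_id="band-gateway-01", clean_session=False)  # same id every run
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com")  # hypothetical broker
client.loop_forever()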

Missing MWS subscription notifications pushed to SQS

It appears that we aren't getting all AnyOfferChanged notifications in an Amazon SQS queue. Many are arriving in the queue, but a manual analysis is showing that many are also just going missing.
Is there any way to query MWS to see a list of notifications or even a simple count for the day?
Any common causes for losing MWS subscription notifications sent to SQS?
I don't believe there is a way to query the SQS system. They are just messages in a queue: you read them, process them, and then delete them. We have been using the AnyOfferChanged notifications for about a year and just have to trust that they work. If you're sure that the criteria are met (a product you sell, and a price change in the top 20 offers, new or used) and there is no message in the queue for it, then I would open a ticket with Seller Central. We have seen that an SQS notification arrives in near-real time when one of our product prices changes.
To get a count for the day, you'd just have to read messages from your queue starting at a certain point, add +1 to a count variable for each message, delete them from the queue, and then look at the counter after 24 hours. Best I can think of.
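A minimal boto3 sketch of that counting loop, with a placeholder queue URL:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/mws-notifications"  # hypothetical

count = 0
while True:
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    messages = resp.get("Messages", [])
    if not messages:
        break  # queue drained for now; persist `count` and resume later
    for msg in messages:
        count += 1  # process the AnyOfferChanged body here before deleting
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])

print(f"Messages seen so far today: {count}")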

Periodic notification with Pusher

I have a queue of messages that must be displayed to the user every two minutes, one at a time.
Does Pusher have a cron feature?
An example of the desired behavior:
00:00 – User_A sends Message_A and it is enqueued. Since the queue is empty, Message_A is delivered immediately.
00:30 - User_B sends Message_B and it is enqueued.
02:00 - ???? checks the queue and uses Pusher (or another websocket service) to deliver Message_B
I need the ???? piece.
No, Pusher doesn't offer a cron feature.
But it would be easy to use a service like iron.io to hit your own endpoint every two minutes and publish to Pusher from that endpoint.
See:
IronMQ Push Queues
Add Messages to a Queue docs - specifically the 'delay'. This means you can define when you want the message to be queued and effectively published.
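For illustration, a minimal Flask sketch of the endpoint that an external scheduler (an IronMQ push queue, say) could hit every two minutes; the in-memory deque, route name, channel and event names, and Pusher credentials are all placeholders.

from collections import deque
from flask import Flask
import pusher

app = Flask(__name__)
pending = deque()  # stand-in for your real message queue

pusher_client = pusher.Pusher(
    app_id="APP_ID", key="KEY", secret="SECRET", cluster="CLUSTER"  # placeholder credentials
)

@app.route("/deliver-next", methods=["POST"])
def deliver_next():
    # Deliver at most one queued message per tick, matching the
    # "one message every two minutes" requirement.
    if pending:
        message = pending.popleft()
        pusher_client.trigger("notifications", "new-message", {"text": message})
    return "", 204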
