It appears that we aren't getting all AnyOfferChanged notifications in our Amazon SQS queue. Many are arriving in the queue, but manual analysis shows that many others are simply going missing.
Is there any way to query MWS to see a list of notifications or even a simple count for the day?
Any common causes for losing MWS subscription notifications sent to SQS?
I don't believe there is a way to query the SQS system. They are just messages in a queue: you read them, process them, and then delete them. We have been using the AnyOfferChanged notifications for about a year and just have to trust that they work. If you're sure the criteria are met (a product you sell, with a price change in the top 20 offers, new or used) and there is no message in the queue for it, then I would open a ticket with Seller Central. In our experience, an SQS notification arrives in near-real time when one of our products' prices changes.
To get a count for the day, you'd just have to read messages from your queue starting at a certain point, increment a counter for each message, delete them from the queue, and then look at the counter after 24 hours. Best I can think of.
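If it helps, here is a rough sketch of that counting loop using boto3; the queue URL below is a placeholder, and you'd stop it (or record the counter somewhere) once your 24-hour window is up:

```python
# Minimal sketch of the counting loop with boto3. The queue URL is a placeholder.
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/anyoffer-changed"  # placeholder

sqs = boto3.client("sqs")
count = 0

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,   # read in batches of up to 10
        WaitTimeSeconds=20,       # long polling to avoid empty receives
    )
    messages = resp.get("Messages", [])
    if not messages:
        continue  # or break once your 24-hour window has passed

    for msg in messages:
        count += 1  # optionally parse msg["Body"] and filter by notification type
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

    print("running total:", count)
```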
As I understand it, all unmatched events in Amazon EventBridge expire after 24 hours. So how can we count them to make sure that my events are not lost and all of them are matched?
For example, if a producer changes its schema and the events stop matching the rule pattern as a result, how can I set up some sort of alarm or DLQ for such events?
Hope that makes sense,
Regards,
Max
You can set up a DLQ as an EventBridge target. DLQs are standard SQS queues, which you can use to track and count undelivered messages, and you can receive notifications from CloudWatch Alarms when events are moved to a DLQ. There are already some discussions here you can try, such as Configure SQS Dead Letter Queue to raise a CloudWatch alarm on receiving a message.
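Here is a hedged sketch of what that could look like with boto3; the rule name, target ARN, queue name, and SNS topic are all placeholders:

```python
# Sketch: attach an SQS DLQ to an EventBridge rule target, then alarm when the DLQ
# is not empty. Rule name, ARNs, and SNS topic below are placeholders.
import boto3

events = boto3.client("events")
cloudwatch = boto3.client("cloudwatch")

# Send events that could not be delivered to the target into the DLQ.
events.put_targets(
    Rule="orders-rule",
    Targets=[{
        "Id": "orders-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:process-orders",
        "DeadLetterConfig": {
            "Arn": "arn:aws:sqs:us-east-1:123456789012:orders-dlq"
        },
    }],
)

# Alarm as soon as anything lands in the DLQ.
cloudwatch.put_metric_alarm(
    AlarmName="orders-dlq-not-empty",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "orders-dlq"}],
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```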
We had some sort of bug that queued up the same message thousands of times; each of them is undelivered because of carrier spam restrictions, or because it was not a real number, or something like that.
We've looked around their docs and Stack Overflow but can't find anything that looks relevant.
It seems like Twilio keeps retrying, though, over and over, so it has sent out thousands of copies of the same message and keeps queueing them, or at the very least keeps them in the queue.
How can we clear our whole SMS message queue? We're happy if we never send it again, as nothing in there is mission critical.
The best approach is a ticket to Twilio support via the Twilio Console or help@twilio.com as a P1 (with your Account SID), indicating you have an out-of-control process queuing up thousands of SMS messages.
They will ask you to fix the process, and they will fail the messages in the queue.
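While you wait on the ticket, something like the following sketch (using the Twilio Python helper library; the credentials are placeholders) can at least tell you roughly how big the backlog is, which is useful information to include in the ticket:

```python
# Sketch: count how many recent messages are still sitting in 'queued'/'accepted'
# status, to quantify the backlog for the support ticket. Credentials are placeholders.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

backlog = 0
for message in client.messages.stream(limit=10000):  # pages through recent messages
    if message.status in ("queued", "accepted"):
        backlog += 1

print("messages still queued:", backlog)
```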
Based on my understanding of how subscriptions work, it seems like we should build some sort of cron job to check whether users whose subscriptions are about to expire were actually renewed, since notifications won't be sent in some cases.
Based on reading these:
https://developer.apple.com/library/archive/technotes/tn2413/_index.html#//apple_ref/doc/uid/DTS40016228-CH1-SUBSCRIPTIONS-MY_SERVER_PROCESS_RARELY_RECEIVES_RENEWAL_NOTICES_WHEN_THE_AUTO_RENEWING_SUBSCRIPTION_RENEWS_
https://medium.com/revenuecat-blog/ios-subscriptions-are-hard-d9b29c74e96f
my question is:
Is my conclusion true? Are notifications not sent for renewals?
If I have to build a cron job and call verifyReceipt myself, I can imagine calling it quite a lot per day. Is there a limit to how many times I can call this endpoint? When will I be throttled?
Notifications are not sent for regular renewals. You will get a notification (INTERACTIVE_RENEWAL) if the user cancels and resubscribes. Best practice would be to check the /verifyReceipt endpoint to get subscription status from Apple. There aren't any published throttles on this endpoint, and I would imagine it's extremely scalable.
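For the cron-job side, a minimal sketch of the check could look like this; the endpoint URLs and status codes come from Apple's verifyReceipt documentation, while the function and variable names are just placeholders:

```python
# Sketch of the server-side check a cron job could run, assuming you have stored each
# user's latest base64-encoded receipt and your App Store shared secret.
import requests

PRODUCTION_URL = "https://buy.itunes.apple.com/verifyReceipt"
SANDBOX_URL = "https://sandbox.itunes.apple.com/verifyReceipt"

def check_subscription(receipt_b64: str, shared_secret: str) -> dict:
    payload = {
        "receipt-data": receipt_b64,
        "password": shared_secret,
        "exclude-old-transactions": True,  # only return the latest renewal transaction
    }
    resp = requests.post(PRODUCTION_URL, json=payload, timeout=10).json()
    if resp.get("status") == 21007:  # receipt actually belongs to the sandbox environment
        resp = requests.post(SANDBOX_URL, json=payload, timeout=10).json()
    return resp

# Usage: inspect latest_receipt_info[-1]["expires_date_ms"] in the response to decide
# whether the subscription was renewed past its previous expiry.
```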
Also, highly recommend RevenueCat to completely manage subscription status for you.
The company I am working for has evaluated MQTT and decided to use it as the core messaging platform for a large-scale system. The main reason is how compact the protocol is and how easily it can be implemented. I have a single issue with MQTT, though, and I'm seeking an answer to the following question:
QoS 1 and QoS 2 messages require confirmation from the client. The only thing identifying the message when receiving PUBACK, PUBREC, PUBREL and PUBCOMP is the message ID and the client ID. The message ID is an unsigned 16-bit integer, so the maximum value is 65535. That doesn't seem large enough for long-running clients, say a year, sending 15 QoS 2 messages an hour.
I am not quite sure if there's any other way to identify the message. I would like to stay as compliant with the standard as possible.
Probably the first point to make clear is that message IDs are handled on a per client and per direction basis. That is to say that the broker will create a message ID for each outgoing message with QoS>0 for each client that is connected and these message IDs will be completely independent of any other message IDs used for the same message published to other clients. Likewise, each client generates its own message IDs for messages that it sends.
The message ID doesn't have to be unique, so your client sending 15 messages per hour with QoS level 2 would simply overflow at some point. The real limitation is that there can only be a maximum of 65535 messages per direction "in flight" at once (i.e. part way through the message handshake). Once a message with a given ID has been fully processed then that message ID can be reused.
Another way of looking at it is to consider how it would work if your client only ever had one message in flight at once, whether because of the rate the messages are being transmitted or by design in the way you handle the messages. In this case, you could keep message ID set to 1 for every single message because there is never a chance that there will be a duplicate.
If you wish to support having multiple messages in flight at once it would be relatively straightforward to check there are no message ID duplicates before you assign a new one.
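To make that bookkeeping concrete, here is a small sketch of a per-client packet ID allocator that never hands out an ID that is still in flight; it is not tied to any particular MQTT library, just an illustration of the idea:

```python
# Sketch of per-client packet ID bookkeeping: IDs 1..65535 can be reused as soon as
# the QoS handshake for the previous message that used that ID has completed.
class PacketIdAllocator:
    MAX_ID = 65535

    def __init__(self):
        self._in_flight = set()
        self._next = 1

    def acquire(self) -> int:
        """Return a packet ID that is not currently in flight."""
        if len(self._in_flight) >= self.MAX_ID:
            raise RuntimeError("65535 messages already in flight")
        while self._next in self._in_flight:
            self._next = self._next % self.MAX_ID + 1  # wrap 65535 -> 1, never use 0
        packet_id = self._next
        self._in_flight.add(packet_id)
        self._next = self._next % self.MAX_ID + 1
        return packet_id

    def release(self, packet_id: int) -> None:
        """Call when PUBACK (QoS 1) or PUBCOMP (QoS 2) arrives for this ID."""
        self._in_flight.discard(packet_id)
```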
Because the message ID is per client, if you send a single message to >65535 clients there will be no chance of message ID collisions. If you send >65535 messages to each client at once and the message flows aren't complete then there will be problems.
Answering the comment "I have noticed that every MQTT broker tends to deliver only the last QoS1/2 message":
The broker will only send messages to clients it knows about. If you connect for the first time there is no way to get messages from the past, with one exception: retained messages. If a message is set to retained then it is a "last known good" value. When a new client subscribes it will be sent the retained message immediately, which makes it useful for things that are updated infrequently. I suspect this is what you are referring to.

If you want a client to have messages queued when it is not connected then you must connect with the "clean session" option disabled to make the client persistent. You must also use QoS>0 subscriptions and QoS>0 publications. When your client reconnects (with clean session still set to disabled), the queued messages will be delivered.

You can normally configure in the broker the number of messages to queue in this way, where any further messages will be discarded. An important point is that queueing messages for a client that has not previously connected is not supported by design.
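As a concrete illustration, a subscriber that wants messages queued while it is offline might look like this with the Python paho-mqtt client (1.x API); the broker address, client ID, and topic are placeholders:

```python
# Sketch: persistent-session subscriber with paho-mqtt (1.x API).
# Broker host, client ID, and topic are placeholders.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload)

# A stable client_id plus clean_session=False makes the session persistent, so the
# broker queues QoS>0 messages for this client while it is offline.
client = mqtt.Client(client_id="sensor-dashboard-01", clean_session=False)
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("sensors/+/temperature", qos=1)  # QoS>0 subscription is required
client.loop_forever()
```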
For delivering more messages at QoS 1 or QoS 2, you should use the concept of persistent storage. With this, whenever a subscriber is not available the messages get stored persistently and are delivered once the subscriber reconnects. You can do this for QoS 0 as well by configuring the mosquitto.conf file.
I am building Comet chat with Erlang. I only use one connection (long-polling) for message transport. But, as you know, a long-polling connection cannot stay connected all the time: every time a new message arrives or the timeout is reached, it breaks and then reconnects to the server. If a message is sent before the connection has re-connected, it becomes a problem to keep the chat consistent.
Also, if a user opens more than one Comet chat window, all the chat messages have to stay in sync, which means a single user can have many long-polling connections. So it is hard to keep every message delivered on time.
Should I build a message queue for every connection? Or is there a better way to solve this?
To me, the simplest way seems to be one process/message queue per user connected to the chat (even if they have more than one chat window open). Then keep track of the timestamp of the last message in the chat window application, and on reconnect ask for all messages after that timestamp. The message queue process should keep messages only for a reasonable time span. In this scenario, reconnecting is entirely up to the client. Alternatively, you could send some sort of heartbeat from the server, but that seems less reliable to me, and it doesn't solve disconnections caused by anything other than a timeout. There are many variants of server-side queuing: one queue per client, per user, per chat room, per ...
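Just to illustrate the idea, here is a sketch in Python (in Erlang you would naturally implement this as one gen_server process per user); the names and the retention window are made up:

```python
# Sketch of the per-user queue idea: push messages with timestamps, evict old ones,
# and let a reconnecting chat window fetch everything newer than its last message.
import time
from collections import deque

RETENTION_SECONDS = 300  # keep messages only for a reasonable time span

class UserMessageQueue:
    def __init__(self):
        self._messages = deque()  # (timestamp, message) pairs, oldest first

    def push(self, message: str) -> None:
        now = time.time()
        self._messages.append((now, message))
        self._evict(now)

    def fetch_since(self, last_seen_ts: float) -> list:
        """Called when a chat window reconnects: everything newer than its last message."""
        self._evict(time.time())
        return [m for ts, m in self._messages if ts > last_seen_ts]

    def _evict(self, now: float) -> None:
        while self._messages and now - self._messages[0][0] > RETENTION_SECONDS:
            self._messages.popleft()

# Every chat window belonging to the same user reads from the same queue; on reconnect
# it passes the timestamp of the last message it rendered to fetch_since().
```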