How to know when an Amazon EC2 instance starts, stops, or is terminated? Is there any way to send a message to SQS when an instance starts, stops, or is terminated? - amazon-sqs

Who will notify us when an instance starts or stops? I want to send a message to SQS when an instance starts, stops, or is terminated, and the SQS message will be read asynchronously by the DAS (Domain Administration Server).

You may write commissioning and decommissioning scripts for your EC2 instances, in which you can also send a message to an SQS queue.
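The script approach above can be sketched as follows. This is a minimal illustration with a message layout of our own choosing (the helper name and fields are an assumption, not an AWS-defined format); the actual boto3 send is shown only as a comment, since it needs credentials and a real queue URL.

```python
import json
from datetime import datetime, timezone

def build_lifecycle_message(instance_id, state):
    """Build the JSON body the instance scripts push to SQS.
    The field names here are our own convention, not an AWS format."""
    if state not in ("running", "stopped", "terminated"):
        raise ValueError("unexpected instance state: %r" % state)
    return json.dumps({
        "instance_id": instance_id,
        "state": state,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

# In the commissioning/decommissioning script itself, the body would then
# be sent with boto3 (QUEUE_URL is your queue's URL):
#   sqs = boto3.client("sqs")
#   sqs.send_message(QueueUrl=QUEUE_URL,
#                    MessageBody=build_lifecycle_message("i-...", "running"))
```

The DAS can then poll the queue and parse each body back with `json.loads`.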

Related

Consumer Processing Timeout?

Here is our scenario:
We are publishing messages to a topic, and have a queue subscribed to that topic. We have several Node.js consumers connected to that queue, via the solclient package, to process incoming messages.
What we want to do is process the incoming messages and, after successfully processing a message, acknowledge it so that it is removed from the queue. The challenge we're having is how to deal with error conditions: how do we signal to the broker that a message failed to be processed? The expectation is that the broker would then attempt delivery to a consumer again and, once max redeliveries is hit, move the message to the DMQ.
I don't believe there's a way in Solace for a consumer to "NACK" a received message to signal an error in processing. I believe your option would be to unbind from the queue (i.e. disconnect() the Flow object, or MessageConsumer in the Node.js API), which will place any unacknowledged messages back on the queue, available for redelivery, and then rebind to the queue.
Alternatively, if you do your processing in the message-received callback, you could throw an exception there, and that should (?) accomplish the same thing, but probably a lot less gracefully.
https://docs.solace.com/Solace-PubSub-Messaging-APIs/Developer-Guide/Creating-Flows.htm
https://docs.solace.com/API-Developer-Online-Ref-Documentation/js/solace.MessageConsumer.html#disconnect
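The disconnect-and-rebind flow described above can be illustrated with an in-memory stand-in for the broker. Everything here (`FakeQueue`, `consume`) is a hypothetical sketch, not the solclient API; the comments mark where the real calls such as `flow.disconnect()` would go.

```python
# Sketch of the "unbind to trigger redelivery" pattern, using an in-memory
# stand-in for the broker so only the control flow is shown.

class FakeQueue:
    def __init__(self, messages):
        self._pending = list(messages)   # messages awaiting acknowledgement
        self._acked = []

    def receive(self):
        # Unacked messages stay at the head, so a "rebind" sees them again.
        return self._pending[0] if self._pending else None

    def ack(self, msg):
        self._pending.remove(msg)
        self._acked.append(msg)

def consume(queue, process, max_rebinds=3):
    """Process messages; on failure, 'rebind' so the message is redelivered."""
    rebinds = 0  # total rebinds in this session
    while (msg := queue.receive()) is not None:
        try:
            process(msg)
        except Exception:
            rebinds += 1       # in solclient: flow.disconnect(), then reconnect
            if rebinds >= max_rebinds:
                queue.ack(msg)  # stand-in for the broker moving it to the DMQ
            continue
        queue.ack(msg)          # CLIENT_ACK only after successful processing
    return rebinds
```

Running `consume` over a queue containing one poison message shows the message being redelivered until the rebind limit is reached, after which it is removed (in a real broker, moved to the DMQ).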

Rebooting server with MQTT service

Imagine an MQTT broker with remote clients connected that continuously send QoS 2 data - the standard situation. The clients are configured with "cleansession false", so they queue messages to send in case of a connection failure.
On the server, local clients subscribe to topics to receive messages.
Server startup sequence:
Launch the MQTT broker
Start the local clients
Remote clients connect and deliver the data from their queues
What if the third step happens before the second? Are there standard solutions? How do we avoid losing the first messages?
Assuming you are talking about all later reboots of the broker, not the very first time the system is started up, then the broker should have stored the persistent subscription state of the clients to disk before it was shut down and restored it when it restarted. This means it should queue messages for the local clients.
Also, you can always use a firewall to block the remote clients from connecting until all the local clients have started; this would solve the very-first-startup issue as well.
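If the broker is Mosquitto, the on-disk persistence that the first point relies on must be enabled explicitly. A minimal `mosquitto.conf` fragment (the location path and save interval are illustrative):

```conf
# Keep subscription state and queued QoS>0 messages across broker restarts
persistence true
persistence_location /var/lib/mosquitto/
# Seconds between saves of the in-memory database to disk
autosave_interval 60
```

With this in place, persistent subscriptions made with "cleansession false" survive a broker restart, so messages arriving before the local clients reconnect are queued for them rather than dropped.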

How to retrieve all AWS SQS queue messages in Lumen without dispatching

I am trying to fetch AWS SQS queue messages using the Lumen queue worker; the SQS messages are created by other APIs on the other end. How can I retrieve all the messages from the queue and process them?
I have created the job and installed the AWS SDK package into Lumen, but the handle method written in the job is not being called.
Can anyone please guide me step by step to sort out this problem?

Is it possible to send a message to the future?

Is there a best practice for publishing scheduled/delayed messages with MQTT, for example, using Mosquitto or HiveMQ brokers?
The use case is: Tell a subscriber to perform some maintenance in 15 minutes.
Optimally, the use case would then be solved by publishing the message "perform maintenance now please" and marking it with "deliver no sooner than 15 minutes from now".
While I wouldn't recommend doing this in any scenario with high throughput, at least with HiveMQ you can do the following:
Implement an OnPublishReceivedCallback.
Schedule a Runnable on some kind of shared ScheduledExecutorService; the Runnable re-publishes the message via the PublishService.
The OnPublishReceivedCallback needs to throw away the original publish by throwing an OnPublishReceivedException (use false as the constructor parameter so you don't disconnect the publishing client).
No, messages are delivered immediately to all connected clients subscribed to a topic, and at reconnection for disconnected clients with persistent subscriptions.
If you want to do delayed messages you will have to implement your own store and forward mechanism before they are published to the broker.
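The store-and-forward idea can be sketched with nothing more than the standard library's `sched` module. `publish` here is an injected stand-in for the real MQTT client's publish call (e.g. paho-mqtt's `client.publish`), and the topic and payload are just the maintenance example from the question.

```python
import sched
import time

def publish_later(scheduler, delay_s, publish, topic, payload):
    """Hold the message locally and publish it only after delay_s seconds."""
    scheduler.enter(delay_s, 1, publish, argument=(topic, payload))

# Demo with a list standing in for the MQTT publish call; in real use you
# would pass e.g. a paho-mqtt client's publish method instead.
scheduler = sched.scheduler(time.monotonic, time.sleep)
sent = []
publish_later(scheduler, 0.1, lambda t, p: sent.append((t, p)),
              "maintenance/cmd", "perform maintenance now please")
scheduler.run()   # blocks until the scheduled publish has fired
```

Note that pending messages live only in memory in this sketch; surviving a restart of the scheduling process would additionally require persisting them to disk before they are handed to the broker.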

Gen_bunny (Erlang / Elixir) subscribe to queue (Rabbitmq) using bunnyc

Does anyone know how I can subscribe to a RabbitMQ queue using gen_bunny?
I am able to connect and push messages, and by using the get method I can also receive messages. However, I can't find out how to subscribe to a queue and receive the messages in my gen_server.
