KMS Access Denied exception on SNS Topic to SQS queue

Background: I am trying to execute a Lambda function using an SQS queue as the trigger. Once the Lambda function finishes executing, I want to send its response to another SQS queue via an SNS topic.
SQS Queue -> Lambda -> SNS topic -> SQS Queue
I initially tried to use Lambda Destinations to send the Lambda's response to SQS, but Destinations only work for asynchronous invocations, and an SQS trigger invokes the function synchronously. Fine. So now I publish to an SNS topic, which then handles adding the message to the SQS queue. This part of the integration works fine.
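For context, the Lambda half of the pipeline is just a publish call. A minimal sketch in Java with the AWS SDK v2 and the Lambda events library (the topic ARN and the processing logic are hypothetical):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import software.amazon.awssdk.services.sns.SnsClient;

public class PipelineHandler implements RequestHandler<SQSEvent, Void> {
    private final SnsClient sns = SnsClient.create();

    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        for (SQSEvent.SQSMessage record : event.getRecords()) {
            String result = process(record.getBody());
            // Publish the result to the topic; an SQS subscription on the
            // topic forwards it to the response queue.
            sns.publish(b -> b
                    .topicArn("arn:aws:sns:us-east-1:123456789012:lambda-responses") // hypothetical ARN
                    .message(result));
        }
        return null;
    }

    private String process(String body) {
        return body; // placeholder for the real business logic
    }
}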
Problem: The SNS topic always fails to post to the SQS queue. The messages always land in the dead-letter queue and never in the actual queue. This is the error message I found in CloudWatch:
{
  "delivery": {
    "providerResponse": "{\"ErrorCode\":\"KMS.AccessDeniedException\",\"ErrorMessage\":\"null (Service: AWSKMS; Status Code: 400; Error Code: AccessDeniedException; Request ID: c)\",\"sqsRequestId\":\"Unrecoverable\"}",
    "dwellTimeMs": 51,
    "attempts": 1,
    "statusCode": 400
  },
  "status": "FAILURE"
}
I can see that KMS is denying access to something, but I'm not sure who is being denied by KMS. The SNS topic has no encryption set; it is disabled. I initially enabled it, but after hitting this problem I disabled it again. The problem still persists.
What have I tried:
I've tried disabling and enabling the encryption on SNS topic settings.
I've tried looking at the IAM roles associated with the SNS topic and the SQS queue. Very few service roles were created, and none of them have any restrictions.

As I was writing this question and crossing my t's and dotting my i's, I stumbled upon the solution to my problem.
The encryption on the SQS queue side is configured in the SQS console under Queue Actions -> Configure Queue -> SSE.
On the actual queue, SSE was enabled with the default SQS KMS key, while the dead-letter queue had no encryption setting, which is why messages could be delivered to it without problems.
To solve this problem there are two possible solutions:
Uncheck the SSE option so the queue stops using a KMS key. Depending on your use case, you may still need encryption.
Create a separate customer-managed key that you can share between SNS and SQS; a sketch of this setup follows below. The default AWS-managed keys used by the two services cannot be shared, because their key policies cannot be edited to grant the other service access.
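For the second option, the underlying issue is that the AWS-managed default SQS key has a fixed key policy, so SNS can never be granted kms:GenerateDataKey*/kms:Decrypt on it; a customer-managed key whose policy includes the sns.amazonaws.com service principal works. A minimal sketch with the AWS SDK for Java v2 (the account ID and queue URL are hypothetical):

import java.util.Map;

import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

public class SharedKeySetup {
    // Key policy granting the account admin rights and letting the SNS
    // service principal encrypt messages for delivery to the queue.
    private static final String KEY_POLICY = """
            {
              "Version": "2012-10-17",
              "Statement": [
                {
                  "Sid": "AccountAdmin",
                  "Effect": "Allow",
                  "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
                  "Action": "kms:*",
                  "Resource": "*"
                },
                {
                  "Sid": "AllowSnsDelivery",
                  "Effect": "Allow",
                  "Principal": {"Service": "sns.amazonaws.com"},
                  "Action": ["kms:GenerateDataKey*", "kms:Decrypt"],
                  "Resource": "*"
                }
              ]
            }
            """;

    public static void main(String[] args) {
        try (KmsClient kms = KmsClient.create(); SqsClient sqs = SqsClient.create()) {
            String keyId = kms.createKey(b -> b
                            .description("Shared CMK for SNS -> SQS delivery")
                            .policy(KEY_POLICY))
                    .keyMetadata().keyId();

            // Point the queue's SSE at the shared customer-managed key.
            sqs.setQueueAttributes(b -> b
                    .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue") // hypothetical
                    .attributes(Map.of(QueueAttributeName.KMS_MASTER_KEY_ID, keyId)));
        }
    }
}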

Related

Suave runs out of sockets when receiving messages from AWS' SNS service

This is linked to the question: Suave stops responding with "Socket failed to accept a client" error
When I first started using Suave, I was taking commands from a third-party service pushing messages, and I would run out of sockets.
I now have a better understanding of the problem:
I am receiving messages that are posted to the SNS service on AWS. SNS forwards each message it receives to me over an HTTP connection, as a POST request.
If I reply with Ok, I run out of sockets. This suggests Suave is trying to keep the connection open while AWS initiates a new connection for every message.
If I reply with CLOSE, AWS's delivery becomes odd: messages get delivered in batches followed by periods of nothing.
Since AWS will not change their system for me, I'm wondering if I can reply Ok but then close the connection on Suave's side so that I don't run out of sockets. Is that possible?
Or is there a better way to handle this?

Consumer Processing Timeout?

Here is our scenario:
We are publishing messages to a topic and have a queue subscribed to that topic. We have several Node.js consumers connected to that queue, via the solclient package, to process incoming messages.
What we want to do is process each incoming message and, after processing it successfully, acknowledge it so that it is removed from the queue. The challenge is how to deal with error conditions: how do we flag to the broker that a message failed to be processed? The expectation is that the broker would then attempt delivery to a consumer again, and once max redeliveries is hit, the message moves to the DMQ.
I don't believe there's a way in Solace for a consumer to "NACK" a received message to signal an error in processing. I believe your option would be to unbind from the queue (i.e. disconnect() the Flow object, or MessageConsumer in the Node.js API), which will put any unacknowledged messages back on the queue and make them available for redelivery, and then rebind to the queue (a sketch follows the links below).
Alternatively, if you do your processing in the message-received callback, you could throw an exception in there, and that should (?) accomplish the same thing, but far less gracefully.
https://docs.solace.com/Solace-PubSub-Messaging-APIs/Developer-Guide/Creating-Flows.htm
https://docs.solace.com/API-Developer-Online-Ref-Documentation/js/solace.MessageConsumer.html#disconnect
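The answer above names the Node.js API; as an illustration only, here is a sketch of the same unbind-without-ack / rebind pattern using Solace's Java (JCSMP) API, where the Flow object is explicit. The host, credentials, queue name, and process() body are all assumptions:

import java.util.concurrent.atomic.AtomicReference;

import com.solacesystems.jcsmp.BytesXMLMessage;
import com.solacesystems.jcsmp.ConsumerFlowProperties;
import com.solacesystems.jcsmp.FlowReceiver;
import com.solacesystems.jcsmp.JCSMPException;
import com.solacesystems.jcsmp.JCSMPFactory;
import com.solacesystems.jcsmp.JCSMPProperties;
import com.solacesystems.jcsmp.JCSMPSession;
import com.solacesystems.jcsmp.XMLMessageListener;

public class RebindOnFailure {
    public static void main(String[] args) throws JCSMPException {
        JCSMPProperties props = new JCSMPProperties();
        props.setProperty(JCSMPProperties.HOST, "tcp://broker:55555"); // assumed broker
        props.setProperty(JCSMPProperties.VPN_NAME, "default");
        props.setProperty(JCSMPProperties.USERNAME, "user");
        JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
        session.connect();

        ConsumerFlowProperties flowProps = new ConsumerFlowProperties();
        flowProps.setEndpoint(JCSMPFactory.onlyInstance().createQueue("work.queue"));
        flowProps.setAckMode(JCSMPProperties.SUPPORTED_MESSAGE_ACK_CLIENT); // manual acks

        AtomicReference<FlowReceiver> flow = new AtomicReference<>();
        XMLMessageListener listener = new XMLMessageListener() {
            @Override
            public void onReceive(BytesXMLMessage msg) {
                try {
                    process(msg);
                    msg.ackMessage(); // success: broker removes the message
                } catch (Exception e) {
                    // Failure: close the flow WITHOUT acking. The broker puts
                    // unacked messages back on the queue for redelivery, and the
                    // queue's max-redelivery setting routes them to the DMQ.
                    // Production code would hand the rebind to another thread.
                    try {
                        flow.get().close();
                        flow.set(session.createFlow(this, flowProps));
                        flow.get().start();
                    } catch (JCSMPException rebindFailure) {
                        throw new RuntimeException(rebindFailure);
                    }
                }
            }

            @Override
            public void onException(JCSMPException e) {
                // flow-level errors; log and decide whether to rebind
            }
        };

        flow.set(session.createFlow(listener, flowProps));
        flow.get().start();
    }

    private static void process(BytesXMLMessage msg) { /* business logic */ }
}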

Increase MQTT Time between PUBLISH and PUBCOMP

I have configured an MQTT subscriber in Spring using Spring Integration MQTT. In the handleMessage method I run business logic that takes time. While testing I noticed that when I send a bulk of messages, the broker republishes the same message flagged as an original (I checked whether the message payload was marked as a duplicate; it was sent as an original). The broker publishes the message again even before the subscriber can send PUBCOMP. The QoS level is set to 2.
You should not be doing long-running tasks in the handleMessage callback, as it runs on the MQTT client's network thread.
If you have a long-running task, you should hand it off to a separate thread pool instead.
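A minimal sketch of that hand-off, assuming a Spring Integration MessageHandler (the pool size and method names are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHandler;
import org.springframework.messaging.MessagingException;

public class AsyncMqttHandler implements MessageHandler {
    // Worker pool that runs the slow business logic off the client thread.
    private final ExecutorService workers = Executors.newFixedThreadPool(8);

    @Override
    public void handleMessage(Message<?> message) throws MessagingException {
        // Return immediately so the MQTT client thread can finish the
        // QoS 2 handshake (PUBREC/PUBREL/PUBCOMP) without being blocked.
        workers.submit(() -> doBusinessLogic(message));
    }

    private void doBusinessLogic(Message<?> message) {
        // long-running processing goes here
    }
}

Note the trade-off: the protocol acknowledgment now completes before your processing does, so a crash inside the worker pool can lose a message. Spring Integration's ExecutorChannel gives you the same decoupling declaratively.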

How to retrieve all AWS SQS queue messages in Lumen without dispatching

I am trying to fetch AWS SQS queue messages using a Lumen queue worker; the messages are created by other APIs on the other end. How can I retrieve all the messages from the queue and process them?
I have created the job and installed the AWS SDK package into Lumen, but the handle method written in the job is never called.
Can anyone please guide me step by step to sort out this problem?

Spring Cloud Data Flow programmable application's error channel

How can I define or program an app's error channel that will receive all messages that have failed in processors/sinks?
The documentation says the following:
When the number of retry attempts has exceeded the maxAttempts value, the exception and the failed message will become the payload of a message and be sent to the application’s error channel. By default, the default message handler for this error channel logs the message. You can change the default behavior in your application by creating your own message handler that subscribes to the error channel.
After that, the documentation talks about enabling dead-letter queues in binders. If I understand correctly, this means I can write my own handler that subscribes to the binder's DLQ and receives the messages.
I am curious whether it is possible to define a separate stream that receives failed messages, or to write an additional app that receives those failed payloads and processes them however it wants, without using the binder's DLQ.
Assuming you've enabled the DLQ, and depending on the binder implementation in use, you may have to create a separate application to drain and process messages from the DLQ.
The recommended approaches for the rabbit binder, for example, can be found in the docs.
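For the rabbit binder specifically, a minimal sketch of such a drainer app, assuming the binder's default DLQ naming of <destination>.<group>.dlq (the queue name and header usage below are assumptions):

import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class DlqDrainer {
    // The rabbit binder names its dead-letter queue <destination>.<group>.dlq;
    // "myStream.myGroup" is a placeholder for your actual destination/group.
    @RabbitListener(queues = "myStream.myGroup.dlq")
    public void drain(Message failed) {
        // Inspect the payload and headers (with republishToDlq=true the binder
        // adds x-exception-message / x-exception-stacktrace), then re-publish,
        // repair, or archive the message as this app sees fit.
    }
}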
