MassTransit with SQS/SNS. Publish into SNS? - amazon-sqs

There is an official example of MassTransit with SQS. The "bus" is configured to use SQS (x.UsingAmazonSqs). The receive endpoint is an SQS queue which is in turn subscribed to an SNS topic. However, there is no example of how to publish to SNS.
How do I publish to an SNS topic?
How do I configure SQS/SNS to use HTTP, since I develop against LocalStack? The AWS SDK version of this configuration is:
var cfg = new AmazonSimpleNotificationServiceConfig { ServiceURL = "http://localhost:4566", UseHttp = true };
Update:
After Chris's answer and some experiments with the configuration, I came up with the following for the LocalStack SQS/SNS. This configuration executes without errors, and the Worker gets called and publishes a message to the bus. However, the consumer class is not triggered, and messages don't seem to end up in the queue (or rather the topic).
public static readonly AmazonSQSConfig AmazonSQSConfig = new AmazonSQSConfig { ServiceURL = "http://localhost:4566" };
public static AmazonSimpleNotificationServiceConfig AmazonSnsConfig = new AmazonSimpleNotificationServiceConfig { ServiceURL = "http://localhost:4566" };
...
services.AddMassTransit(x =>
{
    x.AddConsumer<MessageConsumer>();

    x.UsingAmazonSqs((context, cfg) =>
    {
        cfg.Host(new Uri("amazonsqs://localhost:4566"), h =>
        {
            h.Config(AmazonSQSConfig);
            h.Config(AmazonSnsConfig);
            h.EnableScopedTopics();
        });

        cfg.ReceiveEndpoint(queueName: "deal_queue", e =>
        {
            e.Subscribe("deal-topic", s =>
            {
            });
        });
    });
});

services.AddMassTransitHostedService(waitUntilStarted: true);
services.AddHostedService<Worker>();
Update 2:
When I look at the SNS subscriptions, I see that the first one, which was created and subscribed manually through the AWS CLI, has a correct Endpoint, while the second one, created by the MassTransit library, has an incorrect one. How do I configure the Endpoint for the SQS queue?
$ aws --endpoint-url=http://localhost:4566 sns list-subscriptions-by-topic --topic-arn "arn:aws:sns:us-east-1:000000000000:deal-topic"
{
    "Subscriptions": [
        {
            "SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:deal-topic:c804da4a-b12c-4203-83ec-78492a77b262",
            "Owner": "",
            "Protocol": "sqs",
            "Endpoint": "http://localhost:4566/000000000000/deal_queue",
            "TopicArn": "arn:aws:sns:us-east-1:000000000000:deal-topic"
        },
        {
            "SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:deal-topic:b47d8361-0717-413a-92ee-738d14043a87",
            "Owner": "",
            "Protocol": "sqs",
            "Endpoint": "arn:aws:sqs:us-east-1:000000000000:deal_queue",
            "TopicArn": "arn:aws:sns:us-east-1:000000000000:deal-topic"
        }
    ]
}
Update 3:
I've cloned the MassTransit project and ran some of its unit tests for the AmazonSQS bus configuration; the consumers don't seem to work.
When I list the subscriptions after the test run, I can see that the Endpoints are incorrect.
...
{
    "SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:MassTransit_TestFramework_Messages-PongMessage:e16799c2-9dd3-458d-bc28-52a16d646de3",
    "Owner": "",
    "Protocol": "sqs",
    "Endpoint": "arn:aws:sqs:us-east-1:000000000000:input_queue",
    "TopicArn": "arn:aws:sns:us-east-1:000000000000:MassTransit_TestFramework_Messages-PongMessage"
},
...
Could it be that the AmazonSQS transport has a major bug with LocalStack?
It's not clear how to use the library with LocalStack SQS, or how to point it to the actual endpoint (QueueUrl) of an SQS queue.

Whenever Publish is called in MassTransit, messages are published to SNS. Those messages are then routed to receive endpoints as configured. There is no need to understand SQS or SNS when using MassTransit with Amazon SQS/SNS.
In MassTransit, you create consumers, those consumers consume message types, and MassTransit configures topics/queues as needed. Any of the samples using RabbitMQ, Azure Service Bus, etc. are easily converted to SQS by changing UsingRabbitMq to UsingAmazonSqs (and adding the appropriate NuGet package).
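For reference, a minimal sketch of the producing side, assuming a hosted Worker that gets IBus injected and a DealMessage contract (both names are borrowed from the question and purely illustrative):
using System.Threading;
using System.Threading.Tasks;
using MassTransit;
using Microsoft.Extensions.Hosting;

public record DealMessage
{
    public string DealId { get; init; }
}

public class Worker : BackgroundService
{
    private readonly IBus _bus;

    public Worker(IBus bus)
    {
        _bus = bus;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Publish routes the message to the SNS topic MassTransit creates for the
        // message type; every queue subscribed to that topic receives a copy.
        await _bus.Publish(new DealMessage { DealId = "42" }, stoppingToken);
    }
}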

It looks like your configuration is set up properly to publish, but there are at least a few reasons I can think of why you might not be receiving messages:
An issue with the current version of LocalStack. I had to use 0.11.2 - see Localstack with MassTransit not getting messages
You are publishing to a different topic. MassTransit will create the topic using the name of the message type, which may not match the topic you configured on the receive endpoint. You can change the topic name by configuring the topology - see How can I configure the topic name when using MassTransit SQS? (a sketch follows the example below)
Your consumer is not configured on the receive endpoint - see the example below
public static readonly AmazonSQSConfig AmazonSQSConfig = new AmazonSQSConfig { ServiceURL = "http://localhost:4566" };
public static AmazonSimpleNotificationServiceConfig AmazonSnsConfig = new AmazonSimpleNotificationServiceConfig { ServiceURL = "http://localhost:4566" };
...
services.AddMassTransit(x =>
{
    x.UsingAmazonSqs((context, cfg) =>
    {
        cfg.Host(new Uri("amazonsqs://localhost:4566"), h =>
        {
            h.Config(AmazonSQSConfig);
            h.Config(AmazonSnsConfig);
        });

        cfg.ReceiveEndpoint(queueName: "deal_queue", e =>
        {
            e.Subscribe("deal-topic", s => {});
            e.Consumer<MessageConsumer>();
        });
    });
});

services.AddMassTransitHostedService(waitUntilStarted: true);
services.AddHostedService<Worker>();
From what I see in the docs about consumers, you should be able to register your consumer in the AddMassTransit configuration like your original sample, but it didn't work for me.
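Regarding the topic-name mismatch above: the SNS topic name used for a published message type can be set on the message topology. A minimal sketch, assuming the DealMessage contract and the LocalStack host settings from the question (the type name is illustrative):
x.UsingAmazonSqs((context, cfg) =>
{
    // Map DealMessage to the "deal-topic" SNS topic instead of the default
    // topic name derived from the message type name.
    cfg.Message<DealMessage>(m => m.SetEntityName("deal-topic"));

    cfg.Host(new Uri("amazonsqs://localhost:4566"), h =>
    {
        h.Config(AmazonSQSConfig);
        h.Config(AmazonSnsConfig);
    });

    cfg.ReceiveEndpoint(queueName: "deal_queue", e =>
    {
        e.Subscribe("deal-topic", s => {});
        e.Consumer<MessageConsumer>();
    });
});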

Related

Connect from web to IoT Core using a custom authorizer

I'm trying to use a custom authorizer to authenticate a web client.
I have successfully created a dedicated Lambda and a custom authorizer. If I run aws iot describe-authorizer --authorizer-name <authorizer-name> I can see:
{
    "authorizerDescription": {
        "authorizerName": "<authorizer-name>",
        "authorizerArn": "...",
        "authorizerFunctionArn": "...",
        "tokenKeyName": "<token-key-name>",
        "tokenSigningPublicKeys": {
            "<public-key-name>": "-----BEGIN PUBLIC KEY-----\n<public-key-content>\n-----END PUBLIC KEY-----"
        },
        "status": "ACTIVE",
        "creationDate": "...",
        "lastModifiedDate": "...",
        "signingDisabled": false,
        "enableCachingForHttp": false
    }
}
Moreover, I can test it successfully:
$ aws iot test-invoke-authorizer --authorizer-name '<authorizer-name>' --token '<public-key-name>' --token-signature '<private-key-content>'
{
    "isAuthenticated": true,
    "principalId": "...",
    "policyDocuments": [ "..." ],
    "refreshAfterInSeconds": 600,
    "disconnectAfterInSeconds": 3600
}
$
But I cannot connect using the browser.
I'm using aws-iot-device-sdk, and according to the SDK documentation I should set customAuthHeaders and/or customAuthQueryString (my understanding is that the latter should be used in a web environment due to a limitation of browsers) with the headers/query params X-Amz-CustomAuthorizer-Name, X-Amz-CustomAuthorizer-Signature and TestAuthorizerToken. But no matter what combination of values I set, the IoT endpoint always closes the connection (I see a 1000 / 1005 code for the closed connection).
What I've written so far is:
const CUSTOM_AUTHORIZER_NAME = '<authorizer-name>';
const CUSTOM_AUTHORIZER_SIGNATURE = '<private-key-content>';
const TOKEN_KEY_NAME = 'TestAuthorizerToken';
const TEST_AUTHORIZER_TOKEN = '<public-key-name>';

function f(k: string, v?: string, p: string = '&'): string {
    if (!v)
        return '';
    return `${p}${encodeURIComponent(k)}=${encodeURIComponent(v)}`;
}

const client = new device({
    region: '...',
    clientId: '...',
    protocol: 'wss-custom-auth' as any,
    host: '...',
    debug: true,
    // customAuthHeaders: {
    //     'X-Amz-CustomAuthorizer-Name': CUSTOM_AUTHORIZER_NAME,
    //     'X-Amz-CustomAuthorizer-Signature': CUSTOM_AUTHORIZER_SIGNATURE,
    //     [TOKEN_KEY_NAME]: TEST_AUTHORIZER_TOKEN
    // },
    customAuthQueryString: `${f('X-Amz-CustomAuthorizer-Name', CUSTOM_AUTHORIZER_NAME, '?')}${f('X-Amz-CustomAuthorizer-Signature', CUSTOM_AUTHORIZER_SIGNATURE)}${f(TOKEN_KEY_NAME, TEST_AUTHORIZER_TOKEN)}`,
} as any);
As you can see, I've also started having doubts about the header names!
After running my code I see that the client does a GET to the host with the query string that I wrote.
I also see that IoT Core responds with a 101 Switching Protocols, then my client sends the CONNECT command to IoT via WebSocket, followed by another packet from my browser to the backend system.
Then the connection is closed by IoT.
Looking at CloudWatch I cannot see any interaction with the Lambda; it's like the request is blocked.
My doubts are:
First of all, is it possible to connect via MQTT over WSS using only a custom authorizer, without Cognito/certificates? Keep in mind that I am able to use a Cognito identity pool without errors, but I need to remove it.
Is it correct that I just need to set the customAuthQueryString parameter? My understanding is that this should be used on the web.
What values should I set for the various headers/query params? X-Amz-CustomAuthorizer-Name is self-explanatory, but I'm not sure about X-Amz-CustomAuthorizer-Signature (is it correct to fill it with the content of my private key?). Moreover, I'm not sure about TestAuthorizerToken. Is it the correct key to set?
I've also tried to run the custom_authorizer_connect sample of the SDK v2, but it's still not working, and I've run out of ideas.
It turns out the problem was in the permissions set on the backend systems.

MassTransit with Amazon SQS not working in EKS

I have configured the bus like this
services.AddMassTransit(x =>
{
    x.UsingAmazonSqs((context, cfg) =>
    {
        cfg.Durable = true;
        cfg.AutoDelete = false;
        cfg.Host("us-east-2", h =>
        {
        });
    });
});
I've not specified credentials, to allow the AWS SDK to resolve them automatically:
https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/creds-assign.html
The setup works fine on my local machine with an AWS session profile. However, when I deploy the code to EKS, I get the following message:
Health check masstransit-bus with status Degraded completed after 1.9351ms with message 'Degraded Endpoints: {{THE BUS ENDPOINT}}'
There's no other error or warning, but the bus is not working. I have verified IRSA (https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) has been configured correctly for the pod.
Am I missing something? What can I do to track down the underlying issue?

How to properly configure SQS without using SNS topics in MassTransit?

I'm having some issues configuring MassTransit with SQS. My goal is to have N consumers which create N queues, each of them accepting a different message type. Since I always have a 1-to-1 consumer-to-message mapping, I'm not interested in any sort of fan-out behaviour, so publishing a message of type T should publish it directly to that queue. How exactly would I configure that? This is what I have so far:
services.AddMassTransit(x =>
{
    x.AddConsumers(Assembly.GetEntryAssembly());

    x.UsingAmazonSqs((context, cfg) =>
    {
        cfg.Host("aws", h =>
        {
            h.AccessKey(mtSettings.AccessKey);
            h.SecretKey(mtSettings.SecretKey);
            h.Scope($"{mtSettings.Environment}", true);

            var sqsConfig = new AmazonSQSConfig() { RegionEndpoint = RegionEndpoint.GetBySystemName(mtSettings.Region) };
            h.Config(sqsConfig);

            var snsConfig = new AmazonSimpleNotificationServiceConfig() { RegionEndpoint = RegionEndpoint.GetBySystemName(mtSettings.Region) };
            h.Config(snsConfig);
        });

        cfg.ConfigureEndpoints(context, new BusEnvironmentNameFormatter(mtSettings.Environment));
    });
});
The BusEnvironmentNameFormatter class overrides KebabCaseEndpointNameFormatter and adds the environment as a prefix, and the effect is that all the queues start with 'dev', while the h.Scope($"{mtSettings.Environment}", true) line does the same for topics.
I've tried to get this working without configuring topics at all, but I couldn't get it working without errors. What am I missing?
The SQS docs are a bit thin, but is it actually possible to do a bus.Publish() without using SNS topics, or are they necessary? If it's not possible, how would I use bus.Send() without hardcoding queue names in the call?
Cheers!
Publish requires the use of topics, which in the case of SQS uses SNS.
If you want to configure the endpoints yourself, and prevent the use of topics, you'd need to:
Set ConfigureConsumeTopology = false – this prevents topics from being created and connected to the receive endpoint queue.
Set PublishFaults = false – this prevents fault topics from being created when a consumer throws an exception.
Don't call Publish, because that will, obviously, create a topic.
If you want to establish a convention for your receive endpoint names that aligns with your ability to send messages, you could create your own endpoint name formatter that uses message types, and then use those same names to call GetSendEndpoint with the queue:name short address syntax to send messages directly to those queues, as sketched below.
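Putting those pieces together, a minimal sketch of that approach (OrderSubmitted, order-events, and SubmitOrderConsumer are illustrative names, not from the question):
public record OrderSubmitted
{
    public string OrderId { get; init; }
}

public class SubmitOrderConsumer : IConsumer<OrderSubmitted>
{
    public Task Consume(ConsumeContext<OrderSubmitted> context) => Task.CompletedTask;
}

services.AddMassTransit(x =>
{
    x.UsingAmazonSqs((context, cfg) =>
    {
        cfg.Host("us-east-1", h => { /* credentials */ });

        cfg.ReceiveEndpoint("order-events", e =>
        {
            // Don't create or subscribe SNS topics for the consumed message types
            e.ConfigureConsumeTopology = false;

            // Don't publish faults (which would create a fault topic) when a consumer throws
            e.PublishFaults = false;

            e.Consumer<SubmitOrderConsumer>();
        });
    });
});

// Sending directly to the queue, using the queue:name short address syntax
// (bus is an IBus resolved from the container)
var endpoint = await bus.GetSendEndpoint(new Uri("queue:order-events"));
await endpoint.Send(new OrderSubmitted { OrderId = "123" });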

How to use bluetooth devices and FIWARE IoT Agent

I would like to use my Bluetooth device (for example, an app installed on a tablet) to send data (a set of attributes) to the Orion Context Broker via an IoT Agent.
I'm looking at the FIWARE IoT Agents, and I probably have to use the IoT Agent for LWM2M. Is that correct?
Thanks in advance and regards.
Pasquale
Assuming you have freedom of choice, you probably don't need an IoT Agent for that; you just need a service acting as a Bluetooth receiver which can receive your message and pass it on using a recognisable transport.
For example, you can receive data as described in the following Stack Overflow answer.
You can then extract the necessary information to identify the device and the context to be updated.
You can programmatically send NGSI requests in any language capable of HTTP - just generate a library using the NGSI Swagger file - an example is shown in the tutorials:
// Initialization - first require the NGSI v2 npm library and set
// the client instance
const NgsiV2 = require('ngsi_v2');
const defaultClient = NgsiV2.ApiClient.instance;
defaultClient.basePath = 'http://localhost:1026/v2';

// This is a promise to make an HTTP PATCH request to the
// /v2/entities/<entity-id>/attrs endpoint
function updateExistingEntityAttributes(entityId, body, opts, headers = {}) {
    return new Promise((resolve, reject) => {
        defaultClient.defaultHeaders = headers;
        const apiInstance = new NgsiV2.EntitiesApi();
        apiInstance.updateExistingEntityAttributes(
            entityId,
            body,
            opts,
            (error, data, response) => {
                return error ? reject(error) : resolve(data);
            }
        );
    });
}
If you really want to do this with an IoT Agent, you can use the IoT Agent Node lib and create your own IoT Agent.

easiest way to schedule a Google Cloud Dataflow job

I just need to run a Dataflow pipeline on a daily basis, but it seems to me that the suggested solutions, like the App Engine Cron Service (which requires building a whole web app), are a bit too much.
I was thinking about just running the pipeline from a cron job in a Compute Engine Linux VM, but maybe that's far too simple :). What's the problem with doing it that way, and why isn't anybody (besides me, I guess) suggesting it?
This is how I did it using Cloud Functions, PubSub, and Cloud Scheduler
(this assumes you've already created a Dataflow template and it exists in your GCS bucket somewhere).
Create a new topic in PubSub. This will be used to trigger the Cloud Function.
Create a Cloud Function that launches a Dataflow job from a template. I find it easiest to just create this from the CF Console. Make sure the service account you choose has permission to create a Dataflow job. The function's index.js looks something like:
const google = require('googleapis');

exports.triggerTemplate = (event, context) => {
    // in this case the PubSub message payload and attributes are not used
    // but can be used to pass parameters needed by the Dataflow template
    const pubsubMessage = event.data;
    console.log(Buffer.from(pubsubMessage, 'base64').toString());
    console.log(event.attributes);

    google.google.auth.getApplicationDefault(function (err, authClient, projectId) {
        if (err) {
            console.error('Error occurred: ' + err.toString());
            throw new Error(err);
        }

        const dataflow = google.google.dataflow({ version: 'v1b3', auth: authClient });

        dataflow.projects.templates.create({
            projectId: projectId,
            resource: {
                parameters: {},
                jobName: 'SOME-DATAFLOW-JOB-NAME',
                gcsPath: 'gs://PATH-TO-YOUR-TEMPLATE'
            }
        }, function (err, response) {
            if (err) {
                console.error("Problem running dataflow template, error was: ", err);
            }
            console.log("Dataflow template response: ", response);
        });
    });
};
The package.json looks like
{
    "name": "pubsub-trigger-template",
    "version": "0.0.1",
    "dependencies": {
        "googleapis": "37.1.0",
        "@google-cloud/pubsub": "^0.18.0"
    }
}
Go to PubSub and the topic you created, and manually publish a message. This should trigger the Cloud Function and start a Dataflow job.
Use Cloud Scheduler to publish a PubSub message on a schedule:
https://cloud.google.com/scheduler/docs/tut-pub-sub
There's absolutely nothing wrong with using a cron job to kick off your Dataflow pipelines. We do it all the time for our production systems, whether they are Java or Python developed pipelines.
That said, however, we are trying to wean ourselves off cron jobs and move more toward using either AWS Lambdas (we run multi-cloud) or Cloud Functions. Unfortunately, Cloud Functions don't have scheduling yet. AWS Lambdas do.
There is a FAQ answer to that question:
https://cloud.google.com/dataflow/docs/resources/faq#is_there_a_built-in_scheduling_mechanism_to_execute_pipelines_at_given_time_or_interval
You can automate pipeline execution by using Google App Engine (Flexible Environment only) or Cloud Functions.
You can use Apache Airflow's Dataflow Operator, one of several Google Cloud Platform Operators in a Cloud Composer workflow.
You can use custom (cron) job processes on Compute Engine.
The Cloud Functions approach is described as "Alpha", and it's still true that they don't have scheduling (no equivalent to an AWS CloudWatch scheduled event), only Pub/Sub messages, Cloud Storage changes, and HTTP invocations.
Cloud Composer looks like a good option. It is effectively a re-badged Apache Airflow, which is itself a great orchestration tool. Definitely not "too simple" like cron :)
You can use Cloud Scheduler to schedule your job as well. See my post:
https://medium.com/@zhongchen/schedule-your-dataflow-batch-jobs-with-cloud-scheduler-8390e0e958eb
Terraform script
data "google_project" "project" {}

resource "google_cloud_scheduler_job" "scheduler" {
  name     = "scheduler-demo"
  schedule = "0 0 * * *"
  # This needs to be us-central1 even if the app engine is in us-central.
  # You will get a resource not found error if just using us-central.
  region = "us-central1"

  http_target {
    http_method = "POST"
    uri         = "https://dataflow.googleapis.com/v1b3/projects/${var.project_id}/locations/${var.region}/templates:launch?gcsPath=gs://zhong-gcp/templates/dataflow-demo-template"
    oauth_token {
      service_account_email = google_service_account.cloud-scheduler-demo.email
    }

    # need to encode the string
    body = base64encode(<<-EOT
    {
      "jobName": "test-cloud-scheduler",
      "parameters": {
        "region": "${var.region}",
        "autoscalingAlgorithm": "THROUGHPUT_BASED"
      },
      "environment": {
        "maxWorkers": "10",
        "tempLocation": "gs://zhong-gcp/temp",
        "zone": "us-west1-a"
      }
    }
    EOT
    )
  }
}
