Netflix Conductor SQS - amazon-sqs

Has anyone successfully integrated Netflix Conductor with AWS SQS?
I have tried the steps below, but the workflow is not triggered:
1. Created an SQS queue
2. Added AWS credentials to the environment
3. Registered the tasks, workflows, and the event listener below:
{
  "name": "sqs_event_listener",
  "event": "sqs:name_of_sqs_queue",
  "condition": "true",
  "active": true,
  "actions": [{
    "action": "start_workflow",
    "start_workflow": {
      "name": "mywf"
    }
  }]
}

I know this is too late to help the original poster, but adding a response to improve the hive mind of SO:
In your Conductor application.properties file, make sure you have the following values:
conductor.default-event-queue.type=sqs
conductor.event-queues.sqs.enabled=true
conductor.event-queues.sqs.authorized-accounts=(your AWS account number)
We also need to update annotations-processor/awssqs-event-queue/src/main/java/com/netflix/conductor/SQSEventQueueConfiguration.java so that it supplies AWS credentials via the default provider chain:

@Bean
AWSCredentialsProvider createAWSCredentialsProvider() {
    return new DefaultAWSCredentialsProviderChain();
}
With this configuration in Conductor, you can now restart your instance, and your event handler should receive events from the SQS queue.
For a full post with workflows and tasks sending and receiving SQS messages, check out: https://orkes.io/content/docs/how-tos/Tasks/SQS-event-task
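If you prefer to register the event handler programmatically rather than through the UI, a minimal sketch looks like this. It assumes a Conductor server reachable at a local URL and the standard /api/event endpoint for event handler definitions; the server URL, queue name, and workflow name are placeholders from the question:

```python
import json
import urllib.request

CONDUCTOR_API = "http://localhost:8080/api"  # assumption: local dev server

# The same event handler definition as above
event_handler = {
    "name": "sqs_event_listener",
    "event": "sqs:name_of_sqs_queue",
    "condition": "true",
    "active": True,
    "actions": [{
        "action": "start_workflow",
        "start_workflow": {"name": "mywf"},
    }],
}

def register_event_handler(handler):
    # POST the event handler definition to the Conductor server
    req = urllib.request.Request(
        f"{CONDUCTOR_API}/event",
        data=json.dumps(handler).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# register_event_handler(event_handler)
```

Once registered, the handler should show up under the server's event handler listing, and the SQS-side configuration from application.properties does the rest.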

Related

How can you answer a client with his/her name in Dialogflow when it's provided by an external service?

I'm creating a Dialogflow agent in which the client identifies with a clientId. This uses Twilio for WhatsApp chatbot integration.
DIALOG
- Hi, tell me your clientId
- abcde1234
At this point I need to get the client name from an external service...
GET http://xxx/clients/id/abcde1234 (Authentication: Basic xxx:yyy)
-> {"id": "abcde1234", "name": "John", ...}
... and answer with it:
DIALOG
- Hi, John, how can I help you?
Is this possible with Dialogflow?
So in order to fetch the value of the user's input, we can create something called a session parameter. Basically, this will be a JSON object in the API request sent to your webhook API, and it persists throughout the lifespan of your conversation (because of the long lifetime set on the context). You can read more in depth about contexts here.
We can then set up a simple NodeJS codebase on a Cloud Function (used this only due to its simplicity of deployment, though you are free to use any cloud provider/platform of your choice).
I made some minor modifications to the boilerplate codebase present in every Dialogflow ES agent.
For example, here are the changes made in the index.js file:
// ...
function welcome(agent) {
  // read the long-lived context holding our session parameters
  const globalParameters = agent.getContext('global-parameters');
  const questionNumber = globalParameters.parameters.number;
  // placeholder for the name the external GET call would return
  const sampleNameFromGetCall = 'John';
  agent.add(`Welcome to my agent! ${sampleNameFromGetCall}`);
}
and here's the package.json
{
  "name": "dialogflowfirebasefulfillment",
  "description": "This is the default fulfillment for a Dialogflow agents using Cloud Functions for Firebase",
  "version": "0.0.1",
  "private": true,
  "license": "MIT",
  "author": "Google Inc.",
  "engines": {
    "node": "16"
  },
  "dependencies": {
    "actions-on-google": "^2.2.0",
    "dialogflow": "^1.2.0",
    "dialogflow-fulfillment": "^0.5.0",
    "firebase-admin": "^11.4.1",
    "firebase-functions": "^4.1.1"
  },
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  }
}
Here's the library we used, which is the one built by Google for this purpose:
https://github.com/googleapis/nodejs-dialogflow
Once I enabled webhook fulfillment on my agent, I quickly tested it.
There's a major caveat: this repo has been archived by Google and no longer receives updates, so you may have to parse the incoming request in your webhook API yourself, or use this library with some major changes to its codebase.
You would also need to make sure the overall latency of your request isn't too high, since Dialogflow imposes a timeout on webhook calls (around 5 seconds for ES agents).
So, in a nutshell: yes, we can definitely fetch a value from your Dialogflow agent, use it to call an API, parse that response, and use it as part of our dynamic response. The value would be stored in a JSON object called a context, which will be part of any incoming request to your webhook API.
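As a rough sketch of the same flow outside Node, this is roughly what the webhook handler could look like. The request shape follows the Dialogflow ES v2 webhook format; the context name, clientId parameter, and lookup URL are taken from the question and are placeholders:

```python
import json
import urllib.request

def fetch_client_name(client_id):
    # External clients service from the question; credentials are placeholders
    req = urllib.request.Request(
        f"http://xxx/clients/id/{client_id}",
        headers={"Authorization": "Basic xxx:yyy"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["name"]

def handle_webhook(body, lookup=fetch_client_name):
    # Find the long-lived context that stores our session parameters
    contexts = body["queryResult"]["outputContexts"]
    params = next(
        c["parameters"] for c in contexts
        if c["name"].endswith("/contexts/global-parameters")
    )
    name = lookup(params["clientId"])
    return {"fulfillmentText": f"Hi, {name}, how can I help you?"}
```

In a real deployment this function would sit behind the Cloud Function's HTTP entry point, and the latency caveat above applies to the external GET call.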

Create notifications only bot for Microsoft Teams

I'm trying to create a notifier via MS Teams. I want to send a direct message to a named user. Here's what I've done so far:
Created a bot at https://dev.botframework.com in my azure account
Tied the bot to an app registration in AzureAD
Retrieved a token
I'm trying to create a new conversation by posting:
{
  "bot": {
    "name": "OpenUnison Notifications Bot",
    "id": "openunison"
  },
  "members": [
    {
      "name": "Matt Mosley",
      "id": "mmosley@marcboorshteintremolosecuri.onmicrosoft.com"
    }
  ],
  "topicName": "OpenUnison Notifications",
  "isGroup": false
}
to https://smba.trafficmanager.net/apis/v3/conversations, the response I get is
{"error":{"code":"BadSyntax","message":"Bad format of conversation ID"}}
When I look in the activity log I don't see anything for the Teams channel, but for web I see "Activity dropped because the bot's endpoint is missing". I think I'm missing something. I don't want to handle responses; this is a no-reply, notifications-only bot. How can I avoid requiring a bot endpoint? Also, am I even taking the right approach for my goal?
Notification-only bots use proactive messaging to communicate with the user.
A proactive message is a message that is sent by a bot to start a conversation.
When using proactive messaging to send notifications you need to make sure your users have a clear path to take common actions based on your notification, and a clear understanding of why the notification occurred.
POST {Service URL of your bot}/v3/conversations
{
  "bot": {
    "id": "c38eda0f-e780-49ae-86f0-afb644203cf8",
    "name": "The Bot"
  },
  "members": [
    {
      "id": "29:012d20j1cjo20211"
    }
  ],
  "channelData": {
    "tenant": {
      "id": "197231joe-1209j01821-012kdjoj"
    }
  }
}
Sample link: https://github.com/OfficeDev/microsoft-teams-sample-complete-csharp/blob/32c39268d60078ef54f21fb3c6f42d122b97da22/template-bot-master-csharp/src/dialogs/examples/teams/ProactiveMsgTo1to1Dialog.cs
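A minimal sketch of the same call, assuming you already have a bearer token for the bot. Note that in the proactive-messaging payload the member id is the Teams-internal 29:... user id (obtained from the conversation roster), not an email address, which may be why the original request was rejected with "Bad format of conversation ID":

```python
import json
import urllib.request

SERVICE_URL = "https://smba.trafficmanager.net/apis"  # from the question

def build_conversation_request(bot_id, member_id, tenant_id):
    # Mirrors the proactive-messaging payload shown above
    return {
        "bot": {"id": bot_id},
        "members": [{"id": member_id}],
        "channelData": {"tenant": {"id": tenant_id}},
    }

def create_conversation(token, payload):
    req = urllib.request.Request(
        f"{SERVICE_URL}/v3/conversations",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # The returned conversation id is what you then POST activities to
        return json.load(resp)["id"]
```

After creating the conversation, you send the notification itself by posting an activity to the returned conversation id; no incoming endpoint is needed for that direction.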

How do I safely grant permission for Cloud Scheduler to create a Dataflow job?

I have a Dataflow template that I can use for a Dataflow job running as a service account of my choosing. I've actually used one of Google's provided samples: gs://dataflow-templates/latest/GCS_Text_to_BigQuery.
I now want to schedule this using Cloud Scheduler. I've set up my scheduler job like so:
When the scheduler job runs it errors with PERMISSION_DENIED:
{
  "insertId": "1kw7uaqg3tnzbqu",
  "jsonPayload": {
    "@type": "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished",
    "url": "https://dataflow.googleapis.com/v1b3/projects/project-redacted/locations/europe-west2/templates:launch?gcsPath=gs%3A%2F%2Fdataflow-templates%2Flatest%2FGCS_Text_to_BigQuery",
    "jobName": "projects/project-redacted/locations/europe-west2/jobs/aaa-schedule-dataflow-job",
    "status": "PERMISSION_DENIED",
    "targetType": "HTTP"
  },
  "httpRequest": {
    "status": 403
  },
  "resource": {
    "type": "cloud_scheduler_job",
    "labels": {
      "job_id": "aaa-schedule-dataflow-job",
      "project_id": "project-redacted",
      "location": "europe-west2"
    }
  },
  "timestamp": "2021-12-16T16:41:17.349974291Z",
  "severity": "ERROR",
  "logName": "projects/project-redacted/logs/cloudscheduler.googleapis.com%2Fexecutions",
  "receiveTimestamp": "2021-12-16T16:41:17.349974291Z"
}
I have no idea what permission is missing or what I need to grant in order to make this work and am hoping someone here can help me.
In order to reproduce the problem I have built a terraform configuration that creates the Dataflow job from the template along with all of its prerequisites and it executes successfully.
In that same terraform configuration I have created a Cloud Scheduler job that purports to execute an identical Dataflow job and it is that which fails with the error given above.
All this code is available at https://github.com/jamiet-msm/dataflow-scheduler-permission-problem/tree/6ef20824af0ec798634c146ee9073b4b40c965e0, and I have created a README that explains how to run it.
I figured it out: the service account needs to be granted roles/iam.serviceAccountUser on itself:
resource "google_service_account_iam_member" "sa_may_act_as_itself" {
  service_account_id = google_service_account.sa.name
  role               = "roles/iam.serviceAccountUser"
  member             = "serviceAccount:${google_service_account.sa.email}"
}
roles/dataflow.admin is also required; roles/dataflow.worker isn't enough. I assume that's because dataflow.jobs.create is required, which is not provided by roles/dataflow.worker (see https://cloud.google.com/dataflow/docs/concepts/access-control#roles for reference):
resource "google_project_iam_member" "df_admin" {
  role   = "roles/dataflow.admin"
  member = "serviceAccount:${google_service_account.sa.email}"
}
Here is the commit with the required changes: https://github.com/jamiet-msm/dataflow-scheduler-permission-problem/commit/3fd7cabdf13d5465e01a928049f54b0bd486ed73

Create and get new channel incoming webhook in slack

I just created a channel via the Slack API using the channels.create method. How do I add an incoming webhook and get its URL programmatically? I have other tools that will use it further.
You cannot create new incoming webhooks programmatically, but you don't have to. Just override the channel property on an existing incoming webhook for your current Slack team to use the new channel.
Example:
{
  "text": "This is a line of text.\nAnd this is another one.",
  "channel": "channel-name"
}
Note that this will only work for incoming webhooks defined via custom integrations, but not for those defined as part of a Slack app.
For example, overriding the channel when posting from Python:

import json
import time

import requests

data = {
    "attachments": [
        {
            "author_name": "[Alert] - A Jenkins Job is Already Running!",
            "color": "#36a64f",
            "title": "Android Jenkins Job",
            "title_link": "http://xx.xxx.xxx.xxx/job/Mobile_Regression/",
            "footer": "Android Build Attempted",
            "ts": time.time()
        }
    ],
    "channel": "#channel"
}
json_params_encoded = json.dumps(data)
# hook_url is the URL of your existing incoming webhook
slack_response = requests.post(url=hook_url, data=json_params_encoded,
                               headers={"Content-type": "application/json"})

How do I start rabbitmq-native consumers configured in application.yml?

I have a Grails 3.1.12 app with the rabbitmq-native 3.3.1 plugin.
In build.gradle:
compile "org.grails.plugins:rabbitmq-native:3.3.1"
This app runs in a cluster and I want the instances to act as workers. A message written to the exchange should go to one instance, the next message to another instance, and so on.
I can bind consumers to queues using a static block in each consumer:
static rabbitConfig = [
    "queue": "my.queue.that.is.bound.to.some.exchange"
]
Or I can bind them in application.yml:
rabbitmq:
  exchanges:
    - name: some.exchange
      type: fanout
  queues:
    - name: my.queue.that.is.bound.to.some.exchange
      exchange: some.exchange
  consumers:
    MyConsumer:
      queue: my.queue.that.is.bound.to.some.exchange
But when I map consumers to queues in application.yml, the consumer does not consume messages from the queue. I managed to dump the RabbitMQ status report, which shows the consumer is stopped:
[
  {
    "consumers": [
      {
        "fullName": "MyConsumer",
        "load": 0.0,
        "name": "MyConsumer",
        "numConfigured": 1,
        "numConsuming": 1,
        "numProcessing": 0,
        "queue": "my.queue.that.is.bound.to.some.exchange",
        "runningState": {
          "enumType": "com.budjb.rabbitmq.RunningState",
          "name": "STOPPED"
        }
      }
    ],
    "host": "localhost",
    "name": "35d07d1d-9cdc-460f-a63d-da24eb72b479",
    "port": 5672,
    "runningState": {
      "enumType": "com.budjb.rabbitmq.RunningState",
      "name": "RUNNING"
    },
    "virtualHost": "/"
  }
]
I tried calling rabbitContext.startConsumers() or even consumerManager.start() from Bootstrap.init(), but the consumers are not populated yet (consumerManager.consumers == []), so it does nothing.
I'm trying to keep the consumer bindings in an externalized configuration, so I can selectively turn consumers on or off depending on context. I might turn off heavy consumers on nodes that serve web traffic, for example. That would be more awkward to do in a static initializer block in the consumer than in a configuration file.
So, how do I start my consumers when their queue binding is defined in application.yml?
I did some debugging on the plugin, and found out that when it starts the consumers it validates the configuration with this method:
@Override
boolean isValid() {
    boolean valid = true
    if (!queue && !exchange) {
        log.warn("consumer is not valid because it has no queue nor exchange defined")
        valid = false
    }
    if (queue && exchange) {
        log.warn("consumer is not valid because it has both a queue and an exchange defined")
        valid = false
    }
    if (binding instanceof Map && !(match in ["any", "all"])) {
        log.warn("match must be either 'any' or 'all'")
        valid = false
    }
    return valid
}
In my case, the validation was failing because my configuration didn't have the binding and match properties, which shouldn't be required for queue consumers.
Adding them with bogus values made the configuration validate, and the consumers started properly and consumed messages.
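For reference, the workaround looks something like this in application.yml. The binding and match values below are placeholders whose only job is to satisfy the plugin's isValid() check; adjust them to your own setup:

```yaml
rabbitmq:
  consumers:
    MyConsumer:
      queue: my.queue.that.is.bound.to.some.exchange
      # Not logically needed for a queue consumer, but the plugin's
      # configuration validation rejects the consumer without them:
      binding: "#"
      match: any
```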
Not sure if this would help your case as well, but in mine it did :)