Triggering a Step Function with EventBridge using Serverless

I've set up an event on a Step Function as follows:
events:
  - S3EventBridge:
      Type: EventBridgeRule
      Properties:
        EventBusName: default
        pattern:
          source:
            - aws.s3
          detail-type:
            - Object Created
          detail:
            bucket:
              name:
                - "${self:custom.xxxx.${self:provider.stage}}-${self:provider.stage}"
            object:
              key:
                - prefix: 'payloads/to_process'
The stack deploys successfully, but when I put a new object into the bucket at the specified path, nothing happens.
I enabled EventBridge notifications on the S3 bucket resource:
Data:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: "${self:custom.xxxx.${self:provider.stage}}-${self:provider.stage}"
    NotificationConfiguration:
      EventBridgeConfiguration:
        EventBridgeEnabled: True
If I attach the identical EventBridge event to a Lambda function, it works and the function is triggered correctly.
What am I missing?

Related

How to add PTR record for EC2 Instance's Private IP in CDK?

I have two private hosted zones created for populating A records and PTR records corresponding to my EC2 instance's private IP. Yes, it's the private IP that I need. This subnet is routed to our corporate data center, so we need non-cryptic hostnames and consistent reverse lookups on them within the account.
I've got the forward lookup working well; however, I'm confused about how exactly it should work for the reverse lookup on the IP. Assume my CIDR is 192.168.10.0/24, which is where the EC2 instances will be created.
const fwdZone = new aws_route53.PrivateHostedZone(
  this, "myFwdZone", {
    zoneName: "example.com",
    vpc: myVpc,
  });
const revZone = new aws_route53.PrivateHostedZone(
  this, "myRevZone", {
    zoneName: "10.168.192.in-addr.arpa",
    vpc: myVpc,
  }
);
Later, I create the A record by referencing the EC2 instance's instancePrivateIp property. This worked well.
const myEc2 = new aws_ec2.Instance(this, 'myEC2', {...})
new aws_route53.RecordSet(this, "fwdRecord", {
  zone: fwdZone,
  recordName: "myec2.example.com",
  recordType: aws_route53.RecordType.A,
  target: aws_route53.RecordTarget.fromIpAddresses(
    myEc2.instancePrivateIp
  ),
});
However, when I try to create the PTR record for the same instance, I run into trouble. I need to extract the fourth octet and specify it as the recordName:
new aws_route53.RecordSet(this, "revRecord", {
  zone: revZone,
  recordName: myEc2.instancePrivateIp.split('.')[3],
  recordType: aws_route53.RecordType.PTR,
  target: aws_route53.RecordTarget.fromValues("myec2.example.com"),
});
The CDK-synthesized CloudFormation template looks odd as well, especially the token syntax:
revRecordDEADBEEF:
  Type: AWS::Route53::RecordSet
  Properties:
    Name: ${Token[TOKEN.10.168.192.in-addr.arpa.
    Type: PTR
    HostedZoneId: A12345678B00CDEFGHIJ3
    ResourceRecords:
      - myec2.example.com
    TTL: "1800"
Is this the right way to achieve this? If I specify the recordName as just the private IP, the synthesized template ends up doing something else, which I can see is incorrect too:
revRecordDEADBEEF:
  Type: AWS::Route53::RecordSet
  Properties:
    Name:
      Fn::Join:
        - ""
        - - Fn::GetAtt:
              - myEC2123A01BC
              - PrivateIp
          - .10.168.192.in-addr.arpa.
    Type: PTR
    HostedZoneId: A12345678B00CDEFGHIJ3
    ResourceRecords:
      - myec2.example.com
    TTL: "1800"
Answering the CDK part of your question: the original error was because you were performing string manipulation on an unresolved token. Your CDK code runs before any resources are provisioned. This has to be the case, since it generates the CloudFormation template that will be submitted to CloudFormation to provision the resources. So when the code runs, the instance does not exist, and its IP address is not knowable.
CDK still allows you to access unresolved properties, returning a Token instead. You can pass this token around and it will be resolved to the actual value during deployment.
To perform string manipulation on a token, you can use CloudFormation's built-in functions, since they run during deployment, after the token has been resolved.
Here's what it would look like:
recordName: Fn.select(0, Fn.split('.', myEc2.instancePrivateIp))
As you found out yourself, you were also selecting the wrong octet of the IP address, so the actual solution would include replacing 0 with 3 in the call.
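Put together, a minimal sketch of the corrected record could look like this (assuming the same revZone and myEc2 constructs from the question, and CDK v2 imports):
import { Fn, aws_route53 } from 'aws-cdk-lib';

// Select the fourth octet at deployment time, after the token has been
// resolved, instead of calling split() on the unresolved token string.
new aws_route53.RecordSet(this, "revRecord", {
  zone: revZone,
  recordName: Fn.select(3, Fn.split('.', myEc2.instancePrivateIp)),
  recordType: aws_route53.RecordType.PTR,
  target: aws_route53.RecordTarget.fromValues("myec2.example.com"),
});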
References:
https://docs.aws.amazon.com/cdk/v2/guide/tokens.html
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib-readme.html#intrinsic-functions-and-condition-expressions

AWS CDK - inline IAM Policies with conflicting names are generated for different stacks using a shared role

I'm using the CDK to deploy several stacks, and one of the roles used is shared across multiple stacks. The constructs (e.g. CodeBuildAction) that use the role frequently attach the necessary permissions as an inline policy. However, even though it is an "imported" role, the generated inline policy name is not unique across stacks, so both CloudFormation stacks contain the same Policy resource and fight over its contents. (Neither stack contains the Role resource.)
import * as cdk from "@aws-cdk/core";
import * as iam from "@aws-cdk/aws-iam";
const sharedRoleArn = "arn:aws:iam::1111111111:role/MyLambdaRole";
const app = new cdk.App();
const stackOne = new cdk.Stack(app, "StackOne");
const roleRefOne = iam.Role.fromRoleArn(stackOne, "SharedRole", sharedRoleArn);
// Under normal circumstances, this is called inside constructs defined by AWS
// (like a CodeBuildAction that grants permission to access Artifact S3 buckets, etc)
roleRefOne.addToPrincipalPolicy(new iam.PolicyStatement({
  actions: ["s3:ListBucket"],
  resources: ["*"],
  effect: iam.Effect.ALLOW,
}));
const stackTwo = new cdk.Stack(app, "StackTwo");
const roleRefTwo = iam.Role.fromRoleArn(stackTwo, "SharedRole", sharedRoleArn);
roleRefTwo.addToPrincipalPolicy(new iam.PolicyStatement({
  actions: ["dynamodb:List*"],
  resources: ["*"],
  effect: iam.Effect.ALLOW,
}));
The following are fragments of the cloud assembly generated for the two stacks:
SharedRolePolicyA1DDBB1E:
  Type: AWS::IAM::Policy
  Properties:
    PolicyDocument:
      Statement:
        - Action: s3:ListBucket
          Effect: Allow
          Resource: "*"
      Version: "2012-10-17"
    PolicyName: SharedRolePolicyA1DDBB1E
    Roles:
      - MyLambdaRole
  Metadata:
    aws:cdk:path: StackOne/SharedRole/Policy/Resource
SharedRolePolicyA1DDBB1E:
  Type: AWS::IAM::Policy
  Properties:
    PolicyDocument:
      Statement:
        - Action: dynamodb:List*
          Effect: Allow
          Resource: "*"
      Version: "2012-10-17"
    PolicyName: SharedRolePolicyA1DDBB1E
    Roles:
      - MyLambdaRole
  Metadata:
    aws:cdk:path: StackTwo/SharedRole/Policy/Resource
You can see above that the aws:cdk:path values for the two policies are different, but they end up with the same name (SharedRolePolicyA1DDBB1E), which is used as the physical name of the inline policy attached to the MyLambdaRole role. (The same behavior occurs for stacks in separate "Apps" as well.)
There's no affordance for setting the PolicyName of the "default policy" generated for a role (or for choosing which policies a construct attaches permissions to). I could also make the shared role immutable (using { mutable: false } on fromRoleArn), but then I would need to reconstruct the potentially complicated policies a set of constructs would have given the role, and attach them myself. (See the sketch below.)
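For reference, the immutable import itself is a one-line change (a sketch, reusing sharedRoleArn from the snippet above; the construct id is arbitrary):
// With mutable: false, CDK treats the imported role as read-only and does not
// attach inline policies to it, so each stack would have to grant any required
// permissions explicitly instead.
const immutableRoleRef = iam.Role.fromRoleArn(stackOne, "SharedRoleImmutable", sharedRoleArn, {
  mutable: false,
});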
I was able to work around the issue by templating the stack name into the imported role's "id", as in:
const stack = cdk.Stack.of(scope)
const role = iam.Role.fromRoleArn(scope, `${stack.stackName}SharedRole`, sharedRoleArn);
where I construct my role.
Is this expected behavior? Am I misunderstanding something about imported resources in CDK? Is there a better alternative? (My understanding of construct IDs is that they only need to be unique within a given scope.)

Google Cloud Storage solution to join 2 CSV files in a bucket based on a common column

I need help/suggestions to implement the below use case.
I have 2 CSV files in a Google Cloud Storage bucket. I need to join these 2 files based on one common column and save the output file back into the Google Cloud Storage bucket.
I need to implement this using a Google Cloud solution (Cloud Dataflow with Beam Python, Cloud Functions, or any other Cloud solution). Since I am new to Google Cloud Platform, I would appreciate any help implementing this use case.
Looking forward to hearing from you
Thanks in advance
You have several ways to achieve this. If the result of the merge is less than 1 GB and you want only 1 output file, you can do it like this:
Query the external CSV files from BigQuery (a federated query) and save the result in a temporary table, like this:
CREATE OR REPLACE EXTERNAL TABLE mydataset.table1
OPTIONS (
  format = 'CSV',
  uris = ['gs://mybucket/file1.csv'],
  skip_leading_rows = 1
)

CREATE OR REPLACE EXTERNAL TABLE mydataset.table2
OPTIONS (
  format = 'CSV',
  uris = ['gs://mybucket/file2.csv'],
  skip_leading_rows = 1
)

CREATE TABLE mydataset.newtable
OPTIONS(
  expiration_timestamp=TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
) AS
SELECT *
FROM mydataset.table1 JOIN mydataset.table2 ON ....
Then, export the temporary table mydataset.newtable to GCS.
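One way to perform that export, for example, is BigQuery's EXPORT DATA statement (a sketch; the destination URI is only an illustration and must contain a single * wildcard):
EXPORT DATA OPTIONS(
  uri = 'gs://mybucket/joined/result-*.csv',
  format = 'CSV',
  overwrite = true,
  header = true
) AS
SELECT * FROM mydataset.newtable;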
Otherwise, you can use the solution that I describe in this article (which I wrote).
EDIT 1
You can use this sample workflow definition that does what you need:
- loadFile1:
    call: http.post
    args:
      url: https://bigquery.googleapis.com/bigquery/v2/projects/<projectID>/jobs
      auth:
        type: OAuth2
      body:
        configuration:
          query:
            query: CREATE OR REPLACE EXTERNAL TABLE mydataset.table1 OPTIONS (format = 'CSV', uris = ['gs://mybucket/file1.csv'], skip_leading_rows = 1)
            useLegacySql: false
- loadFile2:
    call: http.post
    args:
      url: https://bigquery.googleapis.com/bigquery/v2/projects/<projectID>/jobs
      auth:
        type: OAuth2
      body:
        configuration:
          query:
            query: CREATE OR REPLACE EXTERNAL TABLE mydataset.table2 OPTIONS (format = 'CSV', uris = ['gs://mybucket/file2.csv'], skip_leading_rows = 1)
            useLegacySql: false
- joinQuery:
    call: http.post
    args:
      url: https://bigquery.googleapis.com/bigquery/v2/projects/<projectID>/jobs
      auth:
        type: OAuth2
      body:
        configuration:
          query:
            query: CREATE TABLE mydataset.newtable OPTIONS( expiration_timestamp=TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)) AS SELECT * ......
            useLegacySql: false
    result: queryResult
- getState:
    call: http.get
    args:
      url: ${"https://bigquery.googleapis.com/bigquery/v2/projects/<projectID>/jobs/" + queryResult.body.jobReference.jobId}
      auth:
        type: OAuth2
    result: jobState
    next: testState
- testState:
    switch:
      - condition: ${jobState.body.status.state == "DONE"}
        next: extractData
    next: waitAndGetState
- waitAndGetState:
    call: sys.sleep
    args:
      seconds: 1
    next: getState
- extractData:
    call: http.post
    args:
      url: https://bigquery.googleapis.com/bigquery/v2/projects/<projectID>/jobs
      auth:
        type: OAuth2
      body:
        configuration:
          extract:
            destinationUri: gs://<YourBucket>/bq-extract.csv
            destinationFormat: CSV
            sourceTable:
              projectId: <projectID>
              datasetId: mydataset
              tableId: newtable
    result: extractResult
- returnOutput:
    return: ${extractResult}
Then, use Cloud Scheduler to call the Workflow Executions create API directly, with an empty body {} and OAuth2 authentication.
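For example, such a Cloud Scheduler job could be created like this (a sketch; the job name, schedule, project, location, workflow name, and service account are placeholders):
# Trigger the workflow on a schedule by POSTing an empty body to the
# Workflow Executions API, authenticating with OAuth2 as a service account.
gcloud scheduler jobs create http join-csv-workflow-trigger \
  --schedule="0 6 * * *" \
  --uri="https://workflowexecutions.googleapis.com/v1/projects/<projectID>/locations/<location>/workflows/<workflowName>/executions" \
  --http-method=POST \
  --message-body="{}" \
  --oauth-service-account-email="<serviceAccount>@<projectID>.iam.gserviceaccount.com"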

Swagger client, adding a fixed parameter to the request

I am looking for a way to add a fixed parameter to every request the client sends to the server.
For example: param1=false. The default value for the server is param1=true, but I want the generated client to send false with every request. Is this somehow possible?
I have tried:
default: false - which is documented not to work for this case
defaultValue: false - which seems to only work for the UI
enum: [false] - which also seems to only work for the UI
Edit
When I generate Java Code with
- name: param1
  in: query
  type: boolean
  required: true
  enum: [true]
The generated code looks like this:
private com.squareup.okhttp.Call routeGetCall(Boolean param1) {
    Object localVarPostBody = null;
    // verify the required parameter 'param1' is set
    if (param1 == null) {
        throw new ApiException("Missing the required parameter 'param1' when calling routeGet(Async)");
    }
    ... more code ...
param1 is never forced to be true; I can even set it to false. Therefore, enum seems to only work for the UI?
While it's possible to have a constant parameter with just one possible value, such as ?param1=true:
parameters:
  - name: param1
    in: query
    type: boolean
    required: true
    enum: [true]
if a parameter has multiple possible values, such as true / false (as in your example), the spec cannot force any specific value for the parameter. It's up to the client to decide which value to use.
That is, the generated client code needs to be modified to use a specific parameter value.
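For instance, a thin wrapper around the generated Java client could pin the value (a hypothetical sketch; the DefaultApi class and the public routeGet(Boolean) method are assumptions about what the code generator produces for your spec):
public class PinnedParamApi {
    private final DefaultApi api = new DefaultApi();

    // Always send param1=false, regardless of the server-side default,
    // and do not expose the parameter to callers at all.
    public void routeGet() throws ApiException {
        api.routeGet(Boolean.FALSE);
    }
}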

How to create a new content entry via Locomotive CMS RESTful API

I have created a site using LocomotiveCMS with two content types called Photo and Gallery. These content types have a relationship so that I can create image galleries on my site.
I am currently looking to use the RESTful API to create multiple content entries for Photo as a script traverses a set of files.
I can connect to the API with no issue and modify the site etc.
I would assume that the cURL command for a new content entry would take the form of:
curl -X POST -d 'photo[image_id]=blah&photo[gallery]=1234&photo[file]=<filepath>&photo[published]=true' 'http://<your site>/locomotive/api/current_site.json?auth_token=xxxx'
However, I am unsure how to pass a file through in this command; I have substituted it with <filepath> for now. How would you write this part?
My fields are set up as follows for Photo:
fields:
  - image_id:
      label: Image ID
      type: string
      required: true
      localized: false
  - file: # Name of the field
      label: File
      type: file
      required: true
      localized: false
  - gallery: # Name of the field
      label: Gallery
      type: belongs_to
      required: true
      localized: false
      # Slug of the target content type (eg post if this content type is a comment)
      class_name: gallery
I ended up writing a Ruby script to parse the files and upload them by sending the POST data to
/locomotive/api/content_types/photos/entries.json?auth_token=XXXX
The following code can potentially help with this task:
data = {
  content_entry: {
    title: 'Title',
    image: File.new('media/images/screen.png'),
  }
}
HTTMultiParty.post(
  "http://localhost:8080/locomotive/content_types/blogs/entries.json?auth_token=#{@token}",
  query: data,
  headers: { 'Content-Type' => 'application/json' }
)
I'm using HTTMultiParty since we actually need to do a multipart-post. Helpful information on how to do this with curl:
https://github.com/locomotivecms/documentation/pull/175
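A rough curl equivalent of the HTTMultiParty call above might look like this (a sketch; -F builds the multipart body, and the @ prefix attaches the file from disk):
curl -X POST \
  -F 'content_entry[title]=Title' \
  -F 'content_entry[image]=@media/images/screen.png' \
  'http://localhost:8080/locomotive/content_types/blogs/entries.json?auth_token=XXXX'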
To get the token you need something like this:
HTTParty.post(
  'http://localhost:8080/locomotive/api/tokens.json',
  body: { api_key: 'YOUR_API_KEY_HERE' }
)
I hope that helps.
There is an API gem for LocomotiveCMS by now; it works for 2.5.x and 3.x: https://github.com/locomotivecms/coal
The attribute used needs to end with _url for content entry fields with type=file: https://github.com/locomotivecms/engine/pull/511/commits/f3a47ba5672b7a560e5edbef93cc9a4421192f0a
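So for the Photo content type in the question, the POST data would presumably look something like this (a sketch based on the commit linked above; the URL value is only an illustration):
data = {
  content_entry: {
    image_id: 'blah',
    gallery: '1234',
    # For a field of type "file", the API attribute ends with _url.
    file_url: 'http://example.com/photos/photo-001.jpg',
  }
}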
