Conditional resource in serverless - amazon-sqs

I would like to add an AWS resource conditionally, based on the presence of an env var. I tried serverless-cloudformation-parameter-setter, but I get a generic error on deployment and I don't see what I need to do to fix it.
I'm trying to deploy a simple Lambda + SQS stack. If an env var is defined, the queue should also be subscribed to the topic denoted by that var; if the var is not defined, that part should be skipped entirely, leaving just the Lambda and the queue.
This is what I tried:
plugins:
  - serverless-cloudformation-parameter-setter

provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-1

functions:
  update:
    handler: index.update
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - Queue
              - Arn

custom:
  cf-parameters:
    SourceTopicArn: "${env:UPDATE_SNS_ARN}"

resources:
  Parameters:
    SourceTopicArn:
      Type: string
  Resources:
    Queue:
      Type: "AWS::SQS::Queue"
    Subscription:
      Type: "AWS::SNS::Subscription"
      Condition: SourceTopicArn
      Properties:
        TopicArn:
          Ref: SourceTopicArn
        Endpoint:
          Ref: Queue
The error I receive is: The CloudFormation template is invalid: Template format error: Unrecognized parameter type: string
If I remove all the parameter stuff, it works fine.

The Type has to be String, not string. See the supported parameter data types section in the docs.
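Beyond the type fix, note that the Condition: SourceTopicArn on the subscription must point at an entry in a top-level Conditions section; it cannot reference the parameter directly. A minimal sketch of the conditional wiring, assuming an empty parameter value means "no topic" (the HasSourceTopic name is mine; note also that AWS::SNS::Subscription requires a Protocol, and for an SQS endpoint it expects the queue ARN rather than the queue URL that Ref returns):

resources:
  Parameters:
    SourceTopicArn:
      Type: String
      Default: ""
  Conditions:
    # True only when a topic ARN was actually supplied
    HasSourceTopic:
      Fn::Not:
        - Fn::Equals:
            - Ref: SourceTopicArn
            - ""
  Resources:
    Queue:
      Type: "AWS::SQS::Queue"
    Subscription:
      Type: "AWS::SNS::Subscription"
      Condition: HasSourceTopic
      Properties:
        Protocol: sqs
        TopicArn:
          Ref: SourceTopicArn
        Endpoint:
          Fn::GetAtt:
            - Queue
            - Arn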

Related

Bitbucket Cloud interceptor for Tekton EventListener

I'm creating an EventListener for my repo on Bitbucket Cloud, and I saw in the current example in the Tekton documentation that the Bitbucket interceptor only supports Bitbucket Server.
I've created the EventListener and it looks like this:
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: bitbucket-el
spec:
  serviceAccountName: tekton-triggers-admin
  triggers:
    - name: bitbucket-triggers
      interceptors:
        - bitbucket:
            secretRef:
              secretName: bitbucket-secret
              secretKey: secretToken
            eventTypes:
        - cel:
            filter: "header.match('X-Event-Key', 'repo:push')"
            overlays:
              - key: extensions.tag_name
                expression: "split(body.ref, '/')[2]"
              - key: extensions.mangledtag
                expression: "split(split(body.ref, '/')[2], '.')[0]+'-'+split(split(body.ref, '/')[2], '.')[1]+'-'+split(split(body.ref, '/')[2], '.')[2]"
      bindings:
        - ref: bitbucket-binding
      template:
        ref: bitbucket-template
and I pass it the token (bitbucket-secret) generated from a Bitbucket Cloud consumer secret, following this doc: https://support.atlassian.com/bitbucket-cloud/docs/use-oauth-on-bitbucket-cloud/
I used basic auth on the Ingress and the webhook returned 401 Unauthorized; after removing the basic auth and triggering the webhook with a push, I'm now seeing 403 Forbidden.
Thank you in advance
I spent a lot of time on this issue and finally fixed it by using the CEL expression interceptor, as follows.
In this Trigger, we use an overlay to add an "X-Hub-Signature" to the body of the payload. The expression value (1234567 here) doesn't matter; it can be anything. We are just adding the HMAC to the body so that we don't get an error.
Note: by default, there is no interceptor for Bitbucket CLOUD.
apiVersion: triggers.tekton.dev/v1beta1
kind: Trigger
metadata:
  name: energy
spec:
  serviceAccountName: pipeline
  interceptors:
    - ref:
        name: "cel"
      params:
        - name: "filter"
          value: "header.match('X-Event-Key', 'repo:push')"
        - name: "overlays"
          value:
            - key: X-Hub-Signature
              expression: "1234567"
  bindings:
    - ref: energy
  template:
    ref: energy
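A standalone Trigger like this still needs an EventListener that points at it via triggerRef; a minimal sketch, reusing the names above (the listener name is illustrative):

apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: bitbucket-cloud-el
spec:
  serviceAccountName: pipeline
  triggers:
    # Reference the Trigger defined above by name
    - triggerRef: energy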
I am trying to achieve the same thing: starting a build when a PR merge has been done in Bitbucket Cloud.
I was able to create the EventListener resource, but my pipeline is not triggered after merging a PR.
Looking at your example, I still have some questions:
How are the Git repository and the secret configured?
How can you specify a specific branch?
I was looking for a complete example, but it seems like Tekton is just ignoring Bitbucket Cloud as a VCS ...
Kind regards,
Bregt

Prometheus and Alertmanager - route based on env label

I'm trying to configure Alertmanager so that it sends alerts to the right channels based on the value of a specific label. I have 3 Slack channels - dev/staging/prod - and I want alerts coming from instances that have the "env" label set to dev to be sent to the dev Slack channel. Staging and prod work in the same manner. Here is part of my config:
global:
  resolve_timeout: 1m
  slack_api_url: 'https://slack-url'

route:
  group_by: [...]
  receiver: 'default'
  routes:
    - match:
        env: 'prod'
      receiver: 'slack-notifications-prod'
    - match:
        env: 'staging'
      receiver: 'slack-notifications-staging'
    - match:
        env: 'dev'
      receiver: 'slack-notifications-dev'

receivers:
  - name: 'default'
  - name: 'slack-notifications-prod'
    ...
  - name: 'slack-notifications-staging'
    ...
  - name: 'slack-notifications-dev'
    ...
The slack-notifications receivers are all the same and differ only in the channel name.
Current behaviour: all alerts are sent to the prod Slack channel.
Expected behaviour: alerts from the "dev" env are sent to the dev channel, "staging" to the staging channel, and "prod" to the prod channel.
Alertmanager sees these labels just fine (judging from the info in the Alertmanager web UI).
Turns out my config was fine; I was using a webhook URL that was tied to only one Slack channel, which I wasn't aware of.
You have to add the continue: true attribute to the first match:
global:
  resolve_timeout: 1m
  slack_api_url: 'https://slack-url'

route:
  group_by: [...]
  receiver: 'default'
  routes:
    - match:
        env: 'prod'
      receiver: 'slack-notifications-prod'
      continue: true
    - match:
        env: 'staging'
      receiver: 'slack-notifications-staging'
    - match:
        env: 'dev'
      receiver: 'slack-notifications-dev'

receivers:
  - name: 'default'
  - name: 'slack-notifications-prod'
    ...
  - name: 'slack-notifications-staging'
    ...
  - name: 'slack-notifications-dev'
    ...
Alertmanager evaluates child routes until there are no routes left, or until no routes at a given level match the current alert. In that case, it takes the configuration of the current node being evaluated.
The continue attribute defines whether sibling routes (those belonging to the same level) should still be evaluated after a route on that level has already matched.
https://devconnected.com/alertmanager-and-prometheus-complete-setup-on-linux/
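For reference, each of the elided slack-notifications receivers would typically differ only in its channel; a minimal sketch (the channel name is a placeholder):

receivers:
  - name: 'slack-notifications-dev'
    slack_configs:
      # Channel to post to; send_resolved also posts when the alert clears
      - channel: '#alerts-dev'
        send_resolved: true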

How to access AWS SAM Mapping in swagger.yaml / openapi.yaml files

I have declared a mapping named StageMap in my sam.yaml file:
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31

Parameters:
  ProjectName:
    Type: String
  SubProjectName:
    Type: String
  Stage:
    Type: String
    AllowedValues:
      - dev
      - test
      - preprod
      - prod

...

Mappings:
  StageMap:
    dev:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-dev-AuthorizerFunction-1RR2YJ5STBUB6/invocations
    test:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-test-AuthorizerFunction-UQ1EQ2SP5W6G/invocations
    preprod:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-preprod-AuthorizerFunction-UQ1W6EQ2SP5G/invocations
    prod:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-prod-AuthorizerFunction-5STBUB61RR2YJ/invocations
I would like to use this mapping in my swagger.yaml. I have tried the following:
...
x-amazon-apigateway-authorizer:
  type: request
  authorizerUri:
    Fn::FindInMap:
      - 'StageMap'
      - Ref: 'Stage'
      - 'AuthorizerArn'
I also tried this solution, but I got the error: Every Mappings attribute must be a String or a List.
Can you please let me know how to access one of the values in the mapping in the swagger.yaml? Thanks!
I found the following in the AWS SAM docs:
You cannot include parameters, pseudo parameters, or intrinsic functions in the Mappings section.
So I changed:
AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-dev-AuthorizerFunction-1RR2YJ5STBUB6/invocations
to:
AuthorizerFunctionName: auth-bk-main-dev-AuthorizerFunction-1RR2YJ5STBUB6
And in the swagger.yaml I used the following:
x-amazon-apigateway-authorizer:
  type: request
  authorizerUri:
    Fn::Sub:
      - arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${AuthorizerFunctionName}/invocations
      - AuthorizerFunctionName:
          Fn::FindInMap:
            - 'StageMap'
            - Ref: 'Stage'
            - 'AuthorizerFunctionName'
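One thing worth noting: intrinsic functions inside an external swagger.yaml are only resolved when the file is pulled into the template through the AWS::Include transform. A sketch, assuming the API is declared as an AWS::Serverless::Api resource (the resource name and file path are illustrative):

Api:
  Type: AWS::Serverless::Api
  Properties:
    StageName: !Ref Stage
    DefinitionBody:
      # Inline the swagger file so Fn::FindInMap / Fn::Sub inside it are evaluated
      Fn::Transform:
        Name: AWS::Include
        Parameters:
          Location: ./swagger.yaml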

serverless framework with aws import function returns 404

I have two serverless apps which share the same custom authorizer. Suddenly the import functions in the second serverless.yml file stopped working.
The app is based on https://github.com/medwig/serverless-shared-authorizer
gateway serverless.yml:
service: authorizer-stack

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-south-1
  profile: xxx-dev

functions:
  authorizer:
    handler: handler.auth
  test:
    handler: handler.privateEndpoint
    events:
      - http:
          path: /api/test
          method: get
          authorizer:
            type: CUSTOM
            authorizerId:
              Ref: Authorizer
  test2:
    handler: handler.publicEndpoint
    events:
      - http:
          path: /api/test/public
          method: get

resources:
  Resources:
    AuthorizerPermission:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName:
          Fn::GetAtt: AuthorizerLambdaFunction.Arn
        Action: lambda:InvokeFunction
        Principal:
          Fn::Join: ["", ["apigateway.", { Ref: "AWS::URLSuffix" }]]
    Authorizer:
      DependsOn:
        - ApiGatewayRestApi
      Type: AWS::ApiGateway::Authorizer
      Properties:
        Name: ${self:provider.stage}-Authorizer
        RestApiId: { "Ref": "ApiGatewayRestApi" }
        Type: TOKEN
        IdentitySource: method.request.header.Authorization
        AuthorizerResultTtlInSeconds: 300
        AuthorizerUri:
          Fn::Join:
            - ''
            - - 'arn:aws:apigateway:'
              - Ref: "AWS::Region"
              - ':lambda:path/2015-03-31/functions/'
              - Fn::GetAtt: "AuthorizerLambdaFunction.Arn"
              - "/invocations"
  Outputs:
    AuthorizerId:
      Value:
        Ref: Authorizer
      Export:
        Name: authorizerId
    apiGatewayRestApiId:
      Value:
        Ref: ApiGatewayRestApi
      Export:
        Name: restApiId
    apiGatewayRestApiRootResourceId:
      Value:
        Fn::GetAtt:
          - ApiGatewayRestApi
          - RootResourceId
      Export:
        Name: rootResourceId
products serverless.yml:
service: products-list

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-south-1
  profile: xxx-dev
  apiGateway:
    restApiId:
      Fn::ImportValue: authorizer-stack-dev-restApiId
    restApiRootResourceId:
      Fn::ImportValue: authorizer-stack-dev-rootResourceId

functions:
  get-products:
    handler: handler.getProducts
    events:
      - http:
          path: /api/products
          method: get
          authorizer:
            type: CUSTOM
            authorizerId:
              Fn::ImportValue: authorizer-stack-dev-authorizerId
I am getting the following errors, seemingly at random:
An error occurred: products-list-dev - No export named authorizer-stack-dev-restApiId found.
An error occurred: products-list-dev - No export named authorizer-stack-dev-rootResourceId found.
An error occurred: products-list-dev - No export named authorizer-stack-dev-authorizerId found.
What am I missing here?
serverless -v
Framework Core: 1.74.1
Plugin: 3.6.15
SDK: 2.3.1
Components: 2.31.10
From the shared authorizers I have configured in the past, it is not necessary to go to the effort you have undergone. The documentation on the Serverless Framework site has a much simpler setup for a shared authorizer, and I will always go with the simplest solution possible: https://www.serverless.com/framework/docs/providers/aws/events/apigateway#share-authorizer. A rough sketch of that approach is below.
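Roughly, per that page, you define the authorizer resource once and reference it from each function's http event; a sketch adapted to the names in this question (illustrative, not a drop-in replacement):

functions:
  get-products:
    handler: handler.getProducts
    events:
      - http:
          path: /api/products
          method: get
          authorizer:
            # Provide both type and authorizerId
            type: TOKEN
            authorizerId:
              Ref: Authorizer # or hard-code the authorizer ID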

Set environment variables from external file in serverless.yml

I'm using serverless and serverless-local for local development.
I've got an external file which holds references to environment variables that I retrieve from node.env in my app.
From what I understand, I should be able to set my environment variables like so:
dev:
  AWS_KEY: 'key'
  SECRET: 'secret'
test:
  AWS_KEY: 'test-key'
  SECRET: 'test-secret'
etc:
  ...
and have those environment variables included in my app through the following lines in my serverless.yml:
provider:
  name: aws
  runtime: nodejs4.3
  stage: ${opt:stage, self:custom.default_stage}
  deploymentBucket: serverless-deploy-packages/${opt:stage, self:custom.default_stage}
  environment: ${file(./serverless-env.yml):${opt:stage, self:custom.default_stage}}
Then on the command line, I call:
serverless offline --stage dev --port 9000
I thought this would include the correct vars in my app, but it isn't working. Is this not how it is supposed to work? Am I doing something wrong here?
From the docs:
You can set the contents of an external file into a variable:
file: ${file(./serverless-env.yml)}
Later, you can use this new variable to access the file's variables:
secret: file.dev.SECRET
Or you can use the file directly:
secret: ${file(./serverless-env.yml):dev.SECRET}
You can also now use remote async values with the serverless framework. See https://serverless.com/blog/serverless-v1.13.0/
This means you can call values from s3 or remote databases etc.
Example:
serverless.yml
service: serverless-async-vars

provider:
  name: aws
  runtime: nodejs6.10

custom:
  secret: ${file(./vars.js):fetchSecret} # JS file running async / promised
vars.js
module.exports.fetchSecret = () => {
  // async code
  return Promise.resolve('SomeSecretKey');
};
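The framework resolves the promise at packaging time, and the value can then be referenced elsewhere in serverless.yml, e.g. as ${self:custom.secret}.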
This is how you can separate your environments by stage:
serverless.yml:
custom:
  test:
    project: xxx
  prod:
    project: yyy

provider:
  ...
  stage: ${opt:stage, 'test'}
  project: ${self:custom.${opt:stage, 'test'}.project}
  environment: ${file(.env.${opt:stage, 'test'}.yml):}

package:
  exclude:
    - .env.*
.env.test.yml:
VARIABLE1: value1
VARIABLE2: value2
During deploy, pass --stage=prod, or skip it and the test project will be deployed. Then in your JS code you can access the env variables with process.env.VARIABLE1.
Set Lambda environment variables from a JSON file (using the AWS CLI):
aws lambda update-function-configuration --profile mfa --function-name test-api --cli-input-json file://dev.json
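The JSON file for --cli-input-json mirrors the parameters of update-function-configuration; a minimal sketch of what dev.json might contain (the variable names are placeholders):

{
  "FunctionName": "test-api",
  "Environment": {
    "Variables": {
      "API_KEY": "key",
      "SECRET": "secret"
    }
  }
}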
I had this correct, but I was referencing the file incorrectly.
I don't see this in the docs, but passing a file to environment does include the file's YAML contents, and the structure above does work.
