serverless framework with aws import function returns 404

I have two serverless apps that share the same custom authorizer. Suddenly the Fn::ImportValue imports in the second serverless.yml file stopped working.
The app is based on https://github.com/medwig/serverless-shared-authorizer
gateway serverless.yml:
service: authorizer-stack

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-south-1
  profile: xxx-dev

functions:
  authorizer:
    handler: handler.auth

  test:
    handler: handler.privateEndpoint
    events:
      - http:
          path: /api/test
          method: get
          authorizer:
            type: CUSTOM
            authorizerId:
              Ref: Authorizer

  test2:
    handler: handler.publicEndpoint
    events:
      - http:
          path: /api/test/public
          method: get

resources:
  Resources:
    AuthorizerPermission:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName:
          Fn::GetAtt: AuthorizerLambdaFunction.Arn
        Action: lambda:InvokeFunction
        Principal:
          Fn::Join: ["",["apigateway.", { Ref: "AWS::URLSuffix"}]]

    Authorizer:
      DependsOn:
        - ApiGatewayRestApi
      Type: AWS::ApiGateway::Authorizer
      Properties:
        Name: ${self:provider.stage}-Authorizer
        RestApiId: { "Ref" : "ApiGatewayRestApi" }
        Type: TOKEN
        IdentitySource: method.request.header.Authorization
        AuthorizerResultTtlInSeconds: 300
        AuthorizerUri:
          Fn::Join:
            - ''
            -
              - 'arn:aws:apigateway:'
              - Ref: "AWS::Region"
              - ':lambda:path/2015-03-31/functions/'
              - Fn::GetAtt: "AuthorizerLambdaFunction.Arn"
              - "/invocations"

  Outputs:
    AuthorizerId:
      Value:
        Ref: Authorizer
      Export:
        Name: authorizerId
    apiGatewayRestApiId:
      Value:
        Ref: ApiGatewayRestApi
      Export:
        Name: restApiId
    apiGatewayRestApiRootResourceId:
      Value:
        Fn::GetAtt:
          - ApiGatewayRestApi
          - RootResourceId
      Export:
        Name: rootResourceId
products serverless.yml:
service: products-list

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-south-1
  profile: xxx-dev
  apiGateway:
    restApiId:
      Fn::ImportValue: authorizer-stack-dev-restApiId
    restApiRootResourceId:
      Fn::ImportValue: authorizer-stack-dev-rootResourceId

functions:
  get-products:
    handler: handler.getProducts
    events:
      - http:
          path: /api/products
          method: get
          authorizer:
            type: CUSTOM
            authorizerId:
              Fn::ImportValue: authorizer-stack-dev-authorizerId
I am getting the following errors, seemingly at random:
An error occurred: products-list-dev - No export named authorizer-stack-dev-restApiId found.
An error occurred: products-list-dev - No export named authorizer-stack-dev-rootResourceId found.
An error occurred: products-list-dev - No export named authorizer-stack-dev-authorizerId found.
What am I missing here?
serverless -v
Framework Core: 1.74.1
Plugin: 3.6.15
SDK: 2.3.1
Components: 2.31.10

From the shared authorizers I have configured in the past, it is not necessary to go to the effort you have undergone. The documentation on the Serverless Framework site describes a much simpler setup to achieve a shared authorizer, and I will always go with the simplest solution possible: https://www.serverless.com/framework/docs/providers/aws/events/apigateway#share-authorizer
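For reference, a rough sketch of that simpler documented setup from the consuming service's side, adapted to a TOKEN/custom authorizer. The hard-coded IDs are placeholders, and the exact field names should be checked against the linked docs page rather than taken from here:

provider:
  name: aws
  apiGateway:
    restApiId: xxxxxxxxxx              # REST API ID of the gateway stack
    restApiRootResourceId: xxxxxxxxxx  # root ("/") resource ID of that API

functions:
  get-products:
    handler: handler.getProducts
    events:
      - http:
          path: /api/products
          method: get
          authorizer:
            # provide both type and authorizerId
            type: CUSTOM
            authorizerId: xxxxxxx      # ID of the shared authorizer, or an Fn::ImportValue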

Related

serverless deploy error - Resource handler returned message: "Lambda function xxxxxxxx could not be found"

Hi, can anyone help me with deploying serverless to a specific stage? I have one app with two stages, dev and prod. Deploying to dev works fine and succeeds, but the prod stage always fails with the error below:
Error:
UPDATE_FAILED: FilterOptionLambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "Lambda function xxxxxxx-api-prod-xxxxxx could not be found" (RequestToken: ee621797-de45-aa3f-118b-8f512d4a5f62, HandlerErrorCode: NotFound)
I tried commenting out all the functions and leaving a single function to test the deploy, but received another error, shown below:
Error:
UPDATE_FAILED: EnterpriseLogAccessIamRole (AWS::IAM::Role)
Unable to retrieve Arn attribute for AWS::Logs::LogGroup, with error message Resource of type 'AWS::Logs::LogGroup' with identifier '{"/properties/LogGroupName":"/aws/lambda/xxxxx-api-prod-api"}' was not found.
Here is my serverless.yml:
org: xxxxxx
app: comeby-api
service: comeby-scheduler-api
frameworkVersion: "3"

custom:
  serverless-offline:
    noPrependStageInUrl: true
  myEnvironment:
    MESSAGE:
      prod: "This is production environment"
      staging: "This is staging environment"
      dev: "This is development environment"

useDotenv: true

provider:
  name: aws
  runtime: nodejs14.x
  region: ap-southeast-1
  stage: prod

functions:
  api:
    handler: handler.handler
    events:
      - httpApi: "*"
  # Alikhsan
  SyncAlikhsanSB2:
  SyncAlikhsanAMT:
  SyncAlikhsanASG:
  SyncAlikhsanIOI:
  SyncAlikhsanJSB:
  SyncAlikhsanSPY:
  # Sync Product
  Shopify:
  SyncSenheng:
  SyncXilnix:
  Puma:
  # Anything
  FilterOption:
  AriadneMaps:
    handler: scheduler/update/AriadneMaps.handler
    description: "Update Ariadne Maps (to view report of total visitor of specific store) in Database"
    memorySize: 512
    timeout: 900
    events:
      - schedule:
          rate: cron(00 22 * * ? *)
          enabled: true
      - http:
          path: /cron/ariadne
          method: get
  SendEmailUpdateProduct:
  ReportPurchasing:
  UpdateProductPricePuma:
  UpdateFootFallCam:

plugins:
  # - serverless-dotenv-plugin
  - serverless-offline
  - serverless-offline-scheduler
I am guessing from those UPDATE_FAILED errors that you are using the same serverless file for both the dev and prod deployments. Based on this assumption, you may have to provide separate service names for the two deployments. If you have already deployed to the dev environment with the service name comeby-scheduler-api, the next deployment of the prod stage with the same service name will try to override the previous deployment.
In my case, I tackled this using two separate serverless configuration files (one for dev and the other for prod). For the dev deployment, my config file serverless-dev.yml looks like the following:
service: service-dev

provider:
  name: aws
  role: arn:aws:iam::<aws-account-id>:role/<my-lambda-role-name>
  region: <region>
  runtime: python3.8
  environment:
    DB_HOST: <host>
    DB_PASSWORD: <pass>
    DB_PORT: <port>
    DB_DATABASE: <db_name>
    DB_USER: <db_user>

plugins:
  - serverless-python-requirements
  - serverless-secrets-plugin
  - serverless-api-compression

package:
  patterns:
    - '!venv/**'
    - '!__pycache__/**'
    - '!node_modules/**'
    - '!test/**'

functions:
  Lambda1:
    handler: lambda_file_name.handler_function_name
    memorySize: 512
    timeout: 900
    events:
      - s3:
          bucket: <bucket_name_for_this_lambda_trigger>
          event: s3:ObjectCreated:*
          rules:
            - prefix: <filter_trigger_file_prefix>
            - suffix: <filter_trigger_file_suffix>
          existing: <true if an existing s3 bucket, false otherwise>
Whereas for prod, the serverless-prod.yml file is:
service: service-prod

provider:
  name: aws
  role: arn:aws:iam::<aws-account-id>:role/<my-lambda-role-name>
  region: <region>
  runtime: python3.8

... rest is similar
My deployment commands for these separate stages are:
sls deploy -s dev -c serverless-dev.yml
sls deploy -s prod -c serverless-prod.yml

Implement Envoy OAuth2 filter with disabled routes

I deployed Envoy as a sidecar to manage OAuth2. Everything works fine for all the resources, and the client is redirected to the OIDC provider in order to authenticate.
Here is a part of my conf (managed in a Helm chart):
- name: envoy.filters.network.http_connection_manager
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
    access_log:
      - name: envoy.access_loggers.file
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
          path: /dev/stdout
    codec_type: auto
    stat_prefix: ingress_http
    route_config:
      name: local_route
      virtual_hosts:
        - name: my-service
          domains:
            - "*"
          routes:
            - match:
                prefix: "/"
              route:
                cluster: my-service
    http_filters:
      - name: envoy.filters.http.oauth2
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
          config:
            token_endpoint:
              cluster: {{ .Values.back.envoy.oidc.name }}
              uri: https://{{ .Values.back.envoy.oidc.address }}/oidc/token
              timeout: 5s
            authorization_endpoint: https://{{ .Values.back.envoy.oidc.address }}/oidc/authorize
            redirect_uri: "%REQ(x-forwarded-proto)%://%REQ(:authority)%/oidc/callback"
            redirect_path_matcher:
              path:
                exact: /oidc/callback
            signout_path:
              path:
                exact: /oidc/signout
            credentials:
              client_id: {{ required "back.envoy.oidc.client_id is required" .Values.back.envoy.oidc.client_id }}
              token_secret:
                name: token
                sds_config:
                  resource_api_version: V3
                  path: "/etc/envoy/token-secret.yaml"
              hmac_secret:
                name: hmac
                sds_config:
                  resource_api_version: V3
                  path: "/etc/envoy/hmac-secret.yaml"
            forward_bearer_token: true
            # (Optional): defaults to 'user' scope if not provided
            auth_scopes:
              - user
              - openid
              - email
              - homelan_devices_read
              - homelan_topology_read
              - homelan_devices_write
            # (Optional): set resource parameter for Authorization request
            #resources:
            #- oauth2-resource
            #- http://example.com
      - name: envoy.filters.http.router
        typed_config: {}
Now I'd like some of the exposed resources to not require authentication.
I see in the OAuth2 filter doc: "Leave this empty to disable OAuth2 for a specific route, using per filter config." (see https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/http/oauth2/v3/oauth.proto#envoy-v3-api-msg-extensions-filters-http-oauth2-v3-oauth2config)
This phrase makes me think that it may be possible.
I tried to manage it by changing my conf through virtual_hosts this way:
virtual_hosts:
  - name: no-oauth
    domains: ["*"]
    typed_per_filter_config:
      envoy.filters.http.oauth2:
        "@type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
    routes:
      - match:
          prefix: "/api/v1/myResource1"
        route:
          cluster: my-service
  - name: my-service
    domains: ["*"]
    routes:
      - match:
          prefix: "/api/v1/myResource2"
        route:
          cluster: my-service
I get the error:
[critical][main] [source/server/server.cc:117] error initializing configuration '/etc/envoy/envoy.yaml': The filter envoy.filters.http.oauth2 doesn't support virtual host-specific configurations
Any ideas? Has anyone implemented the Envoy OAuth2 filter with some routes disabled?
After looking at my Envoy logs, I realized that the path is known as the ":path" header.
The pass_through_matcher matches on headers.
So only adding:
pass_through_matcher:
  - name: ":path"
    prefix_match: "/healthz"
  - name: ":path"
    prefix_match: "/api/v1/myResource1"
to my conf, without the Lua filter (see my previous answer), makes it work.
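For context, pass_through_matcher sits directly inside the OAuth2 filter's config block, next to fields such as forward_bearer_token; roughly (other fields from the configuration above omitted):

http_filters:
  - name: envoy.filters.http.oauth2
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
      config:
        # ... token_endpoint, authorization_endpoint, credentials, etc. as above ...
        forward_bearer_token: true
        pass_through_matcher:
          - name: ":path"
            prefix_match: "/healthz"
          - name: ":path"
            prefix_match: "/api/v1/myResource1"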
For information, I found a workaround:
I added a Lua filter before my OAuth2 one:
- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    inline_code: |
      function envoy_on_request(request_handle)
        request_handle:headers():add("X-Path", request_handle:headers():get(":path"))
      end
in order to copy the path into a header.
Then I can use this element of the OAuth2 config:
pass_through_matcher (repeated config.route.v3.HeaderMatcher): Any request that matches any of the provided matchers will be passed through without OAuth validation.
So I added this to my OAuth2 filter:
pass_through_matcher:
  - name: "X-path"
    prefix_match: "/healthz"
  - name: "X-path"
    prefix_match: "/api/v1/myResource1"
Then my /api/v1/myResource1 requests (and /healthz as well) don't need authentication (they bypass the OAuth2 filter), while my /api/v1/myResource2 requests still do.
I still have one unanswered question:
What does the OAuth filter doc mean by "Leave this empty to disable OAuth2 for a specific route, using per filter config."?

How to access AWS SAM Mapping in swagger.yaml / openapi.yaml files

I have declared a mapping named StageMap in my sam.yaml file:
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31

Parameters:
  ProjectName:
    Type: String
  SubProjectName:
    Type: String
  Stage:
    Type: String
    AllowedValues:
      - dev
      - test
      - preprod
      - prod

...

Mappings:
  StageMap:
    dev:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-dev-AuthorizerFunction-1RR2YJ5STBUB6/invocations
    test:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-test-AuthorizerFunction-UQ1EQ2SP5W6G/invocations
    preprod:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-preprod-AuthorizerFunction-UQ1W6EQ2SP5G/invocations
    prod:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-prod-AuthorizerFunction-5STBUB61RR2YJ/invocations
I would like to use this mapping in my swagger.yaml. I have tried the following:
...
x-amazon-apigateway-authorizer:
  type: request
  authorizerUri:
    Fn::FindInMap:
      - 'StageMap'
      - Ref: 'Stage'
      - 'AuthorizerArn'
I also tried this solution, but I got the error: Every Mappings attribute must be a String or a List.
Can you please let me know how to access one of the values in the mapping in the swagger.yaml? Thanks!
I found the following in the AWS SAM docs:
You cannot include parameters, pseudo parameters, or intrinsic functions in the Mappings section.
So I changed:
AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-dev-AuthorizerFunction-1RR2YJ5STBUB6/invocations
For:
AuthorizerFunctionName: auth-bk-main-dev-AuthorizerFunction-1RR2YJ5STBUB6
And in the swagger.yaml I used the following:
x-amazon-apigateway-authorizer:
  type: request
  authorizerUri:
    Fn::Sub:
      - arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${AuthorizerFunctionName}/invocations
      - AuthorizerFunctionName:
          Fn::FindInMap:
            - 'StageMap'
            - Ref: 'Stage'
            - 'AuthorizerFunctionName'
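As a side note, intrinsic functions inside swagger.yaml are only resolved by CloudFormation when the file is pulled into the SAM template through an AWS::Include transform. A minimal sketch, assuming an AWS::Serverless::Api resource (the logical name Api and the relative path are illustrative, not taken from the original template):

Resources:
  Api:
    Type: AWS::Serverless::Api
    Properties:
      StageName: !Ref Stage
      DefinitionBody:
        Fn::Transform:
          Name: AWS::Include
          Parameters:
            Location: ./swagger.yaml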

Conditional resource in serverless

I would like to add an AWS resource conditionally, based on the presence of an env var. I tried serverless-cloudformation-parameter-setter, but I get a generic error on deployment and I don't see what I need to do to fix it.
I'm trying to deploy a simple Lambda + SQS stack and, if an env var is defined, also subscribe the queue to the topic denoted by that env var. If the var is not defined, I want to skip that part entirely and deploy just the Lambda and the queue.
This is what I tried:
plugins:
  - serverless-cloudformation-parameter-setter

provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-1

functions:
  update:
    handler: index.update
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - Queue
              - Arn

custom:
  cf-parameters:
    SourceTopicArn: "${env:UPDATE_SNS_ARN}"

resources:
  Parameters:
    SourceTopicArn:
      Type: string
  Resources:
    Queue:
      Type: "AWS::SQS::Queue"
    Subscription:
      Type: "AWS::SNS::Subscription"
      Condition: SourceTopicArn
      Properties:
        TopicArn:
          Ref: SourceTopicArn
        Endpoint:
          Ref: Queue
The error I receive is: The CloudFormation template is invalid: Template format error: Unrecognized parameter type: string
If I remove all the parameter stuff, it works fine.
The Type has to be String, not string. See the supported parameter data types section in the docs.
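A corrected fragment might look like the sketch below. Note that Condition: SourceTopicArn in the original template also needs a matching entry under Conditions, which is not shown in the post; the empty-string check (and the Protocol/Endpoint details) are assumptions about the intended "only when the env var is set" behaviour, not part of the accepted fix:

resources:
  Parameters:
    SourceTopicArn:
      Type: String
      Default: ""
  Conditions:
    # assumed condition: create the subscription only when a topic ARN was supplied
    HasSourceTopicArn:
      Fn::Not:
        - Fn::Equals:
            - Ref: SourceTopicArn
            - ""
  Resources:
    Queue:
      Type: "AWS::SQS::Queue"
    Subscription:
      Type: "AWS::SNS::Subscription"
      Condition: HasSourceTopicArn
      Properties:
        Protocol: sqs
        TopicArn:
          Ref: SourceTopicArn
        Endpoint:
          Fn::GetAtt: [Queue, Arn]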

How to implement Log4j2 custom layout yaml configuration

I am using a custom layout with the Log4j2 framework. How do I specify a custom layout in the Log4j2 YAML configuration file?
The sample I tried is given below. It does not work at the moment, failing with: invalid element 'layout'.
Configutation:
  status: warn
  packages: uk.co.logging.layout
  Properties:
    Property:
      name: logging.dir
      value: ./default_log_dir/
    Property:
      name: service.name
      value: default
  Appenders:
    Console:
      name: CONSOLE
      layout: ConnectJsonLayout
      policies:
        TimeBasedTriggeringPolicy:
          interval: 1
          modulate: true
        SizeBasedTriggeringPolicy:
          size: 250MB
    RollingFile:
      - name: APPLICATION
        fileName: ${logging.dir}/${service.name}.log
        filePattern: ${logging.dir}/${date:yyyy-MM}/${service.name}-%d{yyyy-MM-dd}-%i.log.gz
        layout: ConnectJsonLayout
        policies:
          TimeBasedTriggeringPolicy:
            interval: 1
            modulate: true
          SizeBasedTriggeringPolicy:
            size: 250MB
  Loggers:
    Root:
      level: error
      AppenderRef:
        - ref: CONSOLE
        - ref: APPLICATION
    Logger:
      - name: uk.co.xxx
        additivity: false
        level: debug
        AppenderRef:
          - ref: CONSOLE
          - ref: APPLICATION
Kind Regards,
Kiran
Reference the custom layout by its plugin name as an element of the appender, rather than through a layout attribute:
Configutation:
  status: warn
  packages: uk.co.logging.layout
  Properties:
    Property:
      name: logging.dir
      value: ./default_log_dir/
    Property:
      name: service.name
      value: default
  Appenders:
    Console:
      name: CONSOLE
      ConnectJsonLayout: {}
  Loggers:
    Root:
      level: error
      AppenderRef:
        - ref: CONSOLE
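Presumably the same pattern applies to the RollingFile appender from the question, i.e. something like this sketch (not part of the original answer):

  Appenders:
    Console:
      name: CONSOLE
      ConnectJsonLayout: {}
    RollingFile:
      - name: APPLICATION
        fileName: ${logging.dir}/${service.name}.log
        filePattern: ${logging.dir}/${date:yyyy-MM}/${service.name}-%d{yyyy-MM-dd}-%i.log.gz
        # custom layout referenced by its plugin name, as above
        ConnectJsonLayout: {}
        Policies:
          TimeBasedTriggeringPolicy:
            interval: 1
            modulate: true
          SizeBasedTriggeringPolicy:
            size: 250MB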
