Can't send Email by AWS SES with serverless

I can't send an email with AWS SES from Serverless, but it works on localhost.
This is my serverless.yml:
postSend:
  handler: dist/src/handler/emailHandler.emailPost
  events:
    - http:
        path: emailPost
        method: post
        cors: true
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - ses:SendEmail
      Resource: "*"

Related

serverless deploy error - Resource handler returned message: "Lambda function xxxxxxxx could not be found"

Hi, can anyone help me deploy Serverless with a specific stage? I have one app with two stages, dev and prod. Deploying to dev works fine and succeeds, but deploying the prod stage always fails with the error below:
Error:
UPDATE_FAILED: FilterOptionLambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "Lambda function xxxxxxx-api-prod-xxxxxx could not be found" (RequestToken: ee621797-de45-aa3f-118b-8f512d4a5f62, HandlerErrorCode: NotFound)
I tried commenting out all the functions and leaving just one to test the deploy, but then I received a different error:
Error:
UPDATE_FAILED: EnterpriseLogAccessIamRole (AWS::IAM::Role)
Unable to retrieve Arn attribute for AWS::Logs::LogGroup, with error message Resource of type 'AWS::Logs::LogGroup' with identifier '{"/properties/LogGroupName":"/aws/lambda/xxxxx-api-prod-api"}' was not found.
Here is my serverless.yml:
org: xxxxxx
app: comeby-api
service: comeby-scheduler-api
frameworkVersion: "3"

custom:
  serverless-offline:
    noPrependStageInUrl: true
  myEnvironment:
    MESSAGE:
      prod: "This is production environment"
      staging: "This is staging environment"
      dev: "This is development environment"

useDotenv: true

provider:
  name: aws
  runtime: nodejs14.x
  region: ap-southeast-1
  stage: prod

functions:
  api:
    handler: handler.handler
    events:
      - httpApi: "*"
  # Alikhsan
  SyncAlikhsanSB2:
  SyncAlikhsanAMT:
  SyncAlikhsanASG:
  SyncAlikhsanIOI:
  SyncAlikhsanJSB:
  SyncAlikhsanSPY:
  # Sync Product
  Shopify:
  SyncSenheng:
  SyncXilnix:
  Puma:
  # Anything
  FilterOption:
  AriadneMaps:
    handler: scheduler/update/AriadneMaps.handler
    description: "Update Ariadne Maps (to view report of total visitor of specific store) in Database"
    memorySize: 512
    timeout: 900
    events:
      - schedule:
          rate: cron(00 22 * * ? *)
          enabled: true
      - http:
          path: /cron/ariadne
          method: get
  SendEmailUpdateProduct:
  ReportPurchasing:
  UpdateProductPricePuma:
  UpdateFootFallCam:

plugins:
  # - serverless-dotenv-plugin
  - serverless-offline
  - serverless-offline-scheduler
I am guessing from those UPDATE_FAILED errors that you are using the same serverless file for both the dev and prod deployments. Based on this assumption, you may have to provide separate service names for your two deployments. If you have already deployed to the dev environment with the service name comeby-scheduler-api, the next deployment to the prod stage with the same service name will try to override the previous deployment.
In my case, I tackled this using two separate serverless configuration files (one for dev and the other for prod). For the dev deployment, my config file serverless-dev.yml looks like the following.
service: service-dev

provider:
  name: aws
  role: arn:aws:iam::<aws-account-id>:role/<my-lambda-role-name>
  region: <region>
  runtime: python3.8
  environment:
    DB_HOST: <host>
    DB_PASSWORD: <pass>
    DB_PORT: <port>
    DB_DATABASE: <db_name>
    DB_USER: <db_user>

plugins:
  - serverless-python-requirements
  - serverless-secrets-plugin
  - serverless-api-compression

package:
  patterns:
    - '!venv/**'
    - '!__pycache__/**'
    - '!node_modules/**'
    - '!test/**'

functions:
  Lambda1:
    handler: lambda_file_name.handler_function_name
    memorySize: 512
    timeout: 900
    events:
      - s3:
          bucket: <bucket_name_for_this_lambda_trigger>
          event: s3:ObjectCreated:*
          rules:
            - prefix: <filter_trigger_file_prefix>
            - suffix: <filter_trigger_file_suffix>
          existing: <true if an existing s3 bucket, false otherwise>
Whereas for prod, the serverless-prod.yml file is:
service: service-prod

provider:
  name: aws
  role: arn:aws:iam::<aws-account-id>:role/<my-lambda-role-name>
  region: <region>
  runtime: python3.8

... rest is similar
My deployment commands for these separate stages are:
sls deploy -s dev -c serverless-dev.yml
sls deploy -s prod -c serverless-prod.yml

oauth2: cannot fetch token: 400 Bad Request Response: in Prometheus targets, while federating metrics

I am trying to use oauth2 in the kube-prom-stack for authentication when federating metrics from an HTTPS node.
Below is my configuration:
additionalScrapeConfigs:
  - job_name: 'test-federation'
    scrape_interval: 20s
    scrape_timeout: 20s
    scheme: https
    oauth2:
      client_id: 'auth-server'
      client_secret: 'XXXXXXXXXXX'
      token_url: 'http://XXX.XXX.XX.XX:80/auth/token '
      endpoint_params:
        grant_type: 'client_credentials'
    metrics_path: /federate
    honor_labels: true
    tls_config:
      insecure_skip_verify: true
    metric_relabel_configs:
      - source_labels: [id]
        regex: '^static-agent$'
        action: drop
    params:
      match[]:
        - '{job="xyz"}'
    static_configs:
      - targets: ['XXX.XX.XX.XX:9090']
But when I check my Prometheus targets, I see the error below:
oauth2: cannot fetch token: 400 Bad Request Response: {"code":"400","description":"Invalid credentials"}
Please help.
oauth2:
  client_id: 'auth-server'
  client_secret: 'XXXXXXXXXXX'
  token_url: 'http://XXX.XXX.XX.XX:80/auth/token '
  endpoint_params:
    grant_type: 'client_credentials'
I tried the above for OAuth2 authentication, but I see the error below in the Prometheus targets while scraping metrics from the other node.
oauth2: cannot fetch token: 400 Bad Request Response: {"code":"400","description":"Invalid credentials"}
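One detail that may be worth double-checking, since the 400 "Invalid credentials" response comes from the authorization server itself: in the config above, token_url contains a trailing space before the closing quote, and the client_id/client_secret pair has to match what the auth server has registered for the client_credentials grant. A cleaned-up sketch of the oauth2 block (values are placeholders):

oauth2:
  client_id: 'auth-server'
  client_secret: 'XXXXXXXXXXX'                     # must match the secret registered for this client_id
  token_url: 'http://XXX.XXX.XX.XX:80/auth/token'  # no trailing whitespace
  endpoint_params:
    grant_type: 'client_credentials'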

Implement Envoy OAuth2 filter with disabled routes

I deployed Envoy as a sidecar to manage OAuth2. Everything works fine for all the resources, and the client is redirected to the OIDC provider in order to authenticate.
Here is part of my conf (managed in a Helm chart):
- name: envoy.filters.network.http_connection_manager
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
    access_log:
      - name: envoy.access_loggers.file
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
          path: /dev/stdout
    codec_type: auto
    stat_prefix: ingress_http
    route_config:
      name: local_route
      virtual_hosts:
        - name: my-service
          domains:
            - "*"
          routes:
            - match:
                prefix: "/"
              route:
                cluster: my-service
    http_filters:
      - name: envoy.filters.http.oauth2
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
          config:
            token_endpoint:
              cluster: {{ .Values.back.envoy.oidc.name }}
              uri: https://{{ .Values.back.envoy.oidc.address }}/oidc/token
              timeout: 5s
            authorization_endpoint: https://{{ .Values.back.envoy.oidc.address }}/oidc/authorize
            redirect_uri: "%REQ(x-forwarded-proto)%://%REQ(:authority)%/oidc/callback"
            redirect_path_matcher:
              path:
                exact: /oidc/callback
            signout_path:
              path:
                exact: /oidc/signout
            credentials:
              client_id: {{ required "back.envoy.oidc.client_id is required" .Values.back.envoy.oidc.client_id }}
              token_secret:
                name: token
                sds_config:
                  resource_api_version: V3
                  path: "/etc/envoy/token-secret.yaml"
              hmac_secret:
                name: hmac
                sds_config:
                  resource_api_version: V3
                  path: "/etc/envoy/hmac-secret.yaml"
            forward_bearer_token: true
            # (Optional): defaults to 'user' scope if not provided
            auth_scopes:
              - user
              - openid
              - email
              - homelan_devices_read
              - homelan_topology_read
              - homelan_devices_write
            # (Optional): set resource parameter for Authorization request
            #resources:
            #- oauth2-resource
            #- http://example.com
      - name: envoy.filters.http.router
        typed_config: {}
Now I'd like some of the exposed resources to not require authentication.
I see in the OAuth2 filter doc: "Leave this empty to disable OAuth2 for a specific route, using per filter config." (see https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/http/oauth2/v3/oauth.proto#envoy-v3-api-msg-extensions-filters-http-oauth2-v3-oauth2config)
This sentence makes me think it may be possible.
I tried to handle it by changing my conf through virtual_hosts this way:
virtual_hosts:
  - name: no-oauth
    domains: ["*"]
    typed_per_filter_config:
      envoy.filters.http.oauth2:
        "@type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
    routes:
      - match:
          prefix: "/api/v1/myResource1"
        route:
          cluster: my-service
  - name: my-service
    domains: ["*"]
    routes:
      - match:
          prefix: "/api/v1/myResource2"
        route:
          cluster: my-service
I get the error: [critical][main] [source/server/server.cc:117] error initializing configuration '/etc/envoy/envoy.yaml': The filter envoy.filters.http.oauth2 doesn't support virtual host-specific configurations
Any idea? Did someone implement the Envoy OAuth2 filter with disabled routes?
After looking at my Envoy logs, I realized that the path is exposed as the ":path" header.
The pass_through_matcher matches on headers.
So just adding:
pass_through_matcher:
  - name: ":path"
    prefix_match: "/healthz"
  - name: ":path"
    prefix_match: "/api/v1/myResource1"
to my conf, without the Lua filter (see my previous answer), makes it work.
For information, I had first found a workaround:
I added a Lua filter before my OAuth2 one:
- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    inline_code: |
      function envoy_on_request(request_handle)
        request_handle:headers():add("X-Path", request_handle:headers():get(":path"))
      end
in order to copy the path into a header.
Then I can use this element of the OAuth2 conf:
pass_through_matcher
(repeated config.route.v3.HeaderMatcher) Any request that matches any of the provided matchers will be passed through without OAuth validation.
So I added this to my OAuth2 filter:
pass_through_matcher:
  - name: "X-path"
    prefix_match: "/healthz"
  - name: "X-path"
    prefix_match: "/api/v1/myResource1"
Then my /api/v1/myResource1 requests (and /healthz as well) don't need authentication (they are excluded from OAuth2), while my /api/v1/myResource2 requests still do.
I still have an unanswered question:
What does the OAuth filter doc mean by "Leave this empty to disable OAuth2 for a specific route, using per filter config."?

AWS CDK Create API Gateway using OpenAPI Spec with Private Endpoint

I am trying to work out the correct way to attach a VPC Endpoint to an API gateway using an OpenAPI Spec.
Here is what I have so far:
CDK:
var apiDefinition = ApiDefinition.FromAsset("./openapi_replaced.yaml");
var api = new SpecRestApi(this, "api", new SpecRestApiProps {
    ApiDefinition = apiDefinition,
});
OpenAPI Spec:
servers:
  - url: ""
    x-amazon-apigateway-endpoint-configuration:
      vpcEndpointIds:
        - vpce-1111aaaa00001111
Will this attach the VPC Endpoint to my API Gateway as a Private Endpoint?
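As far as I know, the vpcEndpointIds extension alone doesn't make the API private: the endpoint type also has to be PRIVATE, and a private REST API won't accept traffic until it has a resource policy that allows invocation through the VPC endpoint. A sketch of how such a policy can live in the same spec via the documented x-amazon-apigateway-policy extension (reusing the endpoint ID from the question; adjust the Resource to your API):

x-amazon-apigateway-policy:
  Version: "2012-10-17"
  Statement:
    - Effect: Allow
      Principal: "*"
      Action: execute-api:Invoke
      Resource: "execute-api:/*"
      Condition:
        StringEquals:
          aws:SourceVpce: vpce-1111aaaa00001111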

serverless framework with aws import function returns 404

I have two serverless apps which share the same custom authorizer. Suddenly, the imports in the second serverless.yml file stopped working.
The app is based on https://github.com/medwig/serverless-shared-authorizer
gateway serverless.yml:
service: authorizer-stack

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-south-1
  profile: xxx-dev

functions:
  authorizer:
    handler: handler.auth
  test:
    handler: handler.privateEndpoint
    events:
      - http:
          path: /api/test
          method: get
          authorizer:
            type: CUSTOM
            authorizerId:
              Ref: Authorizer
  test2:
    handler: handler.publicEndpoint
    events:
      - http:
          path: /api/test/public
          method: get

resources:
  Resources:
    AuthorizerPermission:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName:
          Fn::GetAtt: AuthorizerLambdaFunction.Arn
        Action: lambda:InvokeFunction
        Principal:
          Fn::Join: ["", ["apigateway.", { Ref: "AWS::URLSuffix" }]]
    Authorizer:
      DependsOn:
        - ApiGatewayRestApi
      Type: AWS::ApiGateway::Authorizer
      Properties:
        Name: ${self:provider.stage}-Authorizer
        RestApiId: { "Ref": "ApiGatewayRestApi" }
        Type: TOKEN
        IdentitySource: method.request.header.Authorization
        AuthorizerResultTtlInSeconds: 300
        AuthorizerUri:
          Fn::Join:
            - ''
            - - 'arn:aws:apigateway:'
              - Ref: "AWS::Region"
              - ':lambda:path/2015-03-31/functions/'
              - Fn::GetAtt: "AuthorizerLambdaFunction.Arn"
              - "/invocations"
  Outputs:
    AuthorizerId:
      Value:
        Ref: Authorizer
      Export:
        Name: authorizerId
    apiGatewayRestApiId:
      Value:
        Ref: ApiGatewayRestApi
      Export:
        Name: restApiId
    apiGatewayRestApiRootResourceId:
      Value:
        Fn::GetAtt:
          - ApiGatewayRestApi
          - RootResourceId
      Export:
        Name: rootResourceId
products serverless.yml:
service: products-list

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-south-1
  profile: xxx-dev
  apiGateway:
    restApiId:
      Fn::ImportValue: authorizer-stack-dev-restApiId
    restApiRootResourceId:
      Fn::ImportValue: authorizer-stack-dev-rootResourceId

functions:
  get-products:
    handler: handler.getProducts
    events:
      - http:
          path: /api/products
          method: get
          authorizer:
            type: CUSTOM
            authorizerId:
              Fn::ImportValue: authorizer-stack-dev-authorizerId
I am getting the following errors at random:
An error occurred: products-list-dev - No export named authorizer-stack-dev-restApiId found.
An error occurred: products-list-dev - No export named authorizer-stack-dev-rootResourceId found.
An error occurred: products-list-dev - No export named authorizer-stack-dev-authorizerId found.
What am I missing here?
serverless -v
Framework Core: 1.74.1
Plugin: 3.6.15
SDK: 2.3.1
Components: 2.31.10
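One thing that stands out in the posted configs (though it may not be the whole story): the Outputs in the gateway stack export the plain names authorizerId, restApiId and rootResourceId, while the products stack imports authorizer-stack-dev-restApiId, authorizer-stack-dev-rootResourceId and authorizer-stack-dev-authorizerId. Fn::ImportValue matches export names exactly, so those imports only resolve if the export names carry that prefix, e.g. for the authorizer:

Outputs:
  AuthorizerId:
    Value:
      Ref: Authorizer
    Export:
      Name: authorizer-stack-dev-authorizerId   # or ${self:service}-${self:provider.stage}-authorizerId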
From the shared authorizers I have configured in the past, it is not necessary to go to the effort you have undergone. The documentation on the Serverless Framework site has a much simpler setup to achieve a shared authorizer, and I will always go with the simplest solution possible: https://www.serverless.com/framework/docs/providers/aws/events/apigateway#share-authorizer
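For illustration, a minimal sketch of a simpler cross-service setup (this is my reading of the linked docs, not a verbatim snippet from them): the second service can point each endpoint's authorizer straight at the shared authorizer Lambda, which sidesteps the CloudFormation exports entirely. The ARN below is a placeholder:

functions:
  get-products:
    handler: handler.getProducts
    events:
      - http:
          path: /api/products
          method: get
          authorizer:
            # placeholder ARN of the authorizer Lambda deployed by the gateway stack
            arn: arn:aws:lambda:ap-south-1:123456789012:function:authorizer-stack-dev-authorizer
            resultTtlInSeconds: 300
            identitySource: method.request.header.Authorization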
