Travis-CI Deployment issue for AWS Lambda and Java

When deploying code to Lambda for a Java project, there was initially an issue with index. being added to my handler path. Using https://stackoverflow.com/a/49620548/2612651 I was able to get past that problem.
Now the issue is that for Java the handler is built as module_name.handler_name, but instead of . the separator should be ::.
Side note: is there a concise list of all the .travis.yml deploy options anywhere? I cannot seem to find one.
Here is the deploy section of my .travis.yml file; I believe it puts the two artifacts where I want them.
deploy:
- provider: lambda
  access_key_id: $AWS_KEY
  secret_access_key: $AWS_SECRET
  function_name: "grant-jwt"
  region: "us-east-2"
  role: "<arn>"
  runtime: "java8"
  module_name: "com.dapper.cloud.function.GrantJwt"
  handler_name: "handleRequest"
  file: "./grant-jwt/target/grant-jwt-0.0.1-SNAPSHOT.jar"
- provider: lambda
  access_key_id: $AWS_KEY
  secret_access_key: $AWS_SECRET
  function_name: "verify-jwt"
  region: "us-east-2"
  role: "<arn>"
  runtime: "java8"
  module_name: "com.dapper.cloud.function.VerifyJwt"
  handler_name: "handleRequest"
  file: "./verify-jwt/target/verify-jwt-0.0.1-SNAPSHOT.jar"

So I actually ran into this issue too, and when I found the answer I felt silly. The correct way to do this is to have module_name be the package path and handler_name be Class::handlerFunc. Also, file is not supported; looking at the documentation, the parameter you want is zip.
So for your case it would be:
deploy:
- provider: lambda
  access_key_id: $AWS_KEY
  secret_access_key: $AWS_SECRET
  function_name: "grant-jwt"
  region: "us-east-2"
  role: "<arn>"
  runtime: "java8"
  module_name: "com.dapper.cloud.function"
  handler_name: "GrantJwt::handleRequest"
  zip: "./grant-jwt/target/grant-jwt-0.0.1-SNAPSHOT.jar"
- provider: lambda
  access_key_id: $AWS_KEY
  secret_access_key: $AWS_SECRET
  function_name: "verify-jwt"
  region: "us-east-2"
  role: "<arn>"
  runtime: "java8"
  module_name: "com.dapper.cloud.function"
  handler_name: "VerifyJwt::handleRequest"
  zip: "./verify-jwt/target/verify-jwt-0.0.1-SNAPSHOT.jar"

Related

serverless deploy error - Resource handler returned message: "Lambda function xxxxxxxx could not be found"

Hi, can anyone help me with deploying serverless to a specific stage? I have one app with two stages, dev and prod. Deploying to dev works fine and deploys successfully, but with the prod stage I always get the error below:
Error:
UPDATE_FAILED: FilterOptionLambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "Lambda function xxxxxxx-api-prod-xxxxxx could not be found" (RequestToken: ee621797-de45-aa3f-118b-8f512d4a5f62, HandlerErrorCode: NotFound)
I tried commenting out all the functions and leaving one function to test the deploy, but received another error, shown below:
Error:
UPDATE_FAILED: EnterpriseLogAccessIamRole (AWS::IAM::Role)
Unable to retrieve Arn attribute for AWS::Logs::LogGroup, with error message Resource of type 'AWS::Logs::LogGroup' with identifier '{"/properties/LogGroupName":"/aws/lambda/xxxxx-api-prod-api"}' was not found.
Here is my serverless.yml:
org: xxxxxx
app: comeby-api
service: comeby-scheduler-api
frameworkVersion: "3"
custom:
  serverless-offline:
    noPrependStageInUrl: true
  myEnvironment:
    MESSAGE:
      prod: "This is production environment"
      staging: "This is staging environment"
      dev: "This is development environment"
useDotenv: true
provider:
  name: aws
  runtime: nodejs14.x
  region: ap-southeast-1
  stage: prod
functions:
  api:
    handler: handler.handler
    events:
      - httpApi: "*"
  # Alikhsan
  SyncAlikhsanSB2:
  SyncAlikhsanAMT:
  SyncAlikhsanASG:
  SyncAlikhsanIOI:
  SyncAlikhsanJSB:
  SyncAlikhsanSPY:
  # Sync Product
  Shopify:
  SyncSenheng:
  SyncXilnix:
  Puma:
  # Anything
  FilterOption:
  AriadneMaps:
    handler: scheduler/update/AriadneMaps.handler
    description: "Update Ariadne Maps (to view report of total visitor of specific store) in Database"
    memorySize: 512
    timeout: 900
    events:
      - schedule:
          rate: cron(00 22 * * ? *)
          enabled: true
      - http:
          path: /cron/ariadne
          method: get
  SendEmailUpdateProduct:
  ReportPurchasing:
  UpdateProductPricePuma:
  UpdateFootFallCam:
plugins:
  # - serverless-dotenv-plugin
  - serverless-offline
  - serverless-offline-scheduler
I am guessing from those UPDATE_FAILED errors that you are using the same serverless file for both the dev and prod deployments. Based on this assumption, you may have to provide separate service names for your two deployments. If you have already deployed to the dev environment with the service name comeby-scheduler-api, the next deployment for the prod stage with the same service name will try to override the previous deployment.
In my case, I tackled this using two separate serverless configuration files (one for dev and the other for prod). For the dev deployment, my config file serverless-dev.yml looks like the following.
service: service-dev
provider:
  name: aws
  role: arn:aws:iam::<aws-account-id>:role/<my-lambda-role-name>
  region: <region>
  runtime: python3.8
  environment:
    DB_HOST: <host>
    DB_PASSWORD: <pass>
    DB_PORT: <port>
    DB_DATABASE: <db_name>
    DB_USER: <db_user>
plugins:
  - serverless-python-requirements
  - serverless-secrets-plugin
  - serverless-api-compression
package:
  patterns:
    - '!venv/**'
    - '!__pycache__/**'
    - '!node_modules/**'
    - '!test/**'
functions:
  Lambda1:
    handler: lambda_file_name.handler_function_name
    memorySize: 512
    timeout: 900
    events:
      - s3:
          bucket: <bucket_name_for_this_lambda_trigger>
          event: s3:ObjectCreated:*
          rules:
            - prefix: <filter_trigger_file_prefix>
            - suffix: <filter_trigger_file_suffix>
          existing: <true if an existing s3 bucket, false otherwise>
Whereas for prod, the serverless-prod.yml file is:
service: service-prod
provider:
  name: aws
  role: arn:aws:iam::<aws-account-id>:role/<my-lambda-role-name>
  region: <region>
  runtime: python3.8
  ... rest is similar
My deployment commands for these separate stages are:
sls deploy -s dev -c serverless-dev.yml
sls deploy -s prod -c serverless-prod.yml

Ignore AWS ruby SDK global config

I'm working with the AWS Ruby SDK and trying to override the global config for a specific client.
When I load the application, I set the global config for S3 use like this:
Aws.config.update(
  endpoint: '****',
  access_key_id: '****',
  secret_access_key: '****',
  force_path_style: '*****',
  region: '****'
)
At some point in the application I want to use a different AWS service client and make those calls with a different set of config options. I create the client like this:
client = Aws::SQS::Client.new(
  credentials: Aws::Credentials.new(
    '****',
    '****'
  ),
  region: '****'
)
When I make a call using this new client I get errors because it uses the new config options as well as the ones defined in the global config. For example, I get an error for having force_path_style set because SQS doesn't allow that config option.
Is there a way to override all the global config options for a specific call?
Aws.config supports nested service-specific options, so you can set global options specifically for S3 without affecting other service clients (like SQS).
This means you could change your global config to nest force_path_style under a new s3 hash, like this:
Aws.config.update(
  endpoint: '****',
  access_key_id: '****',
  secret_access_key: '****',
  s3: {force_path_style: '*****'},
  region: '****'
)
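With that change, the SQS client from the question can stay exactly as it was; a rough sketch of the expected behaviour, with placeholder credentials as above:
client = Aws::SQS::Client.new(
  credentials: Aws::Credentials.new('****', '****'),
  region: '****'
)
# No invalid-option error this time: the s3: { ... } hash in Aws.config is
# only applied to Aws::S3::Client instances, so SQS never sees force_path_style.
client.list_queues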

Different URLs for downloading and uploading with paperclip on S3 storage

For local development I am using a localstack Docker container as an AWS sandbox, with this Paperclip configuration:
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  },
  bucket: 'my-development',
  s3_region: 'localhost-region',
  s3_host_name: 'localhost:4572',
  url: ':s3_path_url',
}
Links for downloading content are generated correctly and work:
http://localhost:4572/my-development/files/downloads/be-fl-che-spezialtiefbau-mischanlage-750_ae0f1c99d8.pdf
But when I want to upload new files I get an Aws::Errors::NoSuchEndpointError based on a different URL:
https://my-development.s3.localhost-region.amazonaws.com/files/downloads/_umschlag-vorlage_c534f5f25e.pdf
I searched and debugged for some hours but couldn't find out where this URL is generated and why it uses amazonaws.com as the host.
Any hint where to look?
I found a way to get it working.
Add an explicit endpoint URL to the configuration:
# config/environments/development.rb
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  },
  s3_options: {
    endpoint: 'http://localhost:4572/my-development',
  },
  bucket: 'my-development',
  s3_region: 'localhost-region',
  s3_host_name: 'localhost:4572',
  url: ':s3_path_url',
}
As the URL will be prefixed with the bucket name by the AWS gem, the resulting domain will be my-development.localhost. I haven't yet found any solution other than adding this subdomain to my /etc/hosts:
127.0.0.1 localhost
127.0.0.1 my-development.localhost
255.255.255.255 broadcasthost
::1 localhost
::1 my-development.localhost
This is not very clean, but it works. Maybe I'll find a better workaround later.
This could help others: you can update the AWS config in your environment-specific config file.
Aws.config.update(
  endpoint: 'http://localhost:4572',
  force_path_style: true
)

Set environment variables from external file in serverless.yml

I'm using serverless and serverless-local for local development.
I've got an external file which holds references to environment variables which I retrieve from node.env in my app.
From what I understand, I should be able to set my environment variables like this:
dev:
  AWS_KEY: 'key'
  SECRET: 'secret'
test:
  AWS_KEY: 'test-key'
  SECRET: 'test-secret'
etc:
  ...
and have those environment variables included in my app through the following line in my serverless.yml
provider:
  name: aws
  runtime: nodejs4.3
  stage: ${opt:stage, self:custom.default_stage}
  deploymentBucket: serverless-deploy-packages/${opt:stage, self:custom.default_stage}
  environment:
    ${file(./serverless-env.yml):${opt:stage, self:custom.default_stage}}
Then on the command line, I call:
serverless offline --stage dev --port 9000
I thought this would include the correct vars in my app, but it isn't working. Is this not how it is supposed to work? Am I doing something wrong here?
From docs:
You can set the contents of an external file into a variable:
file: ${file(./serverless-env.yml)}
And later you can use this new variable to access the file variables.
secret: file.dev.SECRET
Or you can use the file directly:
secret: ${file(./serverless-env.yml):dev.SECRET}
You can also now use remote async values with the serverless framework. See https://serverless.com/blog/serverless-v1.13.0/
This means you can call values from s3 or remote databases etc.
Example:
serverless.yml
service: serverless-async-vars
provider:
  name: aws
  runtime: nodejs6.10
custom:
  secret: ${file(./vars.js):fetchSecret} # JS file running async / promised
vars.js
module.exports.fetchSecret = () => {
  // async code
  return Promise.resolve('SomeSecretKey');
}
This is how you can separate your environments by different stages:
serverless.yml:
custom:
  test:
    project: xxx
  prod:
    project: yyy
provider:
  ...
  stage: ${opt:stage, 'test'}
  project: ${self:custom.${opt:stage, 'test'}.project}
  environment:
    ${file(.env.${opt:stage, 'test'}.yml):}
package:
  exclude:
    - .env.*
.env.test.yml:
VARIABLE1: value1
VARIABLE2: value2
During deploy, pass --stage=prod, or skip it and the test project will be deployed. Then in your JS code you can access the environment variables with process.env.VARIABLE1.
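For example, a handler might read them like this (a minimal sketch; the file name, handler name, and response shape are assumptions):
// handler.js
module.exports.handler = async () => {
  return {
    statusCode: 200,
    // VARIABLE1 is injected from .env.test.yml or .env.prod.yml depending on the stage
    body: JSON.stringify({ message: process.env.VARIABLE1 }),
  };
};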
Set Lambda environment variables from a JSON file (using the AWS CLI):
aws lambda update-function-configuration --profile mfa --function-name test-api --cli-input-json file://dev.json
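For reference, dev.json here would follow the request shape that --cli-input-json expects; a sketch with placeholder values (the variable names below are just examples):
{
  "FunctionName": "test-api",
  "Environment": {
    "Variables": {
      "VARIABLE1": "value1",
      "VARIABLE2": "value2"
    }
  }
}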
I had this correct, but I was referencing the file incorrectly.
I don't see this in the docs, but passing a file to environment will include the file's YAML contents, and the above structure does work.

Building wheels with travis' pypi deploy

I tried out the travis-ci pypi deployment, as can be seen here:
https://travis-ci.org/Simplistix/testfixtures/jobs/80429422
The pertinent .travis.yml bits are:
deploy:
  provider: pypi
  user: ...
  password:
    secure: ...
  on:
    tags: true
    repo: Simplistix/testfixtures
...but this has only created an sdist.
How can I configure it to also create and upload a wheel?
Just add the "distributions" parameter in the deploy section, for example:
deploy:
  provider: pypi
  distributions: "sdist bdist bdist_wheel"
  ... etc.
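One caveat: building a wheel requires the wheel package to be available in the build environment, so if the Travis Python image does not already include it, an install step along these lines should cover it (an assumption, not confirmed against the linked build):
install:
  - pip install wheel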
