I'm working with the AWS Ruby SDK and trying to override the global config for a specific client.
When I load the application I set the global config for S3 like this:
Aws.config.update(
  endpoint: '****',
  access_key_id: '****',
  secret_access_key: '****',
  force_path_style: '*****',
  region: '****'
)
At some point in the application I want to use a different AWS service and make those calls using a different set of config options. I create a client like this:
client = Aws::SQS::Client.new(
  credentials: Aws::Credentials.new(
    '****',
    '****'
  ),
  region: '****'
)
When I make a call using this new client I get errors because it uses the new config options as well as the ones defined in the global config. For example, I get an error for having force_path_style set because SQS doesn't allow that config option.
Is there a way to override all the global config options for a specific call?
Aws.config supports nested service-specific options, so you can set global options specifically for S3 without affecting other service clients (like SQS).
This means you could change your global config to nest force_path_style under a new s3 hash, like this:
Aws.config.update(
  endpoint: '****',
  access_key_id: '****',
  secret_access_key: '****',
  s3: { force_path_style: '*****' },
  region: '****'
)
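Separately (my own note, not part of the original answer): options passed to a client's constructor take precedence over the matching defaults in Aws.config, so a client such as the SQS one above can always override individual globals. A sketch with placeholder values, assuming the aws-sdk-sqs gem:

```ruby
# Sketch (placeholder values; assumes the aws-sdk-sqs gem is installed).
# Options given to the constructor override the matching Aws.config defaults.
require 'aws-sdk-sqs'

sqs = Aws::SQS::Client.new(
  credentials: Aws::Credentials.new('OTHER_KEY', 'OTHER_SECRET'),
  region: 'eu-west-1' # takes precedence over the globally configured region
)
```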
For local development I am using a localstack Docker container as an AWS sandbox with this Paperclip configuration:
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  },
  bucket: 'my-development',
  s3_region: 'localhost-region',
  s3_host_name: 'localhost:4572',
  url: ':s3_path_url',
}
Links for download content are generated correctly and are working:
http://localhost:4572/my-development/files/downloads/be-fl-che-spezialtiefbau-mischanlage-750_ae0f1c99d8.pdf
But when I want to upload new files I get an Aws::Errors::NoSuchEndpointError based on a different URL:
https://my-development.s3.localhost-region.amazonaws.com/files/downloads/_umschlag-vorlage_c534f5f25e.pdf
I searched and debugged for some hours but couldn't find out where this URL is generated and why it uses amazonaws.com as the host.
Any hint where to look?
I found a way to get it working.
Add an explicit endpoint URL to the configuration:
# config/environments/development.rb
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  },
  s3_options: {
    endpoint: 'http://localhost:4572/my-development',
  },
  bucket: 'my-development',
  s3_region: 'localhost-region',
  s3_host_name: 'localhost:4572',
  url: ':s3_path_url',
}
As the URL will be prefixed with the bucket name by the AWS gem, the resulting domain will be my-development.localhost. I haven't found any other solution yet than to add this subdomain to my /etc/hosts:
127.0.0.1 localhost
127.0.0.1 my-development.localhost
255.255.255.255 broadcasthost
::1 localhost
::1 my-development.localhost
This is not very clean, but it works. Maybe I'll find a better workaround later.
This could help others: you can update the AWS config in your environment-specific config file.
Aws.config.update(
  endpoint: 'http://localhost:4572',
  force_path_style: true
)
When deploying code to Lambdas for a Java project, there was an issue at first with index. being added to my handler path. Using https://stackoverflow.com/a/49620548/2612651 I was able to get past that problem.
Now the issue is that in Java it is using module_name.handler_name, but instead of . it should be ::.
Side note: is there a concise list of all the .travis.yml commands anywhere? I cannot seem to find it.
Here is the deploy section of my .travis.yml file; it puts the two artifacts where I want them, I believe.
deploy:
  - provider: lambda
    access_key_id: $AWS_KEY
    secret_access_key: $AWS_SECRET
    function_name: "grant-jwt"
    region: "us-east-2"
    role: "<arn>"
    runtime: "java8"
    module_name: "com.dapper.cloud.function.GrantJwt"
    handler_name: "handleRequest"
    file: "./grant-jwt/target/grant-jwt-0.0.1-SNAPSHOT.jar"
  - provider: lambda
    access_key_id: $AWS_KEY
    secret_access_key: $AWS_SECRET
    function_name: "verify-jwt"
    region: "us-east-2"
    role: "<arn>"
    runtime: "java8"
    module_name: "com.dapper.cloud.function.VerifyJwt"
    handler_name: "handleRequest"
    file: "./verify-jwt/target/verify-jwt-0.0.1-SNAPSHOT.jar"
So I actually ran into this issue too, and when I found the answer I felt silly. The correct way to do this is to have module_name be the package path and handler_name be Class::handlerFunc. Also, file is not supported; looking at the documentation, the parameter you want is zip.
So for your case it would be:
deploy:
  - provider: lambda
    access_key_id: $AWS_KEY
    secret_access_key: $AWS_SECRET
    function_name: "grant-jwt"
    region: "us-east-2"
    role: "<arn>"
    runtime: "java8"
    module_name: "com.dapper.cloud.function"
    handler_name: "GrantJwt::handleRequest"
    zip: "./grant-jwt/target/grant-jwt-0.0.1-SNAPSHOT.jar"
  - provider: lambda
    access_key_id: $AWS_KEY
    secret_access_key: $AWS_SECRET
    function_name: "verify-jwt"
    region: "us-east-2"
    role: "<arn>"
    runtime: "java8"
    module_name: "com.dapper.cloud.function"
    handler_name: "VerifyJwt::handleRequest"
    zip: "./verify-jwt/target/verify-jwt-0.0.1-SNAPSHOT.jar"
initializer/aws.rb
keys = Rails.application.credentials[:aws]
creds = Aws::Credentials.new(keys[:access_key_id], keys[:secret_access_key])
Aws.config.update({
  service: "s3",
  region: 'eu-west-2',
  credentials: creds
})
When I do this in my controller I get an error:
s3 = Aws::S3::Client.new(
  region: Aws.config[:region],
  credentials: Aws.config[:credentials]
)
#ArgumentError (invalid configuration option `:service'):
I use IAM credentials
ruby-sdk-3
OK, I deleted service from Aws.config and it works, but it would be nice to store this param in the config:
s3 = Aws::S3::Client.new(
  region: Aws.config[:region],
  credentials: Aws.config[:credentials]
)
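If you still want that service name around, one option (a sketch of my own workaround, not an SDK feature) is to keep app-level settings in your own hash and reserve Aws.config for options the SDK actually recognizes:

```ruby
# Sketch of an alternative (my own workaround, not an SDK feature):
# keep app-level settings such as the service name in your own frozen
# hash, and pass only SDK-recognized options to Aws.config.
AWS_SETTINGS = {
  service: 's3',
  region: 'eu-west-2'
}.freeze

# Only pass SDK-recognized options to the global config:
# Aws.config.update(region: AWS_SETTINGS[:region], credentials: creds)

AWS_SETTINGS[:service] # => "s3"
```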
Try it like this! See the AWS Ruby SDK documentation.
I hope this helps!
I'm using serverless and serverless-local for local development.
I've got an external file which holds references to environment variables which I retrieve from process.env in my app.
From what I understand, I should be able to set my environment variables like this:
dev:
  AWS_KEY: 'key'
  SECRET: 'secret'
test:
  AWS_KEY: 'test-key'
  SECRET: 'test-secret'
etc:
  ...
and have those environment variables included in my app through the following lines in my serverless.yml:
provider:
  name: aws
  runtime: nodejs4.3
  stage: ${opt:stage, self:custom.default_stage}
  deploymentBucket: serverless-deploy-packages/${opt:stage, self:custom.default_stage}
  environment: ${file(./serverless-env.yml):${opt:stage, self:custom.default_stage}}
Then on the command line I call:
serverless offline --stage dev --port 9000
I thought this would include the correct vars in my app, but it isn't working. Is this not how it is supposed to work? Am I doing something wrong here?
From the docs:
You can set the contents of an external file into a variable:
file: ${file(./serverless-env.yml)}
And later you can use this new variable to access the file variables.
secret: file.dev.SECRET
Or you can use the file directly:
secret: ${file(./serverless-env.yml):dev.SECRET}
You can also now use remote async values with the serverless framework. See https://serverless.com/blog/serverless-v1.13.0/
This means you can call values from s3 or remote databases etc.
Example:
serverless.yml
service: serverless-async-vars

provider:
  name: aws
  runtime: nodejs6.10

custom:
  secret: ${file(./vars.js):fetchSecret} # JS file running async / promised
vars.js
module.exports.fetchSecret = () => {
  // async code
  return Promise.resolve('SomeSecretKey');
}
This is how you can separate your environments by different stages:
serverless.yml:
custom:
  test:
    project: xxx
  prod:
    project: yyy

provider:
  ...
  stage: ${opt:stage, 'test'}
  project: ${self:custom.${opt:stage, 'test'}.project}
  environment: ${file(.env.${opt:stage, 'test'}.yml):}

package:
  exclude:
    - .env.*
.env.test.yml:
VARIABLE1: value1
VARIABLE2: value2
During deploy, pass --stage=prod, or skip it and the test project will be deployed. Then in your JS code you can access the env variables with process.env.VARIABLE1.
Set Lambda environment variables from a JSON file (using the AWS CLI):
aws lambda update-function-configuration --profile mfa --function-name test-api --cli-input-json file://dev.json
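For reference, the JSON file passed via --cli-input-json follows the same shape as the update-function-configuration request parameters; a minimal sketch of what dev.json could look like (the function name and variable values are placeholders):

```json
{
    "FunctionName": "test-api",
    "Environment": {
        "Variables": {
            "VARIABLE1": "value1",
            "VARIABLE2": "value2"
        }
    }
}
```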
I had this correct, but I was referencing the file incorrectly.
I don't see this in the docs, but passing a file to environment will include the file's YAML contents, and the structure above does work.
Other questions answer how to open (and save) a Fog connection with ENV variables, or how to configure CarrierWave with ENV variables, but I want to set the Fog credentials universally and in one place.
Fog.credentials = {
  :aws_access_key_id => ENV['AWS_ACCESS_KEY_ID'],
  :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
}
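Once the credentials are set globally like this, later connections can omit the keys; a sketch assuming the fog-aws gem and that the ENV variables above are present:

```ruby
# Sketch (assumes the fog-aws gem and AWS_* variables set in ENV):
# with Fog.credentials set globally, you only need to name the provider
# when opening a connection.
require 'fog/aws'

storage = Fog::Storage.new(provider: 'AWS')
```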