# config/initializers/aws.rb
keys = Rails.application.credentials[:aws]
creds = Aws::Credentials.new(keys[:access_key_id], keys[:secret_access_key])
Aws.config.update(
  service: 's3',
  region: 'eu-west-2',
  credentials: creds
)
When I then do this in my controller, I get an error:
s3 = Aws::S3::Client.new(
  region: Aws.config[:region],
  credentials: Aws.config[:credentials]
)
#ArgumentError (invalid configuration option `:service'):
I'm using IAM credentials with aws-sdk version 3.
OK, I deleted service from Aws.config and it works, but it would be nice to store this parameter in the config.
s3 = Aws::S3::Client.new(
  region: Aws.config[:region],
  credentials: Aws.config[:credentials]
)
Try this (see the AWS Ruby SDK documentation). I hope this helps!
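If you want to keep the service name around anyway, one plain-Ruby sketch (no SDK required, and CLIENT_KEYS is a hypothetical whitelist, not part of the SDK) is to hold your own settings hash and only pass through the keys that are valid client options:

```ruby
# Your own bookkeeping hash; :service is never sent to the SDK.
APP_AWS = { service: 's3', region: 'eu-west-2' }.freeze

# Hypothetical whitelist of keys the SDK client accepts.
CLIENT_KEYS = %i[region credentials].freeze

# Filter the settings down to valid client options.
client_opts = APP_AWS.select { |k, _| CLIENT_KEYS.include?(k) }
# client_opts => { region: 'eu-west-2' }
```

You could then call Aws.config.update(client_opts) without triggering the invalid-option error.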
I'm working with the AWS ruby SDK and trying to override the global config for a specific client.
When the application loads, I set the global config for S3 like this:
Aws.config.update(
endpoint: '****',
access_key_id: '****',
secret_access_key: '****',
force_path_style: '*****',
region: '****'
)
At some point in the application I want to use a different AWS service and make those calls with a different set of config options. I create a client like this:
client = Aws::SQS::Client.new(
credentials: Aws::Credentials.new(
'****',
'****'
),
region: '****'
)
When I make a call using this new client I get errors because it uses the new config options as well as the ones defined in the global config. For example, I get an error for having force_path_style set because SQS doesn't allow that config option.
Is there a way to override all the global config options for a specific call?
Aws.config supports nested service-specific options, so you can set global options specifically for S3 without affecting other service clients (like SQS).
This means you could change your global config to nest force_path_style under a new s3 hash, like this:
Aws.config.update(
endpoint: '****',
access_key_id: '****',
secret_access_key: '****',
s3: {force_path_style: '*****'},
region: '****'
)
For local development I am using a localstack Docker container as an AWS sandbox, with this Paperclip configuration:
config.paperclip_defaults = {
storage: :s3,
s3_credentials: {
access_key_id: ENV['AWS_ACCESS_KEY_ID'],
secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
},
bucket: 'my-development',
s3_region: 'localhost-region',
s3_host_name: 'localhost:4572',
url: ':s3_path_url',
}
Links for download content are generated correctly and are working:
http://localhost:4572/my-development/files/downloads/be-fl-che-spezialtiefbau-mischanlage-750_ae0f1c99d8.pdf
But when I want to upload new files I get an Aws::Errors::NoSuchEndpointError based on a different URL:
https://my-development.s3.localhost-region.amazonaws.com/files/downloads/_umschlag-vorlage_c534f5f25e.pdf
I searched and debugged for hours but couldn't find out where this URL is generated or why it uses amazonaws.com as the host. Any hints on where to look?
I found a way to get it working: add an explicit endpoint URL to the configuration.
# config/environments/development.rb
config.paperclip_defaults = {
storage: :s3,
s3_credentials: {
access_key_id: ENV['AWS_ACCESS_KEY_ID'],
secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
},
s3_options: {
endpoint: 'http://localhost:4572/my-development',
},
bucket: 'my-development',
s3_region: 'localhost-region',
s3_host_name: 'localhost:4572',
url: ':s3_path_url',
}
Because the AWS gem prefixes the host with the bucket name, the resulting domain becomes my-development.localhost. So far I haven't found any solution other than adding this subdomain to my /etc/hosts:
127.0.0.1 localhost
127.0.0.1 my-development.localhost
255.255.255.255 broadcasthost
::1 localhost
::1 my-development.localhost
This is not very clean, but it works. Maybe I'll find a better workaround later.
This could help others: you can update the AWS config in your environment-specific config file.
Aws.config.update(
endpoint: 'http://localhost:4572',
force_path_style: true
)
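The URL shapes behind the error can be sketched in plain Ruby (object_url is a hypothetical helper, not SDK code): virtual-hosted style puts the bucket in the hostname, which is why an amazonaws.com-style host gets synthesized, while path style keeps the bucket in the path, as localstack expects.

```ruby
# Build an object URL in either addressing style.
def object_url(bucket, key, endpoint: 'http://localhost:4572', path_style: true)
  if path_style
    # Path style: bucket lives in the path.
    "#{endpoint}/#{bucket}/#{key}"
  else
    # Virtual-hosted style: bucket is prefixed onto the host.
    scheme, host = endpoint.split('://', 2)
    "#{scheme}://#{bucket}.#{host}/#{key}"
  end
end

object_url('my-development', 'file.pdf')
# => "http://localhost:4572/my-development/file.pdf"
object_url('my-development', 'file.pdf', path_style: false)
# => "http://my-development.localhost:4572/file.pdf"
```

The second form is why the /etc/hosts entry for my-development.localhost was needed in the earlier answer.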
When deploying code to Lambdas for a Java project, there was an issue at first with index. being prepended to my handler path. Using https://stackoverflow.com/a/49620548/2612651 I was able to get past that problem.
Now the issue is that for Java it builds the handler as module_name.handler_name, but the separator should be :: instead of ..
Side note: is there a concise list of all the .travis.yml options anywhere? I cannot seem to find one.
Here is the deploy section of my .travis.yml file; I believe it puts the two artifacts where I want them.
deploy:
- provider: lambda
access_key_id: $AWS_KEY
secret_access_key: $AWS_SECRET
function_name: "grant-jwt"
region: "us-east-2"
role: "<arn>"
runtime: "java8"
module_name: "com.dapper.cloud.function.GrantJwt"
handler_name: "handleRequest"
file: "./grant-jwt/target/grant-jwt-0.0.1-SNAPSHOT.jar"
- provider: lambda
access_key_id: $AWS_KEY
secret_access_key: $AWS_SECRET
function_name: "verify-jwt"
region: "us-east-2"
role: "<arn>"
runtime: "java8"
module_name: "com.dapper.cloud.function.VerifyJwt"
handler_name: "handleRequest"
file: "./verify-jwt/target/verify-jwt-0.0.1-SNAPSHOT.jar"
I actually ran into this issue too, and when I found the answer I felt silly. The correct way to do this is to have module_name be the package path and handler_name be Class::handlerFunc. Also, file is not supported; looking at the documentation, the parameter you want is zip.
So for your case it would be:
deploy:
- provider: lambda
access_key_id: $AWS_KEY
secret_access_key: $AWS_SECRET
function_name: "grant-jwt"
region: "us-east-2"
role: "<arn>"
runtime: "java8"
module_name: "com.dapper.cloud.function"
handler_name: "GrantJwt::handleRequest"
zip: "./grant-jwt/target/grant-jwt-0.0.1-SNAPSHOT.jar"
- provider: lambda
access_key_id: $AWS_KEY
secret_access_key: $AWS_SECRET
function_name: "verify-jwt"
region: "us-east-2"
role: "<arn>"
runtime: "java8"
module_name: "com.dapper.cloud.function"
handler_name: "VerifyJwt::handleRequest"
zip: "./verify-jwt/target/verify-jwt-0.0.1-SNAPSHOT.jar"
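To see why the split matters, here is a one-line sketch (assumption: the provider joins the two fields with a dot, as the module_name.handler_name behaviour described above suggests). Splitting at the class name yields the package.Class::method handler string that Java Lambdas expect.

```ruby
# Compose the handler string the way the deploy provider appears to.
def lambda_handler(module_name, handler_name)
  "#{module_name}.#{handler_name}"
end

lambda_handler('com.dapper.cloud.function', 'GrantJwt::handleRequest')
# => "com.dapper.cloud.function.GrantJwt::handleRequest"
```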
I've just transferred an existing Parse server to my self-hosted DigitalOcean droplet. Unfortunately, I cannot get push messages to send. I remember that on the old Parse.com there was an option to release the app in production, but I cannot find that attribute anymore.
Is there any way to set the parse-server environment to "production" in my config?
Cheers,
Vincent
You can set that in your parse-server config!
{
  "ios": [
    {
      "pfx": "/home/parse/file/ApplePushServices.p12",
      "bundleId": "yourApp",
      "production": true
    }
  ]
}
Use the config below for your parse-server:
var api = new ParseServer({
databaseURI: 'databaseUrl',
cloud: __dirname + '/cloud/main.js',
appId: 'your app id',
masterKey: 'master key',
serverURL:'Server url',
push : {
ios: {
cert: 'ios certificate url',
bundleId: 'Your bundle id',
production: true
}
}
});
I have a Jenkins pipeline job that needs the username and password for the RTC checkout to be provided as parameters.
The checkout action can take userId and password variables, but the password must be of the class Secret.
When trying to create a secret using hudson.util.Secret secret = hudson.util.Secret.fromString("${Build_Password}"), I get the following error:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use staticMethod hudson.util.Secret fromString java.lang.String
Is there a way to create a Secret or Credential from parameters?
I had to disable the groovy sandbox. After that, I was able to use the Secret class:
hudson.util.Secret secret = hudson.util.Secret.fromString(Build_Password)
checkout([$class: 'TeamFoundationServerScm',
          localPath: 'D:\\Build-Code-Scm',
          projectPath: '$/RootDirectory/SubFolder',
          serverUrl: 'http://TEST.TEST.com:8080/TEST/TEST',
          useOverwrite: true,
          useUpdate: true,
          userName: 'UNMAME',
          password: hudson.util.Secret.fromString('PASSWORD'),
          workspaceName: "Hudson-${JOB_NAME}-${NODE_NAME}"])