How do I centrally configure Fog credentials using environment variables?

Other questions cover how to open (and save) a Fog connection with ENV variables, or how to configure CarrierWave with ENV variables, but I want to set the Fog credentials universally, in one place.

Fog.credentials = {
  :aws_access_key_id => ENV['AWS_ACCESS_KEY_ID'],
  :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
}
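In a Rails app, one place to put this is an initializer so every Fog consumer (CarrierWave, direct connections, etc.) picks the credentials up at boot. A minimal sketch, assuming the fog-aws gem; the file path is just a convention:

```ruby
# config/initializers/fog.rb (hypothetical path; any boot-time code works)
require 'fog/aws'

# ENV.fetch raises a descriptive KeyError at boot when a variable is
# missing, instead of silently passing nil credentials to Fog.
Fog.credentials = {
  aws_access_key_id: ENV.fetch('AWS_ACCESS_KEY_ID'),
  aws_secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY'),
}
```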

Related

Ignore AWS ruby SDK global config

I'm working with the AWS Ruby SDK and trying to override the global config for a specific client.
When I load the application I set the global config for S3 use like this:
Aws.config.update(
  endpoint: '****',
  access_key_id: '****',
  secret_access_key: '****',
  force_path_style: '*****',
  region: '****'
)
At some point in the application I want to use a different AWS service and make those calls with a different set of config options. I create a client like this:
client = Aws::SQS::Client.new(
  credentials: Aws::Credentials.new(
    '****',
    '****'
  ),
  region: '****'
)
When I make a call using this new client I get errors because it uses the new config options as well as the ones defined in the global config. For example, I get an error for having force_path_style set because SQS doesn't allow that config option.
Is there a way to override all the global config options for a specific call?
Aws.config supports nested service-specific options, so you can set global options specifically for S3 without affecting other service clients (like SQS).
This means you could change your global config to nest force_path_style under a new s3 hash, like this:
Aws.config.update(
  endpoint: '****',
  access_key_id: '****',
  secret_access_key: '****',
  s3: {force_path_style: '*****'},
  region: '****'
)
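With the option nested, only S3 clients see force_path_style. A sketch of what that buys you, assuming the aws-sdk-s3 and aws-sdk-sqs gems and the global config above:

```ruby
require 'aws-sdk-s3'
require 'aws-sdk-sqs'

# Both clients inherit endpoint, region and credentials from Aws.config,
# but only the S3 client receives the options nested under :s3.
s3  = Aws::S3::Client.new   # force_path_style applied
sqs = Aws::SQS::Client.new  # no force_path_style, so no ArgumentError
```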

Different URLs for downloading and uploading with paperclip on S3 storage

For local development I am using a LocalStack Docker container as an AWS sandbox, with this Paperclip configuration:
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  },
  bucket: 'my-development',
  s3_region: 'localhost-region',
  s3_host_name: 'localhost:4572',
  url: ':s3_path_url',
}
Download links are generated correctly and work:
http://localhost:4572/my-development/files/downloads/be-fl-che-spezialtiefbau-mischanlage-750_ae0f1c99d8.pdf
But when I want to upload new files I get an Aws::Errors::NoSuchEndpointError based on a different URL:
https://my-development.s3.localhost-region.amazonaws.com/files/downloads/_umschlag-vorlage_c534f5f25e.pdf
I searched and debugged for some hours but couldn't find out where this URL is generated and why it uses amazonaws.com as the host.
Any hint where to look?
I found a way to get it working: add an explicit endpoint URL to the configuration.
# config/environments/development.rb
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  },
  s3_options: {
    endpoint: 'http://localhost:4572/my-development',
  },
  bucket: 'my-development',
  s3_region: 'localhost-region',
  s3_host_name: 'localhost:4572',
  url: ':s3_path_url',
}
Since the AWS gem prefixes the host with the bucket name, the resulting domain will be my-development.localhost. I haven't found any other solution yet than adding this subdomain to my /etc/hosts:
127.0.0.1 localhost
127.0.0.1 my-development.localhost
255.255.255.255 broadcasthost
::1 localhost
::1 my-development.localhost
This is not very clean, but it works. Maybe I'll find a better workaround later.
This could help others: you can update the AWS config in your environment-specific config file.
Aws.config.update(
  endpoint: 'http://localhost:4572',
  force_path_style: true
)

Amazon aws s3 client error aws-sdk ruby

initializer/aws.rb
keys = Rails.application.credentials[:aws]
creds = Aws::Credentials.new(keys[:access_key_id], keys[:secret_access_key])
Aws.config.update({
  service: "s3",
  region: 'eu-west-2',
  credentials: creds
})
When I do this in my controller, I get an error:
s3 = Aws::S3::Client.new(
  region: Aws.config[:region],
  credentials: Aws.config[:credentials]
)
#ArgumentError (invalid configuration option `:service'):
I use IAM credentials with aws-sdk v3.
OK, I deleted service from Aws.config and it works, but it would be nice to store this param in config:
s3 = Aws::S3::Client.new(
  region: Aws.config[:region],
  credentials: Aws.config[:credentials]
)
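Since Aws.config validates its keys, one way to keep that extra param "in config" is a plain app-level constant alongside the SDK config. A sketch, with hypothetical names:

```ruby
# Hypothetical app-level settings hash; Aws.config never sees these keys,
# so the SDK cannot complain about an unknown :service option.
S3_SETTINGS = {
  service: 's3',
  region: 'eu-west-2',
}.freeze

# Read your own keys from S3_SETTINGS and pass only SDK-valid ones on, e.g.:
# Aws::S3::Client.new(region: S3_SETTINGS[:region], credentials: creds)
S3_SETTINGS[:service] # => "s3"
```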
Try the AWS Ruby SDK documentation. I hope this will help you!

Aws::S3::Errors::RequestTimeTooSkewed (The difference between the request time and the current time is too large.):

config.paperclip_defaults = {
  storage: :s3,
  s3_region: 'ap-southeast-1',
  s3_credentials: {
    bucket: 'sjoobing',
    access_key_id: '',
    secret_access_key: '',
  }
}
When I upload a file I get this error.
The following works for me:
sudo apt-get install ntpdate
sudo ntpdate -s time.nist.gov
The time on your server is out of sync with the current time. Sync up your system clock and the problem will go away.
Check this guide for syncing the time:
http://www.howtogeek.com/tips/how-to-sync-your-linux-server-time-with-network-time-servers-ntp/

Rails capistrano deploy to multiple servers

I am trying to optimize my application. I would like to deploy my rails application to different machines. Unfortunately I can't understand how to do it.
role :web, "ip1","ip2"
role :app, "ip1", "ip2"
role :db, "db_ip", primary: true
set :application, "Name"
set :user, "root"
set :port, 22
set :deploy_to, "/home/#{user}/apps/#{application}"
set :ssh_options, {:forward_agent => true}
ssh_options[:forward_agent] = true
ssh_options[:keys] = %w(~/.ssh/id_key)
This is my configuration. I have two Unicorn servers and one DB server. When I run cap deploy:cold it asks me for a password, but I can't tell which machine's password I should enter; none of the servers' passwords work. I get:
(Net::SSH::AuthenticationFailed: root)
Can someone explain how my configuration should look to be able to deploy to all of the machines?
This should just work, but you should set up your SSH connections with keys so you do not have to enter a password.
(Note: this answer is for Capistrano 3; it was posted before noticing the question sets version 2.)
Try setting your global options like this:
set :ssh_options, {
  keys: %w(/home/your_user/.ssh/id_key),
  forward_agent: true,
}
Also, is your key really called id_key? (id_rsa is more common.)
If you need to configure it per server, you can do this:
server 'ip1',
  user: 'root',
  roles: %w{web app},
  ssh_options: {
    user: 'foobar', # overrides user setting above
    keys: %w(/home/user_name/.ssh/id_rsa),
    forward_agent: false,
  }
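Putting it together for the question's two app servers and one DB server, a Capistrano 3 stage file might look like this (a sketch; IPs and roles taken from the question, file path and auth_methods setting are assumptions):

```ruby
# config/deploy/production.rb
server 'ip1',   user: 'root', roles: %w{web app}
server 'ip2',   user: 'root', roles: %w{web app}
server 'db_ip', user: 'root', roles: %w{db}, primary: true

set :ssh_options, {
  keys: %w(~/.ssh/id_rsa),
  forward_agent: true,
  auth_methods: %w(publickey), # fail fast instead of prompting for a password
}
```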
