I'm trying to configure ActiveStorage to use an S3 bucket as a storage backend; however, I don't want to pass any of access_key_id, secret_access_key, or region. Instead, I'd like to use a previously defined IAM role. Such a configuration is mentioned here. It reads:
If you want to use environment variables, standard SDK configuration files, profiles, IAM instance profiles or task roles, you can omit the access_key_id, secret_access_key, and region keys in the example above. The Amazon S3 Service supports all of the authentication options described in the AWS SDK documentation.
However I cannot get it working. My storage.yml looks similar to this:
amazon:
  service: S3
  bucket: bucket_name
  credentials:
    role_arn: "linked::account::arn"
    role_session_name: "session-name"
I've run rails active_storage:install, applied generated migrations and set config.active_storage.service = :amazon in my app's config.
The issue is that when I'm trying to save a file, I'm getting an unexpected error:
u = User.first
s = StringIO.new
s << 'hello,world'
s.seek 0
u.csv.attach(io: s, filename: 'filename.csv')
Traceback (most recent call last):
2: from (irb):3
1: from (irb):3:in `rescue in irb_binding'
LoadError (Unable to autoload constant ActiveStorage::Blob::Analyzable, expected /usr/local/bundle/gems/activestorage-5.2.2/app/models/active_storage/blob/analyzable.rb to define it)
I'm using Rails 5.2.2.
Are you trying this code inside an AWS EC2 instance or locally on your machine?
If you check the authentication methods in AWS: https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/setup-config.html#aws-ruby-sdk-credentials-iam
You'll see the following section:
Setting Credentials Using IAM
For an Amazon Elastic Compute Cloud instance, create an AWS Identity and Access Management role, and then give your Amazon EC2 instance access to that role. For more information, see IAM Roles for Amazon EC2 in the Amazon EC2 User Guide for Linux Instances or IAM Roles for Amazon EC2 in the Amazon EC2 User Guide for Windows Instances.
This means that for this authentication method to work, you must:
Create an EC2 instance on AWS
Create an EC2 IAM Role with permissions to write to an S3 Bucket
Configure your EC2 instance attaching the new IAM Role to it
With the role attached to the instance, your config/storage.yml file will look like this:
amazon:
  service: S3
  bucket: test-stackoverflow-bucket-app
  region: "us-west-1"
Note that region is a required parameter; you'll get an error if you skip it: https://github.com/aws/aws-sdk-ruby/issues/1240#issuecomment-231866239
I'm afraid this won't work locally; to use Active Storage locally you must set the access_key_id and secret_access_key values.
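For local development, a storage.yml entry with explicit keys could look like the sketch below; reading them from Rails encrypted credentials is just one option, and the :aws key names are an assumption:
amazon:
  service: S3
  bucket: test-stackoverflow-bucket-app
  region: "us-west-1"
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>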
Related
How do I correctly create and set up access_key_id and secret_access_key for Amazon Simple Email Service (SES) and the aws-ses gem? The gem's description says to provide exactly these keys in the credentials file, but I can't figure out how to create them.
My configuration for aws-ses gem:
# config/initializers/amazon_ses.rb
ActionMailer::Base.add_delivery_method :ses, AWS::SES::Base,
  :server => 'email.us-west-2.amazonaws.com',
  :access_key_id => Rails.application.credentials.aws[:access_key_id],
  :secret_access_key => Rails.application.credentials.aws[:secret_access_key]
I configured the SES service itself by adding my personal domain to it and testing sending emails from the Amazon console. The service also offers SMTP settings, but those generate a completely different type of key, which is not suitable for the aws-ses gem.
I also tried creating keys for a new user through Identity and Access Management (IAM), giving it full access to Amazon SES.
But none of this helped: the Amazon SES service does not work, and when sending messages via Sidekiq I get errors of the form:
AWS::SES::ResponseError: InvalidClientTokenId - The security token included in the request is invalid.
There are several ways to specify credentials for the AWS SDK for Ruby. Hopefully the following topic helps you: https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/setup-config.html
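As a rough sketch of two of those ways, using the official v3 SDK (aws-sdk-ses) rather than the older aws-ses gem from the question; the :aws key names under Rails credentials are an assumption:
require 'aws-sdk-ses'

# 1. Explicit credentials for a single client
ses = Aws::SES::Client.new(
  region: 'us-west-2',
  credentials: Aws::Credentials.new(
    Rails.application.credentials.aws[:access_key_id],
    Rails.application.credentials.aws[:secret_access_key]
  )
)

# 2. No explicit credentials: the SDK falls back to the AWS_ACCESS_KEY_ID /
#    AWS_SECRET_ACCESS_KEY environment variables, the shared ~/.aws/credentials
#    file, or an instance profile.
ses = Aws::SES::Client.new(region: 'us-west-2')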
I have an app running on an Ubuntu server, with a production mode and a staging mode.
The problem is that uploads and retrievals of images on the production site are going to the same S3 bucket as staging, even though I have my configurations set up differently.
production.rb
config.s3_bucket = 'bucket-production'
config.s3_path = 'https://bucket-production.s3.us-east-2.amazonaws.com/'
staging.rb && development.rb
config.s3_bucket = 'bucket-staging'
config.s3_path = 'https://bucket-staging.s3.us-east-2.amazonaws.com/'
storage.yml
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-2
  bucket: bucket-staging
  endpoint: http://bucket-staging.us-east-2.amazonaws.com
I'm thinking it could be something with storage.yml, but I deleted the entire file and restarted the local server and it didn't change anything. Is storage.yml production only?
Also, my logs from production are being written to staging.
First, are the production and staging servers (Ubuntu) running in AWS? If yes, each server should have an IAM role attached, and that role should be fine-grained so that each environment's application can only access its own S3 bucket. Storing access_key_id and secret_access_key directly is not best practice; an IAM role can take care of that. Also, if the server is in a private subnet, you will need either a NAT gateway or a VPC S3 endpoint to reach the bucket.
Also try logging the S3 connection in production mode to see how it is acquiring credentials to access the bucket. You might be granting access through ENV variables or through an IAM role. One way to check is to run
printenv
before and after the S3 connection to see which variables are set and which bucket is being accessed.
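From the Rails console (RAILS_ENV=production), a quick check along these lines can also show which credential source and bucket the app resolved; this is a sketch assuming the aws-sdk-s3 gem that Active Storage uses and the Rails 5.2+ S3 service, which exposes its bucket:
client = Aws::S3::Client.new(region: 'us-east-2')
creds  = client.config.credentials
puts creds.class                              # e.g. Aws::Credentials vs Aws::InstanceProfileCredentials
puts creds.credentials.access_key_id          # which access key actually won
puts ActiveStorage::Blob.service.bucket.name  # which bucket Active Storage is using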
Thanks
Ashish
I'm trying to access an s3 bucket from within the interactive ruby shell, using different AWS credentials than the ones the application is configured with.
I've tried manually setting a new s3 client using the other key/secret, but I get Access Denied as the call defaults to using the application's preconfigured AWS account. Modifying the application's configured credentials is not an option, as it's needed to simultaneously access different AWS resources.
Here's what I'm trying in the ruby shell:
s3_2 = AWS::S3.new(:access_key_id => "<key>", :secret_access_key => "<secret>")
bucket = s3_2.buckets['<bucket_name>']
bucket.objects.each do |obj|
  puts obj.key
end
(The test just does a GET to confirm access. It works if I allow public access on the bucket, because that permits any AWS user, but not when I restrict it and try to use the new temporary user that has full S3 access on the account.)
The Rails console runs as a separate instance of the app from the server process that uses the pre-configured credentials.
The following should update the credentials for the rails console session only.
Aws.config.update({credentials:Aws::Credentials.new('your_access_key_id','your_secret_access_key')})
A new AWS S3 client should be initialized:
s3_2 = Aws::S3::Client.new(:access_key_id => "<key>", :secret_access_key => "<secret>")
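With that client, the listing test from the question could then be repeated along these lines (a sketch against the modern aws-sdk-s3 Client API; '<bucket_name>' is the placeholder bucket from the question):
# list_objects_v2 returns up to 1,000 keys per call; pass continuation_token to page further
resp = s3_2.list_objects_v2(bucket: '<bucket_name>')
resp.contents.each { |obj| puts obj.key }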
The only tool I could find was heroku-s3assets, which I forked and tried to update to include the S3_REGION, because I was getting:
The bucket you are attempting to access must be addressed using the specified endpoint
These are all the variables I am passing to access the bucket.
opts[:s3_key] =======> AKIAJHXXG*********YA
opts[:s3_secret] =======> uYXxuA*******************pCcXuT61DI7po2
opts[:s3_bucket] =======> *****
opts[:output_path] =======> /Users/myname/Desktop/projects/my_project/public/system
opts[:s3_region] =======> s3-us-west-2.amazonaws.com
https://github.com/rounders/heroku-s3assets has not been updated in a while, so I'm assuming I just can't find where the actual error is breaking, either in the Heroku tools or in the older aws-s3 gem.
Does anyone have a method to pull down production assets from Amazon S3 to a Heroku server?
I think I misunderstood you, so editing now... maybe experiment with something simpler:
http://priyankapathak.wordpress.com/2012/12/28/download-assets-from-amazon-s3-via-ruby/
My search returned this info:
Bucket is in a different region
The Amazon S3 bucket specified in the COPY command must be in the same region as the cluster. If your Amazon S3 bucket and your cluster are in different regions, you will receive an error similar to the following:
ERROR: S3ServiceException:The bucket you are attempting to access must be addressed using the specified endpoint.
You can create an Amazon S3 bucket in a specific region either by selecting the region when you create the bucket by using the Amazon S3 Management Console, or by specifying an endpoint when you create the bucket using the Amazon S3 API or CLI. For more information, see Uploading files to Amazon S3.
For more information about Amazon S3 regions, see Buckets and Regions in the Amazon Simple Storage Service Developer Guide.
Alternatively, you can specify the region using the REGION option with the COPY command.
http://docs.aws.amazon.com/redshift/latest/dg/s3serviceexception-error.html
So it turns out that gem was all but useless. I've gotten further toward my goal of downloading all my S3 assets to public/system, but I still cannot figure out how to download them into the correct local Rails directory using the AWS S3 docs - http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/S3/S3Object.html
s3 = AWS::S3.new(access_key_id: 'AKIAJH*********PFYA', secret_access_key: 'uYXxuAMcnKODn***************uT61DI7po2', s3_endpoint: 's3-us-west-2.amazonaws.com')
s3.buckets['advlo'].objects.each do |obj|
  puts obj.inspect
end
I probably just need to read more unix commands and scp them over individually or something. Any ideas?
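For what it's worth, a minimal sketch of writing each object to a local path with the same v1 aws-sdk gem used above (bucket name and endpoint taken from the snippet; credentials omitted here and assumed to come from the same keys or from ENV) could look like:
require 'aws-sdk'   # v1 of the Ruby SDK
require 'fileutils'

s3 = AWS::S3.new(s3_endpoint: 's3-us-west-2.amazonaws.com')
s3.buckets['advlo'].objects.each do |obj|
  next if obj.key.end_with?('/')                 # skip "directory" placeholder keys
  path = File.join('public/system', obj.key)
  FileUtils.mkdir_p(File.dirname(path))          # create the local directory tree
  File.open(path, 'wb') { |f| f.write(obj.read) }
end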
How should secret files be pushed to an EC2 Ruby on Rails application deployed with Amazon Web Services Elastic Beanstalk?
I add the files to a git repository, and I push to github, but I want to keep my secret files out of the git repository. I'm deploying to aws using:
git aws.push
The following files are in the .gitignore:
/config/database.yml
/config/initializers/omniauth.rb
/config/initializers/secret_token.rb
Following this link I attempted to add an S3 file to my deployment:
http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html
Quoting from that link:
Example Snippet
The following example downloads a zip file from an Amazon S3 bucket and unpacks it into /etc/myapp:
sources:
  /etc/myapp: http://s3.amazonaws.com/mybucket/myobject
Following those directions I uploaded a file to an S3 bucket and added the following to a private.config file in the .ebextensions directory:
sources:
  /var/app/current/: https://s3.amazonaws.com/mybucket/config.tar.gz
That config.tar.gz file will extract to:
/config/database.yml
/config/initializers/omniauth.rb
/config/initializers/secret_token.rb
However, when the application is deployed, the config.tar.gz file on the S3 host is never copied or extracted. I still receive errors that the database.yml couldn't be located, and the EC2 log has no record of the config file. Here is the error message:
Error message:
No such file or directory - /var/app/current/config/database.yml
Exception class:
Errno::ENOENT
Application root:
/var/app/current
The "right" way to do what I think that you want to do is to use IAM Roles. You can see a blog post about it here: http://aws.typepad.com/aws/aws-iam/
Basically, it allows you to launch an EC2 instance without putting any personal credential on any configuration file at all. When you launch the instance it will be assigned the given role (a set of permissions to use AWS resources), and a rotating credential will be put on the machine automatically with Amazon IAM.
In order for the .ebextensions/*.config files to be able to download the files from S3, the files would have to be public. Given that they contain sensitive information, this is a Bad Idea.
You can launch an Elastic Beanstalk instance with an instance role, and you can give that role permission to access the files in question. Unfortunately, the file: and sources: sections of the .ebextensions/*.config files do not have direct access to use this role.
You should be able to write a simple script using the AWS::S3::S3Object class of the AWS SDK for Ruby to download the files, and use a command: instead of a sources:. If you don't specify credentials, the SDK will automatically try to use the role.
You would have to add a policy to your role which allows you to download the files you are interested in specifically. It would look like this:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}
Then you could do something like this in your .config file
files:
  "/usr/local/bin/downloadScript.rb":
    source: http://s3.amazonaws.com/mybucket/downloadScript.rb

commands:
  01-download-config:
    command: ruby /usr/local/bin/downloadScript.rb http://s3.amazonaws.com/mybucket/config.tar.gz /tmp
  02-unzip-config:
    command: tar xvf /tmp/config.tar.gz
    cwd: /var/app/current
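The downloadScript.rb itself isn't shown above; a minimal sketch using the v1-era AWS::S3::S3Object API mentioned earlier (taking the S3 URL and target directory as arguments, and relying on the instance role because no keys are passed) might look like:
#!/usr/bin/env ruby
# Usage: downloadScript.rb <s3_url> <target_dir>
require 'aws-sdk'   # v1 of the Ruby SDK
require 'uri'
require 'fileutils'

url        = URI.parse(ARGV[0])                        # e.g. http://s3.amazonaws.com/mybucket/config.tar.gz
target_dir = ARGV[1] || '/tmp'
bucket, key = url.path.sub(%r{\A/}, '').split('/', 2)  # path-style URL: /bucket/key

FileUtils.mkdir_p(target_dir)
object = AWS::S3.new.buckets[bucket].objects[key]
File.open(File.join(target_dir, File.basename(key)), 'wb') do |file|
  object.read { |chunk| file.write(chunk) }             # stream the object to disk
end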
It is possible (and easy) to store sensitive files in S3 and copy them to your Beanstalk instances automatically.
When you create a Beanstalk application, an S3 bucket is automatically created. This bucket is used to store app versions, logs, metadata, etc.
The default aws-elasticbeanstalk-ec2-role that is assigned to your Beanstalk environment has read access to this bucket.
So all you need to do is put your sensitive files in that bucket (either at the root of the bucket or in any directory structure you desire), and create a .ebextension config file to copy them over to your EC2 instances.
Here is an example:
# .ebextensions/sensitive_files.config
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["elasticbeanstalk-us-east-1-XXX"] # Replace with your bucket name
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role" # This is the default role created for you when creating a new Beanstalk environment. Change it if you are using a custom role

files:
  /etc/pki/tls/certs/server.key: # This is where the file will be copied on the EC2 instances
    mode: "000400" # Apply restrictive permissions to the file
    owner: root # Or nodejs, or whatever suits your needs
    group: root # Or nodejs, or whatever suits your needs
    authentication: "S3Auth"
    source: https://s3-us-west-2.amazonaws.com/elasticbeanstalk-us-east-1-XXX/server.key # URL to the file in S3
This is documented here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-storingprivatekeys.html
Using environment variables is a good approach. Reference passwords from the environment, so in a YAML file:
password: <%= ENV['DATABASE_PASSWORD'] %>
Then set them on the instance directly with eb or the console.
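For example, assuming a reasonably recent version of the eb CLI (which provides a setenv command), or the environment's Configuration page in the console:
eb setenv DATABASE_PASSWORD=yourpassword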
You may be worried about having such sensitive information readily available in the environment. If a process compromises your system, it can probably obtain the password no matter where it is. This approach is used by many PaaS providers such as Heroku.
From their security documentation: Amazon EC2 supports TrueCrypt for file encryption and SSL for data in transit. Check out these documents:
Security Whitepaper
Features
Risk and Compliance
Best Practices
You can launch a server instance with an encrypted disk, or you can use a private repo (I think this costs money on GitHub, but there are alternatives).
I think the best way is not to hack AWS (setting hooks, uploading files). Just use ENV variables.
Use the dotenv gem for development (i.e. <%= ENV['LOCAL_DB_USERNAME'] %> in config/database.yml) and the default AWS console to set variables in Beanstalk.
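For instance, a local .env file read by dotenv might look like this (placeholder values, kept out of version control):
# .env (local development only)
LOCAL_DB_USERNAME=postgres
LOCAL_DB_PASSWORD=changeme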
I know this is an old post, but I couldn't find another answer anywhere, so I burned the midnight oil to come up with one. I hope it saves you several hours.
I agree with the devs who posted about how much of a PITA it is to force devs to put ENV vars in their local dev database.yml. I know the dotenv gem is nice, but you still have to maintain the ENV vars, which adds to the time it takes to bring up a station.
My approach is to store a database.yml file on S3, in the bucket created by EB, and then use a .ebextensions config file to create a script in the server's pre-deploy hook directory so that it is executed after the app is unzipped into the staging directory but before asset compilation--which, of course, blows up without a database.yml.
The .config file is
# .ebextensions/sensitive_files.config
# Create a prehook command to copy database.yml from S3
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/03_copy_database.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      set -xe
      EB_APP_STAGING_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k app_staging_dir)
      echo EB_APP_STAGING_DIR is ${EB_APP_STAGING_DIR} >/tmp/copy.log
      ls -l ${EB_APP_STAGING_DIR} >>/tmp/copy.log
      aws s3 cp s3://elasticbeanstalk-us-east-1-XXX/database.yml ${EB_APP_STAGING_DIR}/config/database.yml >>/tmp/copy.log 2>>/tmp/copy.log
Notes
Of course the XXX in the bucket name is a sequence number created by EB. You'll have to check S3 to see the name of your bucket.
The name of the script file I create is important. These scripts are executed in alphabetical order, so I was careful to name it so that it sorts before the asset_compilation script.
Obviously, redirecting output to /tmp/copy.log is optional.
The post that helped me the most was at Customizing ElasticBeanstalk deployment hooks, posted by Kenta#AWS. Thanks Kenta!