I want to use the AWS credentials in my code to connect to MongoDB:
const user = process.env.AWS_ACCESS_KEY_ID
const pass = encodeURIComponent(
  process.env.AWS_SECRET_ACCESS_KEY
)
But both come back undefined, even though my Serverless credentials are configured.
You have to add it to the environment section of your serverless.yml file. Here is an example:
provider:
  environment:
    AWS_SECRET_ACCESS_KEY: 'asdasda123123XXX'
Now you can access it in your code as process.env.AWS_SECRET_ACCESS_KEY.
Related
I use Active Storage with Azure Storage for all the user files uploaded to my application, which is built on Rails v6. It works perfectly fine.
Now I want to copy the files uploaded to my application to another Azure storage account as well. How can I go about this? Any help would be much appreciated.
There are a couple of options available. One is to use mirrors in the storage.yml file, where you can configure multiple storage services, even from different cloud providers such as Amazon and Azure. This is done at the configuration level:
s3_mirror:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""

development:
  service: AzureStorage
  root: <%= Rails.root.join("storage") %>
  container: development
  storage_account_name: my_account_name
  storage_access_key: my_access_key

mirror:
  service: Mirror
  primary: development
  mirrors:
    - s3_mirror
Another way, which was more usable for me, is the following piece of code (suggested by a well-known AI app):
# First, add the gem to your Gemfile
gem 'azure-storage-ruby'

# Then, in your code, you can use the Azure::Storage::Client class to connect
# to a storage account.
# Replace the ACCOUNT_NAME and ACCOUNT_KEY placeholders with each storage
# account's name and key
client1 = Azure::Storage::Client.create(storage_account_name: 'ACCOUNT_NAME_1',
                                         storage_access_key: 'ACCOUNT_KEY_1')
client2 = Azure::Storage::Client.create(storage_account_name: 'ACCOUNT_NAME_2',
                                         storage_access_key: 'ACCOUNT_KEY_2')

# You can then use the client objects to perform operations on the storage accounts
blob_client1 = client1.blob_client
blob_client2 = client2.blob_client

# For example, you can list the blobs in a container like this:
blob_client1.list_blobs('my-container').each do |blob|
  puts blob.name
end
blob_client2.list_blobs('my-container').each do |blob|
  puts blob.name
end
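If the goal is to mirror files into the second account as well, here is a minimal sketch building on the two clients above (assuming both accounts already contain a 'my-container' container; the container name is only a placeholder):
# Copy each blob from the first account into the second by downloading and re-uploading it
blob_client1.list_blobs('my-container').each do |blob|
  _props, content = blob_client1.get_blob('my-container', blob.name)
  blob_client2.create_block_blob('my-container', blob.name, content)
end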
I have an app running on an Ubuntu server, with a production mode and a staging mode.
The problem is that uploads and retrievals of images on the production site are going to the same S3 bucket as staging, even though my configurations are set up differently.
production.rb
config.s3_bucket = 'bucket-production'
config.s3_path = 'https://bucket-production.s3.us-east-2.amazonaws.com/'
staging.rb && development.rb
config.s3_bucket = 'bucket-staging'
config.s3_path = 'https://bucket-staging.s3.us-east-2.amazonaws.com/'
storage.yml
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-2
  bucket: bucket-staging
  endpoint: http://bucket-staging.us-east-2.amazonaws.com
I'm thinking it could be something in storage.yml, but I deleted that entire file and restarted the local server and it didn't change anything. Is storage.yml production-only?
Also, production is writing its logs to the staging log.
I would like to ask: are the prod and staging servers (Ubuntu) running in AWS? If yes, you should have an IAM role attached to each server, fine-grained so that each environment's application can only access its own S3 bucket. Storing an access_key_id and secret_access_key is not best practice; an IAM role can take care of that. I would also add that if the server is in a private subnet, you need a NAT gateway or a VPC S3 endpoint to reach the bucket.
Also try logging the S3 connection in prod mode to see how it is acquiring credentials to access the bucket. You might be granting access through some ENV variables or an IAM role. The best way to see is to run
printenv
before and after the S3 connection to see which variables are set and which bucket access is provided.
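As a rough sketch of that kind of check (assuming Active Storage handles the uploads here; the credential keys below are the ones from the question's storage.yml), you can also open a Rails console in each environment and inspect what is actually wired up:
# In a console started with RAILS_ENV=production, print the bucket Active Storage
# resolved from storage.yml and the access key it is using
puts ActiveStorage::Blob.service.bucket.name
puts Rails.application.credentials.dig(:aws, :access_key_id)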
Thanks
Ashish
I'm trying to configure ActiveStorage to use an S3 bucket as a storage backend; however, I don't want to pass any of access_key_id, secret_access_key, or region. Instead, I'd like to use a previously defined IAM role. Such a configuration is mentioned here. It reads (I've added bold):
If you want to use environment variables, standard SDK configuration files, profiles, IAM instance profiles or task roles, you can omit the access_key_id, secret_access_key, and region keys in the example above. The Amazon S3 Service supports all of the authentication options described in the AWS SDK documentation.
However, I cannot get it working. My storage.yml looks similar to this:
amazon:
  service: S3
  bucket: bucket_name
  credentials:
    role_arn: "linked::account::arn"
    role_session_name: "session-name"
I've run rails active_storage:install, applied the generated migrations, and set config.active_storage.service = :amazon in my app's config.
The issue is that when I'm trying to save a file, I'm getting an unexpected error:
u = User.first
s = StringIO.new
s << 'hello,world'
s.seek 0
u.csv.attach(io: s, filename: 'filename.csv')
Traceback (most recent call last):
2: from (irb):3
1: from (irb):3:in `rescue in irb_binding'
LoadError (Unable to autoload constant ActiveStorage::Blob::Analyzable, expected /usr/local/bundle/gems/activestorage-5.2.2/app/models/active_storage/blob/analyzable.rb to define it)
I'm using Rails 5.2.2.
Are you trying this code inside an AWS EC2 instance or locally on your machine?
If you check the authentication methods in AWS: https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/setup-config.html#aws-ruby-sdk-credentials-iam
You'll see the following section:
Setting Credentials Using IAM
For an Amazon Elastic Compute Cloud instance, create an AWS Identity and Access Management role, and then give your Amazon EC2 instance access to that role. For more information, see IAM Roles for Amazon EC2 in the Amazon EC2 User Guide for Linux Instances or IAM Roles for Amazon EC2 in the Amazon EC2 User Guide for Windows Instances.
This means that for this authentication method to work, you must:
Create an EC2 instance on AWS
Create an EC2 IAM Role with permissions to write to an S3 Bucket
Configure your EC2 instance by attaching the new IAM Role to it
With the role attached to the instance, your config/storage.yml file will look like this:
amazon:
  service: S3
  bucket: test-stackoverflow-bucket-app
  region: "us-west-1"
Note that region is a required parameter, you'll get an error if you skip it: https://github.com/aws/aws-sdk-ruby/issues/1240#issuecomment-231866239
I'm afraid this won't work locally; to use Active Storage locally you must set the access_key_id and secret_access_key values.
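If you want to double-check from the instance that the role is actually being picked up, a small sketch (assuming the aws-sdk-s3 gem that the Active Storage S3 service relies on is installed) is to ask the SDK for instance-profile credentials directly from a console on the EC2 box:
require 'aws-sdk-s3'

# If the IAM role is attached correctly, this resolves temporary credentials
# issued for that role, without any keys stored in the app
creds = Aws::InstanceProfileCredentials.new
puts creds.credentials.access_key_id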
I'm trying to access an s3 bucket from within the interactive ruby shell, using different AWS credentials than the ones the application is configured with.
I've tried manually setting a new s3 client using the other key/secret, but I get Access Denied as the call defaults to using the application's preconfigured AWS account. Modifying the application's configured credentials is not an option, as it's needed to simultaneously access different AWS resources.
Here's what I'm trying in the ruby shell:
s3_2 = AWS::S3.new(:access_key_id => "<key>", :secret_access_key => "<secret>")
bucket = s3_2.buckets['<bucket_name>']
bucket.objects.each do |obj|
  puts obj.key
end
(The test just does a GET to confirm access. It works if I allow public access on the bucket, since that permits any AWS user, but not when I restrict it and try to use the new temporary user that has full S3 access on the account.)
The Rails console runs as a separate instance of the app from the server instance that uses the pre-configured credentials.
The following should update the credentials for the Rails console session only:
Aws.config.update(credentials: Aws::Credentials.new('your_access_key_id', 'your_secret_access_key'))
Alternatively, a new AWS S3 client can be initialized with its own credentials:
s3_2 = Aws::S3::Client.new(access_key_id: "<key>", secret_access_key: "<secret>")
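As a usage sketch with that v3 client (the placeholder bucket name mirrors the question's), listing the object keys looks like this:
# List object keys with the aws-sdk v3 API, the counterpart of the v1 loop in the question
s3_2.list_objects_v2(bucket: '<bucket_name>').contents.each do |obj|
  puts obj.key
end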
I have a static website and I'm trying to use Travis CI to migrate content to the S3 bucket where I'm hosting the website each time I commit changes to GitHub. To support this, I have the following .travis.yml file:
language: python
python: '2.7'
install: true
script: true
deploy:
  provider: s3
  access_key_id: XXXXX
  secret_access_key: YYYYY
  bucket: thug-r.life
  skip_cleanup: true
  region: us-east-1
  local_dir: public
which works fine. Except I have my secret in plain text on GitHub in a public repo. So...that's bad. Travis CI has a section on encrypting keys (https://docs.travis-ci.com/user/encryption-keys/) which I followed. Using the CLI tool
travis encrypt secret_access_key="YYYYY" --add
which inserts at the bottom of my file
env:
  global:
    secure: ZZZZZ
So I tried to modify my original file to look like
deploy:
  secret_access_key:
    secure: ZZZZZ
But then Travis CI complains: 'The request signature we calculated does not match the signature you provided.'
So I tried encrypting without quotes
travis encrypt secret_access_key=YYYYY --add
and using the output in the same way.
How am I supposed to include the encrypted key?
All of the examples in the Travis CI help on encrypting keys (https://docs.travis-ci.com/user/encryption-keys/) were of the form:
travis encrypt SOMEVAR="secretvalue"
which, as it states, encrypts the key as well as the value. So, taking the output of the above encryption and using it as above,
deploy:
  secret_access_key:
    secure: ZZZZZ
decrypts to be
deploy:
  secret_access_key: secret_access_key: YYYYY
which is what was causing the errors. Instead, what I ended up doing that worked was:
travis encrypt "YYYYY" --add
and used it in the .travis.yml file as
deploy:
  secret_access_key:
    secure: ZZZZZ
which ended up being accepted.
tl;dr Don't include the key when encrypting the secret_access_key