Rails Active Storage push to DigitalOcean Spaces

Hi, I'm trying to get Active Storage to push to a DigitalOcean Space. However, I'm finding that the upload URL is being changed to amazonaws.com even though I've set the endpoint to DigitalOcean.
Here is what I have in storage.yml:
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: sfo2
  bucket: redacted_bucket_name
  endpoint: https://sfo2.digitaloceanspaces.com
When I try to upload a file, I get the following error:
Aws::Errors::NoSuchEndpointError (Encountered a `SocketError` while attempting to connect to:
https://redacted_bucket_name.s3.sfo2.amazonaws.com/a8278561714955c23ee99
In my Gemfile I have: gem 'aws-sdk-s3'
I've followed the directions found here, and I'm still getting the error. Is it possible that there's a new way to do this?

I just set something like this up myself a few days ago. When you check the URL in the error, https://redacted_bucket_name.s3.sfo2.amazonaws.com/a8278561714955c23ee99, it's different from the actual endpoint you set up, https://redacted_bucket_name.sfo2.digitaloceanspaces.com/a8278561714955c23ee99.
The error is being caused by an invalid endpoint you're hitting: the s3 right before the .sfo2 is throwing the endpoint off. Did you happen to add s3 to your Spaces config? Check your Spaces dashboard and try to get the endpoint set up properly.
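One way to confirm which endpoint the SDK actually resolved is to check the configured service from a Rails console. A quick sketch (it leans on the S3 service exposing its bucket, and the bucket its client, which is how I understand the Active Storage / aws-sdk-s3 internals):
# Rails console sketch: inspect the endpoint Active Storage's S3 service resolved
service = ActiveStorage::Blob.service            # ActiveStorage::Service::S3Service
puts service.bucket.name                         # should be your Space's name
puts service.bucket.client.config.endpoint       # expect https://sfo2.digitaloceanspaces.com, not amazonaws.com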

I had this same challenge when working on a Rails 6 application on Ubuntu 20.04.
Here's how I fixed mine:
Firstly, create Spaces access keys in your DigitalOcean console. This link should help - DigitalOcean Spaces API
Secondly, add a new configuration for DigitalOcean Spaces in your config/storage.yml file. Just after the local storage definition:
# The SPACES_* values come from environment variables (see the .env example below)
digital_ocean:
  service: S3
  access_key_id: <%= ENV['SPACES_ACCESS_KEY_ID'] %>
  secret_access_key: <%= ENV['SPACES_SECRET_ACCESS_KEY'] %>
  region: <%= ENV['SPACES_REGION'] %>
  bucket: <%= ENV['SPACES_BUCKET_NAME'] %>
  endpoint: <%= ENV['SPACES_ENDPOINT'] %>
Note: You can give your entry any name, say digital_ocean_spaces or something else. I named mine digital_ocean.
Thirdly, modify the config.active_storage.service configuration in the config/environments/production.rb file from:
config.active_storage.service = :local
to
config.active_storage.service = :digital_ocean
Finally, define these environment variables in your config/application.yml file (if you're using the Figaro gem) or your .env file (if you're using the dotenv gem). In my case I was using the dotenv gem, so my .env file looked like this:
SPACES_ACCESS_KEY_ID=E4TFWVPDBLRTLUNZEIFMR
SPACES_SECRET_ACCESS_KEY=BBefjTJTFHYVNThun7GUPCeT2rNDJ4UxGLiSTM70Ac3NR
SPACES_REGION=nyc3
SPACES_BUCKET_NAME=my-spaces
SPACES_ENDPOINT=https://nyc3.digitaloceanspaces.com
That's all.
I hope this helps
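To confirm the new service is actually being used, a quick test upload from the Rails console should end up on Spaces. A sketch (create_and_upload! and Blob#url are the Rails 6.1+ names; on 6.0 they were create_after_upload! and service_url):
# Rails console sketch: upload a small blob and check where it lands
blob = ActiveStorage::Blob.create_and_upload!(
  io: StringIO.new("hello spaces"),
  filename: "hello.txt",
  content_type: "text/plain"
)
puts blob.url   # should point at your *.digitaloceanspaces.com endpoint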

Related

Rails 6 Active Storage: Digital Ocean Spaces does not honor public-read ACL

I'm using Rails 6 Active Storage to upload images to a DigitalOcean Spaces bucket.
My storage.yml:
digitalocean:
  service: S3
  access_key_id: <%= Credential.digitalocean_access_key_id %>
  secret_access_key: <%= Credential.digitalocean_secret_access_key %>
  endpoint: https://sfo2.digitaloceanspaces.com
  region: sfo2
  bucket: mybucket
  upload:
    acl: "public-read"
As you can see I do specify a "public-read" upload ACL.
Images do upload fine using Active Storage direct upload, yet in the bucket, the file permission is Private, not Public.
Any help is appreciated, thank you.
I guess it's an issue with DigitalOcean's S3 implementation! Try this command to force the ACL to public:
s3cmd --access_key=YOUR_ACCESS_KEY --secret_key=YOUR_SECRET_KEY --host=YOUR_BUCKET_REGION.digitaloceanspaces.com --host-bucket=YOUR_BUCKET_NAME.YOUR_BUCKET_REGION.digitaloceanspaces.com --region=YOUR_BUCKET_REGION setacl s3://YOUR_BUCKET_NAME --acl-public
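If you are on Rails 6.1 or later, another option worth trying is the public: true flag on the service entry, which tells the built-in S3 service to upload with a public ACL instead of passing upload options yourself. A sketch of the storage.yml entry (I haven't verified how Spaces handles it):
digitalocean:
  service: S3
  access_key_id: <%= Credential.digitalocean_access_key_id %>
  secret_access_key: <%= Credential.digitalocean_secret_access_key %>
  endpoint: https://sfo2.digitaloceanspaces.com
  region: sfo2
  bucket: mybucket
  public: true   # Rails 6.1+; uploads are sent with a public-read ACL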

Production Server Uploading to Staging S3

I have an app running on an Ubuntu server. I have a production mode and a staging mode.
The problem is that uploads and retrievals of images from the production site are going to the same S3 bucket as staging, even though I have my configurations set up differently.
production.rb
config.s3_bucket = 'bucket-production'
config.s3_path = 'https://bucket-production.s3.us-east-2.amazonaws.com/'
staging.rb && development.rb
config.s3_bucket = 'bucket-staging'
config.s3_path = 'https://bucket-staging.s3.us-east-2.amazonaws.com/'
storage.yml
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-2
  bucket: bucket-staging
  endpoint: http://bucket-staging.us-east-2.amazonaws.com
I'm thinking it could be something in storage.yml, but I deleted that entire file and restarted the local server and it didn't change anything. Is storage.yml production-only?
Also, my logs are going to staging from production.
I would like to ask whether the prod/staging server (Ubuntu) is running in AWS. If yes, you should have an IAM role attached to the server, fine-grained so that each environment's application can only access its own S3 bucket. Also, storing access_key_id and secret_access_key is not best practice; an IAM role can take care of that. And if the server is in a private subnet, you need to look at either a NAT gateway or a VPC S3 endpoint to reach the bucket.
Also try printing logs of the S3 connection in prod mode to see how it acquires credentials to access the bucket. You might be granting access through some ENV variables or an IAM role. The best way to see is to run
printenv
before and after the S3 connection to see the variables and which bucket access is provided.
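A quick way to see which bucket Active Storage itself resolves in each environment is to check it from a Rails console (note that Active Storage reads the bucket from config/storage.yml, not from custom settings like config.s3_bucket). A sketch, run once with RAILS_ENV=production and once with RAILS_ENV=staging:
# Rails console sketch: which service and bucket is this environment using?
puts Rails.application.config.active_storage.service   # e.g. :amazon
puts ActiveStorage::Blob.service.bucket.name            # the bucket the S3 service points at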
Thanks
Ashish

Status 0 on uploading image with Rails and Backblaze B2 (S3)

When I try to upload a photo in my Rails app I get the error "Error storing "image.jpg" Status: 0". I am using Active Storage with Backblaze B2, which is S3-compatible. I have set up the bucket already.
I have tried changing regions but nothing happened.
This is in my storage.yml:
backblaze:
  service: S3
  access_key_id: keyID from my account
  secret_access_key: master key from my account
  region: us-east-1
  bucket: my bucketID
  endpoint: b2 upload url I got from b2_get_upload_url
  force_path_style: true
I have set config.active_storage.service = :backblaze in both development and production environments.
This is code for file input in my form:
<%= f.file_field :image, direct_upload: true, multiple: false %>
I want to see the image I posted in the file browser of my bucket, but it is not uploading.
Status 0 (from my experience) relates to a failure to do with CORS. We use Azure as our storage platform (with Rails / ActiveStorage), and the root cause was the blob container not having a CORS policy configured. I'm not sure how this works with Backblaze, but if you configure a CORS policy and add the domain that your Rails app runs on (with PUT/POST enabled), that should help.
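For an S3-style bucket, the CORS rules for Active Storage direct uploads look roughly like the sketch below (the endpoint, key names, and origin are placeholders; I'm not sure whether Backblaze accepts this through its S3-compatible API or only through its own b2 CORS rules, so check their docs):
require "aws-sdk-s3"

# Sketch: allow direct uploads (PUT) to the bucket from your app's origin
client = Aws::S3::Client.new(
  access_key_id: ENV["B2_KEY_ID"],                      # placeholder env var names
  secret_access_key: ENV["B2_APPLICATION_KEY"],
  region: "us-east-1",
  endpoint: "https://s3.us-west-002.backblazeb2.com"    # example endpoint; use your bucket's
)

client.put_bucket_cors(
  bucket: "my-bucket",
  cors_configuration: {
    cors_rules: [{
      allowed_origins: ["https://www.example.com"],     # the domain your Rails app runs on
      allowed_methods: ["PUT", "POST"],
      allowed_headers: ["Content-Type", "Content-MD5", "Content-Disposition"],
      max_age_seconds: 3600
    }]
  }
)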

Rails Generate controller aws error missing bucket name

I am attempting to create a Users controller in my Ruby on Rails project, which I have also configured with Heroku and an AWS S3 bucket. I set my .env and my heroku local with the S3_BUCKET, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY. I also set my config/initializers/aws.rb file to look like this:
Aws.config.update({
  region: 'us-east-1',
  credentials: Aws::Credentials.new(ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY']),
})
S3_BUCKET = Aws::S3::Resource.new.bucket(ENV['S3_BUCKET'])
I have bundle installed the aws gem like this:
gem 'aws-sdk', '~> 3'
However when I run the command
rails g controller Users new
I get the following error in my terminal:
aws-sdk-s3/bucket.rb:658:in `extract_name': missing required option :name (ArgumentError)
I looked at that file and it is trying to find the S3 bucket name, but I have set that already in .env and heroku local. Is there some other place where this needs to be set? None of the guides I have read mention this error.
Hi, please check whether you have specified the right credentials and bucket name. Also, make sure you have provided the right region. Try the code below:
s3 = Aws::S3::Resource.new(
  credentials: Aws::Credentials.new(ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY']),
  region: 'us-west-1'
)
obj = s3.bucket(ENV['S3_BUCKET']).object('key')
If you want to upload a file:
obj.upload_file(file, acl: 'public-read')
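If the error only shows up when running generators or rake tasks, it may simply be that ENV['S3_BUCKET'] isn't loaded at that point (for example, if dotenv isn't loaded before the initializers run). A small guard in the initializer makes that visible; a sketch reusing the constant name from the question:
# config/initializers/aws.rb (sketch)
Aws.config.update(
  region: "us-east-1",
  credentials: Aws::Credentials.new(ENV["AWS_ACCESS_KEY_ID"], ENV["AWS_SECRET_ACCESS_KEY"])
)

bucket_name = ENV["S3_BUCKET"]
if bucket_name.nil? || bucket_name.empty?
  warn "S3_BUCKET is not set; skipping S3 setup"   # generators no longer crash here
else
  S3_BUCKET = Aws::S3::Resource.new.bucket(bucket_name)
end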
This should help you; I have used it like this in my project.
1. Create the file aws.rb in your /config/initializers folder.
2. Then copy the code below:
S3Client = Aws::S3::Client.new(
  access_key_id: 'ACCESS_KEY_ID',
  secret_access_key: 'SECRET_ACCESS_KEY',
  region: 'REGION'
)
That's all, this works.
Happy coding :)

keystonejs cloudinary invalid signature error

I have implemented a Cloudinary type in one of my models, but get this error back when I try and save it to Cloudinary:
Image upload failed - Invalid Signature ea4401c2ebf292208d28f9dc88c5ff1c4e73761d.
String to sign - 'tags=trial-images_image,trial-images_image_55ba9896c6d05b8704802f0a,dev&timestamp=1438292137'.
I'm not sure what to do about it, anyone experience this?
You should make sure to calculate the signature correctly. Specifically, you should sign both the tags and the timestamp (including your api_secret of course).
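For reference, the signature Cloudinary expects is a SHA-1 hex digest of the parameters (sorted and serialized as a query string) with your api_secret appended; the gem normally builds this for you, so an Invalid Signature error usually points at a wrong or missing api_secret. A rough sketch with placeholder values:
require "digest"

# Sketch of Cloudinary's signing scheme (placeholder values)
api_secret     = ENV.fetch("CLOUDINARY_API_SECRET", "my_api_secret")
params_to_sign = { tags: "trial-images_image,dev", timestamp: Time.now.to_i }

string_to_sign = params_to_sign.sort.map { |k, v| "#{k}=#{v}" }.join("&")
signature      = Digest::SHA1.hexdigest(string_to_sign + api_secret)
puts signature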
I had the exact same issue.
Please double check that you have the configuration parameters (cloud name, api key, api secret) correctly set up. They can be found in the management console of your Cloudinary account (Dashboard > Account Details).
As per their documentation:
(http://cloudinary.com/documentation/rails_additional_topics#configuration_options)
Configuration parameters can be globally set using a cloudinary.yml configuration file, located under the config directory of your Rails project. etc...
Here's an example of a cloudinary.yml file:
production:
  cloud_name: "sample"
  api_key: "874837483274837"
  api_secret: "a676b67565c6767a6767d6767f676fe1"
etc...
... Another configuration option is to use a Rails initializer file. You can place a file named cloudinary.rb in the /config/initializers folder of your Rails project. Here's a sample initializer code:
Cloudinary.config do |config|
  config.cloud_name = 'sample'
  config.api_key = '874837483274837'
  config.api_secret = 'a676b67565c6767a6767d6767f676fe1'
  config.cdn_subdomain = true
end
One last configuration option allows you to dynamically configure the Cloudinary library by defining the CLOUDINARY_URL environment variable. The configuration URL is available in the Management Console's dashboard of your account. When using Cloudinary through a PaaS add-on (e.g., Heroku), this environment variable is automatically defined in your deployment environment. Here's a sample value:
CLOUDINARY_URL=cloudinary://874837483274837:a676b67565c6767a6767d6767f676fe1@sample
How I actually solved the issue
I solved the issue by adopting (and slightly modifying) the first option, which is to create cloudinary.yml file in config directory and write the following code:
(config/cloudinary.yml)
development:
  cloud_name: <%= ENV["CLOUD_NAME"] %>
  api_key: <%= ENV["API_KEY"] %>
  api_secret: <%= ENV["API_SECRET"] %>
test:
  cloud_name: <%= ENV["CLOUD_NAME"] %>
  api_key: <%= ENV["API_KEY"] %>
  api_secret: <%= ENV["API_SECRET"] %>
production:
  cloud_name: <%= ENV["CLOUD_NAME"] %>
  api_key: <%= ENV["API_KEY"] %>
  api_secret: <%= ENV["API_SECRET"] %>
Please note that the configuration parameters (cloud name, api key, api secret) are set as environment variables (CLOUD_NAME, API_KEY, API_SECRET) to prevent them from being exposed when the code is shared publicly. (You don't want to hard-code the sensitive information.)
You can set environment variables in bash by editing the .bash_profile file, which is located (and hidden) in your home directory:
(.bash_profile)
.....
export CLOUD_NAME="your cloud name"
export API_KEY="your api key"
export API_SECRET="your api secret"
.....
You can check that these environment variables are correctly set by typing, for example, echo $CLOUD_NAME in your terminal (you may need to quit and restart the terminal). If it's successful, the output will look something like:
echo $CLOUD_NAME
> your cloud name
Finally, if you are planning to deploy your app to heroku, you may also want to add cloudinary as an addon, which is free for a starter option, by typing the following command in the terminal:
heroku addons:create cloudinary:starter
Putting this all together might solve your issue.
Last but not least, I found the following blog post quite useful:
http://www.uberobert.com/rails_cloudinary_carrierwave/
It explains how you can use cloudinary and carrierwave to upload and manipulate the images on your application.
Hope it helps!
