storage.yml
test:
  service: Disk
  root: <%= Rails.root.join("tmp/storage") %>
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>
amazon:
  service: S3
  access_key_id: <%= ENV['AWS_ACCESS_KEY_ID'] %>
  secret_access_key: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
  region: <%= ENV['AWS_REGION'] %>
  bucket: <%= ENV['S3_BUCKET_NAME'] %>
config/environments/development.rb
-----------------------------------------------
# Store uploaded files on the local file system (see config/storage.yml for options)
config.active_storage.service = :amazon
Watir Chrome Download
-----------------------------------------
prefs = {
  download: {
    prompt_for_download: false,
    default_directory: '/path/to/dir'
  }
}
b = Watir::Browser.new :chrome, options: { prefs: prefs }
**My question: how do I set default_directory to the amazon service in prefs for the Watir Chrome download?**
Is this correct?
prefs = {
  download: {
    prompt_for_download: false,
    default_directory: amazon
  }
}
Need help :)
If you specify only a directory name, Chrome will create that directory relative to your current working directory. Give the full absolute path there and it will work; I am using this and it works perfectly for me. Note that default_directory must point to a local filesystem path, not a remote storage service.
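To make that concrete, here is a minimal sketch (the "downloads" directory name is a placeholder): Chrome cannot download straight into an S3 service, so you point default_directory at an absolute local path and upload the file to S3 afterwards.

```ruby
require 'fileutils'

# default_directory must be an absolute LOCAL path; it cannot name a remote
# service such as the :amazon Active Storage service. "downloads" is a
# placeholder directory name.
download_dir = File.expand_path("downloads", Dir.pwd)
FileUtils.mkdir_p(download_dir) # make sure the directory exists up front

prefs = {
  download: {
    prompt_for_download: false,
    default_directory: download_dir
  }
}

# b = Watir::Browser.new :chrome, options: { prefs: prefs }
# ...after the download finishes, read the file from download_dir and
# upload it to S3 (e.g. with Active Storage or aws-sdk-s3).
```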
Related
I am trying to upload a photo to S3 using Rails Active Storage.
I can attach a photo:
user.photo.attach(io: File.open('spec/images/filename.png'), filename: 'filename.png')
I can save the user, get the photo's service URL, and see it in the browser and in my bucket:
user.photo.service_url
However, if I restart my console and try to get the service url, I receive the following error:
Module::DelegationError (service_url delegated to attachment, but attachment is nil)
Here are my bucket settings:
storage.yml:
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-2
  bucket: <%= Rails.application.credentials.dig(:aws, :bucket) %>
application.rb:
config.active_storage.service = :amazon
user.rb:
has_one_attached :photo
I am also having trouble using public: true in the storage.yml file.
I receive the following error if I try to set the config:
ArgumentError (Cannot load `Rails.config.active_storage.service`:)
invalid configuration option `:public'
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-2
  bucket: <%= Rails.application.credentials.dig(:aws, :bucket) %>
  public: true
I also wanted to upload my files to AWS S3 and have them publicly available.
I ran into this issue as well and found the following:
ArgumentError (Cannot load `Rails.config.active_storage.service`:)
invalid configuration option `:public'
comes from this file in the aws-sdk-ruby gem. As per the error message, the aws-sdk-ruby gem does not support the public: true option.
I used the following work-around (special thanks to this article):
I updated my storage.yml to:
public_amazon:
  service: S3
  access_key_id: some_key
  secret_access_key: some_secret
  bucket: some-bucket-name
  region: some-region
  upload:
    acl: "public-read"
The above sets the uploaded file permissions to be public.
Retrieve the public URL like this:
user.photo.attach(params[:file])
url = user.photo.service.send(:object_for, user.photo.key).public_url
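As an aside: since Rails 6.1, Active Storage itself accepts a public: true option on the service (uploads get a public ACL and URL generation returns permanent public URLs), so on newer apps the upload/acl workaround may be unnecessary. A sketch, with placeholder credentials, assuming the bucket allows public ACLs:

```yaml
# Rails 6.1+ only; credentials and bucket name are placeholders.
public_amazon:
  service: S3
  access_key_id: some_key
  secret_access_key: some_secret
  region: some-region
  bucket: some-bucket-name
  public: true
```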
An S3 bucket with Transfer Acceleration enabled allows you to upload and download faster. The setup should be simple in theory:
The endpoint changes from:
mybucket.s3.us-east-1.amazonaws.com
to:
mybucket.s3-accelerate.amazonaws.com
In Ruby on Rails config/storage.yml the environment variables look like this:
amazon:
  service: S3
  access_key_id: <%= ENV['AWS_ACCESS_KEY_ID'] %>
  secret_access_key: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
  region: <%= ENV['AWS_REGION'] %>
  bucket: <%= ENV['AWS_BUCKET'] %>
The problem is that the pattern is different, so I can't just change AWS_REGION: the accelerated endpoint drops the region and replaces the .s3. segment with .s3-accelerate. How do I implement S3 Transfer Acceleration with Ruby on Rails and Active Storage?
I had to add this below the bucket key:
use_accelerate_endpoint: true
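So the full service entry looks like the sketch below. use_accelerate_endpoint is an option of the underlying aws-sdk-s3 client; Active Storage forwards extra service keys to that client, and the SDK then builds the s3-accelerate hostname itself, so the endpoint never appears in the config (same ENV variables as in the question):

```yaml
amazon:
  service: S3
  access_key_id: <%= ENV['AWS_ACCESS_KEY_ID'] %>
  secret_access_key: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
  region: <%= ENV['AWS_REGION'] %>   # still required by the SDK for signing
  bucket: <%= ENV['AWS_BUCKET'] %>
  use_accelerate_endpoint: true      # requests go to <bucket>.s3-accelerate.amazonaws.com
```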
I'm using the Cloudinary gem to allow a user to upload their avatar. In the docs, they offer a few ways to supply your api_key and api_secret; however, I'm unable to get it to work correctly with Rails credentials.
I've tried with the cloudinary.yml:
---
development:
  cloud_name: test_cloud
  api_key: <%= Rails.application.credentials.cloudinary[:api_key] %>
  api_secret: <%= Rails.application.credentials.cloudinary[:api_secret] %>
  enhance_image_tag: true
  static_file_support: false
production:
  cloud_name: test_cloud
  api_key: <%= Rails.application.credentials.cloudinary[:api_key] %>
  api_secret: <%= Rails.application.credentials.cloudinary[:api_secret] %>
  enhance_image_tag: true
  static_file_support: true
test:
  cloud_name: test_cloud
  api_key: <%= Rails.application.credentials.cloudinary[:api_key] %>
  api_secret: <%= Rails.application.credentials.cloudinary[:api_secret] %>
  enhance_image_tag: true
  static_file_support: false
And I've also tried using a cloudinary.rb initializer:
Cloudinary.config do |config|
  config.cloud_name = "test_cloud"
  config.api_key = Rails.application.credentials.cloudinary[:api_key]
  config.api_secret = Rails.application.credentials.cloudinary[:api_secret]
  config.enhance_image_tag = true
  config.static_file_support = false
end
When I attempt to upload the image, I receive a 500 error: CloudinaryException (Must supply api_key). I know the upload itself works, because hard-coding the values into cloudinary.yml succeeds.
I've also tried using the .fetch method mentioned below; however, I receive key not found: :api_secret (KeyError).
How can I use Rails credentials within either of these files?
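One likely cause of both errors is that the cloudinary: key is simply absent from the decrypted credentials for the environment you are running in (each environment can read a different credentials file, so check with rails credentials:edit for that environment). A plain-Ruby sketch, using an illustrative hash standing in for the credentials, of why dig degrades gracefully where fetch and [] do not:

```ruby
# Stand-in for the decrypted credentials; imagine :cloudinary is missing in
# the environment that fails. Keys and values are illustrative only.
creds = { aws: { access_key_id: "abc" } }

creds.dig(:aws, :access_key_id)   # present key: returns the value
creds.dig(:cloudinary, :api_key)  # missing key: returns nil, no exception

begin
  creds.fetch(:cloudinary).fetch(:api_secret) # fetch raises on a missing key
rescue KeyError => e
  e.message # "key not found: :cloudinary"
end
```

With dig you can then fail with a clear error of your own (or fall back to ENV) instead of a KeyError deep inside an initializer.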
Does anyone know if there is a way to configure custom buckets for specific attachments?
class MyModel < ApplicationRecord
  ...
  has_one_attached :logo, bucket: 'custom_bucket'
  ...
end
Although there isn't a way to pass a bucket option directly, you can pretty easily add multiple Active Storage service configurations, one per bucket (per-attachment services were introduced in Rails 6.1):
https://edgeguides.rubyonrails.org/active_storage_overview.html#attaching-files-to-records
For example, you might have an "amazon_s3_cold" and an "amazon_s3_hot" service; they share all the same configuration aside from the bucket. You can then configure the buckets accordingly on AWS.
# config/storage.yml
amazon_s3_hot:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-1
  bucket: my_hot_bucket
amazon_s3_cold:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-1
  bucket: my_cold_bucket
# app/models
class User < ApplicationRecord
  has_one_attached :avatar, service: :amazon_s3_hot
end

class DocumentRecord < ApplicationRecord
  has_one_attached :document_upload, service: :amazon_s3_cold
end
Note - hot/cold doesn't apply to the question directly, but provides some context: hot/cold storage is a cloud-storage concept that trades off cost against access frequency.
You could follow a similar pattern to how a traditional database.yml file inherits settings, which is just YAML anchors and aliases. My storage.yml looks roughly like the following, which lets me store each Active Storage attachment type in its own bucket.
The S3 service, which is what powers the DigitalOcean service, requires a bucket name; I've specified 'default', but you could call it 'all' or 'general' and then override only the ones you care about.
(storage.yml)
do: &do
  service: S3
  endpoint: <%= Rails.application.credentials.dig(:digitalocean, :endpoint) %>
  access_key_id: <%= Rails.application.credentials.dig(:digitalocean, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:digitalocean, :secret_access_key) %>
  region: 'nyc3'
  bucket: default
do_user_uploads:
  <<: *do
  bucket: user_uploads
(user.rb)
has_one_attached :upload, service: :do_user_uploads
Hope that helps, I came here looking for the same answer!
There isn’t, sorry. Active Storage is designed for use with a single bucket.
In Rails 4.0.2, I am using the s3_direct_upload and aws-sdk gems to upload files directly to an S3 bucket. In the development environment it works fine, but in production it throws the following error:
ActionView::Template::Error (no implicit conversion of nil into String)
In views,
<%= s3_uploader_form :callback_url => create_cv_url, :id => "s3_uploader", :key => "cv_uploads/{unique_id}/${filename}",
    :key_starts_with => "cv_uploads/", :callback_param => "cv[direct_upload_url]", :max_file_size => 1.megabytes,
    :expiration => 24.hours.from_now.utc.iso8601 do %>
  <%= file_field_tag :file, multiple: true, :max_file_size => 1.megabytes, accept: 'application/pdf application/msword application/rtf application/doc application/docx' %>
<% end %>
<script id="template-upload" type="text/x-tmpl">
  <div id="upload_{%=o.unique_id%}" class="upload">
    <h5 class="mt1">Please Wait. <span style="color: #5f6fa0;"> {%=o.name%} </span>is processing...</h5>
    <div class="progress"><div class="progress-bar progress-bar-striped active" style="width: 0%;"></div></div>
  </div>
</script>
Here, the error points mainly to the s3_uploader_form line (in the view).
This feature follows the approach from http://blog.littleblimp.com/post/53942611764/direct-uploads-to-s3-with-rails-paperclip-and
In paperclip.rb
Paperclip::Attachment.default_options.merge!(
  url: :s3_domain_url,
  path: ':class/:attachment/:id/:style/:filename',
  storage: :s3,
  s3_credentials: Rails.configuration.aws,
  s3_permissions: :private,
  s3_protocol: 'http'
)
require 'paperclip/media_type_spoof_detector'

module Paperclip
  class MediaTypeSpoofDetector
    def spoofed?
      false
    end
  end
end
In aws.rb
require 'aws-sdk'
# Rails.configuration.aws is used by AWS, Paperclip, and S3DirectUpload
Rails.configuration.aws = YAML.load(ERB.new(File.read("#{Rails.root}/config/aws.yml")).result)[Rails.env].symbolize_keys!
AWS.config(logger: Rails.logger)
AWS.config(Rails.configuration.aws)
In s3_direct_upload.rb
S3DirectUpload.config do |c|
  c.access_key_id = Rails.configuration.aws[:access_key_id]
  c.secret_access_key = Rails.configuration.aws[:secret_access_key]
  c.bucket = Rails.configuration.aws[:bucket]
  c.region = "s3"
end
Is this caused by a configuration issue in the production environment? Please help me solve it.
I had the same problem and solved it by adding a config.yml file with my S3 credentials:
RailsApp/config.yml
# Fill in your AWS Access Key ID and Secret Access Key
# http://aws.amazon.com/security-credentials
access_key_id: xxxxxx
secret_access_key: xxxxxxx
More info: https://docs.aws.amazon.com/AWSSdkDocsRuby/latest/DeveloperGuide/ruby-dg-setup.html
The code seems OK. As skozz mentioned, one issue may be that your configuration keys are not assigned properly. Please check the AWS production keys in "/config/aws.yml".
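One way to catch this class of error early is to fail fast at boot when a required key is blank, instead of letting nil reach the form helper at render time. A minimal sketch: missing_aws_keys is a hypothetical helper, and the key names are assumed to mirror config/aws.yml.

```ruby
# Hypothetical boot-time sanity check; key names mirror config/aws.yml.
REQUIRED_AWS_KEYS = %i[access_key_id secret_access_key bucket].freeze

# Returns the keys that are nil or blank in the given config hash.
def missing_aws_keys(config)
  REQUIRED_AWS_KEYS.select { |k| config[k].nil? || config[k].to_s.strip.empty? }
end

good = { access_key_id: "AKIA...", secret_access_key: "s3cr3t", bucket: "cv-bucket" }
bad  = { access_key_id: "AKIA...", secret_access_key: nil, bucket: "" }

missing_aws_keys(good) # => []
missing_aws_keys(bad)  # => [:secret_access_key, :bucket]

# In an initializer you might then do:
# raise "Missing AWS config keys" if missing_aws_keys(Rails.configuration.aws).any?
```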
I was hitting the same problem after following this link. The thing is that I had added the initializers, but I needed to restart Rails (it was running but had not picked up the changes).
Executing this command worked for me: figaro heroku:set -e production