I am using CarrierWave with the fog-google configuration to upload files to and download them from a GCS bucket. However, I want the GCS response to return a signed URL with an expiration time.
Is there any configuration I can set so that the response from GCS contains a signed URL that expires, let's say, after an hour?
class TestUploader < CarrierWave::Uploader::Base
  storage :fog

  def fog_credentials
    {
      :provider                 => 'google',
      :google_project           => 'my project',
      :google_json_key_location => 'myCredentialFile.json'
    }
  end

  def fog_provider
    'fog/google'
  end

  def fog_directory
    '{#bucket-name}'
  end

  def store_dir
    # The original snippet was missing the `case` keyword; `file_kind` is a
    # hypothetical helper returning :file or :audio, and `file.getpath`
    # appears to be app-specific.
    case file_kind
    when :file
      "#{file.getpath}/file"
    when :audio
      "#{file.getpath}/audio"
    else
      p " Invalid file "
    end
  end
end
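For reference, CarrierWave's fog storage can already return signed URLs: when fog_public is set to false, calls to url generate an authenticated URL, and fog_authenticated_url_expiration controls how long it stays valid. A minimal sketch of an initializer, assuming a bucket named 'my-bucket' and that the service account in the JSON key file can sign URLs:

# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_provider = 'fog/google'
  config.fog_credentials = {
    :provider                 => 'google',
    :google_project           => 'my project',
    :google_json_key_location => 'myCredentialFile.json'
  }
  config.fog_directory = 'my-bucket'              # assumed bucket name
  config.fog_public    = false                    # url() now returns signed URLs
  config.fog_authenticated_url_expiration = 3600  # seconds, i.e. one hour
end

With that in place, calling url on the uploader should hand back a time-limited signed URL instead of a public one.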
I am using CarrierWave to upload images. With the default store_dir in the image_uploader.rb file, store_dir is appended to my image paths, so the images I have uploaded display fine. However, I have a database with already existing remote image URLs, and these are not displayed because store_dir is appended to their paths too and the files are not found.
For example:
It is taking "http://myapp.com/images/I/51oYEfb%2B0WL.SL160.jpg" as "/uploads/product/productimage/1/http%3A/myapp.com/images/I/51oYEfb%252B0WL.SL160.jpg"
Here is my code:
_product.html.erb
<% @products.each do |product| %>
  <li>
    <%= image_tag(product.productimage_url) if product.productimage? %>
  </li>
<% end %>
Product.rb
class Product < ActiveRecord::Base
  mount_uploader :productimage, ProductimageUploader
end
productimage_uploader.rb
class ProductimageUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick

  storage :file

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
I even tried nil as below and it still appends / to the image url:
def store_dir
  nil
end
I'm assuming you must have loaded the remote URLs into the productimage column of your products table.
Perhaps the simplest way to accomplish your goal would be to add something like a remote_url column to the products table and stop putting remote URLs in the productimage column. Then you could do something like:
class Product < ActiveRecord::Base
  def image_url
    productimage.present? ? productimage_url : remote_url
  end
end
Then change your view to:
<%= image_tag(product.image_url) if product.image_url.present? %>
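Adding that column is a one-line migration; a minimal sketch, assuming the remote_url name used above:

class AddRemoteUrlToProducts < ActiveRecord::Migration
  def change
    add_column :products, :remote_url, :string
  end
end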
If your products table is already populated with remote URLs from your app previously using something other than CarrierWave, another option that might be better would be to write a rake task to download the images and re-save them with CarrierWave. That might look something like:
require 'net/http'

Product.all.each do |product|
  temp_location = Rails.root.join('tmp', File.basename(product.attributes['productimage']))
  uri = URI(product.attributes['productimage'])
  Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http|
    request = Net::HTTP::Get.new uri
    http.request request do |response|
      # 'wb' (binary mode) so image bytes are not mangled
      File.open(temp_location, 'wb') do |file|
        response.read_body do |chunk|
          file.write chunk
        end
      end
    end
  end
  product.productimage = File.open(temp_location)
  product.save!
  File.unlink(temp_location)
end
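To run that as an actual rake task, you could wrap the loop like this (namespace and task name are hypothetical):

# lib/tasks/products.rake
namespace :products do
  desc "Download remote images and re-save them through CarrierWave"
  task :import_remote_images => :environment do
    # ... the loop above goes here ...
  end
end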
I tried to follow the 'Secure Upload' instructions for CarrierWave, which is a bit confusing because I have customized the file path somewhat. When I try to run the app, I get a 'Cannot read file' error.
Here's the route :
match "/upload_files/:tenant_id/:model/:mount_as/:id/:basename.:extension" => "documents#download", via: [:get, :post]
class ImageUploader < CarrierWave::Uploader::Base
  def store_dir
    "upload_files/#{model.tenant_id}/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
carrierwave.rb initializer:
CarrierWave.configure do |config|
  config.permissions = 0600
  config.directory_permissions = 0700
  config.root = Rails.root
end
documents controller:

def download
  path = request.fullpath
  send_file path
end
I get this error:
ActionController::MissingFile in DocumentsController#download
Cannot read file /upload_files/1/hoshin_attachment/image/3/support3_HoshinUserStatusReports_08_14_2015.pdf
Please help me find the solution.
CarrierWave seems to compute paths using a root variable and the uploader store_dir.
I wanted to store my files in a private folder under Rails.root.
To set up the root:
# config/initializers/carrierwave.rb
# Uploader settings
CarrierWave.root = Rails.root.to_s
CarrierWave::Uploader::Base.root = Rails.root.to_s
To set up the store_dir:
# In your uploader class definition
def store_dir
  "private/your_app/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
Since the files are no longer in the public folder, you need controller actions to show and download them. Let's assume the column for the uploaded file is named uploaded_file:
# In some controller
# you have to ensure @document is initialized with a before_action filter
def show_file
  send_file @document.uploaded_file.path,
            filename: @document.uploaded_file_identifier,
            disposition: :inline,
            type: @document.uploaded_file.content_type
end

def download_file
  send_file @document.uploaded_file.path,
            filename: @document.uploaded_file_identifier,
            disposition: :attachment,
            type: @document.uploaded_file.content_type
end
I have two CarrierWave uploaders - ItemUploader and ImageUploader - and am using fog.
I can upload files to S3 just fine, but destroying a record doesn't remove the files from S3.
This is my destroy action:
def destroy
  @item = Item.find(params[:id])
  @item.destroy
  respond_to do |format|
    format.html { redirect_to items_url }
    format.json { head :no_content }
  end
end
When I do item.destroy, it deletes the record from my db but it doesn't remove the file from S3 and it doesn't remove the folders.
This is a brand new S3 bucket, with vanilla settings. Also a brand new Carrierwave install.
FYI: I have tried adding @item.remove_item! and @item.remove_image! to the destroy action of the controller but that hasn't done the trick either.
Edit 1
So it seems that what happens is that it deletes only one of the attachments.
The model has this:
class Item < ActiveRecord::Base
  # image :string(255)
  # link :string(255)
  mount_uploader :link, ItemUploader
  mount_uploader :image, ImageUploader
end
So, when I delete an object in my console, it removes the object associated with the ItemUploader but not the image attached via the ImageUploader.
Why would it delete one and not the other?
It seems there is something wrong with my console - because once I delete the object via the web UI it deletes all the related objects in S3.
But if I do it via the console, it doesn't work.
I will be opening another SO question for that particular issue.
It does delete, but it can take a little time, especially if you are serving the file (image) from a CDN.
Use the AWS SDK for Ruby - https://github.com/amazonwebservices/aws-sdk-for-ruby
You can build a facade to manage your objects on S3, for example:
require 'aws-sdk'

class Facades::AmazonFacade
  attr_reader :s3

  #
  # Connection to Amazon S3
  #
  def initialize
    @s3 = AWS::S3.new(
      :access_key_id     => config['access_key_id'],
      :secret_access_key => config['secret_access_key']
    )
    @bucket = @s3.buckets[self.config['bucket']]
  end

  def config
    @@config ||= YAML::load(File.open("#{Rails.root}/config/amazon_s3.yml"))[Rails.env]
  end

  ####

  def policy(bucket, options = {})
    # Base64 policy
  end

  def signature(bucket, options = {})
    # Base64 signature
  end

  #
  # Find object and get public urls
  #
  def url_link(obj, expires)
    # Signed, time-limited URL for the object
    @bucket.objects[obj].url_for(:read, :secure => true, :expires => expires).to_s
  end

  def object_exists_on_amazon?(obj)
    @bucket.objects[obj].exists?
  end

  def object_size(obj)
    unless Rails.env.test?
      @bucket.objects[obj].content_length
    end
  end

  def object_upload_date(obj)
    @bucket.objects[obj].last_modified
  end

  #
  # create, delete objects
  #
  def store_object_on_amazon(obj, file, access)
    @bucket.objects[obj].write(file, :acl => access)
  end

  def delete_object_on_amazon(obj)
    @bucket.objects[obj].delete(:force => true)
  end
end
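A quick usage sketch (the object key here is hypothetical; it would be whatever path CarrierWave stored the file under):

facade = Facades::AmazonFacade.new
facade.url_link('uploads/item/image/1/photo.png', 10 * 60)       # signed URL, valid 10 minutes
facade.delete_object_on_amazon('uploads/item/image/1/photo.png') # remove from the bucket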
I'm following this tutorial to integrate Zencoder in my Rails 3 app: http://www.nickdesteffen.com/blog/video-encoding-with-uploadify-carrierwave-and-zencoder
The tutorial uses Rackspace for storage, but I'd like to adapt the code so that I can use Amazon S3 for storage instead. I replaced all the Rackspace info with my Amazon S3 info, but whenever I try to upload a video in my form, I get this HTTP error: "There was an error with the file you tried uploading. Please verify that it is the correct type."
What do I need to fix here to make this work?
carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'xxx',
    :aws_secret_access_key => 'xxx'
  }
  config.fog_directory = 'mybucket'
  config.fog_public = true
  config.fog_attributes = {'Cache-Control' => 'max-age=315576000'}
end
video_uploader.rb
class VideoUploader < CarrierWave::Uploader::Base
  include Rails.application.routes.url_helpers
  Rails.application.routes.default_url_options = ActionMailer::Base.default_url_options

  after :store, :zencode

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  def extension_white_list
    %w(mov avi mp4 mkv wmv mpg)
  end

  def filename
    "video.mp4" if original_filename
  end

  private

  def zencode(args)
    zencoder_response = Zencoder::Job.create(
      {:input   => 's3://mybucket/key.mp4',
       :outputs => [{:label => 'vp8 for the web',
                     :url   => 's3://mybucket/key_output.webm'}]})
    zencoder_response.body["outputs"].each do |output|
      # NOTE: this compares against "web", which never matches the
      # 'vp8 for the web' label set above
      if output["label"] == "web"
        @model.zencoder_output_id = output["id"]
        @model.processed = false
        @model.save(:validate => false)
      end
    end
  end
end
I've been working on the same issue.
Using fog for my credentials, I created my URLs like so:
bucket = AttachmentUploader.fog_directory
input = "s3://#{bucket}/#{self.path}"
base_url = "s3://#{bucket}/#{store_dir}"
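Those variables then slot straight into the job creation call; a rough sketch (the output filename is made up):

zencoder_response = Zencoder::Job.create(
  {:input   => input,
   :outputs => [{:label => 'vp8 for the web',
                 :url   => "#{base_url}/output.webm"}]})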
Take a look at my gist for more detail: https://gist.github.com/4002368
Don't forget to allow Zencoder to access your bucket via a security policy: https://app.zencoder.com/docs/guides/getting-started/working-with-s3
I've been trying to find the reason for this error for a long time and can't seem to find any...
So I have a Rails app, and I use CarrierWave for picture uploads. I also want to use Amazon S3 for file upload storage in my app.
Initially, while developing the app, I kept file uploads on :file, i.e.
image_uploader.rb
# Choose what kind of storage to use for this uploader:
storage :file
# storage :fog
Now, having finished development and wanting to put it live (I use Heroku), I decided to switch the CarrierWave storage to S3 and test it locally.
image_uploader.rb
# Choose what kind of storage to use for this uploader:
# storage :file
storage :fog
However, now when I try to upload a picture (be it for user avatar, etc) I get this error:
Excon::Errors::Forbidden in UsersController#update
Expected(200) <=> Actual(403 Forbidden)
request => {:connect_timeout=>60, :headers=>{"Content-Length"=>74577, "x-amz-acl"=>"private", "Content-Type"=>"image/png", "Date"=>"Sun, 26 Feb 2012 10:00:43 +0000", "Authorization"=>"AWS AKIAJOCDPFOU7UTT4HOQ:8ZnOy7X71nQAM87yraSI24Y5bSw=", "Host"=>"s3.amazonaws.com:443"}, :instrumentor_name=>"excon", :mock=>false, :read_timeout=>60, :retry_limit=>4, :ssl_verify_peer=>true, :write_timeout=>60, :host=>"s3.amazonaws.com", :path=>"/uploads//uploads%2Fuser%2Favatar%2F1%2Fjeffportraitmedium.png", :port=>"443", :query=>nil, :scheme=>"https", :body=>"\x89PNG\r\n\x1A\n\x00\x00\x00\rIHDR\x00\x00\x00\xC2\x00\x00\x00\xC3\b\x06\x00\x00\x00\xD0\xBD\xCE\x94\x00\x00\nCiCCPICC Profile\x00\x00x\x01\x9D\x96wTSY\x13\xC0\xEF{/\xBD\xD0\x12B\x91\x12z\rMJ\x00\x91\x12z\x91^E%$\
...
# The code you see above to the far right repeats itself a LOT
...
1#\x85\xB5\t\xFC_y~\xA6=:\xB2\xD0^\xBB~i\xBB\x82\x8F\x9B\xAF\xE7\x04m\xB2i\xFF\x17O\x94S\xF7l\x87\xA8&\x00\x00\x00\x00IEND\xAEB`\x82", :expects=>200, :idempotent=>true, :method=>"PUT"}
response => #<Excon::Response:0x007fc88ca9f3d8 @body="<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>8EFA56C0DDDC8878</RequestId><HostId>1OxWXppSSUq1MFjQwvnFptuCM3gKOuKdlQQyVSEgvzzv4Aj+r2hSFM2UUw2NYyrR</HostId></Error>", @headers={"x-amz-request-id"=>"8EFA56C0DDDC8878", "x-amz-id-2"=>"1OxWXppSSUq1MFjQwvnFptuCM3gKOuKdlQQyVSEgvzzv4Aj+r2hSFM2UUw2NYyrR", "Content-Type"=>"application/xml", "Transfer-Encoding"=>"chunked", "Date"=>"Sun, 26 Feb 2012 10:00:47 GMT", "Connection"=>"close", "Server"=>"AmazonS3"}, @status=403>
And then it says this as well for my application trace:
app/controllers/users_controller.rb:39:in `update'
And my REQUEST parameters:
{"utf8"=>"✓",
"_method"=>"put",
"authenticity_token"=>"DvADD1vYpCLcghq+EIOwVSjsfmAWCHhtA3VI5VGD/q8=",
"user"=>{"avatar"=>#<ActionDispatch::Http::UploadedFile:0x007fc88cde76f8
#original_filename="JeffPortraitMedium.png",
#content_type="image/png",
#headers="Content-Disposition: form-data; name=\"user[avatar]\";
filename=\"JeffPortraitMedium.png\"\r\nContent-Type: image/png\r\n",
#tempfile=#<File:/var/folders/vg/98nv58ss4v7gcbf8px_8dyqc0000gq/T/RackMultipart20120226- 19096-1ppu2sr>>,
"remote_avatar_url"=>"",
"name"=>"Jeff Lam ",
"email"=>"email#gmail.com",
"user_bio"=>"Tester Hello",
"shop"=>"1"},
"commit"=>"Update Changes",
"id"=>"1"}
Here's my users_controller.rb partial code:
def update
  @user = User.find(params[:id])
  if @user.update_attributes(params[:user])
    redirect_back_or root_path
    flash[:success] = "You have updated your settings successfully."
  else
    flash.now[:error] = "Sorry! We are unable to update your settings. Please check your fields and try again."
    render 'edit'
  end
end
My image_uploader.rb code
# encoding: utf-8
class ImageUploader < CarrierWave::Uploader::Base

  # Include RMagick or MiniMagick support:
  # include CarrierWave::RMagick
  include CarrierWave::MiniMagick

  # Choose what kind of storage to use for this uploader:
  # storage :file
  storage :fog

  # Override the directory where uploaded files will be stored.
  # This is a sensible default for uploaders that are meant to be mounted:
  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  # Provide a default URL as a default if there hasn't been a file uploaded:
  # def default_url
  #   "/images/fallback/" + [version_name, "default.png"].compact.join('_')
  # end

  # Process files as they are uploaded:
  # process :scale => [200, 300]
  #
  # def scale(width, height)
  #   # do something
  # end

  # Create different versions of your uploaded files:
  version :thumb do
    process resize_to_fill: [360, 250]
  end

  version :cover_photo_thumb do
    process resize_to_fill: [1170, 400]
  end

  version :event do
    process resize_to_fill: [550, 382]
  end

  version :product do
    process resize_to_fit: [226, 316]
  end

  # Add a white list of extensions which are allowed to be uploaded.
  # For images you might use something like this:
  def extension_white_list
    %w(jpg jpeg gif png)
  end

  # Override the filename of the uploaded files:
  # Avoid using model.id or version_name here, see uploader/store.rb for details.
  # def filename
  #   "something.jpg" if original_filename
  # end

  # fix for Heroku, unfortunately, it disables caching,
  # see: https://github.com/jnicklas/carrierwave/wiki/How-to%3A-Make-Carrierwave-work-on-Heroku
  def cache_dir
    "#{Rails.root}/tmp/uploads"
  end
end
Finally, my fog.rb file in the config/initializers
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',                                    # required
    :aws_access_key_id     => 'ACCESS_KEY',                             # required
    :aws_secret_access_key => 'SECRET_ACCESS_KEY/ZN5SkOUtOEHd61/Cglq9', # required
    :region                => 'Singapore'                               # optional, defaults to 'us-east-1'
  }
  config.fog_directory = 'ruuva/'  # required
  config.fog_public = false        # optional, defaults to true
end
I'm actually quite confused about some of the things in my fog.rb. Firstly, should I set my region to 'Singapore' if I created the bucket "ruuva" with region "Singapore" in my Amazon S3 account?
Thank you to anyone that can help in advance!
First, make sure you are using the right credentials by not setting a custom region or custom directory (you can create a throwaway bucket for free in the default region).
Then, I think you are not using the right name for the region. Try setting your region like this:
:region => 'ap-southeast-1'
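Putting that together, the initializer would look something like this ('ap-southeast-1' is the AWS region code for Singapore, and the bucket name goes in without a trailing slash):

CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'ACCESS_KEY',
    :aws_secret_access_key => 'SECRET_ACCESS_KEY',
    :region                => 'ap-southeast-1'
  }
  config.fog_directory = 'ruuva'  # bucket name only, no trailing slash
  config.fog_public = false
end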
We were facing the same problem and fixed it by changing the permissions of the user associated with the access key to "Power User". Check whether you actually need your user to be a Power User before putting this into production.