CarrierWave upload to Amazon S3 with wrong URL - ruby-on-rails

I am using CarrierWave with a Rails API to upload images to Amazon S3.
Media uploads correctly, into the correct folder and bucket.
PROBLEM:
CarrierWave is saving the URL for the image and the thumb in the wrong format.
The correct URL is:
https://region.amazonaws.com/bucket/folder/filename.jpeg
CarrierWave saves:
https://bucket.s3.amazonaws.com/folder/filename.jpeg
My configurations follow:
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => "AWS_KEY",
    :aws_secret_access_key => "SECRET_KEY",
    :region                => 'us-west-2'
  }
  config.fog_directory = "bucket"
end
class ImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick

  storage :fog

  def store_dir
    "folder/"
  end

  def default_url
    "/images/fallback/" + [version_name, "default.png"].compact.join('_')
  end

  version :thumb do
    process :resize_to_fill => [150, 150]
  end

  def extension_white_list
    %w(jpg jpeg gif png)
  end

  def filename
    DateTime.now.strftime('%Q') + ".jpeg"
  end
end
Help Appreciated!!

Both forms of the URL are valid. From the Amazon docs:

Amazon S3 supports both virtual-hosted–style and path-style URLs to access a bucket.

In a virtual-hosted–style URL, the bucket name is part of the domain name in the URL. For example:
http://bucket.s3.amazonaws.com
http://bucket.s3-aws-region.amazonaws.com
In a virtual-hosted–style URL, you can use either of these endpoints. If you make a request to the http://bucket.s3.amazonaws.com endpoint, the DNS has sufficient information to route your request directly to the region where your bucket resides.

In a path-style URL, the bucket name is not part of the domain (unless you use a region-specific endpoint). For example:
US East (N. Virginia) region endpoint: http://s3.amazonaws.com/bucket
Region-specific endpoint: http://s3-aws-region.amazonaws.com/bucket
In a path-style URL, the endpoint you use must match the region in which the bucket resides. For example, if your bucket is in the South America (Sao Paulo) region, you must use the http://s3-sa-east-1.amazonaws.com/bucket endpoint. If your bucket is in the US East (N. Virginia) region, you must use the http://s3.amazonaws.com/bucket endpoint.
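If you specifically want CarrierWave to hand back the path-style, region-specific form instead of the virtual-hosted one, a minimal sketch (assuming fog-aws and the bucket/region from the question) is to ask fog for path-style URLs, or to set asset_host so CarrierWave builds the prefix you want:

CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => "AWS_KEY",
    :aws_secret_access_key => "SECRET_KEY",
    :region                => 'us-west-2',
    # ask fog-aws for path-style URLs (https://s3-us-west-2.amazonaws.com/bucket/...)
    :path_style            => true
  }
  config.fog_directory = "bucket"
  # or hard-code the prefix yourself; CarrierWave prepends asset_host
  # to store_dir/filename when building URLs
  # config.asset_host = "https://s3-us-west-2.amazonaws.com/bucket"
end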

Related

amazon s3 variables not working in heroku environment rails

Hello, I have included the given code:
def store_s3(file)
  # We create a connection with Amazon S3
  AWS.config(access_key_id: ENV['S3_ACCESS_KEY'], secret_access_key: ENV['S3_SECRET'])
  s3 = AWS::S3.new
  bucket = s3.buckets[ENV['S3_BUCKET_LABELS']]
  object = bucket.objects[File.basename(file)]
  # `file` here is the path to the file, not its contents
  # file_data = File.open(file, 'rb')
  object.write(file: file)
  # save the file and return a URL to download it
  object.url_for(:read, response_content_type: 'text/csv')
end
This code works correctly locally and the data is stored in Amazon, but it stopped working after I deployed the code to the Heroku server, even though I set the variables on the server too.
Is there anything I am missing here? Please let me know the cause of the issue.
I don't see a region; in your example, is S3_Hostname your region?
For me, the region was just something like 'us-west-2'.
If you want to set up S3 with CarrierWave and the fog gem, you can do it like this in config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_directory = 'name for s3 directory'
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'your access key',
    :aws_secret_access_key => 'your secret key',
    :region                => 'your region ex: eu-west-2'
  }
end
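On Heroku the credentials usually come from config vars rather than literals. A minimal sketch, assuming the same variable names as the question (S3_ACCESS_KEY, S3_SECRET, S3_BUCKET_LABELS) plus a hypothetical S3_REGION var you would set yourself; ENV.fetch makes a missing variable fail loudly at boot instead of silently sending blank credentials:

CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_directory = ENV.fetch('S3_BUCKET_LABELS')
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV.fetch('S3_ACCESS_KEY'),
    :aws_secret_access_key => ENV.fetch('S3_SECRET'),
    # hypothetical config var; falls back to us-west-2 if unset
    :region                => ENV.fetch('S3_REGION', 'us-west-2')
  }
end

You can confirm the values are actually present on the dyno with heroku config.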

Rails carrierwave link generated different from s3 storage link

I created a Rails API but I have a problem with image upload.
I'm using CarrierWave; the picture upload works but I get a wrong link.
Example:
This is the link I find in the RESTful API:
https://s3.eu-west-2.amazonaws.com/gpsql/uploads/driver/picture/35/imagename.png
But when I check S3 storage I find a different link:
https://s3.eu-west-2.amazonaws.com/gpsql/gpsql/gpsql/uploads/driver/picture/35/imagename.png
This is the CarrierWave initializer for S3:
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws' # required
  config.fog_credentials = {
    provider:              'AWS', # required
    aws_access_key_id:     '...', # required
    aws_secret_access_key: '...', # required
    region:                'us-west-2',
    path_style:            true,
  }
  config.fog_directory = 'gpsql' # required
  config.asset_host = 'https://s3.eu-west-2.amazonaws.com/gpsql'
  config.fog_attributes = {'Cache-Control' => "max-age=#{365.day.to_i}"} # optional, defaults to {}
end
In the picture uploader:
def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
How can I fix the link that is shown in the RESTful API? Also, why does the bucket name appear so many times in the Amazon link, instead of something straightforward like link/bucketname/image.png?
The first link, the one I find in the RESTful API, doesn't work at all; I get access denied or key not found. The second one, from Amazon S3, works without any problem.
One of the problems is this:
config.asset_host = 'https://s3.eu-west-2.amazonaws.com/gpsql'
It should be:
config.asset_host = 'https://s3.eu-west-2.amazonaws.com'
Anyway, I don't know why it's repeated twice...
So, if you can, you should fix it in the configuration and move the folder in S3 to the proper place.
If you can't move it, I would try changing the store dir to "gpsql/gpsql/uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}", as sketched below.
I'm not sure if that works, but that would be my first step.
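For reference, a minimal sketch of that workaround in the uploader. It simply copies the doubled gpsql/gpsql prefix under which the files actually ended up, so the generated links match the existing objects; it is not a recommended layout:

def store_dir
  # match the prefix the files were actually written under
  "gpsql/gpsql/uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end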

Error When Uploading to S3 via Carrierwave and Fog

So I'm trying to use CarrierWave and Fog to upload files to Amazon S3. I first set up CarrierWave with local file storage, and it worked golden. Then I added the fog gem, switched the storage to fog, and added the carrier_wave.rb initializer file, and now I get the error "no implicit conversion of Array into String" when I try to upload anything.
My initializer code is:
CarrierWave.configure do |config|
  config.fog_credentials = {
    # Configuration for Amazon S3
    :provider              => 'AWS',
    :aws_access_key_id     => ['XXXX'],
    :aws_secret_access_key => ['XXXX']
  }
  config.fog_directory = ['XXXX']
end
And my uploader code is:
class MediaUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick
  include CarrierWave::MimeTypes

  process :set_content_type
  process :save_content_type_and_size_in_model

  def save_content_type_and_size_in_model
    model.content_type = file.content_type if file.content_type
    model.file_size = file.size
  end

  storage :fog
end
The error seems to stem from the fact that CarrierWave and fog are trying to pass an array into the string "media" field in my model (project). See the parameters being passed in:
{"utf8"=>"✓",
"authenticity_token"=>"XXXXXX",
"project"=>{"name"=>"Cat",
"description"=>"Meow meow",
"media"=>#<ActionDispatch::Http::UploadedFile:0x007fd15513f3e8 #tempfile=# <Tempfile:/var/folders/8_/77fwkkc56r71w67p2jj5x9200000gn/T/RackMultipart20141004-77510-1k6nxyw>,
#original_filename="SG Square.jpg",
#content_type="image/jpeg",
#headers="Content-Disposition: form-data; name=\"project[media]\"; filename=\"SG Square.jpg\"\r\nContent-Type: image/jpeg\r\n">},
"commit"=>"Create Project"}
Help!
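No accepted answer is included here, but note that the "no implicit conversion of Array into String" message matches the array literals in the initializer above: fog expects plain strings for the credentials and the directory. A minimal sketch of that initializer with the brackets removed (the 'XXXX' placeholders are of course stand-ins for real values):

CarrierWave.configure do |config|
  config.fog_credentials = {
    # Configuration for Amazon S3 -- plain strings, not one-element arrays
    :provider              => 'AWS',
    :aws_access_key_id     => 'XXXX',
    :aws_secret_access_key => 'XXXX'
  }
  config.fog_directory = 'XXXX'
end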

Can CarrierWave upload to Amazon S3 but serve through CloudFront?

I'm working on a small Rails site which allows some users to upload images and others to see them. I started using CarrierWave with S3 as the storage medium and everything worked great, but then I wanted to experiment with CloudFront. I first added a distribution to my S3 bucket and then changed the CarrierWave configuration I was using to this:
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    :provider              => 'AWS',                       # required
    :aws_access_key_id     => ENV['S3_ACCESS_KEY_ID'],     # required
    :aws_secret_access_key => ENV['S3_SECRET_ACCESS_KEY'], # required
    :region                => 'eu-west-1',
  }
  config.asset_host = 'http://static.my-domain.com/some-folder'
  config.fog_public = true # optional, defaults to true
  config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
I should mention that http://static.my-domain.com is a CNAME entry pointing to a CloudFront endpoint (some-id.cloudfront.net). The result is that the pictures are shown correctly, URLs look like this: http://static.my-domain.com/some-folder/uploads/gallery_image/attachment/161/large_image.jpg but whenever I try to upload a photo or for that matter get the size of the uploaded attachment I get the following exception:
Excon::Errors::MovedPermanently: Expected(200) <=> Actual(301 Moved Permanently)
response => #<Excon::Response:0x007f61fc3d1548 @data={:body=>"",
  :headers=>{"x-amz-request-id"=>"some-id", "x-amz-id-2"=>"some-id",
  "Content-Type"=>"application/xml", "Transfer-Encoding"=>"chunked",
  "Date"=>"Mon, 31 Mar 2014 21:16:45 GMT", "Connection"=>"close", "Server"=>"AmazonS3"},
  :status=>301, :remote_ip=>"some-ip"}, @body="", @headers={"x-amz-request-id"=>"some-id",
  "x-amz-id-2"=>"some-id", "Content-Type"=>"application/xml",
  "Transfer-Encoding"=>"chunked", "Date"=>"Mon, 31 Mar 2014 21:16:45 GMT",
  "Connection"=>"close", "Server"=>"AmazonS3"}, @status=301, @remote_ip="some-ip">
Just to add some more info, I tried the following:
removing the region entry
using the CloudFront URL directly instead of the CNAME
specifying the Amazon endpoint (https://s3-eu-west1.amazonaws.com)
but all of them had no effect.
Is there something I'm missing or is it that CarrierWave does not support this at this time?
The answer to the question is YES. The reason it didn't work with my configuration is that I was missing the fog_directory entry. When I added my asset_host, I had removed fog_directory since the CDN URLs being generated were malformed. I later found out that this was due to having fog_public set to false. After getting the proper CDN URLs, I forgot to add fog_directory back, since I could see my images and thought everything was fine. Anyway, the correct configuration is:
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    :provider              => 'AWS',                       # required
    :aws_access_key_id     => ENV['S3_ACCESS_KEY_ID'],     # required
    :aws_secret_access_key => ENV['S3_SECRET_ACCESS_KEY'], # required
    :region                => 'eu-west-1'
  }
  config.fog_directory = '-bucket-name-/-some-folder-'
  config.asset_host = 'https://static.my-domain.com/-some-folder-'
  config.fog_public = true # optional, defaults to true
  config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
Try setting :asset_host in your Uploader like so:
class ScreenshotUploader < CarrierWave::Uploader::Base
  storage :fog

  # Configure uploads to be stored in a public Cloud Files container
  def fog_directory
    'my_public_container'
  end

  # Configure uploads to be delivered over Rackspace CDN
  def asset_host
    "c000000.cdn.rackspacecloud.com"
  end
end
Inspired by https://github.com/carrierwaveuploader/carrierwave/wiki/How-to%3A-Store-private-public-uploads-in-different-Cloud-Files-Containers-with-Fog
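That example targets Rackspace CDN, but the same per-uploader override could point at the CloudFront CNAME from the question. A minimal sketch, assuming the static.my-domain.com distribution described above (the uploader name here is hypothetical):

class GalleryImageUploader < CarrierWave::Uploader::Base # hypothetical uploader name
  storage :fog

  # Serve public URLs through the CloudFront distribution instead of S3
  def asset_host
    "http://static.my-domain.com/some-folder"
  end
end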

Carrierwave & Amazon S3 file downloading/uploading

I have a Rails 3 app with an UploadUploader and a Resource model on which it is mounted. I recently switched to using S3 storage, and this has broken my ability to download files using the send_file method. I can enable downloading using the redirect_to method, which just forwards the user to an authenticated S3 URL. I need to authenticate file downloads, and I want the URL to be http://mydomainname.com/the_file_path or http://mydomainname.com/controller_action_name/id_of_resource, so I am assuming I need to use send_file. But is there a way of doing that using the redirect_to method? My current code follows. resources_controller.rb:
def download
  resource = Resource.find(params[:id])
  if resource.shared_items.find_by_shared_with_id(current_user) or resource.user_id == current_user.id
    filename = resource.upload_identifier
    send_file "#{Rails.root}/my_bucket_name_here/uploads/#{filename}"
  else
    flash[:notice] = "You don't have permission to access this file."
    redirect_to resources_path
  end
end
carrierwave.rb initializer:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',  # required
    :aws_access_key_id     => 'xxxx', # copied off the aws site
    :aws_secret_access_key => 'xxxx', #
  }
  config.fog_directory = 'my_bucket_name_here' # required
  config.fog_host = 'https://localhost:3000'   # optional, defaults to nil
  config.fog_public = false                    # optional, defaults to true
  config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
upload_uploader.rb
class UploadUploader < CarrierWave::Uploader::Base
  storage :fog

  def store_dir
    "uploads"
  end
end
All of this throws the error:
Cannot read file /home/tom/Documents/ruby/rails/circlshare/My_bucket_name_here/uploads/Picture0024.jpg
I have tried reading up about CarrierWave, fog, send_file and all of that, but nothing I have tried has been fruitful as yet. Uploading works fine and I can see the files in the S3 bucket. Using a redirect would be great, as the file wouldn't pass through my server. Any help appreciated. Thanks.
Looks like you want to upload to S3 but have non-public URLs. Instead of downloading the file from S3 and using send_file, you can redirect the user to the S3 authenticated URL. This URL will expire and only be valid for a little while (long enough for the user to download).
Check out this thread: http://groups.google.com/group/carrierwave/browse_thread/thread/2f727c77864ac923
Since you're already setting fog_public to false, do you get an authenticated (i.e. signed) URL when calling resource.upload_url?
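A minimal sketch of that redirect approach in the existing download action; with fog_public set to false, the URL returned by the mounted uploader should already be a signed, expiring S3 URL (the permission check is copied from the question):

def download
  resource = Resource.find(params[:id])
  if resource.shared_items.find_by_shared_with_id(current_user) or resource.user_id == current_user.id
    # fog_public = false, so this is a time-limited, signed S3 URL
    redirect_to resource.upload.url
  else
    flash[:notice] = "You don't have permission to access this file."
    redirect_to resources_path
  end
end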
