I have a Rails 4 application and am trying to configure Carrierwave and Fog to store uploaded files on Amazon S3 but I keep getting the following error:
Expected(200) <=> Actual(301 Moved Permanently)
excon.error.response
  :body => "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>PermanentRedirect</Code><Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message><RequestId>3C27ACF693820E4E</RequestId><Bucket>bucket_name</Bucket><HostId>8hnHAWoVEgsGkSyclME99rPTq5UHuSt6ZQ/ezmCRcuK+JUGWsSeI4FvcC2A5cym7</HostId><Endpoint>s3.amazonaws.com</Endpoint></Error>"
  :headers => {
    "Content-Type"     => "application/xml",
    "Date"             => "Wed, 03 Sep 2014 06:59:16 GMT",
    "Server"           => "AmazonS3",
    "x-amz-id-2"       => "8hnHAWoVEgsGkSyclME99rPTq5UHuSt6ZQ/ezmCRcuK+JUGWsSeI4FvcC2A5cym7",
    "x-amz-request-id" => "3C27ACF693820E4E"
  }
  :local_address => "10.0.0.9"
  :local_port    => 54480
  :remote_ip     => "176.32.114.26"
  :status        => 301
config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'XXXXXXXXXXXXXXXXXXXX',
    :aws_secret_access_key => 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
    :region                => 'us-east-1'
  }
  config.fog_directory  = 'bucket_name'
  config.fog_public     = false
  config.fog_attributes = {'Cache-Control' => 'max-age=315576000'}
end
I've also tried removing the :region parameter (along with the preceding comma), but that doesn't seem to work either. I've checked the bucket region and it is listed as "US Standard", but if I look at the endpoint, Amazon lists that bucket as us-east-1. Regardless, I've tried assigning both of those values to :region and neither worked.
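For what it's worth, here is a quick way to double-check what region S3 actually reports for the bucket, straight from fog (a minimal sketch; the keys are placeholders):

require 'fog'

storage = Fog::Storage.new(
  :provider              => 'AWS',
  :aws_access_key_id     => 'XXXXXXXXXXXXXXXXXXXX',
  :aws_secret_access_key => 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
)
# US Standard buckets return an empty LocationConstraint, which maps to us-east-1.
puts storage.get_bucket_location('bucket_name').body['LocationConstraint'].inspect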
Can somebody help me figure out what I'm doing wrong?
In case this might help somebody save a few hours of their life: the solution turned out to be that I just needed to restart the Rails server after modifying the initializer.
I'm using Carrierwave and Fog to store images on the cloud. I was previously using Amazon S3 for the actual storage, which worked with no issues. But I switched over to Google Cloud Storage, and now I'm getting the following error whenever I try to save anything:
Excon::Error::Forbidden in GalleriesController#create
Expected(200) <=> Actual(403 Forbidden)
excon.error.response
  :body => "<Error><Code>InvalidSecurity</Code><Message>The provided security credentials are not valid.</Message><Details>Request was not signed or contained a malformed signature</Details></Error>"
  :cookies => [ ]
  :headers => {
    "Alt-Svc"              => "hq=\":443\"; ma=2592000; quic=51303433; quic=51303432; quic=51303431; quic=51303339; quic=51303335,quic=\":443\"; ma=2592000; v=\"43,42,39,38,35\"",
    "Content-Length"       => "224",
    "Content-Type"         => "application/xml; charset=UTF-8",
    "Date"                 => "Tue, 01 May 2018 22:03:23 GMT",
    "Server"               => "UploadServer",
    "Vary"                 => "Origin",
    "X-GUploader-UploadID" => "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  }
  :host => "[directory].storage.googleapis.com"
  :local_address => "xxx.xxx.x.xxx"
  :local_port => xxxxx
  :path => "/uploads%2Fimage.png"
  :port => 443
  :reason_phrase => "Forbidden"
  :remote_ip => "xxx.xxx.x.xx"
  :status => 403
  :status_line => "HTTP/1.1 403 Forbidden\r\n"
initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_provider = 'fog/google'
  config.fog_credentials = {
    provider: 'Google',
    google_storage_access_key_id: 'GOOGxxxxxxxxxxx',
    google_storage_secret_access_key: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
  }
  config.fog_directory = 'xxxxxxxxxxx'
  # config.fog_public = false
  # config.fog_attributes = { cache_control: "public, max-age=#{365.day.to_i}" }
end
Uploader
class PhotoFileUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick

  storage :fog

  def fix_exif_rotation
    manipulate! do |img|
      img.tap(&:auto_orient)
    end
  end

  process :fix_exif_rotation
  process :resize_to_fit => [800, 56000]

  version :thumb do
    process resize_to_fit: [300, 56000]
  end
end
Gemfile
gem "fog-google"
gem "google-api-client", "> 0.8.5", "< 0.9"
gem "mime-types"
It seems like there's a problem with the key_id or secret_key, but I just copied and pasted both from the Interoperability section on the Google Cloud Storage Settings page. And I have no idea how to test if they're valid. My request is from localhost, if that matters.
I've found a few similar errors on SO, but they're all related to Amazon, and they don't seem to apply to what I'm doing.
Anyone have any ideas for how I can debug this?
You should use a valid directory name for fog_directory.
Replace config.fog_directory = '[directory]'
with config.fog_directory = 'name_of_fog_folder'.
Hope this will help.
Looking at your error, it seems the host is the reason your call is forbidden. AFAIK, the host should be spelled out explicitly within your configuration; I believe the error message is just not explicit enough about that.
Hope this helps.
You need to use a bucket name that actually exists instead of inserting a random one. You could also try re-installing some of the files you need to run the program; I was once doing the same thing, but the file had been updated, so it didn't work.
You could try using the google-cloud-storage Ruby library to debug your authentication. Just write a simple script that uploads and downloads a file. There are examples in this guide.
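For example, a minimal round-trip script along these lines (bucket, project, and key-file names are placeholders; note that this gem authenticates with a service-account JSON key file rather than the interoperability HMAC keys fog uses):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new(
  project_id:  "my-project",
  credentials: "path/to/keyfile.json"
)

bucket = storage.bucket "my-bucket"
file   = bucket.create_file "local-image.png", "uploads/image.png" # upload
file.download "roundtrip-image.png"                                # download
puts "OK: #{file.name} (#{file.size} bytes)"

If both calls succeed, your credentials and bucket are fine and the problem is in the fog configuration.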
If you want to use google-cloud-storage in a new Rails application, you can do so with Active Storage.
When using fog via paperclip with the following configuration:
config.paperclip_defaults = {
  :storage => :fog,
  :fog_credentials => {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
    :region                => 'eu-central-1'
  },
  :fog_directory => ENV['FOG_DIRECTORY']
}
Access to S3 fails with the following error:
Excon::Errors::Forbidden: Expected(200) <=> Actual(403 Forbidden)
SignatureDoesNotMatch - The request signature we calculated does not match the signature you provided. Check your key and signing method.
Logging in directly with the awscli tools, using the same credentials and the same region, works. I double-checked the keys. Also, aws s3api get-bucket-location --bucket mybucket returns eu-central-1.
Update
I got it working with the aws-sdk gem instead of fog, which is what paperclip recommends in its README. But I think fog should work too, so I'm not marking this as resolved.
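For reference, a sketch of what that aws-sdk-based setup can look like (option names follow Paperclip's :s3 storage with aws-sdk v2; the ENV names mirror the fog config above):

config.paperclip_defaults = {
  :storage => :s3,
  :s3_credentials => {
    :access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
  },
  :s3_region => 'eu-central-1', # aws-sdk signs with Signature Version 4, which eu-central-1 requires
  :bucket    => ENV['FOG_DIRECTORY']
}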
I'm writing a Rails 3 app that uses Paperclip to transcode a video file attachment into a bunch of other formats, and then to store the resulting files. It all works fine for local storage, but I am trying to make it work using Paperclip's Fog support to store files in a bucket on our own Ceph cluster. However, I can't seem to find the right configuration options to make Fog talk to my Ceph server.
Here is a snippet from my Rails class:
has_attached_file :videofile,
  :storage => :fog,
  :fog_credentials => { :aws_access_key_id => 'xxx', :aws_secret_access_key => 'xxx', :provider => 'AWS' },
  :fog_public => true,
  :url => ":id/:filename",
  :fog_directory => 'replay',
  :fog_host => 'my-hostname'
Writes using this setup fail because Paperclip attempts to save to Amazon S3 rather than the host I've provided. I have a non-Rails / non-Paperclip toy script working just fine:
conn = Fog::Storage.new({
  :aws_access_key_id     => 'xxx',
  :aws_secret_access_key => 'xxx',
  :host                  => 'my-hostname',
  :path_style            => true,
  :provider              => "AWS",
})
This correctly connects to my local Ceph server. So I suspect there is something I'm not configuring in Paperclip properly - but what?
Here's the relevant hunk from fog.rb that I think is causing the connection to only go to AWS:
def host_name_for_directory
  if @options[:fog_directory].to_s =~ Fog::AWS_BUCKET_SUBDOMAIN_RESTRICTON_REGEX
    "#{@options[:fog_directory]}.s3.amazonaws.com"
  else
    "s3.amazonaws.com/#{@options[:fog_directory]}"
  end
end
It turned out the error was just from an improperly configured Ceph cluster. For anyone who finds this thread, Paperclip will work out of the box (a sketch follows below) as long as you:
have your wildcard DNS set up properly for your Ceph frontend;
have Ceph configured to recognize it as such;
pass :host in :fog_credentials, which should be the FQDN of the Ceph frontend;
set :fog_host, which apparently needs to be the URL for your bucket, e.g. https://bucket.ceph-server.foobar.com.
I don't think it is documented anywhere that you can use :host, but it works.
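Putting it together, a minimal sketch of the resulting attachment definition (the hostnames are the placeholder examples from above):

has_attached_file :videofile,
  :storage => :fog,
  :fog_credentials => {
    :provider              => 'AWS',
    :aws_access_key_id     => 'xxx',
    :aws_secret_access_key => 'xxx',
    :host                  => 'ceph-server.foobar.com', # FQDN of the Ceph frontend
    :path_style            => true
  },
  :fog_directory => 'replay',
  :fog_public    => true,
  :url           => ":id/:filename",
  :fog_host      => 'https://replay.ceph-server.foobar.com' # URL for the bucket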
I'm working on a small Rails site which allows some users to upload images and others to see them. I started using CarrierWave with S3 as the storage medium and everything worked great, but then I wanted to experiment with using CloudFront. I first added a distribution to my S3 bucket and then changed the CarrierWave configuration I was using to this:
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    :provider              => 'AWS',                       # required
    :aws_access_key_id     => ENV['S3_ACCESS_KEY_ID'],     # required
    :aws_secret_access_key => ENV['S3_SECRET_ACCESS_KEY'], # required
    :region                => 'eu-west-1',
  }
  config.asset_host = 'http://static.my-domain.com/some-folder'
  config.fog_public = true                                         # optional, defaults to true
  config.fog_attributes = {'Cache-Control' => 'max-age=315576000'} # optional, defaults to {}
end
I should mention that http://static.my-domain.com is a CNAME entry pointing to a CloudFront endpoint (some-id.cloudfront.net). The result is that the pictures are shown correctly, with URLs like this: http://static.my-domain.com/some-folder/uploads/gallery_image/attachment/161/large_image.jpg. However, whenever I try to upload a photo, or for that matter get the size of an uploaded attachment, I get the following exception:
Excon::Errors::MovedPermanently: Expected(200) <=> Actual(301 Moved Permanently)
response => #<Excon::Response:0x007f61fc3d1548
  @data={:body=>"",
    :headers=>{"x-amz-request-id"=>"some-id", "x-amz-id-2"=>"some-id",
      "Content-Type"=>"application/xml", "Transfer-Encoding"=>"chunked",
      "Date"=>"Mon, 31 Mar 2014 21:16:45 GMT", "Connection"=>"close", "Server"=>"AmazonS3"},
    :status=>301, :remote_ip=>"some-ip"},
  @body="", @headers={"x-amz-request-id"=>"some-id", "x-amz-id-2"=>"some-id",
    "Content-Type"=>"application/xml", "Transfer-Encoding"=>"chunked",
    "Date"=>"Mon, 31 Mar 2014 21:16:45 GMT", "Connection"=>"close", "Server"=>"AmazonS3"},
  @status=301, @remote_ip="some-ip">
Just to add some more info, I tried the following:
removing the region entry
using the CloudFront URL directly instead of the CNAME
specifying the Amazon endpoint (https://s3-eu-west1.amazonaws.com)
but none of them had any effect.
Is there something I'm missing or is it that CarrierWave does not support this at this time?
The answer to the question is YES. The reason it didn't work with my configuration is that I was missing the fog_directory entry. When I added my asset_host, I had removed fog_directory since the CDN URLs being generated were malformed. I later found out that this was due to having fog_public set to false. After getting the proper CDN URLs, I forgot to add fog_directory back, and since I could see my images I thought everything was fine. Anyway, the correct configuration is:
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    :provider              => 'AWS',                       # required
    :aws_access_key_id     => ENV['S3_ACCESS_KEY_ID'],     # required
    :aws_secret_access_key => ENV['S3_SECRET_ACCESS_KEY'], # required
    :region                => 'eu-west-1'
  }
  config.fog_directory = '-bucket-name-/-some-folder-'
  config.asset_host = 'https://static.my-domain.com/-some-folder-'
  config.fog_public = true                                         # optional, defaults to true
  config.fog_attributes = {'Cache-Control' => 'max-age=315576000'} # optional, defaults to {}
end
Try setting :asset_host in your Uploader like so:
class ScreenshotUploader < CarrierWave::Uploader::Base
  storage :fog

  # Configure uploads to be stored in a public Cloud Files container
  def fog_directory
    'my_public_container'
  end

  # Configure uploads to be delivered over Rackspace CDN
  def asset_host
    "c000000.cdn.rackspacecloud.com"
  end
end
Inspired by https://github.com/carrierwaveuploader/carrierwave/wiki/How-to%3A-Store-private-public-uploads-in-different-Cloud-Files-Containers-with-Fog
I have installed CarrierWave and Fog, and successfully uploaded and viewed images the first time, but now the images no longer show.
Here is my config file, app/config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',                          # required
    :aws_access_key_id     => 'AKIAJKOHTE4WTXCCXAMA',         # required
    :aws_secret_access_key => 'some secret key here',         # required
    :region                => 'eu-east-1',                    # optional, defaults to 'us-east-1'
    :host                  => 'https://s3.amazonaws.com',     # optional, defaults to nil
    :endpoint              => 'https://s3.amazonaws.com:8080' # optional, defaults to nil
  }
  config.fog_directory = 'createmysite.co.za' # required
  config.fog_public = false                   # optional, defaults to true
  # config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
This is what the URL looks like for the image that is supposed to display:
<img alt="Normal_selection_003" src="https://createmysite.co.za.s3.amazonaws.com/uploads/portfolio/image/3/normal_Selection_003.png?AWSAccessKeyId=AKIAJKOHTE4WTXCCXAMA&Signature=8PLq8WCkfrkthmfVGfXX9K6s5fc%3D&Expires=1354859553">
When I open the image URL, this is the output from Amazon:
https://createmysite.co.za.s3.amazonaws.com/uploads/portfolio/image/3/normal_Selection_003.png?AWSAccessKeyId=AKIAJKOHTE4WTXCCXAMA&Signature=8PLq8WCkfrkthmfVGfXX9K6s5fc%3D&Expires=1354859553
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>3F179B7CE417BC12</RequestId>
<HostId>
zgh46a+G7UDdpIHEEIT0C/rmijShOKAzhPSbLpEeVgUre1iDc9f7TSOwaJdQpR65
</HostId>
</Error>
Update
New config file (added fog URL expiry), app/config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',                  # required
    :aws_access_key_id     => 'AKIAJKOHTE4WTXCCXAMA', # required
    :aws_secret_access_key => 'chuck norris',         # required
  }
  config.fog_directory = 'createmysite.co.za'   # required
  config.fog_public = false                     # optional, defaults to true
  config.fog_authenticated_url_expiration = 600 # (in seconds) => 10 minutes
end
works like a charm!
You've set config.fog_public to false and are using Amazon S3 for storage. URLs for private files through S3 are temporary (they're signed and have an expiry). Specifically, the URL posted in your question has an Expires=1354859553 parameter.
1354859553 is Fri, 07 Dec 2012 05:52:33 GMT, which is in the past from the current time, so the link has effectively expired, which is why you're getting the Access Denied error.
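You can check such a timestamp yourself, since Expires is a Unix epoch in seconds:

Time.at(1354859553).utc
# => 2012-12-07 05:52:33 UTC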
You can adjust the expiry out further (the default is 600 seconds) by setting
config.fog_authenticated_url_expiration = ... # some integer here
If you want non-expiring links, either:
set config.fog_public to true, or
have your application act as a middle man, serving the files up through send_file. There is at least one question on SO covering this; a minimal sketch of the approach follows.
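Something along these lines works; the Portfolio model and its image uploader are hypothetical names, and send_data is used instead of send_file because the file is read from the remote fog store rather than from local disk:

class ImagesController < ApplicationController
  # Streams a private S3 file through the app, so the public link never expires.
  def show
    portfolio = Portfolio.find(params[:id])
    send_data portfolio.image.read,
              :filename    => File.basename(portfolio.image.path),
              :type        => portfolio.image.file.content_type,
              :disposition => 'inline'
  end
end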