Cannot download s3 image stored with Paperclip (Rails) - ruby-on-rails

I am able to upload images using Paperclip, and can see them in my bucket on Amazon's S3 management console website, but the url provided by Paperclip (e.g., image.url(:thumb)) cannot be used to access the image. I get a url that looks something like this:
http://s3.amazonaws.com/xxx/xxx/images/000/000/012/thumb/image.jpg?1366900621
When I put that URL in my browser, I'm sent to an XML page that states: "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint."
where the "endpoint" is a subdomain of the Paperclip path. But when I go to that "endpoint", I just get another error that says "Access Denied". According to the file information provided by the Amazon site, however, the image is publicly viewable. Can someone tell me what I'm doing wrong?
My development.rb file simply contains the following:
config.paperclip_defaults = {
  :storage => :s3,
  :s3_credentials => {
    :bucket => AWS_BUCKET,
    :access_key_id => AWS_ACCESS_KEY_ID,
    :secret_access_key => AWS_SECRET_ACCESS_KEY
  }
}

I got it to work by changing the default for :url
# config/initializers/paperclip.rb
Paperclip::Attachment.default_options[:url] = ':s3_domain_url'
Paperclip::Attachment.default_options[:path] = '/:class/:attachment/:id_partition/:style/:filename'
I'm in the domestic U.S., but it appears that this was still necessary for my code to work (cf. https://devcenter.heroku.com/articles/paperclip-s3)
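Equivalently, the same options can sit directly in config.paperclip_defaults so everything stays in development.rb; this is just a sketch, and the :s3_host_name value is an assumption for a bucket outside the default US region:
config.paperclip_defaults = {
  :storage => :s3,
  :s3_credentials => {
    :bucket => AWS_BUCKET,
    :access_key_id => AWS_ACCESS_KEY_ID,
    :secret_access_key => AWS_SECRET_ACCESS_KEY
  },
  # bucket-as-subdomain URLs instead of s3.amazonaws.com/bucket/...
  :url => ':s3_domain_url',
  :path => '/:class/:attachment/:id_partition/:style/:filename',
  # assumed regional hostname -- replace with your bucket's actual region
  :s3_host_name => 's3-ap-southeast-1.amazonaws.com'
}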

Related

AWS::S3::Errors::InvalidAccessKeyId with valid credentials

I'm getting the following error when trying to upload a file to an S3 bucket:
AWS::S3::Errors::InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
The file exists, the bucket exists, the bucket allows uploads, and the credentials are correct; using Cyberduck with the same credentials I can connect and upload files to that bucket just fine. Most answers around here point to the credentials being overridden by environment variables, but that is not the case here: I've tried passing them directly as strings and outputting them just to make sure, and they are the right credentials.
v1
AWS.config(
  :access_key_id => 'key',
  :secret_access_key => 'secret'
)
s3 = AWS::S3.new
bucket = s3.buckets['bucket-name']
obj = bucket.objects['filename']
obj.write(file: 'path-to-file', acl: 'private')
This is using the v1 version of the gem (aws-sdk-v1), but I've also tried v3 and I get the same error.
v3
Aws.config.update({
  region: 'eu-west-1',
  credentials: Aws::Credentials.new('key_id', 'secret')
})
s3 = Aws::S3::Resource.new(region: 'eu-west-1')
bucket = s3.bucket('bucket-name')
obj = bucket.object('filename')
ok = obj.upload_file('path-to-file')
Note: the error is thrown on the obj.write line.
Note 2: This is a rake task from a Ruby on Rails 4 app.
Finally figured it out: the problem was that, because we are using a custom endpoint, the credentials were not found. I guess that works differently with custom endpoints.
To specify the custom endpoint you'll need to use a config option that, for some reason, is not documented (or at least I didn't find it anywhere); I actually had to go through Paperclip's code to see how those guys were handling this.
Anyway, here's what the config for v1 looks like with the added endpoint option:
AWS.config(
  :access_key_id => 'key',
  :secret_access_key => 'secret',
  :s3_endpoint => 'custom.endpoint.com'
)
Hopefully that will save somebody some time.
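For anyone on the v3 gem, the client also accepts an :endpoint option for the same purpose (I haven't verified it against this particular setup; the URL below is a placeholder and must include the scheme):
s3 = Aws::S3::Resource.new(
  region: 'eu-west-1',
  credentials: Aws::Credentials.new('key_id', 'secret'),
  # a custom endpoint must be a full URL in the v3 SDK
  endpoint: 'https://custom.endpoint.com'
)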

How to use AWS S3, copy file from one bucket to another and get the url back to store in carrierwave

I am using the Ruby on Rails AWS SDK to copy files from one bucket to another and store the URL with CarrierWave. The app uses CarrierWave, and I need to store the URL of the newly copied file in the database field that CarrierWave uses. The problem is that the URL is private, so I tried just generating a URL and storing it in remote_file_url, but it can't access the file. The files are very large, so I can't use CarrierWave to upload the file through the SDK; I tried that and had no success.
obj.public_url did not work for me. When I copied the file it worked, but not when I moved it from one bucket to another.
Here is what I have:
picture_name = "my_picture.jpg"
@picture = Picture.find(id)
upload_dir = "uploads/picture/file/#{@picture.id}/my_picture.jpg"
s3 = Fog::Storage.new(provider: 'AWS', :aws_access_key_id => access_key, :aws_secret_access_key => secret_access_key, :region => region)
obj = s3.copy_object('my-temp-bucket',
                     picture_name,
                     'my-target-bucket',
                     upload_dir, acl: 'public-read')
@picture.remote_image_url = obj.url_for(:read, :expires => 10*60)
@picture.save
I also tried the following, with no luck.
@picture.remote_image_url = obj.public_url
@picture.save
I get an error undefined method `bucket' for #
Thank you for your help!!!
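One approach that should work here (a sketch, untested, reusing the Fog connection and names from the snippet above): copy_object returns a plain response, not a file object, which is why public_url and url_for blow up on it. Look the copied object up through Fog instead and store its public URL, which is accessible because the copy was made with acl: 'public-read':
# look the copied object up through Fog rather than using the copy_object response
directory = s3.directories.get('my-target-bucket')
file = directory.files.head(upload_dir)       # HEAD request, so the large file is not downloaded
@picture.remote_image_url = file.public_url   # public because the copy used acl: 'public-read'
@picture.save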

Rails upload static image url to s3

I need to copy images from a static image URL stored in database tables,
like: https://www.gravatar.com/avatar/b8c19609aaa9eb291f2a5974e369e2a4?s=328&d=identicon&r=PG&f=1
to S3 using Ruby on Rails.
Try out the following code:
AWS::S3::S3Object.store(path,content,bucket)
Here, path is the path in the bucket where you want to store the file, content is the contents you want to store in that file, and bucket is the name of the bucket.
Before this you have to establish connection. So your final code might look like this:
require 'open-uri'  # needed so open() can read from a URL below

AWS::S3::Base.establish_connection!(
  :access_key_id => <key>,
  :secret_access_key => <access_key>,
  :use_ssl => true
)
AWS::S3::S3Object.store(path, open('https://www.gravatar.com/avatar/b8c19609aaa9eb291f2a5974e369e2a4?s=328&d=identicon&r=PG&f=1'), bucket)
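The aws-s3 gem above is quite old; with the current aws-sdk-s3 gem a rough equivalent would be the sketch below (the region, bucket name, and key are placeholders):
require 'aws-sdk-s3'
require 'open-uri'

s3 = Aws::S3::Client.new(
  region: 'us-east-1',
  access_key_id: ENV['AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
)

url = 'https://www.gravatar.com/avatar/b8c19609aaa9eb291f2a5974e369e2a4?s=328&d=identicon&r=PG&f=1'
s3.put_object(
  bucket: 'my-bucket',            # placeholder bucket name
  key: 'avatars/identicon.jpg',   # placeholder key
  body: URI.open(url).read        # download the remote image and send it up
)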

Rename Object in S3 using Ruby

I want to rename an item in s3 using the Ruby sdk. How do I do this?
I have tried:
require 'aws-sdk'
s3 = AWS.config(
  :region => 'region',
  :access_key_id => 'key',
  :secret_access_key => 'key'
)
b = AWS::S3::Bucket.new(client: s3, name: 'taxalli')
b.objects.each do |obj|
  obj.rename_to('imports/files/' + line.split(' ').last.split('/').last)
end
But I don't see anything in the new SDK for moves or renames.
In AWS-SDK version 2 there is now a method called "move_to" (http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Object.html#move_to-instance_method) which you can use in this case. Technically it will still copy & delete the file on S3, but you don't need to copy & delete it manually, and most importantly it will not delete the file if the copy action fails for some reason.
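For reference, a minimal sketch of move_to with the v3 resource interface (bucket and key names are placeholders; credentials are assumed to come from the default credential chain):
require 'aws-sdk-s3'

s3  = Aws::S3::Resource.new(region: 'us-east-1')
obj = s3.bucket('my-bucket').object('old/key.jpg')
# copies to the new key first, and only deletes the original if the copy succeeds
obj.move_to(bucket: 'my-bucket', key: 'imports/files/key.jpg')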
There is no such thing as renaming in the Amazon S3 SDK. Basically what you have to do is copy the object and then delete the old one.
require 'aws-sdk'
require 'open-uri'

creds = Aws::SharedCredentials.new(profile_name: 'my_profile')
s3 = Aws::S3::Client.new(region: 'us-east-1',
                         credentials: creds)
s3.copy_object(bucket: "my_bucket",
               copy_source: URI::encode("my_bucket/MyBookName.pdf"),
               key: "my_new_book_name.pdf")
s3.delete_object(bucket: "my_bucket",
                 key: "MyBookName.pdf")
Have you seen "Rails Paperclip S3 rename thousands of files" or https://gist.github.com/ericskiff/769191 ?

Problem in accessing bucket of my AWS S3 account

I tried to establish connection to my aws s3 account like this in my irb console -
AWS::S3::Base.establish_connection!(:access_key_id => 'my access key', :secret_access_key => 'my secret key', :server => "s3-ap-southeast-1.amazonaws.com")
And it works well and prompts this -
=> #<AWS::S3::Connection:0x8cd86d0 #options={:server=>"s3-ap-southeast-1.amazonaws.com", :port=>80, :access_key_id=>"my access key", :secret_access_key=>"my secret key"}, #access_key_id="my access key", #secret_access_key="my secret key", #http=#<Net::HTTP s3-ap-southeast-1.amazonaws.com:80 open=false>>
I have a bucket which is based in the "Singapore Region", and for that the endpoint, i.e. server, is s3-ap-southeast-1.amazonaws.com. So when I try to access it using this command -
AWS::S3::Service.buckets
it fetches all buckets in my account correctly -
=> [#<AWS::S3::Bucket:0x8d291fc #attributes={"name"=>"bucket1", "creation_date"=>2011-06-28 10:08:58 UTC}, #object_cache=[]>,
#<AWS::S3::Bucket:0x8d291c0 #attributes={"name"=>"bucket2", "creation_date"=>2011-07-04 07:15:21 UTC}, #object_cache=[]>,
#<AWS::S3::Bucket:0x8d29184 #attributes={"name"=>"bucket3", "creation_date"=>2011-07-04 07:39:21 UTC}, #object_cache=[]>]
whereas bucket1 belongs to the Singapore region and the other 2 to the US region. So, when I do this -
AWS::S3::Bucket.find("bucket1")
it shows me this error:
AWS::S3::PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/error.rb:38:in `raise'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/base.rb:72:in `request'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/base.rb:88:in `get'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/bucket.rb:102:in `find'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/bucket.rb:145:in `objects'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/bucket.rb:313:in `reload!'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/bucket.rb:242:in `objects'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/bucket.rb:253:in `each'
from (irb):5
from /home/surya/.rvm/rubies/ruby-1.9.2-p180/bin/irb:16:in `<main>'
I don't understand why this is happening, because the same thing was working well yesterday. Any guess? Am I missing something here?
Before you connect, try using
AWS::S3::DEFAULT_HOST.replace "s3-ap-southeast-1.amazonaws.com"
Another thing you can do (although this isn't really a good solution) is to access the bucket with the array index
AWS::S3::Bucket.list[0]
If anyone is getting the issue where you are trying to use different regions for different services, you can set up your config like this:
AWS.config({
  :region => 'us-west-2',
  :access_key_id => ENV["AWS_ACCESS_KEY_ID"],
  :secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"],
  :s3 => { :region => 'us-east-1' }
})
I ran into this problem too. Since I live in Brazil I tried creating a São Paulo bucket; after I deleted it and used a US Standard bucket, everything panned out well.
The AWS region must be set to us-standard to access S3 buckets.
On the Linux command line, run: export AWS_DEFAULT_REGION="us-standard"
