I'm trying to upload a file to Amazon S3, but I'm getting the following error:
Aws::S3::Errors::PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
Here is how I'm doing it:
s3 = Aws::S3::Resource.new(
credentials: Aws::Credentials.new('xxxx', 'xxx'),
region: 'us-west-2'
)
obj = s3.bucket('dev.media.xxx.com.br').object('key')
obj.upload_file('/Users/andrefurquin/Desktop/teste.pdf', acl:'public-read')
obj.public_url
I'm sure I'm using the right region, credentials, etc.
What can I do?
Related
I'm getting the following error when trying to upload a file to an S3 bucket:
AWS::S3::Errors::InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
The file exists, the bucket exists, the bucket allows uploads, and the credentials are correct; using Cyberduck with the same credentials I can connect and upload files to that bucket just fine. Most answers around here point to the credentials being overridden by environment variables, but that is not the case here: I've tried passing them directly as strings and printing them out just to make sure, and they are the right credentials.
v1
AWS.config(
:access_key_id => 'key',
:secret_access_key => 'secret'
)
s3 = AWS::S3.new
bucket = AWS::S3.new.buckets['bucket-name']
obj = bucket.objects['filename']
obj.write(file: 'path-to-file', acl:'private')
This is using v1 of the gem (aws-sdk-v1), but I've also tried v3 and I get the same error.
v3
Aws.config.update({
region: 'eu-west-1',
credentials: Aws::Credentials.new('key_id', 'secret')
})
s3 = Aws::S3::Resource.new(region: 'eu-west-1')
bucket = s3.bucket('bucket-name')
obj = bucket.object('filename')
ok = obj.upload_file('path-to-file')
Note: the error is thrown on the obj.write line.
Note 2: This is a rake task from a Ruby on Rails 4 app.
I finally figured it out: because we are using a custom endpoint, the credentials were not being found; apparently credential resolution works differently with custom endpoints.
To specify the custom endpoint you'll need to use a config option that, for some reason, is not documented (or at least I didn't find it anywhere); I actually had to go through paperclip's code to see how they were handling this.
Anyway, here is what the v1 config looks like with the endpoint option added:
AWS.config(
:access_key_id => 'key',
:secret_access_key => 'secret',
:s3_endpoint => 'custom.endpoint.com'
)
Hopefully that will save somebody some time.
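In case anyone is on a newer version of the gem: in v2/v3 the equivalent knob appears to be the :endpoint client option (a full URL, scheme included), sketched here with the same placeholder credentials and endpoint:

```ruby
require 'aws-sdk-s3' # with the monolithic v2 gem, require 'aws-sdk' instead

# v2/v3 sketch: the custom endpoint is passed directly when building the
# resource/client; 'custom.endpoint.com' is a placeholder from the answer.
s3 = Aws::S3::Resource.new(
  access_key_id: 'key',
  secret_access_key: 'secret',
  region: 'eu-west-1',
  endpoint: 'https://custom.endpoint.com'
)
```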
I have been using this tutorial for adding images to S3 storage on AWS:
https://devcenter.heroku.com/articles/paperclip-s3
Now I want to use IBM's Object Storage, which supports the S3 API (using the 'aws-sdk' gem).
With the configuration below:
config.paperclip_defaults = {
storage: :s3,
s3_credentials: {
bucket: ENV.fetch('S3_BUCKET_NAME'),
access_key_id: ENV.fetch('ACCESS_KEY_ID'),
secret_access_key: ENV.fetch('SECRET_ACCESS_KEY'),
s3_region: ENV.fetch('REGION'),
}
}
where REGION = 'us-geo'
gives the error
Seahorse::Client::NetworkingError (Failed to open TCP connection to david-provest-movies.s3.us-geo.amazonaws.com:443 (getaddrinfo: Name or service not known)).
How would I change the 'david-provest-movies.s3.us-geo.amazonaws.com:443' to the desired 'david-provest-movies.s3-api.us-geo.objectstorage.softlayer.net' URL?
Link to the API:
https://developer.ibm.com/recipes/tutorials/cloud-object-storage-s3-api-intro/
Thank you :)
That is not a valid region for S3. The valid S3 regions are:
us-east-2
us-west-1
us-west-2
ca-central-1
ap-south-1
ap-northeast-2
ap-southeast-1
ap-southeast-2
ap-northeast-1
eu-central-1
eu-west-1
eu-west-2
sa-east-1
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
Here is the relevant note from the IBM Bluemix documentation:
Note: Using third-party tools or SDKs may enforce setting a ‘LocationConstraint’ when creating a bucket. If required, this value must be set to ‘us-standard’. Unlike AWS S3, IBM COS uses this value to indicate the storage class of an object. This field has no relation to physical geography or region - those values are provided within the endpoint used to connect to the service. Currently, the only permitted values for ‘LocationConstraint’ are ‘us-standard’, ‘us-vault’, ‘us-cold’, ‘us-flex’, ‘us-south-standard’, ‘us-south-vault’, ‘us-south-cold’, and ‘us-south-flex’.
I haven't actually tried to use paperclip yet, but the issue here is that the IBM Cloud endpoint needs to be specified. I'll take a closer look at the paperclip docs, but it looks like something along the lines of specifying the bucket URL (where {bucket-name} is hard coded in the below snippet but could be constructed) or some other method to explicitly specify the host name or root URL. There's a chance that paperclip has the AWS endpoint hardcoded and the team at IBM will need to contribute a method for setting a custom endpoint (which is fully supported in the aws-sdk gem when creating clients).
config.paperclip_defaults = {
storage: :s3,
s3_credentials: {
bucket: ENV.fetch('S3_BUCKET_NAME'),
access_key_id: ENV.fetch('ACCESS_KEY_ID'),
secret_access_key: ENV.fetch('SECRET_ACCESS_KEY'),
s3_region: ENV.fetch('REGION'),
bucket_url: 'https://{bucket-name}.s3-api.us-geo.objectstorage.softlayer.net'
}
}
Let me know if it seems like using an explicit bucket URL doesn't work, and I'll look into whether the endpoint is hardcoded in paperclip and what we can do to fix it.
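For completeness, paperclip also exposes an :s3_host_name option, and newer versions pass :s3_options through to the underlying aws-sdk client, so the custom endpoint might alternatively be wired in like this (untested sketch; which option works depends on your paperclip version):

```ruby
# Untested sketch: host name / endpoint values are the IBM COS ones from
# the question; :s3_options is handed to the aws-sdk client as-is.
config.paperclip_defaults = {
  storage: :s3,
  bucket: ENV.fetch('S3_BUCKET_NAME'),
  s3_region: ENV.fetch('REGION'),
  s3_credentials: {
    access_key_id: ENV.fetch('ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('SECRET_ACCESS_KEY')
  },
  s3_host_name: 's3-api.us-geo.objectstorage.softlayer.net',
  s3_options: { endpoint: 'https://s3-api.us-geo.objectstorage.softlayer.net' }
}
```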
I have the following problem.
In my Rails 4 app I am hosting images and videos on S3. Currently all the files are public, and I can access an image, for example, by storing its public link in the database.
However, I want some of the images and videos to be private.
I looked at the presigned URL options using the following:
s3 = Aws::S3::Client.new(
region: AWS_REGION,
access_key_id: S3_CONFIG['access_key_id'],
secret_access_key: S3_CONFIG['secret_access_key']
)
resource = Aws::S3::Resource.new(client: s3)
bucket = resource.bucket(BUCKET_NAME)
urls = bucket.objects(prefix: '/folder').map do |obj|
obj.presigned_url(:get, expires_in: 3600).to_s
end
This works fine, but how would I use the presigned URLs, since I obviously cannot store them in the database like the public links?
I am using aws-sdk version 2
I am also wondering whether this is a good solution in general.
Thanks for any hints,
Jean
Here is the Presigner documentation.
Example:
signer = Aws::S3::Presigner.new
url = signer.presigned_url(:put_object, bucket: "bucket", key: "path")
I have been able to have third party clients upload files directly to AWS s3 and then process those files with paperclip with the following line in the model:
my_object.file_attachment = URI.parse(URI.escape("my_bucket.s3.amazonaws.com/whatever.ext"))
That line downloads the file, processes it, and then saves it appropriately. The problem is that, in order for that line to work, I have to provide anonymous read privileges for the upload location. So my question is: how do I avoid that? My thought is to use the aws-sdk to download the file, so I have been trying things like:
file = Tempfile.new('temp', :encoding => 'ascii-8bit')
bucket.objects[aws_key].read do |chunk|
file.write chunk
end
my_object.file_attachment = file
and variations on that theme, but nothing is working so far. Any insights would be most helpful.
Solution I am not very happy with
You can generate a temporary privileged URL using the AWS SDK:
s3 = AWS::S3.new
bucket = s3.buckets['bucket_name']
my_object.file_attachment = bucket.objects['relative/path/of/uploaded/file.ext'].url_for(:read)
As @laertiades says in his amended question, one solution is to create a temporary, pre-signed URL using the AWS SDK.
AWS SDK version 1
In AWS SDK version 1, that looks like this:
s3 = AWS::S3.new
bucket = s3.buckets['bucket_name']
my_object.file_attachment = bucket.objects['relative/path/of/uploaded/file.ext'].url_for(:read)
AWS documentation: http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/S3/S3Object.html#url_for-instance_method
AWS SDK version 2
In AWS SDK version 2, it looks like this with the optional expires_in parameter (credit to this answer on another question):
presigner = Aws::S3::Presigner.new
my_object.file_attachment = presigner.presigned_url(:get_object, # get_object method means read-only
bucket: 'bucket-name',
key: "relative/path/of/uploaded/file.ext",
expires_in: 10.minutes.to_i # time should be in seconds
).to_s
AWS documentation: http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Presigner.html
I am working on an AWS API based application that works fine for all regions; now I want to add support for AWS GovCloud (http://docs.aws.amazon.com/govcloud-us/latest/UserGuide/welcome.html) using the aws-sdk API.
But when I try to call the API using my access key and secret token, I get the error "The security token included in the request is invalid". How can I access GovCloud using the aws-sdk?
You have to pass the region 'us-gov-west-1' when accessing the GovCloud API.
@ec2 = AWS::EC2.new(access_key_id: 'Your Access Key', secret_access_key: 'Your Secret Key', region: 'us-gov-west-1')
response = @ec2.client.describe_instances
@instances = response.reservation_set.map(&:instances_set).flatten!
Using this code you can access GovCloud.