Problem accessing a bucket in my AWS S3 account - ruby-on-rails

I tried to establish a connection to my AWS S3 account like this in my irb console:
AWS::S3::Base.establish_connection!(:access_key_id => 'my access key', :secret_access_key => 'my secret key', :server => "s3-ap-southeast-1.amazonaws.com")
It works and prints this:
=> #<AWS::S3::Connection:0x8cd86d0 #options={:server=>"s3-ap-southeast-1.amazonaws.com", :port=>80, :access_key_id=>"my access key", :secret_access_key=>"my secret key"}, #access_key_id="my access key", #secret_access_key="my secret key", #http=#<Net::HTTP s3-ap-southeast-1.amazonaws.com:80 open=false>>
I have a bucket in the Singapore region, for which the endpoint (i.e. server) is s3-ap-southeast-1.amazonaws.com. So when I try to access it using this command:
AWS::S3::Service.buckets
it fetches all buckets in my account correctly -
=> [#<AWS::S3::Bucket:0x8d291fc #attributes={"name"=>"bucket1", "creation_date"=>2011-06-28 10:08:58 UTC}, #object_cache=[]>,
#<AWS::S3::Bucket:0x8d291c0 #attributes={"name"=>"bucket2", "creation_date"=>2011-07-04 07:15:21 UTC}, #object_cache=[]>,
#<AWS::S3::Bucket:0x8d29184 #attributes={"name"=>"bucket3", "creation_date"=>2011-07-04 07:39:21 UTC}, #object_cache=[]>]
whereas bucket1 belongs to the Singapore region and the other two to the US region. So when I do this:
AWS::S3::Bucket.find("bucket1")
it shows me this error:
AWS::S3::PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/error.rb:38:in `raise'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/base.rb:72:in `request'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/base.rb:88:in `get'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/bucket.rb:102:in `find'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/bucket.rb:145:in `objects'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/bucket.rb:313:in `reload!'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/bucket.rb:242:in `objects'
from /home/surya/.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/bucket.rb:253:in `each'
from (irb):5
from /home/surya/.rvm/rubies/ruby-1.9.2-p180/bin/irb:16:in `<main>'
I don't understand why this is happening, because the same thing was working well yesterday. Any guesses? Am I missing something here?

Before you connect, try using
AWS::S3::DEFAULT_HOST.replace "s3-ap-southeast-1.amazonaws.com"
Another thing you can do (although it isn't really a good solution) is to access the bucket by its array index:
AWS::S3::Bucket.list[0]
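Putting the endpoint fix together, a minimal sketch (assuming the aws-s3 0.6.x gem and a bucket named "bucket1" in the Singapore region):

require 'aws/s3'

# Point the gem at the regional endpoint *before* connecting,
# otherwise bucket lookups go to the default US endpoint and raise PermanentRedirect.
AWS::S3::DEFAULT_HOST.replace "s3-ap-southeast-1.amazonaws.com"
AWS::S3::Base.establish_connection!(
  :access_key_id     => 'my access key',
  :secret_access_key => 'my secret key'
)

bucket = AWS::S3::Bucket.find("bucket1")  # should no longer redirect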

If you are running into the issue where you need different regions for different services, you can set up your config like this:
AWS.config({
  :region => 'us-west-2',
  :access_key_id => ENV["AWS_ACCESS_KEY_ID"],
  :secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"],
  :s3 => { :region => 'us-east-1' }
})
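With that in place, the :s3 sub-hash overrides the global region for S3 calls only (a sketch, assuming the v1 aws-sdk gem and a bucket named 'my-bucket' that lives in us-east-1):

require 'aws-sdk-v1'

s3  = AWS::S3.new    # uses the :s3 => { :region => 'us-east-1' } override
ec2 = AWS::EC2.new   # other services still use the global us-west-2 region
puts s3.buckets['my-bucket'].exists?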

I ran into this problem here too. Since I live in Brazil I tried creating a São Paulo bucket; after I deleted it and used a US Standard bucket instead, everything panned out well.

The AWS region must be set to us-standard (i.e. us-east-1) to access such S3 buckets.
On the Linux command line, run: export AWS_DEFAULT_REGION="us-east-1"

Related

AWS::S3::Errors::InvalidAccessKeyId with valid credentials

I'm getting the following error when trying to upload a file to an S3 bucket:
AWS::S3::Errors::InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
The file exists, the bucket exists, the bucket allows uploads, and the credentials are correct; using CyberDuck with the same credentials I can connect and upload files to that bucket just fine. Most answers around here point to the credentials being overridden by environment variables, but that is not the case here: I've tried passing them directly as strings and printing them just to make sure they are the right credentials.
v1
AWS.config(
  :access_key_id => 'key',
  :secret_access_key => 'secret'
)
s3 = AWS::S3.new
bucket = s3.buckets['bucket-name']
obj = bucket.objects['filename']
obj.write(file: 'path-to-file', acl: 'private')
this is using the v1 version of the gem (aws-sdk-v1) but I've tried also using v3 and I get the same error.
v3
Aws.config.update({
  region: 'eu-west-1',
  credentials: Aws::Credentials.new('key_id', 'secret')
})
s3 = Aws::S3::Resource.new(region: 'eu-west-1')
bucket = s3.bucket('bucket-name')
obj = bucket.object('filename')
ok = obj.upload_file('path-to-file')
Note: the error is thrown on the obj.write line.
Note 2: This is a rake task from a Ruby on Rails 4 app.
Finally figured it out: the problem was that, because we are using a custom endpoint, the credentials were not found; apparently credential resolution works differently with custom endpoints.
To specify the custom endpoint you need to use a config option that, for some reason, is not documented (or at least I didn't find it anywhere); I actually had to go through Paperclip's code to see how they were handling this.
Anyway, here's what the v1 config looks like with the endpoint added:
AWS.config(
  :access_key_id => 'key',
  :secret_access_key => 'secret',
  :s3_endpoint => 'custom.endpoint.com'
)
Hopefully that will save somebody some time.
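For the v3 SDK, the equivalent knob is the :endpoint client option (a sketch; the endpoint, keys, region, bucket, and file names are placeholders):

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new(
  region: 'eu-west-1',
  endpoint: 'https://custom.endpoint.com',  # hypothetical custom endpoint
  credentials: Aws::Credentials.new('key_id', 'secret')
)
s3.bucket('bucket-name').object('filename').upload_file('path-to-file')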

AWS S3: The bucket you are attempting to access must be addressed using the specified endpoint

I am trying to delete uploaded image files with the AWS-SDK-Core Ruby Gem.
I have the following code:
require 'aws-sdk-core'

def pull_picture(picture)
  Aws.config = {
    :access_key_id => ENV["AWS_ACCESS_KEY_ID"],
    :secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"],
    :region => 'us-west-2'
  }

  s3 = Aws::S3::Client.new

  test = s3.get_object(
    :bucket => ENV["AWS_S3_BUCKET"],
    :key => picture.image_url.split('/')[-2]
  )
end
However, I am getting the following error:
The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
I know the region is correct because if I change it to us-east-1, the following error shows up:
The specified key does not exist.
What am I doing wrong here?
It seems likely that this bucket was created in a different region, i.e. not us-west-2. That's the only time I've seen "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint."
US Standard is us-east-1
I was facing a similar error because the bucket was in region us-west-2 and the URL had the bucket name in the path. Once I changed the URL to have the bucket name as a subdomain, fetching the files worked.
For example, the previous URL was
https://s3.amazonaws.com/bucketname/filePath/filename
which I replaced with
https://bucketname.s3.amazonaws.com/filePath/filename
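If you are using the aws-sdk Ruby gem rather than building URLs by hand, the equivalent switch between path-style and virtual-hosted-style addressing is the force_path_style client option (a sketch; region, bucket, and key are placeholders):

require 'aws-sdk-s3'

# Virtual-hosted-style (bucketname.s3.amazonaws.com) is the default;
# only set force_path_style: true if you specifically need path-style URLs.
s3 = Aws::S3::Client.new(region: 'us-west-2', force_path_style: false)
s3.get_object(bucket: 'bucketname', key: 'filePath/filename')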
Check your bucket location in the console, then use this as reference to which endpoint to use:
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
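You can also ask S3 for a bucket's region programmatically and then build a client for that region (a sketch using the aws-sdk-s3 gem; the bucket name is a placeholder):

require 'aws-sdk-s3'

# get_bucket_location works from any region; an empty LocationConstraint means us-east-1.
location = Aws::S3::Client.new(region: 'us-east-1')
                          .get_bucket_location(bucket: 'bucketname')
                          .location_constraint
region = (location.nil? || location.empty?) ? 'us-east-1' : location
s3 = Aws::S3::Client.new(region: region)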
In my case, I had selected the wrong RegionEndpoint. After selecting the correct one, it started working :)
After a long search, I found a working solution. The issue was a wrong region code.
Below is the list of region codes; set the appropriate one and your issue will be solved.
Name  Code
US East (Ohio) us-east-2
US East (N. Virginia) us-east-1
US West (N. California) us-west-1
US West (Oregon) us-west-2
Asia Pacific (Hong Kong) ap-east-1
Asia Pacific (Mumbai) ap-south-1
Asia Pacific (Osaka-Local) ap-northeast-3
Asia Pacific (Seoul) ap-northeast-2
Asia Pacific (Singapore) ap-southeast-1
Asia Pacific (Sydney) ap-southeast-2
Asia Pacific (Tokyo) ap-northeast-1
Canada (Central) ca-central-1
Europe (Frankfurt) eu-central-1
Europe (Ireland) eu-west-1
Europe (London) eu-west-2
Europe (Paris) eu-west-3
Europe (Stockholm) eu-north-1
Middle East (Bahrain) me-south-1
South America (São Paulo) sa-east-1
You can find your region code by clicking the bucket name; it is shown at the top right of the console.
For more details, see the AWS regions and endpoints documentation: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
I encountered this issue when using a different AWS profile. I saw the error when I was using an account with admin permissions, so the possibility of permissions issues seemed unlikely.
It's really a pet peeve of mine that AWS is so prone to issuing error messages that have so little correlation, from a user perspective, with the required actions.
For those of you using @aws-sdk/client-s3, just be sure to supply the bucket's region to the client before you send the command.
Get it with the CLI:
$ aws s3api get-bucket-location --bucket <bucket_name>
{
"LocationConstraint": "ca-central-1"
}
const client = new S3Client({ region: "ca-central-1", credentials...
For people who are still facing this issue, try adding s3_host_name to the config hash:
:storage => :s3,
:s3_credentials => { :access_key_id => 'access key',
                     :secret_access_key => 'secret access key' },
:bucket => 'bucket name here',
:s3_host_name => 's3-us-west-1.amazonaws.com' # or whatever matches your region
This fixed the issue for me.
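In context, that fragment sits inside a full Paperclip configuration, something like the sketch below (bucket name, credentials, and host are placeholders; pick the s3_host_name that matches your bucket's region):

# config/environments/production.rb (or an initializer)
config.paperclip_defaults = {
  :storage => :s3,
  :s3_credentials => {
    :access_key_id     => ENV["AWS_ACCESS_KEY_ID"],
    :secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"]
  },
  :bucket => 'my-bucket',
  :s3_host_name => 's3-us-west-1.amazonaws.com'
}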
I had the same error. It occurred when the S3 client was created with a different endpoint than the one that was set up when creating the bucket.
Wrong (the bucket was set up in the US East region):
s3Client = New AmazonS3Client(AWS_ACCESS_KEY, AWS_SECRET_KEY, RegionEndpoint.USWest2)
Fix:
s3Client = New AmazonS3Client(AWS_ACCESS_KEY, AWS_SECRET_KEY, RegionEndpoint.USEast1)
None of the above answers fixed my issue.
They are probably the more likely causes of your problem, but my issue was that I was using the wrong bucket name. It was a valid bucket name; it just wasn't my bucket.
The bucket I was pointing to was in a different region than my Lambda function, so check your bucket name!
Though S3 bucket names are global, you still need to specify the bucket's region when accessing it. I was getting this error in .NET Core; once I added the region as in the code below, it started working.
var s3Client = new AmazonS3Client(Amazon.RegionEndpoint.USWest2);
In my case the bucket name was wrong.
For many S3 API packages (I recently had this problem with the npm s3 package) the region is assumed to be US Standard, and lookup by name will require you to explicitly define the region if your bucket is hosted outside of that region.
During creation of the S3Client you can specify the endpoint mapping to a particular region. With the default of s3.amazonaws.com, the bucket will be created in us-east-1 (Northern Virginia).
More details on S3 endpoints and regions are in the AWS docs: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region.
So, always make sure about the endpoint/region while creating the S3Client, and access S3 resources using the same client in the same region.
If the bucket was created from the AWS S3 console, check its region in the console and create an S3Client for that region using the endpoint details from the link above.
I faced the same issue. After a lot of struggle I found that the real issue was with the com.amazonaws dependencies; after adding the dependencies, this error disappeared.
In C# you can do the following check; I assume similar code is possible with other SDKs:
var client = new AmazonS3Client(
    credentials.AccessKey,
    credentials.ClientSecret,
    new AmazonS3Config {}
);

var bucketLocationRequest = new GetBucketLocationRequest
{
    BucketName = amazonS3Bucket.BucketName
};

var response = await client.GetBucketLocationAsync(bucketLocationRequest);
var region = response.Location;
var regionEndpoint = region != null ? RegionEndpoint.GetBySystemName(region.Value) : RegionEndpoint.EUCentral1;

var clientWithRegion = new AmazonS3Client(
    credentials.AccessKey,
    credentials.ClientSecret,
    new AmazonS3Config
    {
        RegionEndpoint = regionEndpoint
    }
);
I got this exception in C#/.NET. It was fixed after changing the RegionEndpoint value on client creation, as below:
var client = new AmazonS3Client(accesKey,secretKey, RegionEndpoint.APSoutheast2)
I got this error when I tried to access a bucket that didn't exist.
I mistakenly switched a path variable with the bucket name variable, so the bucket name had the file path as its value. So double-check that the bucket name you set on your request is correct.
I live in the UK and kept trying the 'us-west-2' region, so I kept getting redirected to 'eu-west-2'. The correct region for my S3 bucket was 'eu-west-2'.
This occurred for me when I had a source-IP constraint on the policy used by the user (access key / secret key) creating the S3 bucket. My IP was accurate, but for some reason it wouldn't work and gave this error.

Cannot download s3 image stored with Paperclip (Rails)

I am able to upload images using Paperclip, and can see them in my bucket on Amazon's S3 management console website, but the url provided by Paperclip (e.g., image.url(:thumb)) cannot be used to access the image. I get a url that looks something like this:
http://s3.amazonaws.com/xxx/xxx/images/000/000/012/thumb/image.jpg?1366900621
When I put that URL in my browser, I'm sent to an XML page that states: "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint."
where the "endpoint" is a subdomain of the Paperclip path. But when I go to that "endpoint", I just get another error that says "Access Denied". According to the file information provided by the Amazon site, however, the image is publicly viewable. Can someone tell me what I'm doing wrong?
My development.rb file simply contains the following:
config.paperclip_defaults = {
  :storage => :s3,
  :s3_credentials => {
    :bucket => AWS_BUCKET,
    :access_key_id => AWS_ACCESS_KEY_ID,
    :secret_access_key => AWS_SECRET_ACCESS_KEY
  }
}
I got it to work by changing the default for :url
# config/initializers/paperclip.rb
Paperclip::Attachment.default_options[:url] = ':s3_domain_url'
Paperclip::Attachment.default_options[:path] = '/:class/:attachment/:id_partition/:style/:filename'
I'm in the domestic U.S., but it appears that this was still necessary for my code to work (cf. https://devcenter.heroku.com/articles/paperclip-s3)
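If your bucket lives outside US Standard, it may also help to point Paperclip at the bucket's regional host via s3_host_name (a sketch; the host below is an example for eu-west-1 and should match your bucket's region):

# config/initializers/paperclip.rb
Paperclip::Attachment.default_options[:s3_host_name] = 's3-eu-west-1.amazonaws.com'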

Ruby : File upload to Google drive using google-api-ruby-client SDK failing with Timeout::Error

I am using the google-api-ruby-client in my Rails application to interact with Google Drive. All the basic functions, like listing files in a folder, fetching, moving, copying and deleting a file, and creating new folders, are working fine.
However, uploading a new file always fails with Timeout::Error.
I receive the file to be uploaded as a regular file upload from my website (multipart/form-data), and this is how I upload it to Google Drive:
result = nil
new_file_obj = google_drive.files.insert.request_schema.new({
  'title' => file_name,
  'mimeType' => file_mime_type,
  'parents' => [{'id' => current_folder_id}]
})
file_content = Google::APIClient::UploadIO.new(new_file.tempfile, file_mime_type, file_name)
result = google_client.execute(
  api_method: google_drive.files.insert,
  body_object: new_file_obj,
  media: file_content,
  parameters: {
    'uploadType' => 'multipart',
    'alt' => 'json'
  }
)
Here new_file is the file that was uploaded by the client. new_file.tempfile gives an object of type Tempfile.
The execute method never returns, and ultimately I get a Timeout::Error exception. This is the relevant stack trace:
/lib/ruby/1.9.1/net/protocol.rb:140:in `rescue in rbuf_fill'
/lib/ruby/1.9.1/net/protocol.rb:134:in `rbuf_fill'
/lib/ruby/1.9.1/net/protocol.rb:116:in `readuntil'
/lib/ruby/1.9.1/net/protocol.rb:126:in `readline'
/lib/ruby/1.9.1/net/http.rb:2211:in `read_status_line'
/lib/ruby/1.9.1/net/http.rb:2200:in `read_new'
/lib/ruby/1.9.1/net/http.rb:1183:in `transport_request'
/lib/ruby/1.9.1/net/http.rb:1169:in `request'
/lib/ruby/1.9.1/net/http.rb:1162:in `block in request'
/lib/ruby/1.9.1/net/http.rb:627:in `start'
/lib/ruby/1.9.1/net/http.rb:1160:in `request'
/lib/ruby/gems/1.9.1/gems/faraday-0.8.4/lib/faraday/adapter/net_http.rb:74:in `perform_request'
/lib/ruby/gems/1.9.1/gems/faraday-0.8.4/lib/faraday/adapter/net_http.rb:37:in `call'
/lib/ruby/gems/1.9.1/gems/faraday-0.8.4/lib/faraday/request/url_encoded.rb:14:in `call'
/lib/ruby/gems/1.9.1/gems/google-api-client-0.5.0/lib/google/api_client/request.rb:154:in `send'
/lib/ruby/gems/1.9.1/gems/google-api-client-0.5.0/lib/google/api_client.rb:546:in `execute'
I wrote this code following the example here: https://developers.google.com/drive/examples/ruby#saving_new_files. What am I missing here?
The file I am trying to upload is a small png image. The file is coming properly to my web application, as I can view the file if I write it to disk. The Google servers are definitely not down, as I can upload a file from drive.google.com. This also means my network is good enough.
So what exactly is causing the timeout?
A suggested solution to this same exception is to increase read_timeout (http://stackoverflow.com/questions/10011387/rescue-in-rbuf-fill-timeouterror-timeouterror). Is that what I should do? If so, how do I do it here using the google-api-ruby-client SDK?
From some quick experimentation, it might just be that the tempfile wasn't rewound for reading. I modified the quickstart example to read from a tempfile and was able to reproduce the error. For example:
tmp = Tempfile.new('foo')
tmp.write("hello world")

media = Google::APIClient::UploadIO.new(tmp, 'text/plain', 'foo.txt')
file = drive.files.insert.request_schema.new({
  'title' => 'My document',
  'description' => 'A test document',
  'mimeType' => 'text/plain'
})
result = client.execute(
  :api_method => drive.files.insert,
  :body_object => file,
  :media => media,
  :parameters => {
    'uploadType' => 'multipart',
    'alt' => 'json'
  }
)
This will produce the same stack trace after hanging for ~30 seconds or so. However, adding a call to rewind to reset the tempfile makes it work just fine
tmp = Tempfile.new('foo')
tmp.write("hello world")
tmp.rewind # Reset for reading...
Give that a try and see if it fixes your problem.
Looking at https://github.com/google/google-api-ruby-client/blob/master/lib/google/api_client.rb#L532-560
You can pass a Faraday connection to the execute method. Something like
conn = Faraday.default_connection
conn.options[:timeout] = 500

result = google_client.execute(
  api_method: google_drive.files.insert,
  body_object: new_file_obj,
  media: file_content,
  parameters: {
    'uploadType' => 'multipart',
    'alt' => 'json'
  },
  connection: conn
)
This will increase the Net::HTTP timeout.
Apparently Google's documentation on uploading a file to a folder in Drive using Ruby is still lacking; it is meant to be implemented as follows.
class DriveUpload
  def self.process
    drive_service = Google::Apis::DriveV3::DriveService.new
    drive_service.authorization = Google::Auth::ServiceAccountCredentials.make_creds(
      json_key_io: File.open('client_secret.json'),
      scope: Google::Apis::DriveV3::AUTH_DRIVE
    )
    meta_data = Google::Apis::DriveV3::File.new(
      name: 'README_ONDRIVE.md',
      parents: ["1gUxa7_9kbaBX_iofwUYKU_3BFXbBu6Ip"]
    )
    file = drive_service.create_file(meta_data, upload_source: 'README.md', content_type: 'text/plain')
  end
end
Please note that the parent is a folder inside Drive (a folder that you shared with the service account).
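A quick usage sketch (assuming client_secret.json and README.md exist locally and the parent folder ID is one that has been shared with the service account):

uploaded = DriveUpload.process
puts uploaded.id    # ID of the newly created Drive file
puts uploaded.name  # => "README_ONDRIVE.md"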
This answer is wrong: the drive.install permission is NOT needed to upload files.
Today I tried the upload without drive.install, as suggested by Steve Bazyl, and it just worked (even without the tmp_file.rewind call).
Looks like the problem was ephemeral, as nothing else in the environment has changed.
Turns out the problem was one of insufficient privileges. Previously these were the scopes for which I was asking the user's permission:
https://www.googleapis.com/auth/userinfo.email
https://www.googleapis.com/auth/userinfo.profile
https://www.googleapis.com/auth/drive
Whereas Google Drive requires the application to be installed for it to upload files on behalf of the user, and for the app to be installed, permission for the following scope should also be requested: https://www.googleapis.com/auth/drive.install
Sources :
How do I add/create/insert files to Google Drive through the API? (You do not have to list your app in the Chrome Web Store as mentioned there; just asking for the drive.install permission is sufficient. Be sure to re-authenticate/re-authorize all users to get new tokens.)
https://developers.google.com/drive/scopes#google_drive_scopes
However, the Timeout::Error is still somewhat of an anomaly; probably the Google servers did not respond with an error message when they saw insufficient privileges in the token provided.
P.S.: If anyone explains the Timeout::Error, it will be accepted as the right answer.

How do I pull EC2 stats through Cloudwatch with the amazon-ec2 gem?

I'm not sure what I'm doing wrong, but I'm having trouble using this gem to pull EC2 statistics: https://github.com/grempe/amazon-ec2
I'm able to connect to my EC2 instances through Cloudwatch:
@cw = AWS::Cloudwatch::Base.new(:access_key_id => ACCESS_KEY_ID, :secret_access_key => SECRET_KEY_ID)
I can see all the metrics available to me:
@cw.list_metrics
But when I try and use the get_metric_statistics method, I can't figure out what option parameters reference the actual metric fields.
# Fails
@cw.get_metric_statistics(namespace: 'AWS/EC2', measure_name: 'CPUUtilization', statistics: "Average")
I get a generic "NoMethodError: undefined method `elements' for nil:NilClass" error, and I can't find out how to properly use get_metric_statistics(). Does anyone have any example code they have used to do similar things? It's the 'statistics' and 'dimensions' parameters that I'm confused about.
If I can supply any further information, let me know.
Select the server based on the zone your instance is in.
Amazon CloudWatch Endpoints
US-East (Northern Virginia) monitoring.us-east-1.amazonaws.com
US-West (Northern California) monitoring.us-west-1.amazonaws.com
EU (Ireland) monitoring.eu-west-1.amazonaws.com
Asia Pacific (Singapore) monitoring.ap-southeast-1.amazonaws.com
In your example you missed setting the server zone; that is why it is not working.
@cw = AWS::Cloudwatch::Base.new(:access_key_id => 'key', :secret_access_key => 'key', :server => 'monitoring.eu-west-1.amazonaws.com')
instance_id = 'instanceid'
time = Time.new
time.gmtime
@result = @cw.get_metric_statistics(namespace: 'AWS/EC2', measure_name: 'CPUUtilization', statistics: 'Average', start_time: time - 1000, dimensions: "InstanceId=#{instance_id}")
Try this code and share your output; it is working for me.
So I figured this one out after inspecting the gem source; it should look like this:
@cw.get_metric_statistics(namespace: 'AWS/EC2', measure_name: 'CPUUtilization', statistics: 'Average', start_time: 1.minute.ago.to_time, dimensions: "InstanceId=#{instance_id}")
