While creating a presigned URL for my private image.png file in my S3 bucket, I used the template below:
require 'aws-sdk-s3'
require 'securerandom'

s3 = Aws::S3::Client.new(
  region: 'us-east-1',
  access_key_id: Access_key_id,
  secret_access_key: Secret_access_key
)
signer = Aws::S3::Presigner.new(client: s3)
url = signer.presigned_url(
  :get_object,
  bucket: 'mybuck1',
  key: "${image.png}-#{SecureRandom.uuid}"
)
While running the code, I get the following error:
AuthorizationQueryParametersError
Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.
What might be the reason for this error, and how can I fix it?
Thanks in advance.
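For reference, a minimal presigner call that emits all of the required X-Amz-* query parameters on a current aws-sdk-s3 release looks roughly like this; the region, bucket, and key are placeholders taken from the question, and credentials are assumed to come from the environment or shared config:
require 'aws-sdk-s3'
require 'securerandom'

# Credentials are resolved from the environment/shared config here.
signer = Aws::S3::Presigner.new(
  client: Aws::S3::Client.new(region: 'us-east-1')
)

# expires_in is optional (it defaults to 900 seconds) but makes the
# X-Amz-Expires query parameter explicit in the generated URL.
url = signer.presigned_url(
  :get_object,
  bucket: 'mybuck1',
  key: "image.png-#{SecureRandom.uuid}",
  expires_in: 3600
)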
Related
I am using Rails code (with the AWS SDK) to do the following:
Upload a file to an S3 bucket
s3 = Aws::S3::Client.new(
  access_key_id: <my key>,
  secret_access_key: <my secret key>
)
s3.put_object(bucket: <my bucket>, key: <file name>, body: <file content>)
Send an email to the user stating that the file has been uploaded to the S3 bucket
ses = Aws::SES::Client.new(region: 'us-west-2')
While step 1 works perfectly fine, I am getting this error when I try to instantiate the SES client in step 2:
NameError uninitialized constant Aws::SES
Why is Aws::SES giving a namespace error while Aws::S3 works perfectly fine? Please help!
These are the related gems I am using:
aws-ses
aws-sdk-3
Add gem 'aws-sdk-ses' to your Gemfile.
For more info, check https://rubygems.org/gems/aws-sdk-ses/versions/1.6.0
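A minimal sketch of the resulting setup, assuming the modular v3 SDK gems:
# Gemfile
gem 'aws-sdk-s3'   # provides Aws::S3
gem 'aws-sdk-ses'  # provides Aws::SES

# application code, after `bundle install`:
require 'aws-sdk-ses'

ses = Aws::SES::Client.new(region: 'us-west-2')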
I want to use Rails ActiveStorage, but I am using a non-AWS, S3-compatible object storage.
amazon:
  service: S3
  access_key_id: ""
  secret_access_key: ""
  region: ""
  bucket: ""
The documentation says that we need the region, but my S3 API has no region. Is there any way to use a custom S3 API?
I solved it by using the endpoint key in the configuration file. It looks like this:
amazon:
  service: S3
  access_key_id: "123"
  secret_access_key: "asd"
  endpoint: "http://192.168.1.201:30103"
  bucket: "test"
There is no need to set a region, since there isn't one. I found this at the bottom of the AWS SDK for Ruby documentation: https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/setup-config.html
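If the self-hosted store does not support virtual-host-style bucket URLs, path-style addressing may also be needed. A sketch, assuming (as ActiveStorage does) that extra keys in the YAML are passed through to the underlying Aws::S3 client:
amazon:
  service: S3
  access_key_id: "123"
  secret_access_key: "asd"
  endpoint: "http://192.168.1.201:30103"
  bucket: "test"
  force_path_style: true # use http://host/bucket/key instead of bucket subdomains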
According to this blog post, the new version of the Aws gem switches the namespace from AWS to Aws. But what am I supposed to use instead of
Aws.config({
  access_key_id: "something",
  secret_access_key: "something"
})
It's explained here but doesn't say what the alternative is:
http://ruby.awsblog.com/post/TxFKSK2QJE6RPZ/Upcoming-Stable-Release-of-AWS-SDK-for-Ruby-Version-2
Instead, I get an error:
Uncaught exception: wrong number of arguments (1 for 0)
AWS.config is no longer a method in v2. You now call Aws.config.update with a simple hash:
# v1
AWS.config({
  access_key_id: "something",
  secret_access_key: "something"
})

# v2
Aws.config.update({
  access_key_id: "something",
  secret_access_key: "something"
})
See the configuration options for more info related to v2.
Looking at this section in the docs: http://docs.aws.amazon.com/sdkforruby/api/index.html#Configuration
it seems that the way you configure the credentials has changed.
I can't find the .config method in the docs anymore; it is now an attribute of Aws.
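Equivalently, in v2 and later the credentials can be scoped to a single client instead of being set globally; a minimal sketch:
require 'aws-sdk'

# Per-client credentials instead of the global Aws.config
s3 = Aws::S3::Client.new(
  region: 'us-east-1',
  credentials: Aws::Credentials.new("something", "something")
)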
To avoid putting my AWS access key and secret directly in a YAML file, I use the following:
development:
  bucket: development
  access_key_id: <%= ENV["S3_KEY"] %>
  secret_access_key: <%= ENV["S3_SECRET"] %>
And then, when running, I get the error:
Could not log "sql.active_record" event. ArgumentError: invalid byte sequence in UTF-8
PG::Error: ERROR: invalid byte sequence for encoding "UTF8": 0xe7 0xe3 0x6f
If I write my access key and secret directly in the YAML, like:
development:
  bucket: development
  access_key_id: MYACCESSKEY
  secret_access_key: MYSECRETKEY
it goes smoothly.
Why does this error happen? How can I fix it without putting my key and secret in the YAML file?
Edit
To load the environment variables in development, I'm using the solution explained here:
# Load the app's custom environment variables here, so that they are loaded before environments/*.rb
app_environment_variables = File.join(Rails.root, 'config', 'app_environment_variables.rb')
load(app_environment_variables) if File.exist?(app_environment_variables)
Might this be a problem with the loading process?
Edit 2
In the meantime, I tried to log what is in my S3_CONFIG variable, loaded with:
config/initializers/load_config.rb
S3_CONFIG = YAML.load_file("#{::Rails.root}/config/s3.yml")[Rails.env]
I get
S3 Config: {"bucket"=>"mybucket", "access_key_id"=>"<%= ENV[\"S3_KEY\"] %>", "secret_access_key"=>"<%= ENV[\"S3_SECRET\"] %>"}
Wasn't it supposed to load the environment variable already? Might this be my problem?
This problem was happening when I was downloading the file from S3 with:
s3 = AWS::S3.new(
  access_key_id: S3_CONFIG["access_key_id"],
  secret_access_key: S3_CONFIG["secret_access_key"]
)
and S3_CONFIG["access_key_id"] was just the literal string <%= ENV["S3_KEY"] %>, because YAML.load_file does not evaluate ERB tags.
My solution for this was using just:
s3 = AWS::S3.new(
  access_key_id: ENV['S3_KEY'],
  secret_access_key: ENV['S3_SECRET']
)
I guess sometimes one just needs to understand what one is doing before pasting in lines of code...
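Alternatively, to keep the keys in environment variables and still read them from the YAML file, the file can be run through ERB before parsing; a minimal sketch:
# config/initializers/load_config.rb
require 'erb'
require 'yaml'

# Evaluate the <%= ENV[...] %> tags first, then parse the resulting YAML.
raw = File.read("#{::Rails.root}/config/s3.yml")
S3_CONFIG = YAML.load(ERB.new(raw).result)[Rails.env]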
I am trying to use Amazon S3 with CarrierWave. This is the first time I have used S3, so I am not sure what I am doing most of the time. I am using CarrierWave with Fog and uploading the files (just images) through ActiveAdmin, but I get a 'broken pipe' error when I try to upload anything.
This is the full trace of the error.
I set up CarrierWave with this configuration in the initializer:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'myid',
    :aws_secret_access_key => 'mysecretkey',
  }
  config.fog_directory = 'bucketname'
  config.s3_region = 'EU'
end
And I changed this in my uploader class:
#storage :file
storage :fog
I am using Rails 3.1.
Can anyone give me a clue about what's wrong? I've been searching through open issues of CarrierWave and Fog and can't find anything.
IMPORTANT EDIT: I just tried to upload a very small image and it worked, but for some reason files over 100 KB give me the "broken pipe" error.
The s3_region should be 'eu-west-1'.
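A sketch of the corrected initializer; note that with Fog the region usually goes inside fog_credentials (an assumption that depends on the CarrierWave and Fog versions in use):
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'myid',
    :aws_secret_access_key => 'mysecretkey',
    :region                => 'eu-west-1' # a real region identifier, not 'EU'
  }
  config.fog_directory = 'bucketname'
end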
In my case, the 'broken pipe' was being caused by a RequestTimeTooSkewed error. It is explained here: http://www.bucketexplorer.com/documentation/amazon-s3--difference-between-requesttime-currenttime-too-large.html.
So because the default S3 bucket location is us-east-1 and I'm located in the West, I had to change my bucket's region to Oregon (us-west-2) and it worked!