First I created a PDF with WickedPDF:
pdf_string = WickedPdf.new.pdf_from_string(
  ActionController::Base.new.render_to_string(template: 'v1/invoices/invoice_template', filename: 'test.pdf')
)
invoice.attachment.attach(
  io: StringIO.new(pdf_string),
  filename: 'test.pdf',
  content_type: 'application/pdf'
)
My app is set up to store the files on S3 in prod and locally in dev. For testing I also used S3 in dev to verify that my PDF is getting generated and saved correctly. After it has been generated I am able to log into AWS and download my invoice, and everything displays just fine.
Now the problem is downloading my invoice through the app: when I download it, the PDF is just blank.
I have a download method that looks like this:
response.headers['Content-Type'] = @invoice.attachment.content_type
response.headers['Content-Disposition'] = "inline; filename=#{@invoice.attachment.filename}"
response.headers['filename'] = @invoice.filename
@invoice.attachment.download do |chunk|
  response.stream.write(chunk)
end
I also tried
send_data @invoice.attachment.download, filename: @invoice.filename
and my frontend (React) uses axios to download it:
const downloadInvoice = (id) => {
  axios.get(`/v1/invoices/${id}/download`)
    .then((response) => {
      const url = window.URL.createObjectURL(new Blob([response.data]));
      const link = document.createElement('a');
      link.href = url;
      link.setAttribute('download', response.headers.filename);
      document.body.appendChild(link);
      link.click();
    })
    .catch(() => {});
};
I am a little confused as to why the downloaded PDF is blank. If I open it from my storage folder it displays just fine, so there seems to be an issue with how I download it.
What works is if I create a presigned URL for S3 with:
s3 = Aws::S3::Resource.new(client: aws_client)
bucket = s3.bucket('bucket-name')
obj = bucket.object(@invoice.attachment.blob.key)
url = obj.presigned_url(:get)
I can send that URL back to the frontend and open it in a new tab to view the PDF. But this is not what I want...
Thanks for any help!
In case anyone is interested in this or runs into the same issue, I hope this will save you some time!
The problem is with the axios request. By default axios treats the response body as a string, which mangles the binary PDF data; requesting an arraybuffer preserves the bytes.
Instead of:
axios.get(`/v1/invoices/${id}/download`)
use
axios.get(`/v1/invoices/${id}/download`, { responseType: 'arraybuffer' })
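For completeness, the Rails side needs nothing exotic once the frontend asks for binary data. Here is a minimal sketch of the download action (assuming an Invoice model with the attachment from the question; the inline disposition is illustrative):

def download
  invoice = Invoice.find(params[:id])
  response.headers['filename'] = invoice.attachment.filename.to_s

  # attachment.download returns the blob's bytes in memory
  send_data invoice.attachment.download,
            filename: invoice.attachment.filename.to_s,
            content_type: invoice.attachment.content_type,
            disposition: 'inline'
end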
Related
I am trying to have images uploaded via my Trix editor, and I also want the images uploaded to AWS S3.
The images are getting successfully uploaded to ActiveStorage, but they are not getting uploaded to S3.
I do, however, see something like this in the Rails console: Generated URL for file at key: Gsgdc7Jp84wYTQ1W4s (https://bucket.s3.amazonaws.com/Gsgdc7Jp84wYT2Ya3gxQ1W4s?X-Amz-Algorithm=AWS4redential=AKIAX6%2F20200414%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241821Z&X-Amz-Expires=300&X-Amz-SignedHeaders=content-md5%3Bcontent-type%3Bhost&X-Amz-Signature=3613d41915e47baaa7c90421eee3f0ffc)
I see that the Trix documentation provides attachments.js (https://trix-editor.org/js/attachments.js), which uploads attachments to a cloud provider.
Below is the relevant part of my code, which uploads to ActiveStorage:
document.addEventListener('trix-attachment-add', function (event) {
  var file = event.attachment.file;
  if (file) {
    var upload = new window.ActiveStorage.DirectUpload(file, '/rails/active_storage/direct_uploads', window);
    upload.create((error, attributes) => {
      if (error) {
        return false;
      } else {
        return event.attachment.setAttributes({
          url: `/rails/active_storage/blobs/${attributes.signed_id}/${attributes.filename}`,
          href: `/rails/active_storage/blobs/${attributes.signed_id}/${attributes.filename}`,
        });
      }
    });
  }
});
Below are my questions:
1) If Active Storage is configured to upload to S3, do I still need attachments.js?
2) Active Storage is configured to upload to S3 and I see the above response in the Rails console, but the file never appears in S3. Why not?
Any help in fixing this would be really great. Thanks.
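One quick sanity check for this kind of issue: confirm in the Rails console which service Active Storage has actually resolved for the current environment (a sketch, assuming the default setup where config.active_storage.service points at an entry in config/storage.yml):

# Rails console: inspect the service Active Storage is using in this environment.
ActiveStorage::Blob.service
# => #<ActiveStorage::Service::S3Service ...> when the S3 service is active;
# a DiskService here would explain uploads landing locally instead of in S3.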
I am writing a Rails API, with the help of aws-sdk-ruby, that retrieves a file from AWS and returns it in the API response. Can I somehow get a file stream from the response of object.get that I can return directly from the Rails API?
s3 = Aws::S3::Resource.new
bucket_name = "my_bucket"
bucket = s3.bucket(bucket_name)
object = bucket.object("a/b/my.pdf")
Rails.logger.info 'Downloading file from AWS'
downloaded_data = object.get({})
send_data(downloaded_data,
          :filename => "my.pdf",
          :type => "mime/type")
But it does not return the file.
One option I know of is to first save the file locally using this line:
object.get(response_target: '/tmp/my.pdf')
Then I can return that file, but is there a way to skip this step and directly return the response of object.get without saving it locally?
I cannot use this solution because my URLs are not public and I am just building a REST API.
I got a screen like the following when I tried this solution.
As of now, what I am doing is getting a URL from the object like this:
url = object.presigned_url(:get, expires_in: 3600)
and using the following code to send the response:
require 'open-uri'

data = open(url)
send_data data.read, filename: file_name, type: "mime/type"
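For what it's worth, aws-sdk-ruby can also return the object body in memory, which avoids both the tempfile and the presigned URL round trip. A minimal sketch, assuming the same bucket and key as above (object.get returns a GetObjectOutput whose body is an IO-like object):

s3 = Aws::S3::Resource.new
object = s3.bucket("my_bucket").object("a/b/my.pdf")

# Reading GetObjectOutput#body yields the raw bytes, which send_data
# can return without ever touching the filesystem.
send_data object.get.body.read,
          filename: "my.pdf",
          type: "application/pdf"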
I am developing a website in Ruby on Rails where users can upload pictures with Paperclip; the pictures are stored in Amazon S3. Afterwards, they can edit the pictures with Aviary. But when I want to save the new picture, Aviary just gives me a temporary URL where I can get the modified picture.
Can Paperclip do this? I don't think it can save a picture from a URL and store it in S3.
I've been searching for a week now and don't know the best way to do it. I've read about Filepicker, but an account that stores data in S3 isn't free...
Finally, I've heard about the s3 gem (https://github.com/qoobaa/s3), but I don't understand how to use it. I have installed the gem, but when I require 's3' it is not recognized.
What is the best thing to do?
Why don't you pass the URL that Aviary generates to your server and upload the new photo from there? The code below does that in Python/Django:
import urllib2
import StringIO

import boto
from boto.s3.key import Key
from django.conf import settings
from django.contrib.auth.decorators import login_required
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@login_required
@csrf_exempt
def upload_from_url(request):
    origin_url = request.POST.get("origin_url")
    name = request.POST.get("name")
    try:
        conn = boto.connect_s3(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY)
        bucket_name = settings.AWS_UGC_STORAGE_BUCKET_NAME
        bucket = conn.get_bucket(bucket_name)
        k = Key(bucket)
        k.key = name
        file_object = urllib2.urlopen(origin_url)
        fp = StringIO.StringIO(file_object.read())
        k.set_contents_from_file(fp)
        return HttpResponse("Success")
    except Exception, e:
        return HttpResponse(e, mimetype='application/javascript')
Hope this helps.
Paperclip has matured a lot since this question was answered. If you want to save files by passing a URL, as of Paperclip v3.1.4, you can just assign the URL to your Paperclip attachment attribute.
Let's say I have a class User and my attachment is called avatar. We'll have the following in our User model:
has_attached_file :avatar
# Validate the attached image is image/jpg, image/png, etc
# This is required by later releases of Paperclip
validates_attachment_content_type :avatar, :content_type => /\Aimage\/.*\Z/
In our view, we can define a hidden field that will accept the temporary URL received from Aviary:
= f.hidden_field :avatar, id: 'avatar'
We can set the value of this hidden field with the Aviary onSave callback:
var featherEditor = new Aviary.Feather({
  apiKey: '#{ENV['AVIARY_KEY']}',
  onSave: function(imageID, newURL) {
    var img = document.getElementById(imageID);
    img.src = newURL;
    var avatar = document.getElementById('avatar');
    avatar.value = newURL;
    featherEditor.close();
  }
});
Within onSave, you can use AJAX to update the User object, use jQuery's .submit() to submit the form, or let the user submit it when they want.
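Server-side, nothing special is needed after that. A minimal sketch of the update action, assuming a standard UsersController (the names are illustrative); since Paperclip accepts the URL string, it fetches and stores the image when the record is saved:

# Hypothetical controller action: the hidden field's value (the Aviary URL)
# is assigned like any other attribute; Paperclip downloads the image on save.
def update
  @user = User.find(params[:id])
  if @user.update_attributes(params[:user])
    redirect_to @user, notice: 'Avatar updated.'
  else
    render :edit
  end
end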
I have a problem with S3 and CarrierWave:
I have a pseudo-form that uploads data and files. I write "pseudo" because it's an AJAX form, so the data is sent to Rails with a jQuery POST request. Files cannot be uploaded that way, so I use a popup window to upload the files to Rails, save a reference to the uploaded files in the session, and when the AJAX request submits the rest of the form, I link the uploaded files to the rest of the data.
With storage :file it works without any problems. When I receive the file I do:
uploader = ImgObjUploader.new
uploader.store!(params[:image_form][:image])
session["image"] = uploader.url
and then when I get the rest of the data:
if (session[:image] != nil) then
obj.image = File.open(session[:image])
end
And my model is:
mount_uploader :image, ImgObjUploader
This code works without any problems. For Amazon S3 I switched to:
uploader = ImgObjUploader.new
uploader.retrieve_from_store!(session[:image])
puts uploader
#obj.image = uploader
obj.image = uploader.url
but it doesn't work... I don't get an error, but the image is not saved inside the obj object. The puts uploader line prints the Amazon S3 URL.
Can anyone help me?
Thank you.
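One thing worth trying here (a sketch, not tested against this exact setup): CarrierWave's own mechanism for carrying an upload across requests is its cache, so instead of putting the URL in the session you can store the cache name and reattach it through the image_cache accessor that mount_uploader generates:

# First request: cache the upload instead of storing it immediately.
uploader = ImgObjUploader.new
uploader.cache!(params[:image_form][:image])
session[:image] = uploader.cache_name

# Second request: reattach the cached file via the mounted accessor;
# CarrierWave moves it to the configured store (S3) when the record is saved.
obj.image_cache = session[:image] if session[:image]
obj.save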
In my app, I have a requirement that is stumping me.
I have a file stored in S3, and when a user clicks a link in my app, I log in the DB that they've clicked it, decrease their 'download credit' allowance by one, and then I want to prompt a download of the file.
I don't want to simply redirect the user to the file, because it's stored in S3 and I don't want them to have the link to the source file (so that I can maintain integrity and control access).
It looks like send_file() won't work with a remote source file. Can anyone recommend a gem or suitable code that will do this?
You would need to stream the file content to the user while reading it from the S3 bucket/object.
If you use the AWS::S3 library something like this may work:
send_file_headers!(
  :length => S3Object.about(<s3 object>, <s3 bucket>)["content-length"],
  :filename => <the filename>
)
render :status => 200, :text => Proc.new { |response, output|
  S3Object.stream(<s3 object>, <s3 bucket>) do |chunk|
    output.write chunk
  end
}
This code is mostly copied from the send_file code, which by itself works only for local files or file-like objects.
N.B. I would advise against serving the file from the Rails process itself anyway. If it is possible/acceptable for your use case, I'd use an authenticated GET to serve the private data from the bucket.
Using an authenticated GET you can keep the bucket and its objects private, while allowing temporary permission to read a specific object content by crafting a URL that includes an authentication signature token. The user is simply redirected to the authenticated URL, and the token can be made valid for just a few minutes.
Using the above-mentioned AWS::S3, you can obtain an authenticated GET URL in this way:
time_of_expiry = Time.now + 2.minutes
S3Object.url_for(<s3 object>, <s3 bucket>,
                 :expires => time_of_expiry)
Full image download method using a temp file (tested on Rails 3.2):
require 'open-uri'

def download
  @image = Image.find(params[:image_id])
  open(@image.url) do |img|
    tmpfile = Tempfile.new("download.jpg")
    File.open(tmpfile.path, 'wb') do |f|
      f.write img.read
    end
    send_file tmpfile.path, :filename => "great-image.jpg"
  end
end
You can read the file from S3 and write it locally to a non-public directory, then use X-Sendfile (Apache) or X-Accel-Redirect (nginx) to serve the content.
For nginx you would include something like the following in your config:
location /private {
  internal;
  alias /path/to/private/directory/;
}
Then in your Rails controller, you do the following:
response.headers['Content-Type'] = your_content_type
response.headers['Content-Disposition'] = "attachment; filename=#{your_file_name}"
response.headers['Cache-Control'] = "private"
response.headers['X-Accel-Redirect'] = path_to_your_file
render :nothing => true
A good writeup of the process is here.
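The step not shown above is getting the file from S3 into that private directory. A rough sketch using the current aws-sdk-s3 API (the bucket name, key, and paths are placeholders, not from the original answer):

require 'aws-sdk-s3'

# Download the S3 object into the directory that nginx aliases as /private,
# then hand the request back to nginx via X-Accel-Redirect.
local_path = File.join('/path/to/private/directory', your_file_name)
s3 = Aws::S3::Resource.new
s3.bucket('your-bucket').object(your_object_key).get(response_target: local_path)
response.headers['X-Accel-Redirect'] = "/private/#{your_file_name}"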