Large file upload via Zuul

I'm trying to upload a large file through Zuul.
Basically I have the applications set up like this:
UI: this is where the Zuul Gateway is located
Backend: this is where the file must finally arrive.
I used the functionality described here, and everything works fine if I use "Transfer-Encoding: chunked". However, that header can only be set via curl; I haven't found any way to set it from the browser (it is rejected with the console error "Refused to set unsafe header ..").
Any idea how to instruct the browser to set this header?
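For reference, the kind of curl invocation that works (host, path, and filename are illustrative, not from the original post):

curl -v -H "Transfer-Encoding: chunked" -F "file=@large.iso" http://localhost:8080/backend/upload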

It seems there are actually two possible ways to upload large files via Zuul:
By using "Transfer-Encoding: chunked" in the header (but, as mentioned in the initial question, this cannot be used from a browser because the header is considered unsafe)
By bypassing the DispatcherServlet used by Zuul (prefixing the usual path with /zuul)
I found the documentation unclear on this point (that you can use either of the two options). In my case, since the file was uploaded via AngularJS (hence from the browser), I had to use the second approach, sketched below.
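A sketch of the bypass path, assuming a route such as zuul.routes.backend.path=/backend/** (the route name and endpoint are illustrative, not from the original post):

POST /backend/upload        (normal route; the request goes through Spring's DispatcherServlet and multipart processing)
POST /zuul/backend/upload   (bypass; Zuul streams the request body straight through)

The Spring Cloud docs also suggest raising the timeouts for large transfers, along these lines in application.yml:

hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 60000
ribbon:
  ConnectTimeout: 3000
  ReadTimeout: 60000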

Related

Using the apple-app-site-association file with an Azure Static Web App

I am using an Azure Static Web App to host a website. I want to host the apple-app-site-association file on the backend for deep linking. The problem I am running into is that Apple's documentation requires the file to have no extension when it is uploaded. I have tried to override the content type / MIME mapping to "application/json" via route rules, general headers, and extension rules, but nothing changes the file from being served with the content type "application/octet-stream". Any guidance on how to get these two implementations to work together would be amazing. Thank you in advance.
Edit the file metadata to set the content type.
Are you using CloudFront? If so, be sure to invalidate the cache for this file.
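If the route-rule approach is the sticking point, a minimal sketch of what that rule can look like in staticwebapp.config.json (route-level response headers are a documented Static Web Apps feature; the path follows Apple's well-known location):

{
  "routes": [
    {
      "route": "/.well-known/apple-app-site-association",
      "headers": {
        "Content-Type": "application/json"
      }
    }
  ]
}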

Set extension for Challenge file using getssl

I am using getssl to create an SSL certificate from Let's Encrypt. I think I have everything set up correctly but am running into an issue with the challenge file. The server I am using (MVC on Windows, hosted at Liquid Web) is not letting me serve a file without an extension. If I add .txt to the challenge file it works correctly, so I know the file is accessible.
So I see two choices: the first is to have getssl add a file extension to the challenge file; the second is to allow files without extensions under MVC/Windows.
I have tried changing the web.config file and also looking for a setting to change, but neither has been fruitful.
http://my.site.com/.well-known/acme-challenge/WLasfaweaefqwwqetfgewfweqrtfwefwefsefasdfasdf_W1nuoZqCWbHTU
Found the answer here finally:
Letsencrypt acme-challenge on wordpress or asp.net mvc
Add a web.config file in the directory you want to serve the challenge file from.
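A sketch of such a web.config, dropped into /.well-known/acme-challenge/, that tells IIS to serve extensionless files as plain text (adapted from the linked answer; adjust to your setup):

<configuration>
  <system.webServer>
    <staticContent>
      <!-- map extensionless files to a MIME type so IIS will serve them -->
      <mimeMap fileExtension="." mimeType="text/plain" />
    </staticContent>
  </system.webServer>
</configuration>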

Trying to open an application with parameter via an Application Protocol Handler

I am currently trying to figure out an issue with an Application Protocol Handler I've created. Following the directions listed on MSDN (http://msdn.microsoft.com/en-us/library/aa767914%28v=vs.85%29.aspx), I was able to register my application, PDF Annotator, to open via a URL. The issue I am experiencing is when I try to pass a parameter along with the call. The application will open, but the file parameter that gets passed is not opening within the application.
My registry key is verbatim as dictated by MSDN. My HTML code is as follows:
PDFAnnotator:C:\path\to\file\file.pdf
The way I understand the protocol handler, it takes the URL and launches it via the command line. That being said, I am able to open my PDF file in PDFAnnotator with the following command at the prompt:
PDFAnnotator.exe C:\path\to\file\file.pdf
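For reference, a minimal registration along the lines of the MSDN article (the install path is a placeholder, not from the original post). One thing worth knowing: the handler passes the entire URI, scheme prefix included, as %1, so the application receives "PDFAnnotator:C:\path\to\file\file.pdf" rather than the bare path, which is a common reason the file argument fails to open:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\PDFAnnotator]
@="URL:PDF Annotator Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\PDFAnnotator\shell\open\command]
; %1 receives the full URI, including the "PDFAnnotator:" prefix
@="\"C:\\Program Files\\PDFAnnotator\\PDFAnnotator.exe\" \"%1\""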
I've also tried formatting the file path in the HTML differently, thinking that might be the issue. Has anyone else come across this issue or something similar?
Obligatory Update for future generations (http://xkcd.com/979/):
The reason I was doing this is that half of the PDFs my application handled were editable while the other half were read-only. I was trying to keep the read-only ones in the browser with the Acrobat plugin (I'm targeting Chrome only), while the protocol would let me set the links of the editable ones to open in Annotator. On a whim, I tried reversing this (setting the default to Annotator and creating a protocol for Acrobat). I did this first by trying Acrobat's URI scheme (acrobat://), which didn't do anything beyond opening Acrobat. Then I tried creating a protocol for Acrobat; when that fired off, it gave me an error stating the path was wrong for the file name, path name, or volume. So, progress? I'm giving up on this for now as other priorities have come up, but hopefully this helps somebody down the road.

Alfresco CMIS, Bad filenames for download of files

I have a website that has an integration with an Alfresco installation through CMIS. The problem is that the content URL I get from Alfresco is ugly. The major problem is that the filename is "content.xxx" (xxx being the file extension).
In another project we solved this by streaming the document through the website and then to the visitor, but in this case (an internal web) that doesn't make any sense and only introduces another source of problems. I can't ask users to accept content.xxx as the filename for every file they use, so I need a way to fix this.
Is streaming the file through the website my best choice after all?
It appears that you may be using the deprecated CMIS URLs. When I get the content stream for an object named "test.txt" using the appropriate CMIS URLs and the AtomPub binding (/alfresco/cmisatom) I use the following URL:
/alfresco/cmisatom/1b8980cc-1f1b-4ac3-b26f-17aeee0cefc9/content/test.txt?id=workspace%3A%2F%2FSpacesStore%2Fc20d54f9-01b6-4c80-861b-094c2246ab21%3B1.0
If I then connect using the deprecated URL (/alfresco/s/api/cmis) the content stream URL becomes:
/alfresco/s/cmis/s/workspace:SpacesStore/i/c20d54f9-01b6-4c80-861b-094c2246ab21/content.txt
Can you double-check that you are using the non-deprecated URL and see if this addresses your issue?
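In case it helps, connecting through the non-deprecated AtomPub binding with OpenCMIS looks roughly like this (the host, credentials, and use of OpenCMIS are assumptions; the object ID is the one from the example above). The content stream then carries the real filename rather than content.xxx:

import java.util.HashMap;
import java.util.Map;
import org.apache.chemistry.opencmis.client.api.Document;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

public class CmisFilename {
    public static void main(String[] args) {
        Map<String, String> params = new HashMap<String, String>();
        params.put(SessionParameter.USER, "admin");
        params.put(SessionParameter.PASSWORD, "admin");
        // the non-deprecated AtomPub endpoint
        params.put(SessionParameter.ATOMPUB_URL, "http://localhost:8080/alfresco/cmisatom");
        params.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());

        Session session = SessionFactoryImpl.newInstance()
                .getRepositories(params).get(0).createSession();

        Document doc = (Document) session.getObject(
                session.createObjectId("workspace://SpacesStore/c20d54f9-01b6-4c80-861b-094c2246ab21"));
        // getFileName() returns the original name, e.g. "test.txt"
        System.out.println(doc.getContentStream().getFileName());
    }
}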

Alternative to X-sendfile in Apache for sending file given a URL?

I'm writing a Rails application that serves files stored on a remote server to the end user.
In my case the files are stored on S3, but the user requests them via the Rails application (hiding the actual URL). If the file were on my server's local file system, I could use the Apache header X-Sendfile to free up the Ruby process for other requests while Apache took over the task of sending the file to the client. But in my case, where the file is not on the local file system but on S3, it seems I'm forced to download it temporarily inside Rails before sending it to the client.
Isn't there a way for Apache to serve a "remote" file to the client that is not actually on the server itself? I don't mind if Apache has to download the file for this to work, as long as I don't have to tie up the Ruby process while it's going on.
Any suggestions?
Thomas, I have similar requirements/issues and I think I can answer your problem. First (and I'm not 100% sure you care about this part), hiding the S3 URL is quite easy, as Amazon allows you to point CNAMEs to your bucket and use a custom URL instead of the Amazon one. To do that, you need to point your DNS to the correct Amazon URL. When I set mine up it was similar to this: files.domain.com points to files.domain.com.s3.amazonaws.com. Then you need to create the bucket with the name of your custom URL (files.domain.com in this example). How to call that URL will differ depending on which gem you use, but one word of warning: the attachment_fu plugin I was using was incorrectly sending me to files.domain.com/files.domain.com/name_of_file... I couldn't find the setting to fix it, so a simple .sub on the S3 portion of the plugin fixed it.
On to your other questions: to execute some Rails code (like recording the hit in the db) before downloading, you can simply do this:
def download
  file = File.find(...)   # look up the record; the lookup is elided in the original
  # code to record 'hit' to database
  # redirect the client to a time-limited signed S3 URL
  redirect_to S3Object.url_for(file.filename,
                               bucket,
                               :expires_in => 3.hours)
end
That code will still cause the file to be served by S3, but it gives you the ability to run some Ruby first. (Of course the above won't work as is; you will need to point it at the correct file and bucket, and my Amazon keys are kept in a config file. The above also uses the syntax of the AWS::S3 gem - http://amazon.rubyforge.org/.)
Second, the Content-Disposition: attachment issue is a bit trickier. Hopefully your situation is simpler than mine and the following solution works. Assuming the object 'file' (in this example) is the correct S3 object, you can set the disposition to attachment with:
file.content_disposition = "attachment"
file.save
The above code can be executed after the file exists on the S3 server (unlike some other headers and permissions), which is nice, and it can also be done when you upload the file (the syntax depends on your plugin). I'm still trying to find a way to tell S3 to send it as an attachment only when requested (not every time), and if you find that, please let me know your solution. I need to be able to sometimes download a file and other times embed it (an image, for example) into HTML. I'm not using the above-mentioned redirect, but fortunately it seems that if you embed a file (such as via an HTML image tag) that has the Content-Disposition: attachment header, the browser still displays the image normally (though I haven't thoroughly tested that across enough browsers to send it into the wild).
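For what it's worth, S3 does support a per-request override: a signed (authenticated) GET URL can carry a response-content-disposition query parameter, and S3 echoes it back as the Content-Disposition header for that request only, leaving the stored object untouched. The parameter has to be included when the URL is signed, so it depends on your gem exposing it; the resulting URL looks roughly like this (values illustrative):

https://files.domain.com.s3.amazonaws.com/name_of_file.pdf?response-content-disposition=attachment%3B%20filename%3Dname_of_file.pdf&AWSAccessKeyId=...&Expires=...&Signature=...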
Hope that helps! Good luck.
