Google Cloud Run blocking download of big files [duplicate] - google-cloud-run

This question already has answers here:
Cloud Run Request Limit
(3 answers)
Closed 2 years ago.
I've deployed a Spring Boot 2.2.4 webapp on Google Cloud Run, and it serves some big files (> 100 MB) as static content. When I try to access these specific files I get an error 500 in the log; smaller files work fine.
Does anyone know if there is any limit related to file size?
Thanks,
Lucas

As documented on the official Quotas and Limits page, the maximum response size is currently 32 MB.
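Given that limit, one common workaround (a sketch only, not part of the answer above) is to keep the large files in Cloud Storage and have the Spring Boot app redirect the client to a short-lived signed URL, so the file bytes never pass through Cloud Run. The bucket name and endpoint below are assumptions:

```java
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import java.net.URL;
import java.util.concurrent.TimeUnit;

@RestController
public class BigFileController {

    private final Storage storage = StorageOptions.getDefaultInstance().getService();

    // Redirect to a signed Cloud Storage URL instead of streaming the file
    // through Cloud Run, so the 32 MB response limit never applies.
    @GetMapping("/files/{name}")
    public ResponseEntity<Void> download(@PathVariable String name) {
        // "my-static-files-bucket" is a placeholder bucket name.
        BlobInfo blob = BlobInfo.newBuilder("my-static-files-bucket", name).build();
        URL signedUrl = storage.signUrl(blob, 15, TimeUnit.MINUTES,
                Storage.SignUrlOption.withV4Signature());
        return ResponseEntity.status(HttpStatus.FOUND)
                .header(HttpHeaders.LOCATION, signedUrl.toString())
                .build();
    }
}
```

The client then downloads directly from Cloud Storage, and Cloud Run only pays the cost of issuing the redirect.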

Related

Encrypted Docker container for desktop [duplicate]

This question already has answers here:
Encrypted and secure docker containers
(6 answers)
Closed 2 months ago.
I'm looking for a solution where I can run an app on a device in an encrypted way; I mean the user of that laptop should not be able to see the source code, just the running app. But it should still run on their device.
Is there any way to encrypt the running app and use a key with an expiration date, like Windows does with a licence? My case is a little bit different: the users should be much more limited.
Thank you very much for the answers.
I didn't find any tool/framework for that.
Docker cannot solve the issue, since everything in the container must be readable by the underlying OS.
With the command docker save the user can export the image's contents into a tar file. See: https://docs.docker.com/engine/reference/commandline/save/
So you have to protect your app the same way you would protect it without docker.
First of all you should use a compiled language so the user cannot read the code directly. But even compiled code is not protected against modification etc. To take it one step further you could try to obfuscate the compiled binary.

How to upload big files to a webserver?

My application needs to send big files (> 2 GB) to a webserver. I tried to receive the file with a PHP script on the server side, but the file size exceeds the PHP limit, and sadly my hosting provider does not allow me to increase that limit far enough.
So it seems the only possible solution is to use FTP. But with FTP I would need to store the credentials in the source code, and someone could reverse engineer my software and extract them.
I haven't found any suitable and safe solution for this. Can someone give me a hint how this should be done?
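One approach hinted at by the constraints above (a sketch only; the endpoint, header names, and chunk size are assumptions, and the receiving script still has to reassemble the pieces) is to send the file in chunks that each stay under the server's limit:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class ChunkedUploader {

    // 50 MB per chunk, assumed to be below the server-side upload limit.
    private static final long CHUNK_SIZE = 50L * 1024 * 1024;

    public static void upload(Path file, String endpoint) throws IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            long total = raf.length();
            long offset = 0;
            int index = 0;
            while (offset < total) {
                int size = (int) Math.min(CHUNK_SIZE, total - offset);
                byte[] chunk = new byte[size];
                raf.seek(offset);
                raf.readFully(chunk);

                // The receiving script (hypothetical) appends each chunk,
                // using the offset/index headers to rebuild the original file.
                HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
                        .header("X-Chunk-Index", String.valueOf(index))
                        .header("X-Chunk-Offset", String.valueOf(offset))
                        .header("X-Total-Size", String.valueOf(total))
                        .POST(HttpRequest.BodyPublishers.ofByteArray(chunk))
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() != 200) {
                    throw new IOException("Chunk " + index + " failed: " + response.statusCode());
                }
                offset += size;
                index++;
            }
        }
    }
}
```

This keeps each individual request small and avoids putting FTP credentials in the client, though the server-side reassembly and some form of authentication are still up to you.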

Is it dangerous for performance to provide an MVC file download as stream forwarding from another stream source?

I want to provide, in an Azure MVC web site, a download link for files that are stored in Blob storage. I do not want the users to see my blob storage URL, and I want to serve my own download link so that I can also control the file name.
I think this can be done by passing (forwarding) the stream. I found many similar questions here on SO, e.g. here: Download/Stream file from URL - asp.net.
The problem I see is this: imagine 1000 users start downloading one file simultaneously. This will totally kill my server, as there is a limited number of threads in the pool, right?
I should say that the files I want to forward are about 100 MB, so one request can take about 10 minutes.
Am I right, or can I do this with no risks? Would an async method in MVC 5 help? Thanks!
Update: my Azure example is here only to give some background. I am actually interested in the theoretical problem of long streaming methods in MVC.
In your situation, Lukas, I'd actually recommend that you look at using the local, temporary storage area for the blob and serve it up from there. This results in a delay when delivering the file the first time, but all subsequent requests will be faster (in my experience) and result in fewer Azure storage transaction calls. It also eliminates the risk of running into throttling on the Azure storage account or blob. Your throughput limits would then be based on the outbound bandwidth of the VM instance and the number of connections it can support. I have a sample of this type of approach at: http://brentdacodemonkey.wordpress.com/2012/08/02/local-file-cache-in-windows-azure/
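The answer above is about ASP.NET MVC on Azure, but the local-cache pattern it describes is language-agnostic. Here is a rough sketch of the same idea in Java (the class, method, and directory names are made up for illustration, and the blob is fetched over plain HTTP rather than an Azure SDK):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class LocalBlobCache {

    private final Path cacheDir;
    private final HttpClient client = HttpClient.newHttpClient();

    public LocalBlobCache(Path cacheDir) {
        this.cacheDir = cacheDir;
    }

    /**
     * Returns a local copy of the blob, downloading it from storage only on
     * the first request. Later downloads are served from the instance's disk,
     * which keeps storage transactions and throttling out of the hot path.
     */
    public synchronized Path get(String fileName, URI blobUri) throws IOException, InterruptedException {
        Path cached = cacheDir.resolve(fileName);
        if (Files.exists(cached)) {
            return cached;                      // cache hit: no call to blob storage
        }
        Files.createDirectories(cacheDir);
        HttpRequest request = HttpRequest.newBuilder(blobUri).GET().build();
        HttpResponse<InputStream> response =
                client.send(request, HttpResponse.BodyHandlers.ofInputStream());
        try (InputStream in = response.body()) {
            // Write to a temp file first so a half-finished download is never served.
            Path tmp = Files.createTempFile(cacheDir, fileName, ".part");
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            Files.move(tmp, cached, StandardCopyOption.REPLACE_EXISTING);
        }
        return cached;
    }
}
```

The web layer then streams the returned local file to the client with its own file name, which also hides the storage URL as the question asks.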

HTTP server in a Win32 application [duplicate]

This question already has answers here:
Indy 10 Http Server sample
(2 answers)
Closed 9 years ago.
How do I write a desktop Delphi 7 Win32 app with an embedded HTTP server, something like Media Player Classic with its Web Interface? I need a standalone HTTP server so the user's browser can be pointed at a URL, e.g. http://:/, send a GET or POST request, and receive a response from the Delphi app.
TCP/IP libraries usually come with demo projects.
For example http://synapse.ararat.cz/doku.php/public:howto:httpsserver
There are also larger frameworks that provide an HTTP server as just one of their services (which still allows you to carve that part out of their code and re-use it).
For example (though Henri seems to have got fed up with Embarcadero and abandoned his Delphi projects): http://code.google.com/p/delphionrails/w/list
For another example there is http://blog.synopse.info/tag/HTTP
This implementation relies upon the Windows http.sys driver, which was developed as a fast HTTP protocol implementation for Microsoft IIS.
During recent DataSnap performance shootouts the mORMot-based server, working through http.sys AFAIR, showed great performance with low overhead.
BTW, the Indy-based DataSnap was shown to only survive low to medium load.
Add an IdHTTPServer to the project.
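The question is Delphi-specific, but for readers who just want to see what the embedded-server pattern looks like, here is a rough equivalent in Java using the JDK's built-in com.sun.net.httpserver (a sketch only, unrelated to Indy, Synapse or mORMot):

```java
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class EmbeddedHttpServerDemo {
    public static void main(String[] args) throws IOException {
        // Listen on localhost:8080 so a browser on the same machine can reach the app.
        HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 8080), 0);

        server.createContext("/", exchange -> {
            // Answer GET/POST requests from the browser, analogous to the
            // event handlers an IdHTTPServer would expose in a Delphi app.
            byte[] body = ("Hello from the desktop app, method = "
                    + exchange.getRequestMethod()).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        System.out.println("Browse to http://127.0.0.1:8080/");
    }
}
```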

Cannot upload files bigger than 8 GB to Amazon S3 via the multi-part upload Java API due to broken pipe

I implemented S3 multi-part upload in Java, both the high-level and the low-level version, based on the sample code from
http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?HLuploadFileJava.html and http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?llJavaUploadFile.html
When I uploaded files smaller than 4 GB, the upload processes completed without any problem. When I uploaded a 13 GB file, the code started to throw IO exceptions (broken pipe). After multiple retries, it still failed.
Here is the way to reproduce the scenario. Take the 1.1.7.1 release,
create a new bucket in US standard region
create a large EC2 instance as the client to upload file
create a file of 13GB in size on the EC2 instance.
run the sample code from either the high-level or the low-level API S3 documentation page on the EC2 instance
test any one of three part sizes: the default part size (5 MB), 100,000,000 bytes, or 200,000,000 bytes.
So far the problem shows up consistently. I did a tcpdump; it appeared the HTTP server (on the S3 side) kept resetting the TCP stream, which caused the client side to throw an IO exception (broken pipe) after the uploaded byte count exceeded 8 GB. Has anyone had similar experiences when uploading large files to S3 using multi-part upload?
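For reference, the low-level multipart flow the question refers to looks roughly like this with the current AWS SDK for Java v1 (a sketch against today's API, not the 1.1.7.1 release mentioned above; the bucket, key, region, and file path are placeholders):

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class LargeFileUploader {
    public static void main(String[] args) {
        String bucket = "my-bucket";              // placeholder bucket
        String key = "big-file.bin";              // placeholder object key
        File file = new File("/tmp/big-file.bin"); // placeholder local file

        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1")
                .build();

        long partSize = 100L * 1024 * 1024;       // 100 MB parts; every part except the last must be >= 5 MB
        List<PartETag> partETags = new ArrayList<>();

        // 1. Initiate the multipart upload and remember the upload ID.
        InitiateMultipartUploadResult init =
                s3.initiateMultipartUpload(new InitiateMultipartUploadRequest(bucket, key));

        try {
            long filePosition = 0;
            for (int partNumber = 1; filePosition < file.length(); partNumber++) {
                long size = Math.min(partSize, file.length() - filePosition);

                // 2. Upload each part; keep the returned ETag for the completion call.
                UploadPartRequest req = new UploadPartRequest()
                        .withBucketName(bucket)
                        .withKey(key)
                        .withUploadId(init.getUploadId())
                        .withPartNumber(partNumber)
                        .withFile(file)
                        .withFileOffset(filePosition)
                        .withPartSize(size);
                partETags.add(s3.uploadPart(req).getPartETag());

                filePosition += size;
            }

            // 3. Complete the upload with the collected part ETags.
            s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
                    bucket, key, init.getUploadId(), partETags));
        } catch (Exception e) {
            // On failure, abort so S3 does not keep the orphaned parts around.
            s3.abortMultipartUpload(new AbortMultipartUploadRequest(bucket, key, init.getUploadId()));
            throw new RuntimeException(e);
        }
    }
}
```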
