s3cmd - delete failed multipart uploads taking up space and being charged

I use s3cmd for backing up files to AWS S3.
Because of regular internet connection problems, 1000s of failed multipart uploads have built up and I am being charged for this space usage.
I have found a way to list all the multipart uploads and think these are the ones which have failed.
e.g.
s3cmd multipart s3://my.bucket.name/
...
2019-09-21T02:57:09.000Z s3://my.bucket.name/server1/home/jbloggs/bigfile.tar.gz wsmw7IGcBvy.yssRikscDwxozV0_7iU_YXsgwqR3nQxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxkPeUiWMp3G6NMWOemaIcWjYA5XaGaiqz09WJKnQRzJIAtQ
Is there a way to delete all these failed uploads and stop being charged?
Thanks,
Paully

You can do:
s3cmd --help
to see all the options that are available for the tool.
In your case, the multipart command lists all the active "incomplete" multipart uploads.
In the listing, you will see the "object path" and an "upload id".
To "cancel" (i.e. delete) an incomplete upload, you just have to use the abortmp command.
In your case, for example, it will be:
s3cmd abortmp s3://my.bucket.name/server1/home/jbloggs/bigfile.tar.gz wsmw7IGcBvy.yssRikscDwxozV0_7iU_YXsgwqR3nQxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxkPeUiWMp3G6NMWOemaIcWjYA5XaGaiqz09WJKnQRzJIAtQ
(i.e. s3cmd abortmp s3://mybucket/myobject Upload_ID)
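If you have thousands of these, you can script the abort step instead of running abortmp by hand. Here is a rough Ruby sketch (mine, not part of the original answer) that shells out to s3cmd; it assumes each data line of the multipart listing looks like the example above, i.e. timestamp, object path and upload id separated by whitespace:

# Sketch: abort every incomplete multipart upload reported by `s3cmd multipart`.
bucket = "s3://my.bucket.name/"

`s3cmd multipart #{bucket}`.each_line do |line|
  _timestamp, path, upload_id = line.split
  next unless path && path.start_with?("s3://") && upload_id  # skip header/blank lines
  puts "Aborting #{path} (#{upload_id})"
  system("s3cmd", "abortmp", path, upload_id)
end

It is worth doing a dry run first (comment out the system call and just print) to confirm the parsed paths and upload ids match what the listing shows.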

Related

Download or view files sent as a multipart request (PNG, PDF) through a network proxy tool?

How can I download or view files sent as a multipart request (e.g. PUT) via a software tool?
Is there any way to accomplish this with a specific tool like Charles Proxy on macOS, i.e. to download and view files that were sent as part of a request (a multipart PUT request)? I typically fix such issues by saving the file to the sandbox via code changes. Ideally, I need something that our QA team can use and that doesn't require any code modification.
Charles Proxy on macOS is sufficient for most dev/QA needs, such as:
Throttle network
Device debugging
Download response data
...
However, Charles Proxy 4.x has no option to view or download files contained in an HTTP request.
Charles Proxy 4.x (and earlier) does allow saving response files (a PDF, for example).
You can work around this by editing the saved binary request manually. It's a bit tricky, but it lets you recover the file from a multipart HTTP request without any modification to the project code.
Here are the steps (verified on Charles v4.2.8 and macOS v10.12.6):
Save the request. Right-click a recorded HTTP request (the one that sends the file) and click "Save Request...". This saves the whole HTTP request in binary format.
Inspect the hex representation of the request. Left-click that recorded HTTP request and open the "Hex" tab of the "Request" panel. This shows the binary representation of the request, together with some parsed text.
Edit the saved request. Open the saved request (step 1) with an editor that supports binary data, such as Sublime Text. Then remove all non-image bytes according to the result of step 2. In particular, remove every byte up to and including the first empty line (0d0a0d0a on macOS and Windows, 0a0a on Linux), and remove the trailing boundary bytes. (This trimming can also be scripted; see the sketch after these steps.) For example, the following screenshot shows the request bytes from step 2; the selected bytes would be deleted (note the 0d0a bytes, as this experiment was done on a Mac):
...
Save the image file. Save the file once step 3 is finished, then append a filename extension according to the Content-Type value from step 2. In this experiment the Content-Type is image/png, so .png is appended to the filename.
That's it. You can open the xxx.png file now. It's a pure image file.
Note: this experiment only contains one file, but the strategy also works when there are multiple file uploads in the request.
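Steps 3 and 4 can also be scripted instead of trimming bytes by hand in a hex editor. Below is a rough Ruby sketch (mine, not from the original answer); the file names are placeholders, and it assumes the saved request uses standard CRLF line endings and contains a single file part:

# Extract the file body from a raw multipart HTTP request saved by Charles.
raw = File.binread("saved_request.bin")            # placeholder path from step 1

# The request headers end at the first blank line.
head, _, body = raw.partition("\r\n\r\n")

# The multipart boundary is declared in the request's Content-Type header.
boundary = head[/boundary=("?)([^";\r\n]+)\1/, 2] or abort "no multipart boundary found"

# Split the body on the boundary and pick the first part that has its own header block.
part = body.split("--#{boundary}").find { |p| p.include?("\r\n\r\n") } or abort "no file part found"

# Each part is: part headers, a blank line, the file bytes, then a trailing CRLF.
_part_head, _, file_bytes = part.partition("\r\n\r\n")
File.binwrite("extracted.png", file_bytes.chomp("\r\n"))  # extension taken from the part's Content-Type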

Multipart upload to S3 with the AWS iOS SDK

I want to upload large files to S3. I know there is a multipart upload option with which I can upload a large file in parts. I read the documentation (http://docs.aws.amazon.com/mobile/sdkforios/developerguide/s3transfermanager.html) but didn't find any code for multipart upload. I have successfully uploaded a file as a single upload, but I want to use multipart for large files.
Thanks.
If you're still looking for a solution, you can check out my blog post on this subject: Taming the AWS framework to upload a large file to S3. For large files you will have to skip using the AWSTransferManager, as it uses Cognito credentials, which are only valid for an hour.

How to check if a file has finished decompressing

I have a website where I upload a zip file and the server side then decompresses it. I've since moved to Amazon's S3 service, which does not allow such things as decompressing.
I'm wondering, is there a way to check or monitor the status of that zip file and then run my model/method for pushing to S3? I'd like to run it immediately after the file is decompressed; otherwise I'd try a cron job or something.
The only conclusion I can think of right now is to output the unzipped files in my view, then select those files and submit them again to the upload method, but this seems cumbersome.
Any thoughts on this?

Alternative to X-sendfile in Apache for sending file given a URL?

I'm writing a Rails application that serves files stored on a remote server to the end user.
In my case the files are stored on S3, but the user requests the file via the Rails application (hiding the actual URL). If the file were on my server's local file-system, I could use the Apache header X-Sendfile to free up the Ruby process for other requests while Apache took over the task of sending the file to the client. But in my case, where the file is not on the local file-system but on S3, it seems that I'm forced to download it temporarily inside Rails before sending it to the client.
Isn't there a way for Apache to serve a "remote" file to the client that is not actually on the server itself? I don't mind if Apache has to download the file for this to work, as long as I don't have to tie up the Ruby process while it's going on.
Any suggestions?
Thomas, I have similar requirements/issues and I think I can answer your problem. First (and I'm not 100% sure you care about this part), hiding the S3 URL is quite easy, as Amazon allows you to point CNAMEs to your bucket and use a custom URL instead of the Amazon URL. To do that, you need to point your DNS to the correct Amazon URL. When I set mine up it was similar to this: files.domain.com points to files.domain.com.s3.amazonaws.com. Then you need to create the bucket with the name of your custom URL (files.domain.com in this example). How to call that URL will be different depending on which gem you use, but a word of warning: the attachment_fu plugin I was using was incorrectly sending me to files.domain.com/files.domain.com/name_of_file.... I couldn't find the setting to fix it, so a simple .sub method for the S3 portion of the plugin fixed it.
On to your other questions: to execute some Rails code (like recording the hit in the db) before the download, you can simply do this:
def download
  file = File.find(...
  # code to record 'hit' to database
  redirect_to S3Object.url_for(file.filename,
                               bucket,
                               :expires_in => 3.hours)
end
That code will still cause the file to be served by S3, but it still gives you the ability to run some Ruby first. (Of course the above code won't work as is; you will need to point it to the correct file and bucket, and my Amazon keys are saved in a config file. The above also uses the syntax of the AWS::S3 gem - http://amazon.rubyforge.org/.)
Second, the Content-Disposition: attachment issue is a bit more tricky. Hopefully your situation is a bit simpler than mine and the following solution can work. Assuming the object 'file' (in this example) is the correct S3 object, you can set the disposition to attachment with:
file.content_disposition = "attachment"
file.save
The above code can be executed after the file already exists on the S3 server (unlike some other headers and permissions), which is nice, and it can also be applied when you upload the file (the syntax depends on your plugin). I'm still trying to find a way to tell S3 to send it as an attachment only when requested (not every time); if you find that, please let me know your solution. I need to be able to sometimes download a file and other times embed it (an image, for example) into HTML. I'm not using the above-mentioned redirect, but fortunately it seems that if you embed a file (such as via an HTML image tag) that has the Content-Disposition: attachment header, the browser still displays the image normally (though I haven't thoroughly tested that across enough browsers to send it into the wild).
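One option worth checking for the "attachment only when requested" case is the response-content-disposition override that S3 accepts on signed GET URLs. A minimal sketch using the newer aws-sdk-s3 gem (not the aws-s3 gem shown above; the region, bucket and key are placeholders):

require 'aws-sdk-s3'

s3  = Aws::S3::Resource.new(region: 'us-east-1')          # placeholder region
obj = s3.bucket('my-bucket').object('path/to/file.png')   # placeholder bucket/key

# This particular signed URL forces a download...
download_url = obj.presigned_url(:get,
  expires_in: 3600,
  response_content_disposition: 'attachment; filename="file.png"')

# ...while a plain presigned URL for the same object keeps the default (inline)
# behaviour, so it can still be embedded in HTML.
inline_url = obj.presigned_url(:get, expires_in: 3600)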
Hope that helps! Good luck.

Ruby on Rails: Image downloads with authentication/authorization/timeouts

I have a few doubts about implementing file downloads. I'm creating an app where I use attachment_fu with Amazon S3 to upload files. Things are working pretty well so far on the uploading side. Now it's time to start on the file downloads. Here is what I need: a logged-in user searches and browses for images, and they should be able to add the files to a download basket (let's say it's a download shopping cart). Finally, the user should be able to download these file(s) from S3, probably as a zipped file.
Is there any plugin/gem that I can use for this?
The downside of giving the customer a zip file of all the files is that you'll need to first pull all of the files from S3 back onto your server, then zip them.
You can certainly do that if you want, but it will take a bit of time, so you would not want to do it synchronously as part of the browser request. Instead, do it as a background job using delayed_job or similar.
To do the actual zipping, use Zlib::GzipWriter. See http://ruby-doc.org/core/classes/Zlib/GzipWriter.html -- it is part of standard Ruby.
You could then:
email the user the actual zip file as an attachment
email the user the link to the zip file on your server
or upload the zip file to s3, then email a link to the zip file on s3
Remember to create a clean up task/job to remove the old zip files from your system...
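If you do go this route, a rough sketch of the background job (mine, not from the original answer) might look like the following. Zlib::GzipWriter on its own compresses a single stream, so this sketch pairs it with Gem::Package::TarWriter (which ships with RubyGems) to bundle several files into one .tar.gz archive; the job class, basket_files and the download_from_s3 helper are placeholders:

require 'zlib'
require 'rubygems/package'

# Hypothetical background job: pull the selected files from S3 and bundle
# them into a single .tar.gz at archive_path.
class BasketArchiveJob
  def perform(basket_files, archive_path)
    File.open(archive_path, 'wb') do |file|
      Zlib::GzipWriter.wrap(file) do |gz|
        Gem::Package::TarWriter.new(gz) do |tar|
          basket_files.each do |name|
            data = download_from_s3(name)   # placeholder: fetch the object body from S3
            tar.add_file_simple(name, 0644, data.bytesize) { |io| io.write(data) }
          end
        end
      end
    end
    archive_path
  end
end

The finished archive can then be emailed, linked to, or pushed back up to S3 as described in the list above, with a clean-up job removing old archives.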
An alternative is to not zip the files together; instead, give the user one or more links to download the files separately.
S3 enables you to create a URL to an S3 file that can be used for a set period of time. (The file would be private on S3, so a straight link to it won't work.) Here's how to create one using attachment_fu and the aws-s3 gem:
# I added this as a method to my model for the files stored in S3
def authenticated_s3_url
  # return a publicly usable url
  connect_to_aws # a local method which connects/re-connects to s3
  S3Object.url_for(full_filename,
                   bucket_name,
                   :expires_in => 60 * 60) # 1 hour
end
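You can then redirect or link straight to that URL from a controller or view, for example (illustrative; @image stands for whatever model holds the attachment):

# e.g. in a downloads controller action
redirect_to @image.authenticated_s3_url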
