AWS iOS: How to resume a multipart upload

Based on this sample http://aws.amazon.com/articles/0006282245644577, it is clear how to use multipart upload with the AWS iOS SDK. However, my uploads do not seem to be stitched together correctly when I try to resume an interrupted upload.
I use the code below to resume an upload. Is this the correct way to set the upload id of a multipart upload?
S3InitiateMultipartUploadRequest *initiateRequest = [[S3InitiateMultipartUploadRequest alloc] initWithKey:key inBucket:bucketName];
S3InitiateMultipartUploadResponse *initiateResponse = [self.amazonS3Client initiateMultipartUpload:initiateRequest];
self.multipartUpload = [initiateResponse multipartUpload];
// Set upload id to resume upload
self.multipartUpload.uploadId = uploadId;
I'd appreciate any help or pointers.

Your code needs to be robust enough to track which parts have already been uploaded. The parts of a multipart upload can be uploaded in many ways: in parallel, in a multithreaded manner, or one after another in sequence.
Whichever approach you take, you can use the ListParts API to determine which parts were uploaded successfully. Since you already have the upload ID, your design must support resuming from the next part onward (see the sketch after the request below).
GET /ObjectName?uploadId=UploadId HTTP/1.1
Host: BucketName.s3.amazonaws.com
Date: Date
Authorization: Signature
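
The request above is the raw REST form. For illustration, here is the same resume flow sketched in Python with boto3 (bucket, key, and upload ID are placeholders); the iOS SDK wraps the same ListParts operation, and the key point is to reuse the saved upload ID rather than initiating a new upload as in the question's snippet:

import boto3

s3 = boto3.client("s3")
# Placeholders; use the bucket, key, and upload ID saved when the upload started.
bucket, key, upload_id = "my-bucket", "my-key", "saved-upload-id"

# Ask S3 which parts already made it, instead of initiating a new upload.
uploaded = {}
kwargs = {"Bucket": bucket, "Key": key, "UploadId": upload_id}
while True:
    resp = s3.list_parts(**kwargs)
    for part in resp.get("Parts", []):
        uploaded[part["PartNumber"]] = part["ETag"]
    if not resp.get("IsTruncated"):
        break
    kwargs["PartNumberMarker"] = resp["NextPartNumberMarker"]

# Upload only the missing parts with s3.upload_part(...), collect their ETags,
# then finish with s3.complete_multipart_upload(
#     Bucket=bucket, Key=key, UploadId=upload_id,
#     MultipartUpload={"Parts": [{"PartNumber": n, "ETag": e}
#                                for n, e in sorted(uploaded.items())]})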
Another useful resource to help optimize multipart uploads: http://aws.typepad.com/aws/2010/11/amazon-s3-multipart-upload.html

Related

Using signed URLs with multipart upload

I would like to use multipart upload to allow uploading big files to the bucket, split into chunks. Chunks should be uploaded in parallel; this is the reason for using multipart upload (as far as I know, resumable upload doesn't offer this feature).
https://cloud.google.com/storage/docs/multipart-uploads
The flow should look like this:
backend generates a signed URL for starting the upload
client calls the previously generated URL
backend generates a signed URL for uploading all the chunks
client uses the previously generated URL to upload multiple chunks
after all chunks are uploaded, the client confirms the end of the upload and the chunks are merged into the file.
The problem I encountered is the following:
To perform the PUT request that uploads a chunk, two arguments need to be provided: uploadId and partNumber. According to the documentation, these arguments should be provided as query params. But a signed URL doesn't work if the arguments differ from those in the signature provided during generation. So, for example, if I generate a signed URL with partNumber=1, I can't use it to upload the chunk with partNumber=2. Is there any way to generate a signed URL with variable query params? Ideally something like https://.../uploadId=*&partNumber=*? Or is the only option to generate a signed URL for every chunk to match the signature?
Regards
I've checked the documentation but didn't find anything useful. Unfortunately, there aren't many examples for multipart upload.
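
For what it's worth, since the V4 signature covers the exact query string, generating one signed URL per chunk appears to be the only option. A minimal sketch using the google-cloud-storage Python client, assuming the XML-API multipart upload has already been initiated and upload_id is known (bucket and object names are placeholders):

import datetime
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("big-file.bin")  # placeholders

def signed_url_for_part(upload_id, part_number):
    # Each part needs its own URL because uploadId and partNumber are
    # baked into the V4 signature via query_parameters.
    return blob.generate_signed_url(
        version="v4",
        method="PUT",
        expiration=datetime.timedelta(hours=1),
        query_parameters={"uploadId": upload_id, "partNumber": str(part_number)},
    )

# One URL per chunk the client will upload in parallel.
urls = [signed_url_for_part("saved-upload-id", n) for n in range(1, 11)]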

How to see the speedup when using Cloudinary's "direct upload" method?

I have a RoR web app that allows users to upload images and uses Cloudinary as cloud storage. I read their documentation and found a feature called "direct uploading" which reduces my server's load. To my understanding, the idea is to change the workflow from
image -> server -> Cloudinary
to
image -> Cloudinary
so my server only stores a Cloudinary URL in the database, not the image file (tell me if I'm wrong, thanks).
So my question is: how do I check whether I have switched to the "direct uploading" method successfully? Open the element inspector to see the time cost of each POST and GET request? Are there better options?
I expect a big improvement from this change, but how can I see it?
Thanks, from a rookie =)
# The app is deployed on Heroku.
# It hasn't been switched to the direct uploading method yet.
# The app is private and serves only around 10 people.
You can indeed (and it is highly recommended to) bypass your server and let Cloudinary handle the upload directly. This reduces your server's work to simply storing the uploaded image's details, while the image itself is stored directly in your Cloudinary account, which speeds up the upload process. You can test out the sample project, which demonstrates both server-side and client-side uploads. To verify that direct uploading is in effect, watch the browser's network tab: the upload POST should go to Cloudinary's API endpoint rather than to your Heroku app.
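
For reference, a sketch of the server-side piece of a signed direct upload, shown here with the cloudinary Python SDK (cloud name and keys are placeholders; in the Rails app this would come from the cloudinary gem's helpers instead):

import time
import cloudinary
import cloudinary.utils

# Placeholders; in production these come from your Cloudinary credentials.
cloudinary.config(cloud_name="demo", api_key="KEY", api_secret="SECRET")

def direct_upload_params():
    # The server only signs the request; the image bytes never touch it.
    params = {"timestamp": int(time.time())}
    params["signature"] = cloudinary.utils.api_sign_request(
        params, cloudinary.config().api_secret)
    params["api_key"] = cloudinary.config().api_key
    return params

# The browser POSTs the file plus these params to
# https://api.cloudinary.com/v1_1/<cloud_name>/image/upload --
# if the network tab shows that host instead of your app, direct upload works.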

Can CDNs handle base64 encoded data?

I'm trying to make an app where I take pictures from users, add them to a canvas, draw things on them, then convert them to a base64 string and upload them.
For this purpose I'm considering using a CDN, but I can't find information on what I can upload to one or how client-side uploading works. I'd like to be able to send the image as base64 along with the name to give the file, so that when it arrives at the CDN's origin, the base64 image is decoded and saved under the specified name (which I will add to the database on the server). Is this possible? Can I have some kind of save.php file on the origin where I write my logic to save the file and to which I'll send XHR requests? Or how does this whole thing work? I know this question may sound trivial, but I've been looking for hours and still haven't found anything that explains in detail how client-side uploading works for CDNs.
CDNs usually do not provide such an upload service for the client side, so you cannot do it this way. A CDN caches and serves content from your origin server; the upload and decode logic (your save.php idea) has to live on the origin, and the CDN then pulls the saved file from there.
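
As an illustration of that origin-side logic (the question mentions save.php; this is the same idea sketched in Python, with hypothetical function and directory names):

import base64
import os

UPLOAD_DIR = "/var/www/uploads"  # hypothetical directory on the origin server

def save_data_url(data_url, filename):
    # Strip a "data:image/png;base64," header if the client sends one.
    _, _, encoded = data_url.partition(",")
    raw = base64.b64decode(encoded or data_url)
    # Never trust a client-supplied name: keep only the basename.
    safe_name = os.path.basename(filename)
    with open(os.path.join(UPLOAD_DIR, safe_name), "wb") as f:
        f.write(raw)
    return safe_name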

AWS multipart upload change token solution?

I was trying to implement a multipart upload using the S3UploadPartRequest class. However, I was wondering how to deal with the following situation:
during the multipart upload, I have to change the token for some reason, such as an ExpiredToken error. Here is my question: should I start the upload over, or do the parts I already uploaded still work?
It turns out that the parts I already uploaded still work; I just need to upload the rest of them.
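
A multipart upload is identified by its bucket, key, and uploadId, not by the credentials that started it, so a client rebuilt with fresh credentials can pick up where it left off. A boto3 sketch of the idea (all names and the chunk body are placeholders):

import boto3

# Rebuild the client with fresh credentials after the old token expires;
# the multipart upload itself is unaffected.
s3 = boto3.client(
    "s3",
    aws_access_key_id="NEW_KEY",
    aws_secret_access_key="NEW_SECRET",
    aws_session_token="NEW_TOKEN",
)

bucket, key, upload_id = "my-bucket", "my-key", "saved-upload-id"

# Parts uploaded under the old token are still there.
resp = s3.list_parts(Bucket=bucket, Key=key, UploadId=upload_id)
etags = {p["PartNumber"]: p["ETag"] for p in resp.get("Parts", [])}

# Continue with the same upload_id: upload the remaining parts, then complete.
next_part = max(etags, default=0) + 1
part = s3.upload_part(Bucket=bucket, Key=key, UploadId=upload_id,
                      PartNumber=next_part, Body=b"...next chunk...")
etags[next_part] = part["ETag"]

s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={"Parts": [{"PartNumber": n, "ETag": e}
                               for n, e in sorted(etags.items())]})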

Receive video files through HTTP on my Rails JSON API

So here's the thing. I believe my case is pretty particular at this point, and some help from the experts is highly advisable.
I have an API built on Rails (3.2.6), and what I want to be able to do is receive a video file (mostly .mp4, .avi) and upload it to S3 through a process queue (using Resque).
Now I'm kind of lost on how to do this. To my understanding, I would be receiving a byte[] (array of bytes) through the request, which is the video, and sending that as a param to my Resque job in order to upload it (Resque job params can only be strings, not objects)?
Has anyone had any experience with this sort of procedure? We're pretty much trying to mimic the create_video method from http://docs.brightcove.com/en/media/, where a Video object can be created either by sending the file directly in the request or a link to the file...
Any suggestions?
Try using the CarrierWave gem. You should allow someone to HTTP POST the file data to your API, then save that file data on the backend and upload it to S3 using CarrierWave. Since Resque job params can only be strings, save the upload to a temporary file and enqueue the file's path rather than the raw bytes (see the sketch below).
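
The pattern is language-agnostic; here it is sketched in Python (in the Rails app the equivalents would be CarrierWave plus a Resque job; the queue object and all names are hypothetical):

import shutil
import tempfile
import boto3

def handle_upload(file_obj, queue):
    # 1. Persist the request body to a temp file instead of passing bytes around.
    with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp:
        shutil.copyfileobj(file_obj, tmp)
        path = tmp.name
    # 2. Enqueue only the path -- a plain string, mirroring Resque's
    #    string-only job params. "queue" is a hypothetical job queue client.
    queue.enqueue("upload_video", path)
    return path

def upload_video(path):
    # 3. The background worker streams the file to S3 and can delete it after.
    boto3.client("s3").upload_file(path, "my-bucket",
                                   "videos/" + path.split("/")[-1])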
