How can I get the last transaction on an S3 bucket? - upload

I'm a beginner with the AWS S3 SDK and am running into a problem in my project.
I want to get the number of bytes transferred for a file that is currently being uploaded or downloaded. My application uploads content directly from the client browser to Amazon S3. But if the data transfer is interrupted and an exception is raised, I can't track how much of the file has been transferred.

If the data transfer is interrupted, you will have to start over; for a plain PUT there is no way to resume the transfer where you left off. (Multipart uploads are the exception: parts that completed are kept, and only the remaining parts need to be re-sent.) Check out the official Amazon S3 forum for more info.
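You can, however, observe how many bytes have gone out before the failure. A minimal sketch of a progress tracker, assuming a boto3-style `Callback` hook (the class name and file/bucket names are illustrative):

```python
import threading

class ProgressTracker:
    """Accumulates bytes transferred; usable as a boto3 upload Callback."""

    def __init__(self, total_size):
        self.total_size = total_size
        self.bytes_seen = 0
        self._lock = threading.Lock()  # callbacks may fire from worker threads

    def __call__(self, bytes_amount):
        with self._lock:
            self.bytes_seen += bytes_amount

    @property
    def percent(self):
        return 100.0 * self.bytes_seen / self.total_size

# Usage with boto3 (names are placeholders):
#   tracker = ProgressTracker(os.path.getsize("video.mp4"))
#   s3.upload_file("video.mp4", "my-bucket", "video.mp4", Callback=tracker)
#   # on failure, tracker.bytes_seen tells you how much was sent
```

If the upload raises, `tracker.bytes_seen` still holds the count at the point of interruption.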

Related

AWS s3 upload + api taking too long

Currently, I'm using Amazon S3 to store all the objects, like images and videos. I'm using the iOS AWS SDK to upload the objects.
The flow of my application is:
The user snaps a photo or records a video.
The user adds additional information on a form, something like Instagram's caption (using Alamofire).
The user clicks continue, and the app begins uploading the images and videos to S3 using the iOS AWS SDK.
After the object has been successfully uploaded, S3 responds with a link.
Finally, Alamofire sends the information, including the link from S3, as parameters to a POST API.
The problem I'm facing is that it takes quite some time to do the AWS upload plus the API call. This is a bad user experience. Most of the images are roughly under 5 MB.
My solutions:
Resize the image, but what about video?
After the user clicks continue, instead of doing the AWS upload and API call inline, do them in the background on a different screen, so that users don't need to wait for a loading indicator.
What approach is best for solving this problem? Thanks.
Your problem can be rephrased as the question "How do I minimise upload latency?"
The most important thing is to use AWS for all app infrastructure.
If you are using S3 as file storage and uploading from an external server, you'll face a huge impact on upload speed due to network latency.
When a user uploads, your back-end process writes to disk first, so use EBS SSD volumes.
Choose an EC2 instance type with enhanced networking performance.
Place all your AWS resources in the same region and the same availability zone.
Another solution you might want to consider is to sync to S3 as a post-operation instead of in-operation.
That simply means your server takes the upload and gives a URL back to the user, while an independent background process syncs the file to S3 outside the user's requests.
In this situation you would have User -> Server -> User.
Right now you have User -> Server -> S3 -> User.
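The post-operation pattern can be sketched as a queue plus a background worker; the S3 call itself is commented out here, since it needs real credentials, and all names are illustrative:

```python
import queue
import threading

upload_queue = queue.Queue()
synced = []  # stands in for "now on S3"; purely illustrative

def sync_worker():
    # Independent background process: drains the queue outside user requests.
    while True:
        path = upload_queue.get()
        if path is None:  # shutdown sentinel
            break
        # Real code would call s3.upload_file(path, bucket, key) here.
        synced.append(path)
        upload_queue.task_done()

worker = threading.Thread(target=sync_worker, daemon=True)
worker.start()

# Request-handler path: accept the file, enqueue it, respond immediately.
upload_queue.put("/uploads/photo-123.jpg")
upload_queue.put(None)
worker.join()
```

The handler returns as soon as the file is enqueued, which is why the user never waits on S3.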
You may want to use a content delivery network such as Amazon CloudFront when speed and user experience are of utmost importance.
Amazon S3 is ideal when low cost of bandwidth and storage matters more than speed of access, whereas CloudFront is all about speed of access.
See the AWS documentation on how to set up CloudFront with your S3 bucket.
When your users are distributed worldwide, you should enable and use S3 Transfer Acceleration. This decreases upload time for users who are not close to your bucket's region.
Besides that, uploading 5 MB will take some time depending on mobile connectivity, so you should resize the image on the client. You can also think about starting the upload earlier, e.g. while the user enters details, the upload already runs in the background.
I noticed that including "region" in the config speeds up uploading by about 7 seconds.
There may be other ways to improve; I'm currently researching including a direct endpoint URL. Or maybe that's it: https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
For me, it was not providing the content-type option in the upload function. Adding it saved 60+ seconds.
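Setting the content type explicitly, rather than letting the SDK probe the file, can be sketched like this in Python; the upload call itself is commented out since it needs real credentials, and all names are illustrative:

```python
import mimetypes

def content_type_for(path):
    """Guess a Content-Type from the file extension, with a safe fallback."""
    guessed, _ = mimetypes.guess_type(path)
    return guessed or "application/octet-stream"

# With boto3, pass it via ExtraArgs (bucket/key names are placeholders):
# s3.upload_file(
#     "clip.mp4", "my-bucket", "videos/clip.mp4",
#     ExtraArgs={"ContentType": content_type_for("clip.mp4")},
# )
```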

Cloud-based automatic new youtube upload download to dropbox

Basically, what I'm trying to achieve is:
Whenever a new video is posted on my channel, trigger a Zapier/IFTTT action to download it to Dropbox as MP4 for backup purposes; as an added bonus, extract the audio to MP3.
I want to do it automatically and on a free remote service, not my PC or a VPS. I know all of this could easily be done locally, but I want an independent solution for a number of reasons.
The problem is that the YouTube API prohibits video downloads.
So far I have investigated web-based downloaders, but couldn't figure out a way to automatically get a download link without visiting the website. CloudConvert doesn't support direct YouTube downloads.
The closest thing I found is a web fork of youtube-dl that allows it to run on ownCloud, but I'm failing to find a free ownCloud provider that allows user apps.
There should not be more than 3 short channel uploads a day, so performance and delays are not much of an issue, I'm happy to wait up to a day for the download to commence.
Any help much appreciated.
One step of the process could be offcloud, which can fetch your YouTube video and store it on cloud storage such as Google Drive, FTP, etc., and it has an API.

iOS AWS S3/DynamoDB Design for Video Recording App with Cloud Storage

I'm creating an iOS app that allows users to:
Log into the app
Record videos
Upload the videos to the cloud
Watch their videos stored in the cloud
What is the best way to design the backend? Normally I would use Parse (which I've used previously), but given that it's shutting down, I need to find a new solution. Keep in mind that I've never done server-side programming before.
I was thinking of using a configuration as follows:
AWS DynamoDB
Usernames, passwords, emails, etc.
Video meta data - titles, description, tags, etc.
URL/pointers to video files on AWS S3
AWS S3
Video files
AWS Lambda
Processing of video files - compression, joining/splitting videos, etc.
This is my first time implementing a backend like this, so I was wondering: does this make sense?
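The layout described above would give each video roughly one DynamoDB item like the following; all attribute names, IDs, and URLs are illustrative, not a prescribed schema:

```python
# One DynamoDB item per video: metadata lives in DynamoDB,
# the binary lives in S3, and the item just points at it.
video_item = {
    "userId": "user-42",            # partition key
    "videoId": "vid-001",           # sort key
    "title": "Beach day",
    "description": "Short clip from Saturday",
    "tags": ["beach", "summer"],
    "s3Url": "https://my-bucket.s3.amazonaws.com/user-42/vid-001.mp4",
}

# With boto3 this would be written as (table name is a placeholder):
# boto3.resource("dynamodb").Table("Videos").put_item(Item=video_item)
```

Keeping only the pointer in DynamoDB matters because DynamoDB items are capped at 400 KB, far too small for video.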
I would suggest you add SQS in there; it will help decouple components.
SNS for notifications and SES for emails.
AWS Elastic Transcoder to maintain uniformity (for video conversion, if required).
I would also suggest using a CDN (Amazon CloudFront) so that end users benefit from it.
Use Amazon Glacier to store data for retired users who have not used their account in a long time (whatever limit you set), then delete it permanently (after a set time is spent in Glacier storage).

How to use the AWS iOS SDK to listen to an S3 bucket and download a file once it gets put there

I'm writing an iOS app that is using the AWS SDK to upload and download files from an S3 bucket. Some data will be processed on EC2 and it will place a file into S3 after an unknown amount of time, so I want to have my app listen to the S3 bucket and automatically download a file with a particular name once it gets created. I've been looking through the AWS iOS API and haven't been able to find any listeners of the type I'm looking for. I also feel like AWS Lambda may be helpful here but all I've seen is about a mobile app triggering a Lambda function, not receiving a message from one. Any idea how I should go about this?
Since your Amazon EC2 instance is putting a file into an Amazon S3 bucket, it can also send a push notification to your device using Amazon SNS Mobile Push Notifications. Alternatively, you can set up an AWS Lambda function on the S3 bucket to push the notification. The device should then pull the file from the S3 bucket.
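The Lambda route would look roughly like this: S3 invokes the function with an event describing the new object, and the function publishes to an SNS topic. The parsing follows the documented S3 event record shape; the topic ARN is a placeholder:

```python
import json

def build_notification(event):
    """Extract bucket/key from an S3 event record into an SNS-ready message."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    return {"bucket": bucket, "key": key,
            "message": f"New file ready: s3://{bucket}/{key}"}

def lambda_handler(event, context):
    note = build_notification(event)
    # boto3 import is deferred so the parsing above is testable without AWS.
    import boto3
    boto3.client("sns").publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:file-ready",  # placeholder
        Message=json.dumps(note),
    )
    return note
```

The SNS topic would be wired to a mobile push endpoint, and on receipt the app downloads the named key.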

iOS 8, uploading multiple files into an AWS S3 bucket while the app runs in the background

I am new to AWS S3 and working on an app that will have a large number of files (probably hundreds of 1 MB files). Because of the large total size, I want to be able to perform the upload while the app is in background mode. I went through the AWS documentation at http://docs.aws.amazon.com/mobile/sdkforios/developerguide/s3transfermanager.html
But it does not say whether it can run while the app is in the background. This app is kind of like a Dropbox for photos.
To clarify: I am using iOS 8, as per this page:
https://developer.apple.com/library/ios/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/BackgroundExecution/BackgroundExecution.html
Feedback is welcome.
AWSS3TransferManager does not support background transfers. You should use AWSS3PreSignedURLBuilder instead. Take a look at the S3BackgroundTransfer-Sample, which demonstrates how to use background transfer with AWSS3PreSignedURLBuilder.
For those who still see this question: check out my tutorial on uploading a large file to S3 using multipart upload in the background. You should be able to extrapolate how to do it for a non-multipart upload:
Taming the AWS framework to upload a large file to S3
