Red5: Save streams on S3 - upload

I am using Red5 to upload a stream to my EC2 instance. Ultimately, though, I would like the uploaded stream to be saved on S3. I could transfer the file from EC2 to S3 after the upload, but ideally, I would love Red5 to save the file on S3 to begin with.
Is there any way to do this?

You cannot "record" to s3, you may only upload a recording to s3 once the recording process has been completed. I have code to playback files located on s3 with red5, if you're interested.

Related

Reading video during cloud dataflow, using GCSfuse, download locally, or write new Beam reader?

I am building a Python cloud video pipeline that will read video from a bucket, perform some computer vision analysis, and return frames back to a bucket. As far as I can tell, there is no Beam read method that passes GCS paths to OpenCV, similar to TextIO.read(). My options moving forward seem to be to download the files locally (they are large), use GCS Fuse to mount them on a local worker (possible?), or write a custom source method. Does anyone have experience with which makes the most sense?
My main confusion comes from this question:
Can google cloud dataflow (apache beam) use ffmpeg to process video or image data
How would ffmpeg have access to the path? It's not just a question of uploading the binary, is it? There needs to be a Beam method to pass the item, correct?
I think you will need to download the files first and then pass them through.
However, instead of saving the files locally, is it possible to pass bytes straight to OpenCV? Does it accept any sort of byte stream or input stream?
You could have one ParDo which downloads the files using the GCS API and then passes them to OpenCV through a stream, ByteChannel, stdin pipe, etc.
If that is not available, you will need to save the files to disk locally and then pass OpenCV the filename. This can be tricky because you may end up using too much disk space, so make sure to garbage collect properly and delete the files from local disk after OpenCV processes them.
I'm not sure, but you may also need to select a certain VM machine type to ensure you have enough disk space, depending on the size of your files.
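A rough sketch of that ParDo pattern, shown here with the Beam Java SDK and the OpenCV Java bindings (the question describes a Python pipeline; the same download-to-temp-file-then-delete flow applies to the Python SDK). The element type (a gs:// path string), the sample path, and the frame-handling step are assumptions for illustration, and the OpenCV native library must be available on the workers.

```java
// Download the GCS object inside the DoFn, hand OpenCV a local filename,
// then delete the temp file so worker disk does not fill up.
import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.transforms.DoFn;
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

import java.io.InputStream;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AnalyzeVideoFn extends DoFn<String, String> {
    @ProcessElement
    public void processElement(ProcessContext c) throws Exception {
        String gcsPath = c.element();  // e.g. "gs://my-bucket/videos/input.mp4"

        // Open the GCS object as a ReadableByteChannel and copy it to a local
        // temp file, since VideoCapture wants a filename rather than a byte stream.
        ReadableByteChannel channel =
            FileSystems.open(FileSystems.matchNewResource(gcsPath, false));
        Path localCopy = Files.createTempFile("video-", ".mp4");
        try (InputStream in = Channels.newInputStream(channel)) {
            Files.copy(in, localCopy, StandardCopyOption.REPLACE_EXISTING);
        }

        try {
            VideoCapture capture = new VideoCapture(localCopy.toString());
            Mat frame = new Mat();
            while (capture.isOpened() && capture.read(frame)) {
                // ... run the computer-vision analysis on `frame` ...
            }
            capture.release();
        } finally {
            // Clean up the local copy after OpenCV is done with it.
            Files.delete(localCopy);
        }
        c.output(gcsPath);
    }
}
```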

How to apply image manipulations to Amazon S3 images in an iPhone app

I am currently using the Amazon S3 server, and I am able to upload images from an iPhone.
Is there a way to manipulate (cropping, transformations, effects, face detection) the images that I get from the Amazon server?
There are no services in Amazon Web Services that provide image manipulation.
If you wish to manipulate images, you will need to write your own code, either server-side (e.g. on a web server, using graphics libraries) or within your iPhone app.
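A small sketch of the server-side option: fetch the image from S3 and crop/resize it with the standard Java imaging APIs. The bucket, key, crop box, and output sizes are placeholders; face detection would need an additional library (e.g. OpenCV) and is not shown.

```java
// Fetch an image from S3, crop it, and produce a thumbnail with java.awt/ImageIO.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.InputStream;
import javax.imageio.ImageIO;

public class ImageManipulator {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        BufferedImage original;
        try (InputStream in = s3.getObject("my-image-bucket", "uploads/photo.jpg")
                                .getObjectContent()) {
            original = ImageIO.read(in);
        }

        // Crop a 200x200 region starting at (50, 50).
        BufferedImage cropped = original.getSubimage(50, 50, 200, 200);

        // Resize the crop down to a 100x100 thumbnail.
        BufferedImage thumbnail = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = thumbnail.createGraphics();
        g.drawImage(cropped, 0, 0, 100, 100, null);
        g.dispose();

        ImageIO.write(thumbnail, "jpg", new File("thumbnail.jpg"));
    }
}
```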

How do I install OpenCV on Windows Azure?

I am a beginner with Windows Azure and I want to make an app which does facial recognition on a video stream. Hence I need to install OpenCV (a C++ library).
How do I do that? And how do I get the video stream from the client app? (I am in control of the client app as well).
If the library simply needs to be on the path for your application to pick it up, then just add it as an item in the project you're deploying, and it will get uploaded to Azure and deployed alongside your application.
If some commands are required to install it, you can use startup tasks.
As for the video stream, you can open a socket (using a TCP endpoint) and stream the video up to an Azure instance that way. That's probably the most efficient way of doing it if you want real-time video processing. If you want to record the video and upload it, look at using blob storage. You can then use a message queue to signal to the worker that there is a video waiting to be processed.
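A sketch of the record-then-upload path: put the video into blob storage and drop a message on a queue so a worker knows there is a video to process. This uses the current azure-storage-blob / azure-storage-queue Java clients as an assumption (the worker-role-era APIs differ), and the container, queue, and blob names are placeholders.

```java
// Upload a recorded video to blob storage, then signal a worker via a queue message.
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobClientBuilder;
import com.azure.storage.queue.QueueClient;
import com.azure.storage.queue.QueueClientBuilder;

public class VideoUploader {
    public static void main(String[] args) {
        String conn = System.getenv("AZURE_STORAGE_CONNECTION_STRING");

        // 1. Upload the recorded video into blob storage.
        BlobClient blob = new BlobClientBuilder()
            .connectionString(conn)
            .containerName("videos")
            .blobName("clip-0001.mp4")
            .buildClient();
        blob.uploadFromFile("clip-0001.mp4");

        // 2. Tell the worker which blob is waiting to be processed.
        QueueClient queue = new QueueClientBuilder()
            .connectionString(conn)
            .queueName("videos-to-process")
            .buildClient();
        queue.sendMessage("videos/clip-0001.mp4");
    }
}
```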

Cropping and resizing images on the fly with node.js

I run a node.js server on Amazon EC2. I am getting a huge CSV file containing links to product images on a remote host. I want to crop and store the images in different sizes on Amazon S3.
How could this be done, preferably just with streams, without saving anything to disk?
I don't think you can get around saving the full-size image to disk temporarily, since resizing/cropping/etc. would normally require having the full image file. So, I say use ImageMagick.
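For illustration, here is the shape of that flow sketched in Java (the same steps apply from node.js by shelling out to ImageMagick): save the remote image to a temp file, let ImageMagick's `convert` do the resize, then upload the result to S3. The URL, bucket name, and sizes are placeholders, and ImageMagick must be installed on the instance.

```java
// Download to a temp file, resize with ImageMagick, upload the result to S3, clean up.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ResizeWorker {
    public static void main(String[] args) throws Exception {
        // 1. Download the full-size image to a temp file.
        Path original = Files.createTempFile("product-", ".jpg");
        try (InputStream in = new URL("https://example.com/images/product.jpg").openStream()) {
            Files.copy(in, original, StandardCopyOption.REPLACE_EXISTING);
        }

        // 2. Resize with ImageMagick's convert.
        Path resized = Files.createTempFile("product-300-", ".jpg");
        new ProcessBuilder("convert", original.toString(),
                "-resize", "300x300", resized.toString())
            .inheritIO().start().waitFor();

        // 3. Upload the resized copy to S3 and delete the temp files.
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        s3.putObject("my-product-images", "300x300/product.jpg", resized.toFile());
        Files.delete(original);
        Files.delete(resized);
    }
}
```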

Cannot upload files bigger than 8 GB to Amazon S3 via the multi-part upload Java API due to broken pipe

I implemented S3 multi-part upload in Java, both the high-level and the low-level version, based on the sample code from
http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?HLuploadFileJava.html and http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?llJavaUploadFile.html
When I uploaded files smaller than 4 GB, the uploads completed without any problem. When I uploaded a 13 GB file, the code started throwing IO exceptions (broken pipe). After multiple retries, it still failed.
Here is the way to reproduce the scenario, using the 1.1.7.1 release:
create a new bucket in the US Standard region
create a large EC2 instance as the client to upload the file
create a 13 GB file on the EC2 instance
run the sample code from either the high-level or the low-level API S3 documentation page on the EC2 instance
test any one of three part sizes: the default part size (5 MB), 100,000,000 bytes, or 200,000,000 bytes
So far the problem shows up consistently. I ran tcpdump; it appeared the HTTP server (the S3 side) kept resetting the TCP stream, which caused the client side to throw an IO exception (broken pipe) once the uploaded byte count exceeded 8 GB. Has anyone had similar experiences when uploading large files to S3 using multi-part upload?
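For reference, the high-level API path being exercised looks roughly like this against a newer v1 SDK (a sketch only; the 1.1.7.1-era classes differ slightly). The bucket, key, file path, and part size are placeholders matching the repro description above.

```java
// High-level multi-part upload via TransferManager with an explicit minimum part size.
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;

public class LargeFileUpload {
    public static void main(String[] args) throws Exception {
        TransferManager tm = TransferManagerBuilder.standard()
            .withMinimumUploadPartSize(100_000_000L)   // one of the part sizes tested
            .build();

        Upload upload = tm.upload("my-us-standard-bucket", "big/13gb-file.bin",
                new File("/data/13gb-file.bin"));
        upload.waitForCompletion();                    // blocks until all parts finish
        tm.shutdownNow();
    }
}
```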

Resources