I am in the process of building a Rails/Flex application which requires audio to be recorded and then stored in our Amazon S3 account. I have found no alternative to using some form of RTMP server for recording audio through Flash, but our hosting environment will not allow us to install anything like FMS, Red5, etc.
Is there any existing Ruby/Rails RTMP solution that will allow audio recording? If not, is it possible for Rails to at least intercept the RTMP stream, so that I can hope to reference Red5's source or something for parsing the data (long shot, I know)?
The other alternative I can think of is hosting a red5 server on another host and communicating with our rails app once the saving/uploading is done, which is not preferred.
Am I going to have any luck here?
I was able to get this to work:
1) Flash Player 10.1 can get the microphone's ByteArray
2) I captured this ByteArray, used Adobe's WavWriter class (from a microphone tutorial they put together) to create a new ByteArray in proper wav format
3) Sent this over to rails through RubyAMF
4) Used something along the lines of
wav_data = rubyamf_params[0][:wav_data]
# open in 'wb' (binary write) mode -- File.new without a mode opens read-only
File.open('c:/hello.wav', 'wb') do |f|
  f << wav_data.pack('c*') # repack the array of signed bytes into a binary string
end
Once I've got this wav data it won't be too far of a stretch to convert it to an mp3, woo
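If you go the ffmpeg route for that conversion, something like this should do it (the filenames are just the ones from the snippet above; an ffmpeg build with libmp3lame is assumed):
ffmpeg -i hello.wav -codec:a libmp3lame -qscale:a 2 hello.mp3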
I have a lot of long (45 mins - 90 mins) MP4 videos in a public S3 bucket and I want to play them in my iOS app using AVPlayer.
I am using AVPlayerViewController to play them but I need to wait several minutes before they start playing as it downloads the whole video rather than streaming it.
I am caching it locally so this is only happening the first time but I would love to stream the video so the user doesn't have to wait for the entire video to download.
Some people are pointing out that I need CloudFront to stream videos, but in the documentation I've read that this is only necessary when you have many people streaming the same file. I'm building an MVP, so I only need a simple solution.
Is there any way to stream an MP4 video from an S3 bucket with AVPlayerViewController without it fully downloading the file before playing it to the user?
TL;DR
AVPlayer does not support 'streaming' (HTTP range requests) as you would define it, so either use an alternative video player that does, or use a real media streaming protocol like HLS, which is supported by AVPlayer & would start the video before downloading it all.
CloudFront is great for delivery in general but is not truly needed - you may have seen it mentioned because of CloudFront RTMP distributions, but those have now been discontinued.
Detailed Answer
S3 supports a concept called byte-range fetches using HTTP range requests - you can verify this by doing a HEAD request to your video file & seeing that the Accept-Ranges header exists with a value of bytes (rather than none).
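For example, with curl (the URL is a placeholder for your own object):
curl -I https://your-bucket.s3.amazonaws.com/videos/video1.mp4
# the response should include: Accept-Ranges: bytes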
Load your MP4 file in the browser & notice that it can start as soon as you click play. You're also able to move to the end of the video file and yet, you haven't really downloaded the entire video file. HTTP range requests are what allow this mechanism to work. Small chunks of the video can be downloaded as & when the user gets to that part of the video. This saves the file server & the user bandwidth while providing a much better user experience than the client downloading the entire file.
The server would need to support byte-range fetches in the first instance before the client can then decide to make range requests (or not to). The key is that, once the server supports it, it is up to the HTTP client to decide whether it wants to fetch the data in chunks or all in one go.
This isn't really 'streaming' as you know it & are referring to in your question but it is more 'downloading the video from the server in chunks and playing it back' using HTTP 206 Partial Content responses.
You can see this in the Network tab of your browser as a series of multiple 206 responses when seeking in the video. The entire video is not downloaded but the video is streamed from whichever position that you skip to.
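You can reproduce one of those requests yourself with curl; asking for just the first kilobyte of the (placeholder) file should return a 206 with a Content-Range header rather than a 200:
curl -s -D - -o /dev/null -H "Range: bytes=0-1023" https://your-bucket.s3.amazonaws.com/videos/video1.mp4
# HTTP/1.1 206 Partial Content
# Content-Range: bytes 0-1023/<total size>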
The problem with AVPlayer
Unfortunately, AVPlayer does not support 'streaming' using HTTP range requests & HTTP 206 Partial Content responses. I've verified this manually by creating a demo iOS app in Xcode.
This has nothing to do with S3 - if you stored these files on any other cloud provider or file server, you'd see that the file is still fully loaded before playing.
The possible solutions
Now that the problem is clear, there are two solutions.
Using an alternative video player
The easiest solution is to use an alternative video player which does support byte-range fetches. I'm not an expert in iOS development so I sadly can't help in recommending an alternative but I'm sure there'll be a popular library that the industry prefers over the in-built AVPlayer. This would provide you with your (extremely common) definition of 'streaming'.
Using a video streaming protocol
However, if you must use AVPlayer, the solution is to implement true media streaming with a video streaming protocol - true streaming also allows you to leverage features like adaptive bitrate switching, live audio switching, licensing etc.
There are quite a few of these protocols available like DASH (Dynamic Adaptive Streaming over HTTP), SRT (Secure Reliable Transport) & last but not least, HLS (HTTP Live Streaming).
Today, the most widely used streaming protocol on the internet is HLS, created by Apple themselves (hey, maybe the reason to not support range requests is to force you to use the protocol). Apple's own documentation is really wonderful for delving deeper if you are interested.
Without getting too much into protocol detail, HLS will allow playback to start more quickly in general, fast-forwarding can be much quicker & delivers video as it is being watched for the true streaming experience.
To go ahead with HLS:
Use AWS Elemental MediaConvert to convert your MP4 file to HLS format - the resulting output will be 1 (or more) .M3U8 manifest files in addition to .ts media segment file(s)
Upload the resulting output to S3 (see the AWS CLI one-liner after this list)
Point AVPlayer to the .M3U8 file
// AVURLAsset expects a URL value, not a raw string
let url = URL(string: "https://ermiya.s3.eu-west-1.amazonaws.com/videos/video1playlist.m3u8")!
let asset = AVURLAsset(url: url)
let item = AVPlayerItem(asset: asset)
...
Enjoy near-instant loading of the video
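For step 2 above, the upload can be a one-liner with the AWS CLI (bucket and folder names are placeholders):
aws s3 cp ./hls_output s3://your-bucket/videos/ --recursive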
CloudFront
In regards to Amazon CloudFront, it isn't required per se & S3 is sufficient in this case, but a quick Google search will mention loads of benefits that it provides, especially caching, which can help you save on S3 costs later on.
Conclusion
I would go with converting to HLS if you can, as it will yield more possibilities down the line & is a better true streaming experience in general, but given AVPlayer's restrictions on iOS, an alternative video player will work just as well.
Whether to use CloudFront or not will depend on your user base, your usage of S3 and other factors.
As you're creating an MVP, I would recommend just doing a batch conversion of your MP4 files to HLS format & not using CloudFront, which would add additional complexity to your cloud configuration.
Like @ErmiyaEskandary said, you could just use HLS to solve your problem, which is probably a good idea, but you should not have to wait for the entire MP4 file to download before playing it with AVPlayer. The issue is actually not with AVPlayer or byte-range requests at all, but rather with how your MP4 files are formatted.
Your MP4 files may be configured incorrectly for streaming. MP4s have a metadata section called the MOOV atom. By default, many encoders put this at the back of the file, in which case the player has to download the entire file before it can begin playing.
For streaming use cases, the MOOV atom needs to be at the front of the file. The player then only needs to buffer the MOOV atom, and it can begin playing the video as the data is loaded.
You can use ffmpeg with the fast start flag enabled to move the MOOV atom to the beginning of the file.
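That invocation looks like this (placeholder filenames; -c copy remuxes without re-encoding, so it's fast):
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4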
I am making a drum machine and have implemented a recording function using the Recorder.js library. The problem, as you may expect, is limited functionality in terms of not being able to edit the recorded clips. So my question is: if I were to implement an audio editor that allows the user to trim the clip, how would I go about saving the edited clip back onto the web server?
Is this even possible using Web Audio API?
Many Thanks
The Web Audio API doesn't do this for you; you need a back-end server that can accept uploads. You'll also probably want to re-encode the audio data (as WAV, MP3, OGG, etc.).
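As a rough sketch of the upload half, assuming Recorder.js handed you a WAV Blob via its exportWAV callback and your server exposes a hypothetical /upload endpoint that accepts multipart form data:
// wavBlob comes from Recorder.js's exportWAV callback; '/upload' is a placeholder endpoint
const form = new FormData();
form.append('audio', wavBlob, 'clip.wav');
fetch('/upload', { method: 'POST', body: form })
  .then(res => console.log('upload status:', res.status));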
I use RTMP to stream from my iPhone to my server with FMS. I followed some tutorials and now I have the FLV playback file in /webroot/live_recorded.
What I want to do is the following:
1) Stream from iPhone to server using RTMP: DONE
2) Stream back to iPhone using HLS: I don't understand the docs, and I've read hundreds of threads, but none helped me. I would like the user to be able to play the stream from the beginning, as it is stored on my server. Thanks
I can't actually speak for FMS.. I work with Wowza, and I suppose you'll need something like the nDVR feature, or to have someone write a special module for you that splits the live stream into small recordings, so that you'd play a playlist of such recorded files from your iPhone.
Hopefully someone will recommend a true solution, not just some assumptions :)
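One low-tech version of that idea, assuming the recorded FLV holds H.264/AAC (filenames are placeholders): once the recording is on disk, ffmpeg can repackage it into HLS segments without re-encoding, and the iPhone then plays the .m3u8 from the beginning:
ffmpeg -i live_recorded.flv -c copy -hls_time 10 -hls_list_size 0 stream.m3u8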
I am building a CDN. I want to be able to stream to an iPhone and iPad. Is this possible using Amazon CloudFront?
Let me clarify. Is there any documentation anywhere or an example anywhere of someone doing this?
Progressive download works if you ensure that the media's metadata is at the beginning of the file. Google "ffmpeg qt-faststart" to accomplish this in the easiest manner (in my experience). If this is not done, the player (in iOS) must download the complete file before it gets to the metadata that it needs to read in order to play. If you are not doing this step in your production workflow, then your progressive download is not functioning as "progressive download"; it is actually downloading the entire file (as stated before, so it can get to the metadata) and then playing. This works with any video/audio file supported by your platform.
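For reference, qt-faststart's usage is just an input and an output path (placeholder filenames):
qt-faststart input.mp4 output.mp4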
NOTE: I am not sure how this affects any attempts at high-speed scrubbing. It seems the file would need to be downloaded to the point that the app is trying to scrub to.
Another alternative may be to create the format needed for iOS streaming (using a segmenter/transcoder), and serve up those files over HTTP on your regular CloudFront distribution. Theoretically that should work.
To be more clear - CloudFront uses an older version of Flash Media Server (v3.5) that supports streaming through various RTMP protocols. These can be enabled by creating a Streaming Distribution (that is how we do streaming for web and Android) and using something like JW Player on the front end.
http://help.adobe.com/en_US/FlashMediaServer/3.5_TechOverview/WS5b3ccc516d4fbf351e63e3d119ed944a1a-7ffa.html
http://www.adobe.com/devnet/logged_in/ktowes_fms35.html
iOS streaming is done using HTTP Live Streaming, which is different. https://developer.apple.com/streaming/
Your options would be to do as I mentioned above, or use EC2 and stand up your own FMS 4.5 instance ( http://aws.typepad.com/aws/2012/03/live-streaming-cloudfront-fms-4-5.html ).
I struggled a lot over this..
Finally got it working through AudioStreamer.. Love this ...
http://www.cocoawithlove.com/2009/06/revisiting-old-post-streaming-and.html
Awesome way ....
You simply want to use Progressive Download, which means upload the file to S3, create a distribution, and go! It's super simple.
I'm currently working on a project in college. My application should do some things with audio files from my computer. I'm using FMOD as the sound library.
The problem I have is that I don't know how to access the data of a sound file (which was opened and started using the FMOD methods) in order to stream it over the network for playback on another PC on the net.
Does anyone have a similar problem?! Any help is appreciated.
Thanks in advance.
Chris
There are two simple ways to access sound data from an FMOD sound. The first: load the file as a sample using createSound, then use Sound::lock and Sound::unlock to get at parts of the resulting PCM data.
The other method is to load the sound as a stream using createStream (you'll want to use the FMOD_OPENONLY flag here too so it doesn't automatically fill the stream buffer) and use Sound::readData to read a chunk at a time from the file; this will decompress data on demand instead of doing it up front like the other method.
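A minimal sketch of the readData approach using FMOD's C++ API, assuming a local file called track.mp3 and omitting error checks; the networking itself is left as a comment:
#include <fmod.hpp>

FMOD::System *fmodSystem = nullptr;
FMOD::System_Create(&fmodSystem);
fmodSystem->init(32, FMOD_INIT_NORMAL, nullptr);

FMOD::Sound *sound = nullptr;
// FMOD_OPENONLY opens the stream without prebuffering, so readData drives all decoding
fmodSystem->createStream("track.mp3", FMOD_OPENONLY, nullptr, &sound);

char buffer[4096];
unsigned int bytesRead = 0;
FMOD_RESULT result;
do {
    // each call decodes the next chunk of PCM on demand
    result = sound->readData(buffer, sizeof(buffer), &bytesRead);
    // ...send bytesRead bytes from buffer over the network here...
} while (result == FMOD_OK);

sound->release();
fmodSystem->release();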