Extract audio from video with Ruby - ruby-on-rails

Is there a way to extract audio from a video file with Ruby? I'm looking to get audio from our sermon videos when posting them online.

There isn't anything built into Ruby itself, but you can use ffmpeg to extract the audio from the video. Check out the streamio-ffmpeg gem: https://github.com/streamio/streamio-ffmpeg. This also requires ffmpeg and its dependencies to be installed on your production machine. You may want to move the extraction into a background job (e.g. delayed_job) or show a progress bar so the user isn't left wondering why nothing is happening, and then use polling or some kind of notification to let them know once the audio file is ready for streaming/downloading.

Related

YouTube stream from video file

Is it possible to create a live event by simply using a video file instead of a web camera? I don't see an option like this in live event creation.
For doing this directly on YouTube: no.
For doing this by encoding a video file and pushing it to YouTube in real time: yes.
How to do it?
Try Wirecast Play. It works like a live-feed console but is free with some limits. Other RTMP-capable tools may also work; one of them is ffmpeg. I have tried it before and can confirm it works, but it is a command-line-only backend. For more functionality you need a front-end app (you can stream/pipe into ffmpeg).
For ffmpeg's RTMP support, read this:
https://www.ffmpeg.org/ffmpeg-protocols.html#rtmp

Web Audio API audio editor saving edited clip back onto web server

I am making a drum machine and have implemented a recording function using the recorderJS library. The problem, as you may expect, is limited functionality in terms of not being able to edit the recorded clips. So my question is: if I were to implement an audio editor that lets the user trim a clip, how would I go about saving the edited clip back onto the web server?
Is this even possible using the Web Audio API?
Many Thanks
The Web Audio API doesn't do this for you; you need a back-end server that can accept uploads. You'll also probably want to re-encode the audio data (as WAV, MP3, OGG, etc.).

Recording output audio with Swift

Is it possible to record output audio in an app using Swift? So, for example, say I'm listening to a podcast, and I want to, within a separate app, record a small segment of the podcast's audio. Is there any way to do that?
I've looked around but have only been able to find information on recording from the microphone and the like.
It depends on how you are producing the audio. If the production of the audio is within your control, you can put a tap on the output and record to a file as it plays. The easiest way is with the new AVAudioEngine feature (there are other ways, but AVAudioEngine is basically an easy front end for them).
Of course, if the real problem is to take a copy of a podcast, then obviously all you have to do is download the podcast as opposed to listening to it. Similarly, you could buffer and save streaming audio to a file. There are many apps that do this. But this is not because the device's output is being hijacked; it is, again, because we have control of the sound data itself.
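For the case where your own app is producing the audio, here is a minimal sketch of the tap approach; the player node, buffer size, and output path are illustrative assumptions, not a complete recorder:

```swift
import AVFoundation

// Sketch: tap the engine's mixer so everything the app plays through it is
// also written to a file. Assumes the audio you want to capture is routed
// through this engine (e.g. scheduled on an AVAudioPlayerNode).
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: nil)

do {
    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("capture.caf")
    let file = try AVAudioFile(forWriting: url, settings: format.settings)

    // Every buffer that reaches the mixer's output is appended to the file.
    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        try? file.write(from: buffer)
    }

    try engine.start()
    // ... schedule audio on `player` and play; call removeTap(onBus: 0) when finished.
} catch {
    print("Could not set up recording: \(error)")
}
```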
I believe you'll have to write a kernel extension to do that:
https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptIOKit/iokit_tutorial.html
You'd have to make your own audio driver to record it. It appears as though that is how Soundflowerbed works:
http://features.en.softonic.com/how-to-record-internal-sound-on-a-mac

How can I stream a movie in iOS and playback from the filesystem later?

I've got an app that currently ships with all the videos it can play embedded in it. This doesn't scale well, and unless you want to play all the movies, wastes disk space. It also makes it less desirable to upgrade the app because you have to re-download all movies.
What I would like to do is download the movie on the fly, play it back while downloading, and then if it's successfully downloaded, save it to the file system so that next time they want to watch it, it streams from the local file.
I can do whatever is needed to the video, but currently I'm serving it up as an .mp4 file from Amazon S3, with a mimetype of video/mp4, and so the first half of my issue works fine: the movie downloads, and MPMovieViewController will start playing it as soon as it thinks it has downloaded "enough."
Is there any way to tap into the cache of that video file so that I can save it and control how long it resides on the filesystem? This seems like it would be the easiest approach.
I am targeting iOS 5+6, but if the only solution available required iOS 6, I would consider it also. Thanks!
UPDATE: Using AFNetworking, I am now half-way there, I think. I am downloading the video file from the server, and listening for the download progress. Once I see 25% of the video has been downloaded, I start playback on the local file using an MPMoviePlayerController.
The main issue I'm running into now is that playback gets screwed up. It goes along fine: 25% downloaded, playback starts... the download continues normally... then the file finishes downloading completely, and shortly thereafter the video freezes. The onscreen playback timer still indicates playback is ongoing and I don't see any "playback finished" type notifications, but the video is frozen. My guess, based on the behavior, is that perhaps the initial buffer for the video playback was used up, and it isn't detecting that more video is now available on disk.
Is there any way to interact with MPMoviePlayerController to let it know periodically to refresh the buffer it's playing out of? Or some other way to handle this situation?
UPDATE: Make sure to see the newer answer below from @TomHamming.
I have yet to find a conclusive answer, but at this time I believe the answer is: you can't reliably do this. At least not without a lot of work which seems too much like a hack. I filed a feature request with Apple as it really seems like this should be possible with some adjustments to MPMoviePlayerController.
I will go over the variety of things I tried or considered, and the results I encountered.
1. Pass MPMoviePlayerController a URL to your movie file, which allows it to stream, and then pull the file out of the cache it was saved into and move it into your local Documents folder. Won't work, as of iOS 6. I filed a feature request with Apple, but as it stands now there's no way to get your hands on the file it is downloading, AFAIK.
2. Start downloading the movie file with NSURLConnection (or something like AFNetworking), and then when a "decent amount" has been downloaded to the device, pass the file URL to MPMoviePlayerController and let it stream from disk. Sort of works, but not well. Three problems:
   - It's really hard to know when to start playing the file. I haven't figured out the algorithm Apple uses, so I always erred on the side of caution, waiting for 25% to be downloaded before playing.
   - The MPMoviePlayerController interface gives no indication that the movie is being streamed, as it does when Apple is doing the calculations via the network. It appears to the user that the file is fully downloaded when it really is not.
   - Most importantly, MPMoviePlayerController does not seem to work well when playing a file that is not completely downloaded. I experienced playback problems once the file finished downloading, or if the player caught up with the amount downloaded, and never found a graceful way to handle these situations.
3. Same procedure as above, but use AVFoundation classes to control the playback process more finely and avoid the issues described above regarding playback stopping, etc. Might work, but I want all the features of MPMoviePlayerController; re-implementing MPMoviePlayerController myself just to get this one feature seems like a waste of time.
4. Same procedure as #1 above, but run a small web server in your app to handle streaming the video from disk to MPMoviePlayerController, in the hope that streaming would work more like it normally does when streaming the file directly from an external web server. Works, but results were still sporadic and performance seemed to suffer. I did my test with CocoaHTTP. I decided against this approach because it just felt like a terrible hack.
5. Run a lightweight HTTP proxy, thus intercepting the downloaded movie file data as it gets streamed from the internet into your MPMoviePlayerController. Not sure whether this works. I was not able to test it, as I have not found a lightweight HTTP proxy written in Objective-C, and at this point I don't feel like implementing one just to try this experiment. It seems like the next easiest of these hacks to implement -- if you don't have to write the proxy!
At this point I've decided to go the less-hacky, but also less user-friendly route of simply downloading the file completely, and then passing it to MPMoviePlayerController, until a better solution comes along.
You can do this as of iOS 10 with AVAssetDownloadTask. See this WWDC 2016 session and this documentation.
Alternatively, if your movie isn't DRM'd, you can do it with AVAssetResourceLoaderDelegate, which effectively lets you give an AVPlayer an arbitrary stream of bytes. See this walkthrough.
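For reference, a rough sketch of the AVAssetDownloadTask route; the session identifier, asset title, and storage key below are placeholders, and note that AVAssetDownloadURLSession is built around HLS streams, so the movie would need to be served as HLS rather than a flat .mp4:

```swift
import AVFoundation

// Sketch of downloading an HLS asset for offline playback while allowing
// playback to start during the download (iOS 10+).
final class MovieDownloader: NSObject, AVAssetDownloadDelegate {
    private var session: AVAssetDownloadURLSession!

    override init() {
        super.init()
        // AVAssetDownloadURLSession requires a background configuration.
        let config = URLSessionConfiguration.background(withIdentifier: "com.example.movie-downloads")
        session = AVAssetDownloadURLSession(configuration: config,
                                            assetDownloadDelegate: self,
                                            delegateQueue: .main)
    }

    func download(streamURL: URL) {
        let asset = AVURLAsset(url: streamURL)
        let task = session.makeAssetDownloadTask(asset: asset,
                                                 assetTitle: "Movie",
                                                 assetArtworkData: nil,
                                                 options: nil)
        // The task's URLAsset can be handed to an AVPlayer right away, so the
        // user can watch while the download continues in the background.
        task?.resume()
    }

    // Persist the on-disk location so the asset can be played from the file
    // system on the next launch instead of being streamed again.
    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        UserDefaults.standard.set(location.relativePath, forKey: "movie.localPath")
    }
}
```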

How do you write audio to the first frame with AVAssetWriter while capturing video/audio on iOS?

Long story short, I am trying to implement a naive solution for streaming video from the iOS camera/microphone to a server.
I am using an AVCaptureSession with audio and video AVCaptureOutputs, and then using AVAssetWriter/AVAssetWriterInput to capture video and audio in the captureOutput:didOutputSampleBuffer:fromConnection: method and write the resulting video to a file.
To make this a stream, I am using an NSTimer to break the video files into 1 second chunks (by hot-swapping in a different AVAssetWriter that has a different outputURL) and upload these to a server over HTTP.
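For reference, a minimal sketch of the per-segment writer setup described above; the codec, format settings, and dimensions are illustrative assumptions rather than the configuration actually used:

```swift
import AVFoundation

// One writer per 1-second segment; a new instance is hot-swapped in with a
// different outputURL, and sample buffers from
// captureOutput(_:didOutput:from:) are appended to it.
final class SegmentWriter {
    let writer: AVAssetWriter
    let videoInput: AVAssetWriterInput
    let audioInput: AVAssetWriterInput
    private var sessionStarted = false

    init(outputURL: URL) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,   // iOS 11+; use AVVideoCodecH264 earlier
            AVVideoWidthKey: 1280,
            AVVideoHeightKey: 720
        ])
        audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVNumberOfChannelsKey: 1,
            AVSampleRateKey: 44100
        ])
        videoInput.expectsMediaDataInRealTime = true
        audioInput.expectsMediaDataInRealTime = true
        writer.add(videoInput)
        writer.add(audioInput)
        guard writer.startWriting() else {
            throw writer.error ?? CocoaError(.fileWriteUnknown)
        }
    }

    func append(_ sampleBuffer: CMSampleBuffer, isVideo: Bool) {
        // Start the segment's timeline at the first buffer's timestamp so the
        // audio and video inputs share the same session time.
        if !sessionStarted {
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            sessionStarted = true
        }
        let input = isVideo ? videoInput : audioInput
        if input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }

    func finish(completion: @escaping () -> Void) {
        videoInput.markAsFinished()
        audioInput.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}
```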
This is working, but the issue I'm running into is this: the .mp4 files always appear to be missing audio in the first frame, so when the video files are concatenated on the server (running ffmpeg) there is a noticeable audio skip at the joins between the files. The video is just fine - no skipping.
I tried many ways of making sure there were no CMSampleBuffers dropped and checked their timestamps to make sure they were going to the right AVAssetWriter, but to no avail.
I checked the AVCam example (which uses AVCaptureMovieFileOutput) and the AVCaptureLocation example (which uses AVAssetWriter), and it appears the files they generate do the same thing.
Maybe there is something fundamental I am misunderstanding here about the nature of audio/video files, as I'm new to video/audio capture - but I thought I'd check before trying to work around this by learning to use ffmpeg, as some seem to do, to fragment the stream (if you have any tips on this, too, let me know!). Thanks in advance!
I had the same problem and solved it by recording the audio with a different API, Audio Queue. This seems to solve it; you just need to take care of the timing in order to avoid an audio delay.
