Stream multiple media sources from a single software/hardware encoder?

It's been a while since I first started looking into this and I still haven't found any feasible solutions. Here's hoping someone might have some suggestions or ideas...
The situation: We currently have a couple of live streams carrying mixed-source content (some of the streams are broadcast as file playlists, which we modify to change the files in the playlist, while others are streamed as live video directly from input). For each new live stream we usually just end up setting up a new streamer... it feels rather counterproductive and wasteful.
The question: Does there exist a hardware or software solution (Linux or Windows) that would allow us to live-stream multiple sources, for example two mutually independent file playlists and, optionally, one or two live A/V inputs, from the same encoder?
According to my findings, it is possible with FFmpeg to stream multiple live A/V inputs and even stream file playlists... but it requires too much hacking to get working, and the playlists have to be redone by hand and the stream restarted every time changes are made. This might work for me personally, but it won't do for less tech-savvy people...
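To illustrate, here is roughly as far as I got with that approach: a minimal Python sketch that launches one FFmpeg process per output stream (the playlist paths and the rtmp://example.com URLs are placeholders, and it assumes FFmpeg is installed):

    # One machine, several independent FFmpeg processes, one per output stream.
    # Each playlist file uses FFmpeg's concat demuxer format, e.g.:
    #   file '/media/clip_a.mp4'
    #   file '/media/clip_b.mp4'
    import subprocess

    def start_stream(playlist, rtmp_url):
        cmd = [
            "ffmpeg",
            "-re",                         # read input at its native frame rate (live pacing)
            "-f", "concat", "-safe", "0",  # treat the text file as a playlist of files
            "-i", playlist,
            "-c:v", "libx264", "-preset", "veryfast",
            "-c:a", "aac",
            "-f", "flv", rtmp_url,
        ]
        return subprocess.Popen(cmd)

    streams = [
        start_stream("playlist1.txt", "rtmp://example.com/live/stream1"),
        start_stream("playlist2.txt", "rtmp://example.com/live/stream2"),
    ]
    for p in streams:
        p.wait()

The catch, as noted above, is that editing a playlist still means restarting its FFmpeg process by hand.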
I'm basically looking for a way to keep the hardware footprint down, instead of letting it grow with each addition of a new live streaming source/destination.

Thank you for all your input and all the posted solutions. By sheer luck I found the solution I was originally looking for.
For anyone else looking for this or a similar solution: the combination of systems that meets our unusual requirements (and that can be integrated into our existing workflow by adjusting the hardware/software to our needs, instead of us adjusting to the hardware/software's requirements and limitations) is Sorenson Squeeze Server 3.0, MediaExcel Hero Live, and MediaExcel File.

Related

Designing a library for hardware-accelerated playback of unsupported containers on iOS (and AirPlay)

I'm trying to put together an open source library that allows iOS devices to play files with unsupported containers, as long as the track formats/codecs are supported, e.g. a Matroska video (MKV) file with an H.264 video track and an AAC audio track. I'm making an app that could surely use that functionality, and I bet there are many more out there that would benefit from it. Any help you can give (by commenting here or, even better, collaborating with me) is much appreciated. This is where I'm at so far:
I did a bit of research trying to find out how players like AVPlayerHD or Infuse can play non-standard containers and still have hardware acceleration. It seems like they transcode small chunks of the whole video file and play those in sequence instead.
It's a good solution. But if you want to throw that video to an Apple TV, things don't work as planned, since the video is actually a bunch of smaller chunks being played as a playlist. This site has way more info, but at its core, streaming to Apple TV is essentially a progressive download of the MP4/M4V file being played.
I'm thinking a sort of streaming proxy is the way to go. For the playing side of things, I've been investigating AVSampleBufferDisplayLayer (more info here) as a way of playing the video track. I haven't gotten to audio yet. Things get interesting when you think about the AirPlay side of things: by having a "container proxy", we can make any file look like it has the right container without the file size implications of transcoding.
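To make the container-proxy idea concrete, here is a rough desktop proof-of-concept in Python (it shells out to ffmpeg, which obviously isn't possible on iOS itself; there the remux step would have to happen in-process, e.g. via GStreamer or libavformat, and the file path is a placeholder). It serves an MKV as a fragmented MP4 without re-encoding:

    # Proof-of-concept container proxy: rewrap MKV tracks into fragmented MP4
    # on the fly, with no transcoding (desktop only; assumes ffmpeg on PATH).
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SOURCE = "/media/movie.mkv"  # H.264 + AAC inside a Matroska container

    class RemuxProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "video/mp4")
            self.end_headers()
            # -c copy rewraps the existing tracks; frag_keyframe+empty_moov
            # produces an MP4 that can be consumed as a live byte stream.
            proc = subprocess.Popen(
                ["ffmpeg", "-i", SOURCE, "-c", "copy",
                 "-movflags", "frag_keyframe+empty_moov",
                 "-f", "mp4", "pipe:1"],
                stdout=subprocess.PIPE)
            while chunk := proc.stdout.read(64 * 1024):
                self.wfile.write(chunk)

    HTTPServer(("", 8080), RemuxProxy).serve_forever()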
It seems like GStreamer might be a good starting point for the proxy. I need to read up on it; I've never used it before. Does this approach sound like a good one for a library that could be used for App Store apps?
Thanks!
Finally got some extra time to go over GStreamer, especially this article about how it has already been updated to use the hardware decoding provided by iOS 8. So no need to develop this; GStreamer seems to be the answer.
Thanks!
The 'chunked' solution is no longer necessary in iOS 8. You should simply set up a video decode session and pass in NALUs.
https://developer.apple.com/videos/wwdc/2014/#513

What's the easiest way to merge video, audio and image files?

We are planning a Web App for a Hackathon that's happening in about 2 weeks.
The app's basic functions are:
The users are guided step-by-step to upload a video, audio and image.
The image is used as a cover for the audio, turning it into a video file.
The two video files are merged, thus creating a single video from the initial three files.
So, my problem is:
How do you create a video from an audio file with an image as "cover"?
How do you merge these two videos?
We are thinking of using Heroku for deployment. Is there a way to do it using something like stremio?
What would be the best approach? A VPS running a C++ script? What's the easiest way to do it?
FFmpeg would be a good start, as seen here:
https://stackoverflow.com/a/6087453/1258001
FFmpeg can be found at http://ffmpeg.org/
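For instance, here is a rough sketch of both steps in Python (the file names are placeholders, and it assumes FFmpeg is installed on the server):

    # Step 1: loop a still image over the audio to get a "cover" video.
    # Step 2: join that video with the uploaded one via the concat demuxer.
    import subprocess

    subprocess.run([
        "ffmpeg", "-loop", "1", "-i", "cover.jpg", "-i", "audio.mp3",
        "-c:v", "libx264", "-tune", "stillimage", "-pix_fmt", "yuv420p",
        "-c:a", "aac", "-shortest",  # stop when the audio ends
        "cover_video.mp4",
    ], check=True)

    with open("parts.txt", "w") as f:
        f.write("file 'uploaded_video.mp4'\nfile 'cover_video.mp4'\n")

    subprocess.run([
        "ffmpeg", "-f", "concat", "-safe", "0", "-i", "parts.txt",
        "-c", "copy",   # rewrap only; re-encode instead if the two parts
        "merged.mp4",   # differ in codec, resolution or frame rate
    ], check=True)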
Another option, which may be overkill, is Blender 3D: it could also provide similar results, can be controlled via shell commands, and may be more flexible for complex asset-composition needs.
In any case, you're going to want a server that can run heavy rendering processes, which will require a large amount of RAM and CPU. A render farm that uses the GPU as the main rendering processor may give you more bang for your buck, but it could be very difficult to set up and keep running correctly. I would also say a VPS would not be a good choice for this. The type of resources you're going to need also happen to be the most expensive in terms of web server costs. Best of luck, and please update with your results.

Creating an auto-DJ app

I'm trying to create an 'auto DJ' application that would let smartphone users select a playlist of songs and create a seamless mix for playback. There are a couple of factors involved: read a playlist of audio files, calculate their waveforms/spectrums, determine the BPMs, and organize the compatible songs into a new playlist in the order they will be played (based on compatible tempos and keys).
The app would have to be able to scan the waveform of a song and recognize the beginning of the 'main' part of the song (skipping slow intros/outros). I also imagine having some effects: filtering, so it can filter the bass out of the new track being mixed in, and switch the basses at an appropriate time. Perhaps reverb that the user could control as well.
I am just seeing how feasible a project this is for 3-4 busy college students in the span of ~4 months. Not sure if it would be an Android or iOS app, or perhaps even a Windows app. Not sure what language we would use (likely Python or Java); whichever has the most useful audio-analysis libraries. Obviously it would work better for certain genres of music (house, trance), but I'd still really like to try to create this.
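For a rough feasibility test, something like this Python sketch with librosa (just one candidate library; the key estimate in particular is very crude, and the file names are placeholders) could order tracks by detected tempo:

    # Feasibility sketch: estimate each track's BPM (and a crude key guess),
    # then sort the playlist by tempo so adjacent songs are mixable.
    import librosa

    def analyze(path):
        y, sr = librosa.load(path, mono=True)
        tempo, _beats = librosa.beat.beat_track(y=y, sr=sr)  # estimated BPM
        chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
        key_bin = int(chroma.mean(axis=1).argmax())  # 0 = C, 1 = C#, ...
        return float(tempo), key_bin

    playlist = ["track1.mp3", "track2.mp3", "track3.mp3"]
    for (bpm, key), path in sorted((analyze(p), p) for p in playlist):
        print(f"{path}: ~{bpm:.0f} BPM, strongest chroma bin {key}")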
Thanks for any feedback
As much as I would like to hear a more experienced person's opinion on this, I would say that, given your situation, it would be a very big undertaking. Since it sounds like you don't have experience with audio-analysis libraries/programs, you might want to start experimenting with them; most of them are likely going to be in C/C++, not Java/Python. Here are some I know of, but I'd recommend doing your own research.
http://www.underbit.com/products/mad/
http://audacity.sourceforge.net/
It doesn't sound that feasible in your situation, but that really depends on your programming/project experience and your motivation to create it.
Good luck

Progressive Video Download on iOS

I am trying to implement progressive downloading of a video in my iOS application so that it can be played through AVPlayer. I have already implemented a downloader module that can download the files to the iPad. However, I have discovered that I cannot play a file that is still being written to.
So, as far as I can tell, my only solution would be to download a list of file 'chunks' and then keep playing through each file as it becomes ready (i.e. downloaded), probably using HLS.
While searching, I came across this question, which implements progressive download through HLS, but other than that I can find no other way.
However, I keep coming across search results explaining how to configure web servers to leverage iOS support for HTTP progressive downloading, with no mention of how to do it from the iOS side.
So, does anyone have any ideas and/or experience with this?
EDIT: I have also found there could be a way of doing it the other way around (i.e. streaming, then writing the streamed data to disk), as suggested by this question, but I still cannot get it to work, as it seems it does not work with non-local assets!
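For reference, the server-side half of the chunk approach seems straightforward; here is a Python sketch of producing the HLS playlist and segments with FFmpeg (assuming FFmpeg is available on the server; the file names are placeholders):

    # Split a finished MP4 into an HLS playlist (.m3u8) plus .ts chunks;
    # AVPlayer can then be pointed at the playlist URL.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "input.mp4",
        "-codec", "copy",        # rewrap only, no re-encode
        "-hls_time", "10",       # ~10-second segments
        "-hls_list_size", "0",   # keep every segment in the playlist
        "-f", "hls", "out.m3u8",
    ], check=True)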
From what you say, you might want to change approach and attempt to stream the file. Downloading and playing at the same time is, I would say, the definition of streaming. I hate it when people post links to the Apple documentation, but in this instance, reading a tiny bit of this documentation will help you more than I ever can. It should all make sense if you are already working with connections and video; you just need to change your approach.
The link: https://developer.apple.com/library/ios/documentation/networkinginternet/conceptual/streamingmediaguide/Introduction/Introduction.html

Virtual Instrument App Recording Functionality With RemoteIO

I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and playback the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades etc) which is based on OpenAL. From my research on the net it seems I have two options:
Keep a record of the user's inputs (i.e. which notes were played at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
Hack together my own low-level sound engine using Audio Units, specifically RemoteIO, so that I manually mix all the sounds and populate the final output buffer by hand, and hence can save said buffer to a file. This could then be shared by email etc.
I have implemented a RemoteIO callback for rendering the output buffer, in the hope that it would give me the previously played data, but alas the buffer is always all zeros.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and Audio Units as much as possible, but in the end, after I had written tons of code and read lots of documentation for third-party libraries (including CocosDenshion), I ended up using Audio Units anyway. More than that, they're not that hard to set up and work with. If, however, you're looking for a library to do most of the work for you, you should look for one written on top of Core Audio, not OpenAL.
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.
