Use VLC mosaic to display multiple RTSP streams from Spydroid

I want to display two RTSP streams in VLC 2.2.4 in the same window but in different sections. The streams come from Spydroid phones on the same local network. I know the mosaic filter can do this.
Below is the configuration file:
new channel1 broadcast enabled
setup channel1 input "rtsp://192.168.43.200:8086"
setup channel1 output #duplicate{dst=mosaic-bridge{id=1,width=640,height=480},select=video}
new channel2 broadcast enabled
setup channel2 input "rtsp://192.168.43.230:8086"
setup channel2 output #duplicate{dst=mosaic-bridge{id=2,width=640,height=480},select=video}
new mosaic broadcast enabled
setup mosaic input /Users/lovejoy/Downloads/bg640960.jpg
setup mosaic output #transcode{sfilter=mosaic,vcodec=mp4v,scale=1}:display
control channel2 play
control channel1 play
control mosaic play
Both phones use the same configuration:
Video Encoder: H.264
Resolution: 640×480
Framerate: 8 fps
Bitrate: 2000 kbps
I want to stack the two streams vertically, so I run VLC with these arguments:
vlc --vlm-conf /Users/lovejoy/Downloads/conf.vlm --mosaic-keep-aspect-ratio --mosaic-keep-picture --mosaic-position=2 --mosaic-order="1,2" --mosaic-offsets="0,0,0,480" --mosaic-width=640 --mosaic-height=960 --image-duration=-1
However, the result is not what I want: both streams pile up in the top section instead of showing separately. The two streams compete for the same section ("compete" is the exact word, because they show alternately until one stream disconnects).
This is the vlc screenshot
This is the terminal screenshot
Can anyone help me? Thanks.
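For reference, the same 640×960 vertical stack can also be described as a fixed 2×1 grid instead of explicit pixel offsets. This is only a sketch, assuming the standard `--mosaic-rows`/`--mosaic-cols` options of the mosaic sub-filter (`--mosaic-position=1` selects fixed positioning, and `--mosaic-order` then maps the mosaic-bridge ids onto the grid cells top-to-bottom):

```shell
# Hypothetical variant: fixed grid (position=1) instead of manual offsets.
# --mosaic-rows=2 --mosaic-cols=1 defines a 2-row, 1-column layout;
# --mosaic-order="1,2" assigns bridge id 1 to the top cell, id 2 to the bottom.
vlc --vlm-conf /Users/lovejoy/Downloads/conf.vlm \
    --mosaic-keep-aspect-ratio --mosaic-keep-picture \
    --mosaic-position=1 --mosaic-rows=2 --mosaic-cols=1 \
    --mosaic-order="1,2" \
    --mosaic-width=640 --mosaic-height=960 \
    --image-duration=-1
```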

Related

Some I-frames encoded as P-frames with Media Foundation

I'm using Media Foundation to merge two MP4 videos into one. While processing samples, when switching to the second video segment I send NotifyEndOfSegment, then read its IMFMediaType and set it as the input format on the IMFSinkWriter. As a result, the first frame from the second video file in the output is not an I-frame but a P-frame, which causes artifacts for several frames (until the next I-frame). When I skip setting the input format on the IMFSinkWriter for the second video file, everything works fine, but only because both video files have almost identical IMFMediaTypes. What am I doing wrong?

How to render system input from the remote I/O audio unit and play these samples in stereo

I am implementing a play-through program from a (mono) microphone to a stereo output. For the output I configured an AudioStreamBasicDescription with two channels and set this ASBD on the input scope of the remote I/O unit.
However, when I configure the render callback to pull the system input, no audio is played. On the other hand, when the ASBD is set to a single channel, audio plays without problems.
The audio unit render is implemented by:
AudioUnitRender(_rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData)
Apparently, this is not sufficient to process and play the rendered input. Does anyone know how this should be done?
The number of channels in the ASBD on both sides of RemoteIO should be set the same: both mono, or both stereo. If you want stereo output, configure stereo input, even if the mic that is plugged in (or internal) is monophonic.
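To make the answer above concrete, here is a minimal sketch in plain C. The `ASBD` struct mirrors the field layout of CoreAudio's AudioStreamBasicDescription (on macOS/iOS you would use the real type from CoreAudioTypes.h), and `make_pcm_asbd` is a made-up helper name; the point is that the same channel count goes on both sides of the RemoteIO unit:

```c
#include <stdint.h>

/* Field-for-field mirror of CoreAudio's AudioStreamBasicDescription;
   on macOS/iOS use the real type from <CoreAudio/CoreAudioTypes.h>. */
typedef struct {
    double   mSampleRate;
    uint32_t mFormatID;
    uint32_t mFormatFlags;
    uint32_t mBytesPerPacket;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerFrame;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
    uint32_t mReserved;
} ASBD;

/* Build an interleaved 16-bit linear-PCM description for `channels`
   channels. Per the answer above: set the SAME description (same channel
   count) on both the input side and the output side of RemoteIO. */
static ASBD make_pcm_asbd(double sampleRate, uint32_t channels) {
    ASBD d = {0};
    d.mSampleRate       = sampleRate;
    d.mFormatID         = 0x6C70636D;  /* 'lpcm', kAudioFormatLinearPCM */
    d.mChannelsPerFrame = channels;    /* 2 for stereo */
    d.mBitsPerChannel   = 16;
    d.mFramesPerPacket  = 1;           /* uncompressed PCM: 1 frame per packet */
    d.mBytesPerFrame    = channels * (d.mBitsPerChannel / 8);
    d.mBytesPerPacket   = d.mBytesPerFrame * d.mFramesPerPacket;
    return d;
}
```

With a mono mic you would still configure the stereo description on both scopes and let your render callback duplicate the mono signal into both channels.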

Distorted sound after sample rate change

This one keeps me awake:
I have an OS X audio application which has to react if the user changes the current sample rate of the device.
To do this I register a callback on 'kAudioDevicePropertyNominalSampleRate' for both the input and output devices.
So if one of the devices' sample rates is changed, I get the callback and set the new sample rate on the devices with 'AudioObjectSetPropertyData' and 'kAudioDevicePropertyNominalSampleRate' as the selector.
The next steps were mentioned on the Apple mailing list, and I followed them:
1. stop the input AudioUnit and the AUGraph, which consists of a mixer and the output AudioUnit
2. uninitialize them both
3. check the node count, iterate over the nodes, and use AUGraphDisconnectNodeInput to disconnect the mixer from the output
4. set the new sample rate on the output scope of the input unit
5. and on the input and output scopes of the mixer unit
6. reconnect the mixer node to the output unit
7. update the graph
8. initialize the input and the graph
9. start the input and the graph
Render and Output callbacks start again but now the audio is distorted. I believe it's the input render callback which is responsible for the signal but I'm not sure.
What did I forget?
As far as I know, the sample rate doesn't affect the buffer size.
If I start my application with the other sample rate everything is OK, it's the change that leads to the distorted signal.
I look at the stream format (kAudioUnitProperty_StreamFormat) before and after. Everything stays the same except the sample rate which of course changes to the new value.
As I said I think it's the input render callback which needs to be changed. Do I have to notify the callback that more samples are needed? I checked the callbacks and buffer sizes with 44k and 48k and nothing was different.
I wrote a small test application so if you want me to provide code, I can show you.
Edit: I recorded the distorted audio (a sine wave) and looked at it in Audacity.
What I found was that after every 495 samples the audio drops out for another 17 samples.
I think you see where this is going: 495 + 17 = 512 samples, which is the buffer size of my devices.
But I still don't know what I can do with this finding.
I checked my input and output render procs and their access to the ring buffer (I'm using the fixed version of CARingBuffer).
Both store and fetch 512 frames, so nothing is missing here...
Got it!
After disconnecting the graph, it seems to be necessary to tell both devices the new sample rate.
I had already done this earlier, in the callback, but apparently it has to be done at this later point.

FMOD FMOD_DSP_READCALLBACK - specifying channels

I would like to create a DSP plugin which takes an input of 8 channels (the 7.1 speaker mode), does some processing, then returns the data to 2 output channels. My plan was to set the speaker mode to FMOD_SPEAKERMODE_7POINT1 via setSpeakerMode and FMOD_DSP_DESCRIPTION.channels to 2, but that didn't work: both the in and out channel counts showed up as 2 in my FMOD_DSP_READCALLBACK function.
How can I do this?
You cannot perform a true downmix in FMOD Ex using the DSP plugin interface. The best you can do is process the incoming 8-channel data, then fill just the front-left and front-right parts of the output buffer, leaving the rest silent.
Setting the channel count to 2 tells FMOD your DSP can only handle a stereo signal; setting the count to 0 means any channel count.
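A sketch of the "fill just the front left/right" approach described above, in plain C. The helper name `fill_front_lr` and its signature are made up for illustration; inside a real FMOD_DSP_READCALLBACK you would do the same interleaved copy from `inbuffer` to `outbuffer`:

```c
#include <string.h>

/* Hypothetical helper: given interleaved float audio with `inch` input
   channels and `outch` output channels (both >= 2), copy the front-left and
   front-right sample of each frame into the output and silence the remaining
   output channels. `length` is the number of frames. */
static void fill_front_lr(const float *in, float *out,
                          unsigned int length, int inch, int outch)
{
    /* zero everything first so the non-front channels stay silent */
    memset(out, 0, (size_t)length * (size_t)outch * sizeof(float));
    for (unsigned int frame = 0; frame < length; ++frame) {
        out[frame * outch + 0] = in[frame * inch + 0]; /* front left  */
        out[frame * outch + 1] = in[frame * inch + 1]; /* front right */
    }
}
```

Note this is not a real downmix: the side and rear content of the 7.1 signal is simply discarded rather than folded into the front pair.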

How to view TV Tuner component input with OpenCV?

I'm trying to use my TV tuner instead of a webcam with OpenCV.
The problem is that by default cvCaptureFromCAM(0) gives me the TV channel of the tuner, but what I actually want is the input from the tuner's RCA input.
I have tried using cvCaptureFromCAM(-1) to check whether additional camera devices are found within the TV tuner, but it only gives me the general tuner as an option.
Is there a way to change the channel of the input?
Probably not.
In Linux (and Windows is similar), the OpenCV cvCaptureFromCAM() only recognizes one input for each tuner/framegrabber/webcam. If your device shows up as multiple logical devices, then you can use the parameter of cvCaptureFromCAM() to select which logical device to use.
For example: if you have:
/dev/video
/dev/video0 <-- tv tuner, tuner input
/dev/video1 <-- tv tuner, rca input
cvCaptureFromCAM(0) will use /dev/video0 and
cvCaptureFromCAM(1) will use /dev/video1.
What might work is to use some other program, such as MythTV or tvtime (or something similar on Windows), to switch your tuner's input to the RCA input, and then try running OpenCV again.
