I'm using MobileVLCKit to stream video and audio from a Wowza RTMP server. At the same time I'm using VideoCore to stream audio up to the Wowza RTMP server (I closed off the video channel in VideoCore). I'm attempting to turn this into a sort of teleconferencing solution. I'm limited to RTMP or RTSP, not a proper teleconferencing stack (WebRTC or SIP or the like; I'm not familiar with these at the moment), because of a limitation on the other end of the line.
The above setup doesn't work. Running either function individually (video and audio streaming down, or audio streaming up) works fine, but not when they run simultaneously: the audio cannot be heard on the other end. In fact, when the app starts with VideoCore streaming audio upstream, as soon as I start the downstream via MobileVLCKit, the audio can no longer be heard on the other end, even though the stream stays open. It appears the microphone is somehow wrested away from VideoCore, even though MobileVLCKit should not need the microphone.
However, when I split the two into two separate apps and allowed them to run in the background (audio & AirPlay background mode), the two run fine together: one app streams down video & audio while the other picks up the microphone and streams it to the other end.
Is there any reason why the two functions conflict within the same app, and any ideas on how to resolve the conflict?
I encountered the same problem. Say I have two objects: a VLC player, and an audio processor that listens to the microphone. Operating both at the same time works fine in the simulator, but they conflict on an actual iPhone. I think the root cause is that there is only one slot, one right to listen to the microphone, and VLC occupies it, so my audio processor cannot work. For various reasons I cannot modify the VLC code, so I had to figure out a workaround, and I found one.
The problem is that VLC claims the microphone right without ever actually using the microphone, while my audio processor does use it. So the workaround is clear: let the VLC player start playing first, and only then instantiate the other object (the audio processor in my case) that needs to listen to the microphone. Since the audio processor is created after the VLC player, it takes the microphone right back, and both work properly.
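To make the ordering concrete, here is a minimal sketch in Swift. It assumes MobileVLCKit's VLCMediaPlayer for the downstream and stands in an AVAudioEngine input tap for the audio processor; the class name and URL are purely illustrative:

```swift
import AVFoundation
import MobileVLCKit

final class Teleconference {                      // illustrative name
    let player = VLCMediaPlayer()                 // plays the downstream A/V
    let engine = AVAudioEngine()                  // stands in for the upstream audio processor

    func start(downstreamURL: URL) throws {
        // Let playback and recording coexist in one audio session.
        try AVAudioSession.sharedInstance()
            .setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
        try AVAudioSession.sharedInstance().setActive(true)

        // 1. Start the VLC player FIRST.
        player.media = VLCMedia(url: downstreamURL)
        player.play()

        // 2. Only AFTER playback is running, create/start whatever listens to
        //    the microphone, so it is the last one to claim the input.
        let input = engine.inputNode
        input.installTap(onBus: 0, bufferSize: 1024,
                         format: input.outputFormat(forBus: 0)) { buffer, _ in
            // hand the mic buffers off to the RTMP audio uploader here
        }
        try engine.start()
    }
}
```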
For your reference; I hope it helps.
Related
I’m obtaining H.264 video from a DJI drone in an Android library I wrote. From there the video is distributed via WebRTC to many subscribers. This works.
Now the idea came up of producing an RTMP stream alongside, so that the video could be published in parallel to platforms like YouTube or Facebook.
I integrated code which does H.264 to FLV to RTMP, and it works perfectly with at least two open-source solutions I have tested: SRS (https://github.com/ossrs/srs) and node-media-server (https://github.com/illuspas/Node-Media-Server). I publish to instances running in my LAN and view the result in VLC. That works fine.
It doesn't work if I publish to YouTube directly. Then I tried inserting restream.io into the chain, but it also does not work reliably. Restream is at least a bit more chatty about what's happening, though not chatty enough: I can connect and disconnect, and the dashboard reacts promptly, same as YouTube's does. The Restream dashboard shows bitrate, frame rate, and key frame rate, and its statistics confirm them. Yet the screen remains black (as on YouTube), with just a spinning wheel.
I can rule out any kind of weird firewall problem, since I can upload H.264 as an FLV stream perfectly well using FFmpeg from the command line.
So this is the state: I have two open-source RTMP servers which tell me all is fine, and two major public RTMP services which don't say much, but don't confirm that it works either…
I'm looking for hints to find out what is wrong with my stream :)
The simple reason was: YouTube REQUIRES audio, and my stream didn't contain any. So I multiplexed a silent fake audio track into the upload, and it worked.
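For reference, the same fix can be reproduced with FFmpeg from the command line (which I already used for the firewall test above): anullsrc generates the silent track while the video is passed through untouched. The input file, ingest URL, and stream key below are placeholders:

```sh
ffmpeg -re -i input.flv \
       -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
       -c:v copy -c:a aac -shortest \
       -f flv rtmp://a.rtmp.youtube.com/live2/STREAM_KEY
```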
I can successfully play an RTSP stream from an IP camera on an iOS device using IJKPlayer (FFmpeg) or the VLC player. I can also broadcast from the iOS device using the device camera (HaishinKit / VXG frameworks). But I don't know how to combine these two processes.
There is a feature in HaishinKit which allows attaching a view for RTMP broadcasting, and I tried attaching my player view. It worked, but there are some problems:
The video quality is very low, because the view is small;
The microphone records other sounds around the device, but I need only the audio from my IP camera;
MOST IMPORTANT - the CPU runs at almost 100% after a few minutes, I think because of the way HaishinKit grabs images from the view (it works through CoreGraphics).
So I think there may be other solutions for relaying the signal from the IP camera to a server; I really haven't found anything similar. Maybe somebody has ideas. Thank you in advance.
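For context, the HaishinKit path described above boils down to something like the following sketch (based on the RTMPConnection/RTMPStream API shown in HaishinKit's README; the server URL and stream key are placeholders). It also shows where the microphone problem comes from: the audio source is attachAudio, i.e. the device mic, not the IP camera:

```swift
import AVFoundation
import HaishinKit

let connection = RTMPConnection()
let stream = RTMPStream(connection: connection)

// The device microphone is attached as the audio source - this is why
// ambient room noise gets broadcast instead of the IP camera's audio.
stream.attachAudio(AVCaptureDevice.default(for: .audio))
stream.attachCamera(AVCaptureDevice.default(.builtInWideAngleCamera,
                                            for: .video, position: .back))

connection.connect("rtmp://example.com/live")  // placeholder server URL
stream.publish("streamKey")                    // placeholder stream key
```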
I'm trying to receive a live RTP audio stream on my iPhone, but I don't know where to start. I'm looking for samples but can't find them anywhere.
I have a Windows desktop app which captures audio from the selected audio interface and streams it as µ-law or A-law. The app works as an audio server that serves any incoming connection with that stream. I have already developed an Android app that receives the stream and works, so I want to replicate that functionality on iOS. On Android we have the "android.net.rtp" package to manage this and transmit or receive data streams over the network.
Is there any kind of equivalent package for iOS? Could you give me a reference or sample, or just tell me where to start?
You can look at the library HTTPLiveStreaming, but its protocol may not be a standard one. You can also check my fork, aelam/HTTPLiveStreaming-1; I'm still working on it, but it can already be played by ffplay. Give it a try.
Check the file rtp.c in FFmpeg; I think it will help.
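To give a concrete starting point: the fixed RTP header is 12 bytes (RFC 3550), and G.711 µ-law expands one byte per sample to 16-bit linear PCM. A minimal sketch in Swift, assuming the datagrams arrive over a plain UDP socket:

```swift
import Foundation

// Minimal RTP parsing for a µ-law (PCMU) stream, per RFC 3550.
// The fixed header is 12 bytes; CSRC lists and extensions are ignored here.
struct RTPPacket {
    let payloadType: UInt8
    let sequenceNumber: UInt16
    let timestamp: UInt32
    let payload: Data

    init?(datagram: Data) {
        guard datagram.count > 12, datagram[0] >> 6 == 2 else { return nil } // RTP version 2
        payloadType    = datagram[1] & 0x7F
        sequenceNumber = UInt16(datagram[2]) << 8 | UInt16(datagram[3])
        timestamp      = datagram[4...7].reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }
        payload        = datagram.subdata(in: 12..<datagram.count)
    }
}

// Standard G.711 µ-law -> 16-bit linear PCM expansion.
func ulawToLinear(_ byte: UInt8) -> Int16 {
    let u = ~byte
    let exponent = Int((u >> 4) & 0x07)
    let mantissa = Int(u & 0x0F)
    let magnitude = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return (u & 0x80) != 0 ? Int16(-magnitude) : Int16(magnitude)
}
```

The decoded Int16 samples can then be scheduled into an AVAudioEngine player node for playback.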
I want to process the stereo output of an iOS device, no matter which application produces it, and visualize it in real time.
Is it possible to use the generic output device (or anything else) to get at the audio data that is currently being played? Maybe as input to a RemoteIO unit?
In other words: I want to do what aurioTouch2 does (FFT only), but instead of using the microphone as the input source, I want to process everything that is coming out of the speakers at a given time.
Kind regards
If your own app is playing audio through the RemoteIO Audio Unit, you can capture that content. You cannot capture audio your app is playing via many of the other audio APIs. The iOS security sandbox prevents your app from capturing audio that any other app is playing (unless that app explicitly exports its audio via the Inter-App Audio API or equivalent).
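To illustrate the own-app case: with AVAudioEngine (which sits on top of the same I/O unit as RemoteIO), a tap on the main mixer sees everything the app itself renders. A sketch, not tied to any particular playback API:

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: nil)

// The tap observes only *this app's* mix - other apps stay sandboxed away.
let format = engine.mainMixerNode.outputFormat(forBus: 0)
engine.mainMixerNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    // buffer.floatChannelData holds the app's own output; feed it to an
    // FFT (e.g. via Accelerate/vDSP) for an aurioTouch-style visualization.
}

do { try engine.start() } catch { print(error) }
```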
Hey, I'm a new Objective-C developer. I'm trying to record the audio coming out of the iPhone speakers. I can capture my voice through the microphone and record it, but I cannot record the audio the iPhone itself is producing. Please help me.
Unfortunately, there is no way to directly capture from the "audio bus". You can capture audio via the internal microphone or a headset microphone, but that's it. If you are rendering the audio yourself, you can obviously also write that audio out to a file at the same time. That's pretty much your only option.
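A minimal sketch of that render-and-record idea using AVAudioEngine (the higher-level counterpart of doing it in a RemoteIO render callback); the output path is just an example:

```swift
import AVFoundation

func startMirroringOutput(of engine: AVAudioEngine) throws {
    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent("output.caf")              // example path
    let file = try AVAudioFile(forWriting: url, settings: format.settings)

    // Mirror whatever this app renders into the file while it plays.
    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        try? file.write(from: buffer)
    }
    try engine.start()
}
```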
Yes, you only get a handle on the audio generated by your own process. There is no way to get at the audio generated by the rest of the system.