Receiving "End of file" while streaming RTSP on iOS - ios

I'm using the FFmpeg library to stream RTSP from an IP camera on the local network. The streaming works with the code below.
The only problem is that the stream seems to stop after some time. On further debugging I found that I'm receiving an "End of file" error, and that's why the loop breaks.
while (!playerShouldStop) // && (av_read_frame(pFormatCtx, &pkt1) >= 0)
{
    int ret = av_read_frame(pFormatCtx, &pkt1);
    NSLog(@"av_read_frame returned = %s", av_err2str(ret));
    if (ret >= 0)
    {
        // process video
    }
    else
        break;
}
The log says:
av read frame returned = End of file
I downloaded Wireshark to check what RTSP packets I'm getting, but no help there.
First of all, is it normal to receive EOF on a live stream (which is not supposed to end)?
Secondly, calling av_read_frame() again and again doesn't help either, but when I restart the entire method (right from avformat_open_input) it works again. The streaming just isn't smooth and comes to a pause every now and then. Roughly, the restart workaround looks like the sketch below.
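A simplified sketch of that restart loop (openStream() and processPacket() are hypothetical placeholders for my own wrapper around avformat_open_input/avformat_find_stream_info and the decode code, not FFmpeg API):

// Sketch: reopen the input whenever av_read_frame() stops returning frames.
// openStream() and processPacket() are hypothetical helpers.
while (!playerShouldStop)
{
    AVFormatContext *pFormatCtx = openStream(filename);
    if (!pFormatCtx)
        break;

    AVPacket pkt1;
    while (!playerShouldStop && av_read_frame(pFormatCtx, &pkt1) >= 0)
    {
        processPacket(&pkt1);
        av_packet_unref(&pkt1); // av_free_packet() on older FFmpeg
    }

    // EOF or error: tear down and reopen, which causes the visible pauses.
    avformat_close_input(&pFormatCtx);
}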

OK... it seems to work without the EOF when I open the stream with AVDictionary options:
AVDictionary *opts = NULL;
int ret = av_dict_set(&opts, "rtsp_transport", "tcp", 0);

// Open the stream and read header information into pFormatCtx
if (avformat_open_input(&pFormatCtx, filename, NULL, &opts) != 0)
{
    NSLog(@"Error opening video file.");
    return;
}
av_dict_free(&opts);
Still, a proper explanation of this would be helpful.

I ran into the same problem with av_read_frame() returning EOF (End of File) while decoding a realtime stream. I finally found out when the problem appears: it's because I had set AVFormatContext.interrupt_callback.callback with too small a timeout (this callback can keep av_read_frame() from blocking forever). When the callback returns non-zero, av_read_frame() returns EOF. Hope what I ran into may help you.
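For illustration, the pattern looks roughly like this (a sketch only; TimeoutContext and the timeout value are mine, not FFmpeg API):

#include <libavformat/avformat.h>
#include <libavutil/time.h>

// Hypothetical context for the interrupt callback.
typedef struct {
    int64_t lastFrameTime; // av_gettime() at the last successful read
    int64_t timeoutUs;     // abort threshold in microseconds
} TimeoutContext;

// Returning non-zero aborts the blocking call inside av_read_frame(),
// and the failure can surface as AVERROR_EOF.
static int interruptCallback(void *opaque)
{
    TimeoutContext *ctx = (TimeoutContext *)opaque;
    return av_gettime() - ctx->lastFrameTime > ctx->timeoutUs;
}

// Wiring it up (assuming a 5 second timeout):
// TimeoutContext timeoutCtx = { av_gettime(), 5 * 1000000 };
// pFormatCtx->interrupt_callback.callback = interruptCallback;
// pFormatCtx->interrupt_callback.opaque = &timeoutCtx;
// Refresh timeoutCtx.lastFrameTime after every successful av_read_frame(),
// otherwise the callback fires and the read returns EOF.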

Related

OpenCV VideoCapture fails to read frame from RTSP

I'm getting an error reading frames from the RTSP stream of a Hikvision camera.
Here is my code to read:
public void readImage() {
    VideoCapture capture = new VideoCapture(streamUrl);
    if (capture.isOpened()) {
        Mat frame = new Mat();
        while (true) {
            if (capture.read(frame)) {
                System.out.println("frame read");
            } else {
                System.out.println("failed to read frame");
            }
        }
    }
}
With the above code I can read frames successfully if the resolution of the stream is low, e.g. 704x576, but if the resolution is high, or I run some parallel tasks, the capture fails to read frames. After the capture has failed in the first read loop, it keeps failing even after I terminate all the other tasks, unless I recreate the capture object. What should I do now? (This happens on both OpenCV 2.4 and OpenCV 3.2.)
You may want to release memory after use.
Put frame.dispose(); after the end of the while loop.

iOS AudioQueue doesn't play when packets are enqueued late

I have an app that enqueues packets to an AudioQueue, and it's working perfectly. The problem is that when there is a delay in the network, I can't serve packets to the AudioQueue in time.
The whole application works well and enqueueBuffer doesn't return any error, but the AudioQueue discards the packets (so I get no sound) because they are too old.
Can I force the AudioQueue to play those audio packets, or at least find out that the packets are being discarded? Because if I know it, I can do a pause-play to restart the queue (not a very good solution, but I don't have anything better).
Because the delay can be very big, I can't just use a big buffer; that would minimize the error, but not solve it.
Thank you very much.
You're on the right track. You can handle network delays by detecting in your callback procedure when you have reached the end of your network buffer, then pausing the AudioQueue. Later, restart the queue once enough packets have been buffered. Your code would look something like this:
if playerState.packetsRead == playerState.packetsWritten {
    playerState.isPlaying = false
    AudioQueuePause(aq)
}

And in your network code:

if playerState.packetsWritten >= playerState.packetsRead + kNumPacketsToBuffer {
    if !playerState.isPlaying {
        playerState.isPlaying = true
        for buffer in playerState.buffers {
            audioCallback(playerState, aq: playerState.queue, buffer: buffer)
        }
        AudioQueueStart(playerState.queue, nil)
    }
}
You would also need to update your code so that playerState.packetsWritten is incremented every time you receive a packet from the network, and playerState.packetsRead is incremented every time you add a packet to the audio queue. The optimal value for kNumPacketsToBuffer depends on the codec and network conditions. I would recommend starting at 256 for AAC and adjusting up or down based on performance. A C-level sketch of the same bookkeeping follows below.
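If you are working against the C API directly (e.g. from Objective-C), the same idea might look roughly like this. PlayerState, kNumPacketsToBuffer, and the buffer-fill step are hypothetical, mirroring the Swift sketch above:

#include <AudioToolbox/AudioToolbox.h>
#include <stdbool.h>

// Hypothetical state shared between the network code and the callback.
typedef struct {
    AudioQueueRef queue;
    int64_t       packetsRead;    // packets handed to the AudioQueue
    int64_t       packetsWritten; // packets received from the network
    bool          isPlaying;
} PlayerState;

static const int64_t kNumPacketsToBuffer = 256;

static void audioCallback(void *inUserData, AudioQueueRef inAQ,
                          AudioQueueBufferRef inBuffer)
{
    PlayerState *state = (PlayerState *)inUserData;

    if (state->packetsRead == state->packetsWritten) {
        // The network buffer ran dry: pause instead of enqueueing stale data.
        state->isPlaying = false;
        AudioQueuePause(inAQ);
        return;
    }

    // ...fill inBuffer from the network buffer (not shown), then:
    // AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    state->packetsRead++;
}

// In the network receive path, restart once enough packets are queued:
static void onPacketReceived(PlayerState *state)
{
    state->packetsWritten++;
    if (!state->isPlaying &&
        state->packetsWritten >= state->packetsRead + kNumPacketsToBuffer) {
        state->isPlaying = true;
        AudioQueueStart(state->queue, NULL);
    }
}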

smeared/corrupted capture of RTSP streams

I'm using Emgu CV 2.4.10 to create an RTSP stream viewer that will eventually be used with IP cameras. As I don't have the camera(s) yet, I'm testing by using VLC (the Windows GUI) to create the stream from a video file:
:sout=#duplicate{dst=rtp{sdp=rtsp://:8554/stream},dst=display} :sout-all :sout-keep
I'm doing all this testing on localhost.
Here's my capture code:
private void ProcessFrame(object sender, EventArgs arg) {
    try {
        frame = _capture.QueryFrame();
        pictureBox1.Image = frame.ToBitmap();
    }
    catch (Exception ex) {
        MessageBox.Show(ex.Message.ToString());
    }
}
This method is called using this event handler:
_capture = new Capture("rtsp://localhost:8554/stream");
Application.Idle += ProcessFrame;
_capture.Start();
The capture is corrupted by random occurrences of "smearing" that always appear in the lower portion of the frame.
I've seen several others online report this problem as recently as last December, but no solution has been found that works for me:
http://workingwithcomputervision.blogspot.co.uk/2012/06/issues-with-opencv-and-rtsp.html
EMGU QueryFrame returns "streaky" Image over RTSP
http://www.emgu.com/forum/viewtopic.php?f=7&t=4882&p=10110&hilit=rtsp#p10069
To narrow down the problem, I've run ffplay from the command line and the capture is perfect. I've run another instance of VLC to capture the RTSP stream and it displays perfectly. So this is clearly a problem in OpenCV/Emgu CV.
On a whim, I changed VLC to stream using HTTP:
:sout=#duplicate{dst=http{mux=ffmpeg{mux=flv},dst=:8080/stream},dst=display} :sout-all :sout-keep
This displays fine in my code, but at a noticeably lower frame rate that won't work for my application. I'd really appreciate any tips on fixing this problem. Thanks.
I don't know if you solved your problem, but I suggest that you not do your processing in the Application.Idle event. Instead, use a thread: create another thread and do your processing in it. Example C# code:
Thread t = new Thread(new ThreadStart(() => {
    while (true) {
        frame = _capture.QueryFrame();
        pictureBox1.Image = frame.ToBitmap();
    }
}));
t.IsBackground = true;
t.Start();

Is it possible to get audio from an ICY stream with percentage and seek function

I'm trying to play audio from an ICY stream. I'm able to play it with AVPlayer and some good open-source libraries, but I'm not able to control the stream. I have no idea how I can get the percentage played, or how to seek to a specific time in the stream. Is that possible? Is there a good library that can help me?
Currently I'm using AFSoundManager, but I'm always receiving negative numbers for the percentage, and I get an invalid time when trying to seek the stream to a specified time.
This is the code that I'm using:
AFSoundManager.sharedManager().startStreamingRemoteAudioFromURL("http://www.abstractpath.com/files/audiosamples/sample.mp3") { (percentage, elapsedTime, timeRemaining, error, poppi) in
    if error == nil {
        // This block fires every time the audio progress increases by 1%
        if elapsedTime > 0 {
            println(elapsedTime)
            self.slider.value = Float(elapsedTime * 1000)
        }
    } else {
        // Handle the error
        println(error)
    }
}
I'm able to get the elapsedTime, of course, but not the percentage or the remaining time; I always get negative numbers.
This code works perfectly with remote or local audio files, but not with the stream.
This isn't possible.
These streams are live. There is nothing to seek to, because what you haven't heard hasn't happened yet. Even streams that play back music end-to-end are still "live" in the sense that the audio you haven't received hasn't been encoded yet. (Small codec and transit buffers aside, of course.)

Recording volume drop when switching between RemoteIO and VPIO

In my app I need to switch between these two different audio units.
Whenever I switch from VPIO to RemoteIO, there is a drop in my recording volume. Quite a significant drop.
There is no change in the playback volume, though. Has anyone experienced this?
Here's the code where I do the switch, which is triggered by a routing change. (I'm not too sure whether I'm doing the change correctly, so I'm asking about that here as well.)
How do I solve the problem of the recording volume drop?
Thanks, I appreciate any help I can get.
Pier.
- (void)switchInputBoxTo:(OSType)inputBoxSubType
{
    OSStatus result;
    if (!remoteIONode) return; // NULL check

    // Get info about the current output node
    AudioComponentDescription outputACD;
    AudioUnit currentOutputUnit;
    AUGraphNodeInfo(theGraph, remoteIONode, &outputACD, &currentOutputUnit);

    if (outputACD.componentSubType != inputBoxSubType)
    {
        AUGraphStop(theGraph);
        AUGraphUninitialize(theGraph);

        result = AUGraphDisconnectNodeInput(theGraph, remoteIONode, 0);
        NSCAssert(result == noErr, @"Unable to disconnect the nodes in the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);

        AUGraphRemoveNode(theGraph, remoteIONode);

        // Re-init as the other type
        outputACD.componentSubType = inputBoxSubType;

        // Add the replacement I/O unit node to the graph
        result = AUGraphAddNode(theGraph, &outputACD, &remoteIONode);
        NSCAssert(result == noErr, @"Unable to add the replacement IO unit to the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);

        result = AUGraphConnectNodeInput(theGraph, mixerNode, 0, remoteIONode, 0);

        // Obtain a reference to the I/O unit from its node
        result = AUGraphNodeInfo(theGraph, remoteIONode, 0, &_remoteIOUnit);
        NSCAssert(result == noErr, @"Unable to obtain a reference to the I/O unit. Error code: %d '%.4s'", (int)result, (const char *)&result);

        //result = AudioUnitUninitialize(_remoteIOUnit);

        [self setupRemoteIOTest]; // reinit all the RemoteIO/VoiceProcessing stuff
        [self configureAndStartAudioProcessingGraph:theGraph];
    }
}
I used my Apple developer support ticket for this.
Here's what the support engineer said:
The presence of the voice I/O will result in the input/output being processed very differently. We don't expect these units to have the same gain levels at all, but the levels shouldn't be as drastically off as you seem to indicate.
That said, Core Audio engineering indicated that your results may be related to the fact that when the voice block is created, it also affects the RIO instance. Upon further discussion, Core Audio engineering felt that since the level difference you describe is very drastic, it would be good if you could file a bug with some recordings to highlight the level difference you are hearing between voice I/O and remote I/O, along with your test code, so we can attempt to reproduce it in house and see whether this is indeed a bug. It would be a good idea to include the results of the single I/O unit tests outlined above as well as further comparative results.
There is no API that controls this gain level; everything is set up internally by the OS depending on the audio session category (for example, VPIO is expected to always be used with PlayAndRecord) and which I/O unit has been set up. Generally it is not expected that both will be instantiated simultaneously.
Conclusion? I think it's a bug. :/
There is some talk about low-volume issues if you don't dispose of your audio unit correctly. Basically, the first audio component stays in memory and any subsequent playback will be ducked under your app's (or other apps') audio, causing the volume drop.
Solution:
Audio units are AudioComponentInstances and must be freed using AudioComponentInstanceDispose().
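For example, a full teardown before creating the replacement unit might look like this (a sketch; ioUnit stands for whatever AudioUnit reference you hold):

// Sketch: stop, uninitialize, and dispose of the I/O unit so no stale
// instance lingers in memory.
AudioOutputUnitStop(ioUnit);
AudioUnitUninitialize(ioUnit);
AudioComponentInstanceDispose(ioUnit);
ioUnit = NULL;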
I've had success changing the audio session category when going from Voice Processing I/O (PlayAndRecord) to Remote I/O (SoloAmbient). Make sure you pause the audio session before changing it. You'll also have to uninitialize your audio graph. Something like the sketch below.
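A rough sketch of that sequence, using the old C AudioSession API (deprecated since iOS 7) to match the AUGraph-era code in the question; theGraph is the asker's graph:

// Sketch: stop the graph, deactivate the session, switch category,
// reactivate, then rebuild the graph with the other I/O subtype.
AUGraphStop(theGraph);
AUGraphUninitialize(theGraph);

AudioSessionSetActive(false); // "pause" the session before the change

UInt32 category = kAudioSessionCategory_SoloAmbientSound; // was PlayAndRecord
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                        sizeof(category), &category);

AudioSessionSetActive(true);
// ...re-add the RemoteIO node and start the graph again (not shown).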
From a talk I had with an Apple AVAudioSession engineer:
VPIO adds audio processing to the audio samples, which also performs the echo cancellation; this creates the drop in the audio level.
RemoteIO won't do any audio processing, so the volume level stays high.
If you are looking for echo cancellation while using the RemoteIO option, you should create your own audio processing in the render callback, as in the sketch below.
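To show where that processing would live, here is a minimal render-callback skeleton (a sketch; processBuffer() is a hypothetical placeholder, and the echo-cancellation algorithm itself is not shown):

// Sketch: a RemoteIO render callback where custom processing (e.g. your
// own echo cancellation) can be applied to the microphone samples.
static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    AudioUnit remoteIOUnit = (AudioUnit)inRefCon;

    // Pull the mic samples from the input bus (bus 1) of RemoteIO.
    OSStatus status = AudioUnitRender(remoteIOUnit, ioActionFlags,
                                      inTimeStamp, 1, inNumberFrames, ioData);
    if (status != noErr)
        return status;

    // processBuffer() is a hypothetical hook for your own processing.
    // processBuffer(ioData, inNumberFrames);

    return noErr;
}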
