I'm getting an error when reading frames from the RTSP stream of a Hikvision camera.
Here is my code to read:
public void readImage() {
    VideoCapture capture = new VideoCapture(streamUrl);
    if (capture.isOpened()) {
        Mat frame = new Mat();
        while (true) {
            if (capture.read(frame)) {
                System.out.println("frame read");
            } else {
                System.out.println("failed to read frame");
            }
        }
    }
}
With the above code I can read frames successfully if the stream resolution is low, e.g. 704x576, but if the resolution is high, or if I run some parallel task, the capture fails to read frames. Once the capture has failed in the read loop, it keeps failing even after I terminate all the other tasks, unless I recreate the capture object. What should I do now? (This happens on both OpenCV 2.4 and OpenCV 3.2 when I try.)
You may want to release memory after use. Put frame.release() after the end of the while loop (in the OpenCV Java bindings the method is release(), not dispose()).
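Since the stream only recovers once the capture object is recreated, another workaround is to release and reopen the VideoCapture when read() starts failing repeatedly. A minimal sketch in C++ (the Java bindings expose the same open()/release()/read() calls); the failure threshold is an arbitrary assumption you will want to tune:

#include <opencv2/opencv.hpp>
#include <string>

void readLoop(const std::string& streamUrl) {
    cv::VideoCapture capture(streamUrl);
    cv::Mat frame;
    int failures = 0;
    while (true) {
        if (capture.read(frame)) {
            failures = 0;                 // got a frame, reset the counter
        } else if (++failures > 25) {     // roughly one second of consecutive failures
            capture.release();            // drop the stale RTSP connection
            capture.open(streamUrl);      // and reconnect
            failures = 0;
        }
    }
}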
How do I stop a camera capture in EmguCV 2.X and ensure that I no longer have a connection to the camera in my application?
There does not seem to be a release() function like there is in OpenCV.
Relevant parts of code:
Capture Definition:
Emgu::CV::Capture^ capture; // Declares the capture handle
On Start Button Click:
capture = gcnew Emgu::CV::Capture(_CameraIndex); //create a camera capture
If I add the following after initialisation:
capture->Dispose(); //To stop and call Garbage Collector
Then it gives me the following error:
Dispose is not a member of Emgu::CV::Capture
Yes, it's safe to delete the object, which calls the destructor, and in turn Dispose(). You can change it to if (capture != nullptr) delete capture;.
It turns out that whilst Dispose is not a method, I can just delete the object and then reinitialise it:
capture = gcnew Emgu::CV::Capture(cameraIndex);
delete capture;
capture = gcnew Emgu::CV::Capture(cameraIndex);
AFAIK there is nothing inherently dangerous about doing this, but I am not 100% sure.
I'm using Emgu CV 2.4.10 to create an RTSP stream viewer that will eventually be used with IP cameras. As I don't have the camera(s) yet, I'm testing by using VLC (the Windows GUI) to create the stream from a video file:
:sout=#duplicate{dst=rtp{sdp=rtsp://:8554/stream},dst=display} :sout-all :sout-keep
I'm doing all this testing on localhost.
Here's my capture code:
private void ProcessFrame(object sender, EventArgs arg) {
    try {
        frame = _capture.QueryFrame();
        pictureBox1.Image = frame.ToBitmap();
    }
    catch (Exception ex) {
        MessageBox.Show(ex.Message);
    }
}
This method is called using this event handler:
_capture = new Capture("rtsp://localhost:8554/stream");
Application.Idle += ProcessFrame;
_capture.Start();
The capture is corrupted by random occurrences of "smearing" that always appear in the lower portion of the frame.
I've seen several others online report this problem as recently as last December, but no solution has been found that works for me:
http://workingwithcomputervision.blogspot.co.uk/2012/06/issues-with-opencv-and-rtsp.html
EMGU QueryFrame returns "streaky" Image over RTSP
http://www.emgu.com/forum/viewtopic.php?f=7&t=4882&p=10110&hilit=rtsp#p10069
To narrow down the problem, I've run ffplay from the command line and the capture is perfect. I've run another instance of VLC to capture the RTSP stream and it displays perfectly. So this is clearly a problem in OpenCV/Emgu CV.
On a whim, I changed VLC to stream using HTTP:
:sout=#duplicate{dst=http{mux=ffmpeg{mux=flv},dst=:8080/stream},dst=display} :sout-all :sout-keep
This displays fine in my code, but at a noticeably lower frame rate that won't work for my application. I'd really appreciate any tips for fixing this problem. Thanks.
I don't know if you solved your problem, but I suggest you not do your processing in the Application.Idle event. Instead, use a thread: create another thread and do your processing in it. Example C# code:
Thread t = new Thread(new ThreadStart(() => { while (true) {
    frame = _capture.QueryFrame();
    pictureBox1.Image = frame.ToBitmap(); } }));  // use pictureBox1.Invoke(...) if a cross-thread exception is thrown
t.IsBackground = true; t.Start();
I am trying to create a delay in a live stream obtained from a webcam. I am using OpenCV. However, I am unable to generate the desired delay; I am confused about how to set and handle FPS and delay. Below is my code:
I am using a constant value for FPS at the moment, but I am not sure if we can do that.
Currently, the stream is shown with some initial delay while the queue is being filled, but after that there is no delay in the stream.
int fps = 15;
int wait = (int)(1000.0 / fps);
std::queue<cv::Mat> _buffer;
while (1)
{
    // grab a frame from the video camera
    boost::unique_lock<boost::mutex> lock(mutex, boost::defer_lock);
    bool read = cap.read(image);
    if (!read)
        break;
    bool locked = lock.try_lock();
    if (locked) {
        if (image.data) {
            _buffer.push(image);
            waitKey(wait);
            if ((int)_buffer.size() > buffer_lenght)
            {
                popped_img = _buffer.front();
                _buffer.pop();
                imshow("VideoCaptureTutorial", popped_img);
            }
        }
        lock.unlock();
    }
}
I found two problems in your code.
1) Try decreasing the waitKey value; with that long a waiting period, OpenCV might skip frames when it's a live stream. This isn't related to your question, but I think it might be helpful.
waitKey(30);
The above line might be good enough.
2) You have to push Mat.clone(); I assume this will solve your problem in this case (cv::Mat is a reference-counted header, so without clone() every element in the queue points at the same pixel data, which cap.read() keeps overwriting).
_buffer.push(image.clone());
OpenCV's get/set FPS won't work on a live stream; if you want the FPS of a live feed, you have to use your own counter, as I would do if I were you.
VideoCapture cap(0);
double counter = 0;
clock_t t1 = clock();
Mat frame;
while (counter < 200)   // sample a fixed number of frames instead of looping forever
{
    counter++;
    cap.read(frame);
}
double seconds = (double)(clock() - t1) / CLOCKS_PER_SEC;   // clock() returns ticks, not seconds or ms
double fps = counter / seconds;
// note: clock() measures CPU time; cv::getTickCount()/cv::getTickFrequency() gives wall-clock time and is usually more accurate here
A completely different way around it is to save the live feed to a video file using VideoWriter and then use get() on that file to know the FPS.
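A rough sketch of that approach (OpenCV 3.x names; the file name, nominal FPS and frame count are arbitrary assumptions, and since VideoWriter itself needs an FPS value up front, what get() later returns is essentially the value you supplied):

#include <opencv2/opencv.hpp>
using namespace cv;

VideoCapture cap(0);
Mat frame;
cap.read(frame);                                            // grab one frame to learn the frame size
VideoWriter writer("probe.avi", VideoWriter::fourcc('M', 'J', 'P', 'G'),
                   30, frame.size());                       // 30 is the nominal FPS stored in the file
for (int i = 0; i < 150 && cap.read(frame); ++i)
    writer.write(frame);
writer.release();

VideoCapture fileCap("probe.avi");
double fps = fileCap.get(CAP_PROP_FPS);                     // FPS as recorded in the file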
What I'm trying to do:
Record up to a specified duration of audio/video, where the resulting output file will have pre-defined background music from an external audio file added, without further encoding/exporting after recording.
As if you were recording video using the iPhone's Camera app, and all the recorded videos in 'Camera Roll' had background songs. No exporting or loading after the recording ends, and not in a separate audio track.
How I'm trying to achieve this:
By using AVCaptureSession, in the delegate-method where the (CMSampleBufferRef)sample buffers are passed through, I'm pushing them to an AVAssetWriter to write to file. As I don't want multiple audio tracks in my output file, I can't pass the background-music through a separate AVAssetWriterInput, which means I have to add the background-music to each sample buffer from the recording while it's recording to avoid having to merge/export after recording.
The background-music is a specific, pre-defined audio file (format/codec: m4a aac), and will need no time-editing, just adding beneath the entire recording, from start to end. The recording will never be longer than the background-music-file.
Before starting to write to the file, I've also prepared an AVAssetReader that reads the specified audio file.
Some pseudo-code(threading excluded):
-(void)startRecording
{
    /*
     Initialize writer and reader here: [...]
     */
    backgroundAudioTrackOutput = [AVAssetReaderTrackOutput
                                  assetReaderTrackOutputWithTrack:backgroundAudioTrack
                                  outputSettings:nil];

    if ([backgroundAudioReader canAddOutput:backgroundAudioTrackOutput])
        [backgroundAudioReader addOutput:backgroundAudioTrackOutput];
    else
        NSLog(@"This doesn't happen");

    [backgroundAudioReader startReading];

    /* Some more code */

    recording = YES;
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!recording)
        return;

    if (connection == videoConnection)
        [self writeVideoSampleBuffer:sampleBuffer];
    else if (connection == audioConnection)
        [self writeAudioSampleBuffer:sampleBuffer];
}
The AVCaptureSession is already streaming the camera-video and microphone-audio, and is just waiting for the BOOL recording to be set to YES. This isn't exactly how I'm doing this, but a short, somehow equivalent representation. When the delegate-method receives a CMSampleBufferRef of type Audio, I call my own method writeAudioSamplebuffer:sampleBuffer. If this was to be done normally, without a background-track as I'm trying to do, I'd simply put something like this: [assetWriterAudioInput appendSampleBuffer:sampleBuffer]; instead of calling my method. In my case though, I need to overlap two buffers before writing it:
-(void)writeAudioSamplebuffer:(CMSampleBufferRef)recordedSampleBuffer
{
    CMSampleBufferRef backgroundSampleBuffer =
        [backgroundAudioTrackOutput copyNextSampleBuffer];

    /* DO MAGIC HERE */
    CMSampleBufferRef resultSampleBuffer =
        [self overlapBuffer:recordedSampleBuffer
       withBackgroundBuffer:backgroundSampleBuffer];
    /* END MAGIC HERE */

    [assetWriterAudioInput appendSampleBuffer:resultSampleBuffer];
}
The problem:
I have to add incremental sample buffers from a local file to the live buffers coming in. The method I have created, named overlapBuffer:withBackgroundBuffer:, isn't doing much right now. I know how to extract AudioBufferList, AudioBuffer, mData etc. from a CMSampleBufferRef, but I'm not sure how to actually add them together. However, I haven't been able to test different ways of doing that, because the real problem happens before that point. Before the magic is supposed to happen, I am in possession of two CMSampleBufferRefs, one received from the microphone and one read from file, and this is the problem:
The sample buffer received from the background-music-file is different than the one I receive from the recording-session. It seems like the call to [self.backgroundAudioTrackOutput copyNextSampleBuffer]; receives a large number of samples. I realize that this might be obvious to some people, but I've never before been at this level of media-technology. I see now that it was wishful thinking to call copyNextSampleBuffer each time I receive a sampleBuffer from the session, but I don't know when/where to put it.
As far as I can tell, the recording-session gives one audio-sample in each sample-buffer, while the file-reader gives multiple samples in each sample-buffer. Can I somehow create a counter to count each received recorded sample/buffers, and then use the first file-sampleBuffer to extract each sample, until the current file-sampleBuffer has no more samples 'to give', and then call [..copyNext..], and do the same to that buffer?
As I'm in full control of both the recording and the file's codecs, formats etc, I am hoping that such a solution wouldn't ruin the 'alignment'/synchronization of the audio. Given that both samples have the same sampleRate, could this still be a problem?
Note
I'm not even sure if this is possible, but I see no immediate reason why it shouldn't.
Also worth mentioning: when I try to use a video file instead of an audio file, and continually pull video sample buffers, they line up perfectly.
I am not familiar with AVCaptureOutput, since all my sound/music sessions were built using AudioToolbox instead of AVFoundation. However, I guess you should be able to set the size of the recording capture buffer. If not, and you are still getting just one sample, I would recommend storing each individual chunk of data obtained from the capture output in an auxiliary buffer. When the auxiliary buffer reaches the same size as the file-reading buffer, then call [self overlapBuffer:auxiliarSampleBuffer withBackgroundBuffer:backgroundSampleBuffer];
I hope this helps. If not, I can provide an example of how to do this using Core Audio. Using Core Audio I have been able to obtain 1024-sample LPCM buffers from both microphone capture and file reading, so the overlapping is immediate.
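To make the accumulate-then-mix idea concrete, here is a minimal sketch in C++ over plain float PCM. Every name in it is hypothetical, and it assumes both sources are mono at the same sample rate and have already been pulled out of their CMSampleBuffers as float samples; it sketches the bookkeeping, not the Core Media calls:

#include <algorithm>
#include <vector>

std::vector<float> micAccumulator;  // auxiliary buffer collecting the small microphone chunks

// Call for every small chunk captured from the microphone. 'fileChunk' is the current
// (larger) chunk decoded from the background-music file. Returns true and fills 'mixed'
// once enough microphone samples have accumulated to cover one file chunk.
bool accumulateAndMix(const std::vector<float>& micChunk,
                      const std::vector<float>& fileChunk,
                      std::vector<float>& mixed)
{
    micAccumulator.insert(micAccumulator.end(), micChunk.begin(), micChunk.end());
    if (micAccumulator.size() < fileChunk.size())
        return false;  // not enough recorded samples yet, keep accumulating

    mixed.resize(fileChunk.size());
    for (size_t i = 0; i < fileChunk.size(); ++i)
        // simple additive mix with clipping; replace with proper gain handling if needed
        mixed[i] = std::max(-1.0f, std::min(1.0f, micAccumulator[i] + fileChunk[i]));

    micAccumulator.erase(micAccumulator.begin(), micAccumulator.begin() + fileChunk.size());
    return true;
}

In the Objective-C code this logic would sit inside overlapBuffer:withBackgroundBuffer:, with the accumulator persisting between delegate callbacks.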
I'm using the ffmpeg library to stream RTSP from an IP camera on the local network. The streaming works fine with the code below.
The only problem is that the stream seems to stop after some time. On further debugging I found out that I'm receiving an "End of file" and that's why the loop is breaking.
while (!playerShouldStop) // && (av_read_frame(pFormatCtx, &pkt1) >= 0))
{
    int ret = av_read_frame(pFormatCtx, &pkt1);
    NSLog(@"av read frame returned = %s", av_err2str(ret));
    if (ret >= 0)
    {
        // process video
    }
    else
        break;
}
The log says:
av read frame returned = End of file
I downloaded Wireshark to check what RTSP packets I'm getting, but that was no help.
First of all, is it normal to receive EOF on a live stream (which is not supposed to end)?
Secondly, calling av_read_frame() again and again does not help either, but when I restart the entire method (right from avformat_open_input) it works. The only issue is that the streaming isn't smooth and pauses every now and then.
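For reference, the restart workaround described above looks roughly like this. It is a simplified sketch only: stream-info setup and packet processing are reduced to comments, and playerShouldStop is assumed to be visible here:

#include <libavformat/avformat.h>

void streamLoop(const char *url)
{
    while (!playerShouldStop)
    {
        AVFormatContext *fmt = NULL;
        if (avformat_open_input(&fmt, url, NULL, NULL) != 0)
            continue;                                   // camera not answering yet, try again
        if (avformat_find_stream_info(fmt, NULL) < 0) {
            avformat_close_input(&fmt);
            continue;
        }

        AVPacket pkt;
        while (!playerShouldStop && av_read_frame(fmt, &pkt) >= 0)
        {
            // process the packet ...
            av_packet_unref(&pkt);
        }
        avformat_close_input(&fmt);                     // EOF or error: reopen on the next pass
    }
}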
OK... it seems to work without EOF when I open the stream with AVDictionary options:
AVDictionary *opts = 0;
int ret = av_dict_set(&opts, "rtsp_transport", "tcp", 0);
// Open video file and read header information into pFormatCtx
if (avformat_open_input(&pFormatCtx, filename, NULL, &opts) != 0)
{
NSLog(#"Error opening video file.");
return;
}
av_dict_free(&opts);
Still, a proper explanation of this would be helpful.
I have met the same problem with av_read_frame returning EOF (End Of File) while decoding a realtime stream. I finally found out why it appears: it was because I had set AVFormatContext.interrupt_callback.callback and the timeout value was too small (this callback is what keeps av_read_frame() from blocking forever). So when the callback returned, av_read_frame() returned EOF. I hope the issue I ran into may help you.
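A minimal sketch of that setup with a more generous timeout; the 10-second limit, the lastFrameTime watchdog and the surrounding function are assumptions added for illustration:

#include <libavformat/avformat.h>
#include <time.h>

static time_t lastFrameTime;

static int interruptCallback(void *opaque)
{
    // Returning non-zero aborts the blocking call inside av_read_frame(); with too
    // small a timeout this fires on ordinary network jitter and, as described above,
    // av_read_frame() then comes back reporting EOF.
    return (time(NULL) - lastFrameTime) > 10;
}

static void readRtspStream(const char *url)
{
    AVFormatContext *fmt = avformat_alloc_context();    // allocate first so the callback is in place before opening
    fmt->interrupt_callback.callback = interruptCallback;
    fmt->interrupt_callback.opaque = NULL;

    lastFrameTime = time(NULL);
    if (avformat_open_input(&fmt, url, NULL, NULL) != 0)
        return;

    AVPacket pkt;
    while (av_read_frame(fmt, &pkt) >= 0)
    {
        lastFrameTime = time(NULL);                      // reset the watchdog after every successful read
        // process the packet ...
        av_packet_unref(&pkt);
    }
    avformat_close_input(&fmt);
}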