Introducing a delay in a live stream using OpenCV and Boost

I am trying to create a delay in a live stream obtained from a webcam, using OpenCV. However, I am unable to generate the desired delay, and I am confused about how to set and handle the FPS and the delay. My code is below.
I am using a constant value for the FPS at the moment, but I am not sure whether that is valid.
Currently, the stream is shown with some initial delay while the queue is being filled, but after that there is no delay in the stream.
fps = 15;
wait = (1000.0 / fps);
queue<cv::Mat> _buffer;
while(1)
{
    int size_x = 0;
    // grab a frame from the video camera
    boost::unique_lock<boost::mutex> lock(mutex, boost::defer_lock);
    bool read = cap.read(image);
    if(!read)
        break;
    locked = lock.try_lock();
    if(locked)
    {
        if(image.data)
        {
            _buffer.push(image);
            waitKey(wait);
            if((int)_buffer.size() > (buffer_lenght))
            {
                popped_img = _buffer.front();
                _buffer.pop();
                imshow("VideoCaptureTutorial", popped_img);
            }
        }
        lock.unlock();
    }
}

I found two problems in your code.
1) Try decreasing the waitKey value; with that long a waiting period, OpenCV might skip frames since it's a live stream. This isn't directly related to your question, but I think it might be helpful.
waitKey(30);
The line above might be good enough.
2) You have to push Mat::clone(); I assume this might solve your problem in this case.
_buffer.push(image.clone());
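Putting both points together, a minimal sketch of the delayed display loop could look like the following (the 3-second delay, the buffer_length variable and the Esc-to-quit check are assumptions for illustration, and the Boost locking is left out for brevity):

#include <queue>
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    const int fps = 15;                        // assumed constant capture rate
    const int delay_seconds = 3;               // desired delay (assumption)
    const size_t buffer_length = fps * delay_seconds;

    std::queue<cv::Mat> buffer;
    cv::Mat frame;

    while (cap.read(frame))
    {
        buffer.push(frame.clone());            // clone, so the queue owns its own pixel data

        if (buffer.size() > buffer_length)     // queue is "full": show the oldest frame
        {
            cv::imshow("VideoCaptureTutorial", buffer.front());
            buffer.pop();
        }

        if (cv::waitKey(30) == 27)             // short wait; Esc quits
            break;
    }
    return 0;
}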
OpenCV's get/set FPS won't work on a live stream; if you want to get the FPS of a live feed, you have to use your own counter, which is what I would do if I were you.
VideoCapture cap(0);
double counter = 0;
clock_t t1 = clock();
Mat frame;
while(counter < 100)   // sample a fixed number of frames
{
    counter++;
    cap.read(frame);
}
// clock() returns ticks; divide by CLOCKS_PER_SEC to convert to seconds
double seconds = (clock() - t1) / (double)CLOCKS_PER_SEC;
double fps = counter / seconds;
A completely different way around this is to save the live feed to a video file using VideoWriter and then call get() on that file to find out the FPS.
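If you go that route, a rough sketch of the VideoWriter round trip might look like this (the file name, codec and the nominal 15 FPS given to the writer are assumptions; note that the writer has to be told an FPS up front, so the value you read back is essentially the one you supplied):

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frame;
    cap.read(frame);                                    // grab one frame to learn the frame size

    cv::VideoWriter writer("livefeed.avi",
                           cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                           15, frame.size());           // nominal FPS (assumption)

    for (int i = 0; i < 300 && cap.read(frame); ++i)    // record roughly 300 frames
        writer.write(frame);
    writer.release();

    cv::VideoCapture file("livefeed.avi");              // re-open the recorded file...
    std::cout << "fps: " << file.get(cv::CAP_PROP_FPS) << std::endl;  // ...and query its FPS
    return 0;
}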

Related

get a video frame using qt gstreamer

I checked that the video stream displays well in a QML video surface. Now I want to get the video frame data to do some processing of my own, but it doesn't seem to be working so far. I made a simple pipeline like the one below to keep the test focused.
nvarguscamerasrc - appsink
I used QGst::Utils::ApplicationSink to get the frame data, referencing the "appsink-src" example.
/* making pipeline */
QGst::ElementPtr source, sink;
SubClassApplicationSink *appsink;
source = QGst::ElementFactory::make("nvarguscamerasrc");
sink = QGst::ElementFactory::make("appsink");
appsink = new SubClassApplicationSink();
// configure elements
source->setProperty("sensor-id", n);
appsink->setElement(sink);
appsink->enableDrop(true);
appsink->setMaxBuffers(7654321);
m_pipeline->add(source, sink);
source->link(sink);
My subclass of ApplicationSink implements some callbacks: eos, preroll and sample.
I print some values from the buffer I get from each new sample, and the same output is repeated every time the callback is called.
result: [start/end offsets are -1, no flags, memory count 1, memory size 1008]
I don't know why... What do you think?
I solved the issue. The problem was the pipeline's composition: after putting an "nvvidconv" element between "nvarguscamerasrc" and "appsink", I could get video frames successfully.
I don't know exactly why the nvvidconv element is needed, but it seems to be because of the source's video type, "video/x-raw(memory:NVMM)", which means DMA buffers are used for performance reasons.
https://forums.developer.nvidia.com/t/what-is-the-meaning-of-memory-nvmm/180522
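For reference, a sketch of the corrected pipeline construction, reusing the names from the snippet above (error handling and any caps filtering between the elements are omitted):

/* making pipeline, with nvvidconv between the camera source and the appsink */
QGst::ElementPtr source, conv, sink;
SubClassApplicationSink *appsink;
source = QGst::ElementFactory::make("nvarguscamerasrc");
conv = QGst::ElementFactory::make("nvvidconv");   // converts frames out of NVMM/DMA memory
sink = QGst::ElementFactory::make("appsink");
appsink = new SubClassApplicationSink();
// configure elements
source->setProperty("sensor-id", n);
appsink->setElement(sink);
appsink->enableDrop(true);
appsink->setMaxBuffers(7654321);
m_pipeline->add(source);
m_pipeline->add(conv);
m_pipeline->add(sink);
source->link(conv);
conv->link(sink);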

opencv videocapture fail to read frame from rtsp

I'm getting an error when reading frames from the RTSP stream of a Hikvision camera.
Here is my code to read:
public void readImage() {
    VideoCapture capture = new VideoCapture(streamUrl);
    if (capture.isOpened()) {
        Mat frame = new Mat();
        while (true) {
            if (capture.read(frame)) {
                System.out.println("frame read");
            } else {
                System.out.println("failed to read frame");
            }
        }
    }
}
With the above code I can read frames successfully if the resolution of the stream is low, e.g. 704x576, but if the resolution is higher, or I run some parallel task, the capture fails to read frames. Once the capture has failed in the first read loop, it keeps failing even after I terminate all the other tasks, unless I recreate the capture object. What should I do now? (This happens on both OpenCV 2.4 and OpenCV 3.2.)
You may want to release memory after use.
Put frame.release(); after the end of the while loop.

Monogame - How to properly cap FPS in Monogame?

I wanted to cap the FPS of my game at 30 fps, conditionally, when it is built for Windows Phone, as I don't need it running at 60 fps there, and I've heard from plenty of people that it is better to cap it on a mobile device because of battery drain.
I used the same snippet of code that XNA uses for Windows Phone 7:
//FrameRate is 30fps by default for WindowsPhone.
TargetElapsedTime = TimeSpan.FromTicks(333333);
But... while it does its job of capping the FPS, it affects everything else too, causing stuttering and sound issues. Because of this, I suppose I'm doing something wrong.
Anything that would help me would be great, as I was not able to find anything on the internet regarding this issue (most people want quite the opposite :D).
To fix your sound issues, look into multi-threading and running your sound system on a separate, uncapped thread. For the game code, specifically the code that updates your assets, your method should work, but personally I do it differently.
// in your Game1 class variable definitions
private const float timeToNextUpdate = 1.0f / 30.0f;
private float timeSinceLastUpdate;

// in your Game1 Update method
protected override void Update(GameTime gameTime)
{
    timeSinceLastUpdate += (float)gameTime.ElapsedGameTime.TotalSeconds;
    if (timeSinceLastUpdate >= timeToNextUpdate)
    {
        // update game
        timeSinceLastUpdate = 0;
    }
    // systems you don't want to limit would be updated here
}

Is it possible to get audio from an ICY stream with percentage and seek function

I'm trying to play audio from an ICY stream. I'm able to play it with AVPlayer and some good open-source libraries, but I'm not able to control the stream. I have no idea how I can get the percentage played or how to seek to a specific time in the stream. Is that possible? Is there a good library that can help me?
I'm currently using AFSoundManager, but I always receive negative numbers for the percentage and I get an invalid time when trying to seek the stream to a specific point.
This is the code I'm using:
AFSoundManager.sharedManager().startStreamingRemoteAudioFromURL("http://www.abstractpath.com/files/audiosamples/sample.mp3") { (percentage, elapsedTime, timeRemaining, error, poppi) in
    if error == nil {
        // This block will be fired when the audio progress increases by 1%
        if elapsedTime > 0 {
            println(elapsedTime)
            self.slider.value = Float(elapsedTime * 1000)
        }
    } else {
        // Handle the error
        println(error)
    }
}
I am, of course, able to get the elapsedTime, but not the percentage or the timeRemaining; I always get negative numbers.
This code works perfectly with a remote or local audio file, but not with the stream.
This isn't possible.
These streams are live. There is nothing to seek to, because what you haven't heard hasn't happened yet. Even streams that play back music end-to-end are still "live" in the sense that the audio you haven't received hasn't been encoded yet. (Small codec and transit buffers aside, of course.)

Adding audio buffer [from file] to 'live' audio buffer [recording to file]

What I'm trying to do:
Record up to a specified duration of audio/video, where the resulting output file has pre-defined background music from an external audio file added, without further encoding/exporting after the recording.
As if you were recording a video using the iPhone's Camera app, and all the recorded videos in 'Camera Roll' had background songs: no exporting or loading after the recording ends, and not in a separate audio track.
How I'm trying to achieve this:
Using AVCaptureSession, in the delegate method where the (CMSampleBufferRef) sample buffers are passed through, I push them to an AVAssetWriter to write them to a file. Since I don't want multiple audio tracks in my output file, I can't pass the background music through a separate AVAssetWriterInput, which means I have to add the background music to each sample buffer from the recording while it's recording, to avoid having to merge/export after recording.
The background music is a specific, pre-defined audio file (format/codec: m4a, AAC) and needs no time-editing, just to be added underneath the entire recording, from start to end. The recording will never be longer than the background music file.
Before starting to write to the file, I've also prepared an AVAssetReader that reads the specified audio file.
Some pseudo-code (threading excluded):
- (void)startRecording
{
    /*
     Initialize writer and reader here: [...]
     */
    backgroundAudioTrackOutput = [AVAssetReaderTrackOutput
                                  assetReaderTrackOutputWithTrack:backgroundAudioTrack
                                  outputSettings:nil];

    if ([backgroundAudioReader canAddOutput:backgroundAudioTrackOutput])
        [backgroundAudioReader addOutput:backgroundAudioTrackOutput];
    else
        NSLog(@"This doesn't happen");

    [backgroundAudioReader startReading];

    /* Some more code */

    recording = YES;
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!recording)
        return;

    if (videoConnection)
        [self writeVideoSampleBuffer:sampleBuffer];
    else if (audioConnection)
        [self writeAudioSampleBuffer:sampleBuffer];
}
The AVCaptureSession is already streaming the camera video and microphone audio, and is just waiting for the BOOL recording to be set to YES. This isn't exactly how I'm doing it, but a short, roughly equivalent representation. When the delegate method receives a CMSampleBufferRef of type audio, I call my own method writeAudioSamplebuffer:sampleBuffer. If this were to be done normally, without a background track as I'm trying to do, I'd simply put something like this: [assetWriterAudioInput appendSampleBuffer:sampleBuffer]; instead of calling my method. In my case, though, I need to overlap two buffers before writing:
- (void)writeAudioSamplebuffer:(CMSampleBufferRef)recordedSampleBuffer
{
    CMSampleBufferRef backgroundSampleBuffer =
        [backgroundAudioTrackOutput copyNextSampleBuffer];

    /* DO MAGIC HERE */
    CMSampleBufferRef resultSampleBuffer =
        [self overlapBuffer:recordedSampleBuffer
       withBackgroundBuffer:backgroundSampleBuffer];
    /* END MAGIC HERE */

    [assetWriterAudioInput appendSampleBuffer:resultSampleBuffer];
}
The problem:
I have to add incremental sample buffers from a local file to the live buffers coming in. The method I have created, named overlapBuffer:withBackgroundBuffer:, isn't doing much right now. I know how to extract the AudioBufferList, AudioBuffer, mData etc. from a CMSampleBufferRef, but I'm not sure how to actually add them together; however, I haven't been able to test different ways of doing that, because the real problem happens before that. Before the magic should happen, I am in possession of two CMSampleBufferRefs, one received from the microphone and one read from the file, and this is the problem:
The sample buffer received from the background-music-file is different than the one I receive from the recording-session. It seems like the call to [self.backgroundAudioTrackOutput copyNextSampleBuffer]; receives a large number of samples. I realize that this might be obvious to some people, but I've never before been at this level of media-technology. I see now that it was wishful thinking to call copyNextSampleBuffer each time I receive a sampleBuffer from the session, but I don't know when/where to put it.
As far as I can tell, the recording-session gives one audio-sample in each sample-buffer, while the file-reader gives multiple samples in each sample-buffer. Can I somehow create a counter to count each received recorded sample/buffers, and then use the first file-sampleBuffer to extract each sample, until the current file-sampleBuffer has no more samples 'to give', and then call [..copyNext..], and do the same to that buffer?
As I'm in full control of both the recording and the file's codecs, formats etc, I am hoping that such a solution wouldn't ruin the 'alignment'/synchronization of the audio. Given that both samples have the same sampleRate, could this still be a problem?
Note
I'm not even sure if this is possible, but I see no immediate reason why it shouldn't.
Also worth mentioning: when I try to use a video file instead of an audio file, and continually pull video sample buffers, they line up perfectly.
I am not familiar with AVCaptureOutput, since all my sound/music sessions were built using AudioToolbox instead of AVFoundation. However, I guess you should be able to set the size of the recording capture buffer. If not, and you still get just one sample at a time, I would recommend storing each piece of data obtained from the capture output in an auxiliary buffer. When the auxiliary buffer reaches the same size as the file-reading buffer, then call [self overlapBuffer:auxiliarSampleBuffer withBackgroundBuffer:backgroundSampleBuffer];
I hope this helps. If not, I can provide an example of how to do this using CoreAudio. With CoreAudio I have been able to obtain 1024-sample LPCM buffers from both the microphone capture and the file reading, so the overlapping is immediate.
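To illustrate just the bookkeeping of that idea, here is a small sketch in plain C++ (not AVFoundation/CoreAudio API; it assumes both sides have already been converted to mono float LPCM at the same sample rate, and the class and callback names are hypothetical): accumulate the small recorded chunks until a full file-sized block is available, fetch the next background block, mix, and hand the result to the writer.

#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical sketch: gather small recorded chunks until one file-sized
// block is complete, then additively mix it with the next background block.
class BufferMixer {
public:
    using BlockSource = std::function<std::vector<float>()>;              // models copyNextSampleBuffer
    using BlockSink   = std::function<void(const std::vector<float>&)>;   // models appendSampleBuffer

    BufferMixer(std::size_t fileBlockSize, BlockSource nextBackground, BlockSink writeOut)
        : blockSize_(fileBlockSize),
          nextBackground_(std::move(nextBackground)),
          writeOut_(std::move(writeOut)) {}

    // Called for every (small) chunk of recorded LPCM samples.
    void pushRecorded(const float* samples, std::size_t count)
    {
        pending_.insert(pending_.end(), samples, samples + count);

        while (pending_.size() >= blockSize_) {
            std::vector<float> block(pending_.begin(), pending_.begin() + blockSize_);
            pending_.erase(pending_.begin(), pending_.begin() + blockSize_);

            // "Overlapping" here is a plain additive mix, sample by sample.
            std::vector<float> background = nextBackground_();
            for (std::size_t i = 0; i < block.size() && i < background.size(); ++i)
                block[i] += background[i];

            writeOut_(block);   // hand the mixed block on to the writer
        }
    }

private:
    std::size_t blockSize_;
    BlockSource nextBackground_;
    BlockSink   writeOut_;
    std::vector<float> pending_;
};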
