I am trying to capture an RTSP stream from a VIRB 360 camera into OpenCV. The video is H264, and according to one of the comments here, OpenCV 3.4 should be able to handle it. Here is the code:
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap("rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720&liveStreamActive=1", cv::CAP_FFMPEG);
    if (!cap.isOpened())
    {
        std::cout << "Input error\n";
        return -1;
    }

    cv::namedWindow("Video Feed", cv::WINDOW_AUTOSIZE);
    cv::Mat frame;
    for (;;)
    {
        //std::cout << "Format: " << cap.get(cv::CAP_PROP_FORMAT) << "\n";
        cap >> frame;
        if (frame.empty()) // guard against a dropped or ended stream
            break;
        cv::imshow("Video Feed", frame);
        if (cv::waitKey(10) == 27)
        {
            break;
        }
    }
    cv::destroyAllWindows();
    return 0;
}
I have compiled OpenCV with FFmpeg and GStreamer support. When I run the following GStreamer command, I am able to stream the video, but with a delay of 3 seconds (not acceptable):
gst-launch-1.0 playbin uri=rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720\&liveStreamActive=1
On the other hand, I get a 0.5-second delay (acceptable) using the following ffplay commands:
ffplay -fflags nobuffer -rtsp_transport udp rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720\&liveStreamActive=1
or
ffplay -probesize 32 -sync ext rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720\&liveStreamActive=1
In the OpenCV code above, using the cv::CAP_FFMPEG flag in the line:
cv::VideoCapture cap("rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720&liveStreamActive=1", cv::CAP_FFMPEG);
gives the error:
[rtsp # 0x2312040] The profile-level-id field size is invalid (65)
[rtsp # 0x2312040] method SETUP failed: 461 Unsupported transport
Input error
If I use cv::CAP_GSTREAMER, it throws no error, but nothing happens either. I believe the problem is that OpenCV cannot handle the UDP transport layer. What are the possible solutions?
Edit 1:
I was able to capture the stream by following this. I made the following change: instead of cv::VideoCapture cap("rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720&liveStreamActive=1", cv::CAP_FFMPEG); the code now has:
#if WIN32
_putenv_s("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;udp");
#else
setenv("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;udp", 1);
#endif
auto cap = cv::VideoCapture("rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720&liveStreamActive=1", cv::CAP_FFMPEG);
#if WIN32
_putenv_s("OPENCV_FFMPEG_CAPTURE_OPTIONS", "");
#else
unsetenv("OPENCV_FFMPEG_CAPTURE_OPTIONS");
#endif
However, it throws the following errors:
[rtsp # 0x2090580] The profile-level-id field size is invalid (65)
[rtsp # 0x2090580] Error parsing AU headers
[h264 # 0x208d240] error while decoding MB 69 40, bytestream -7
[rtsp # 0x2090580] Error parsing AU headers
[rtsp # 0x2090580] Error parsing AU headers
[h264 # 0x2316700] left block unavailable for requested intra4x4 mode -1
[h264 # 0x2316700] error while decoding MB 0 16, bytestream 112500
[rtsp # 0x2090580] Error parsing AU headers
which means the video is sometimes glitchy.
I believe it has something to do with:
setenv("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;udp", 1);
I would appreciate any suggestions or improvements. Thank You.
Set this:
cv::VideoCapture cap;
cap.set(cv::CAP_PROP_BUFFERSIZE, 3);
(cv::CAP_PROP_BUFFERSIZE is the current name of the old CV_CAP_PROP_BUFFERSIZE constant; note that not every capture backend honors this property.)
I think this was answered here: OpenCV VideoCapture lag due to the capture buffer
I am programming in C++ for the first time and have a problem I cannot solve at the moment.
My trained network works fine with ONNX Runtime on CPU, but I have trouble running it on the GPU.
Environment and versions:
Windows 10, RTX 3090 Driver: 516.59
Visual Studio Code Community 2022
Microsoft.ML.OnnxRuntime.Gpu 1.12.0
Cuda 11.4
cuDnn 8.2.2.26
#include <iostream>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>
#include <string>
#include <onnxruntime_cxx_api.h>

int main() {
    const std::string imageFile = "image";
    auto onnxFile = L"path to onnx file";

    // Testing CV
    cv::Mat img = cv::imread(imageFile);

    //-----------------------------------------------GPU SUPPORT------------------------------------------------
    auto providers = Ort::GetAvailableProviders();
    for (const auto& provider : providers) {
        std::cout << provider << std::endl;
    }

    Ort::Session session(nullptr);
    Ort::Env env(OrtLoggingLevel::ORT_LOGGING_LEVEL_WARNING, "test");
    Ort::SessionOptions session_options;
    session_options.SetIntraOpNumThreads(1);
    Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_CUDA(session_options, 0)); // Crash is here
    //----------------------------------------------------------------------------------------------------------

    // create session
    session = Ort::Session(env, onnxFile, session_options);
}
The error I get is:
Exception thrown at 0x00007FFD36393F68 (cufft64_10.dll) in
programm.exe: 0xC0000005: Access violation writing location
0x000000D3562B0000. The file is in C:\Program Files\NVIDIA GPU
Computing Toolkit\CUDA\v11.4\bin\cufft64_10.dll.
I hope you can help me figure out what I have done wrong in my installation or code snippet.
I am trying to use FFmpeg HLS streaming. The code works fine in a console application on Linux and Windows. I can also load this code (inside a library) dynamically using dlopen on Linux, and it works as well.
The problem arises when I load it dynamically from Dart FFI code.
It seems that Dart runs it in another thread with a different thread context, without some permission, or something like that.
The FFmpeg log output is:
[tcp # 0x7f00a4084f80] Starting connection attempt to 179.184.214.178 port 80
[tcp # 0x7f00a4084f80] Connection to tcp://qthttp.apple.com.edgesuite.net:80 failed: Interrupted system call
cannot open input: -4
Interrupted system call
The code is below:
AVDictionary* options = NULL;
int err = avformat_open_input(&stream->streamContext, url, nullptr, &options);
if (err < 0)
{
    std::cerr << "cannot open input: " << err << std::endl;
    char errorDescription[1024];
    av_strerror(err, errorDescription, 1024);
    std::cerr << errorDescription << std::endl;
    avformat_free_context(stream->streamContext);
}
I'm testing an example of stack protector from ARM reference document (https://developer.arm.com/documentation/101754/0616/armclang-Reference/armclang-Command-line-Options/-fstack-protector---fstack-protector-all---fstack-protector-strong---fno-stack-protector), which has code like:
// main.c
#include <stdio.h>
#include <stdlib.h>

void *__stack_chk_guard = (void *)0xdeadbeef;

void __stack_chk_fail(void)
{
    fprintf(stderr, "Stack smashing detected.\n");
    exit(1);
}

void get_input(char *data);

int main(void)
{
    char buffer[8] = { 0 };
    char *overflowed = &buffer[8];
    get_input(buffer);
    // =============================================================
    // The following line is added by me. It triggers stack smashing,
    // but strcpy() should also trigger the stack smashing detection.
    // *overflowed = 0xFF;
    printf("buffer: %s\n", buffer);
    printf("buffer[8]: 0x%08x\n", buffer[8]);
    // =============================================================
    return buffer[0];
}

// get.c
#include <string.h>
void get_input(char *data)
{
    strcpy(data, "01234567");
}
I think this code intends to smash the stack via strcpy(), which writes its NUL terminator one byte past the buffer (into buffer[8]). But the actual behavior is that no message is printed at all:
➜ stack_protector gcc main.c get.c -o test -fstack-protector-all
➜ stack_protector ./test
buffer: 01234567
buffer[8]: 0x00000000
I have read that the stack protector cannot detect smashing if the buffer is smaller than 8 bytes, but I don't think that applies here. Is there anything I am missing?
I stepped through the code with a debugger and figured out that the stack guard value was not set properly: its lowest 8 bits are always zero, which results in the detection miss.
Since the value at fs:0x28 is set up by the operating system and C runtime, I reported this to my Linux distro's bug tracker.
--
(Added) I looked into the kernel code and found the following:
/*
* On 64-bit architectures, protect against non-terminated C string overflows
* by zeroing out the first byte of the canary; this leaves 56 bits of entropy.
*/
#ifdef CONFIG_64BIT
# ifdef __LITTLE_ENDIAN
# define CANARY_MASK 0xffffffffffffff00UL
# else /* big endian, 64 bits: */
# define CANARY_MASK 0x00ffffffffffffffUL
# endif
#else /* 32 bits: */
# define CANARY_MASK 0xffffffffUL
#endif
Basically, zeroing the first byte of the canary is intentional. The embedded \0 terminates C strings, which prevents the canary value from being leaked by string-printing functions, and it also means that a strcpy that overruns the buffer by exactly its NUL terminator writes a zero over a byte that is already zero. That is why I cannot reproduce a stack smashing error with this one-byte NUL overflow.
I am following the image-subscriber tutorial on the official ROS page. When I run my_subscriber, no window pops up. I type:
rosrun image_transport_tutorial my_subscriber
The output is:
init done
opengl support available
And then nothing happens. (Even the "init done" output is unexplained, because there is no such output line in my_subscriber.cpp.)
I am following this tutorial - ROS tutorial
I already have roscore and rosrun image_transport_tutorial my_publisher pathtofile running in different terminals.
I verified that the publisher is publishing by running:
rostopic list -v
The my_subscriber file has the following contents:
#include <iostream>
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <opencv2/highgui/highgui.hpp>
#include <cv_bridge/cv_bridge.h>
#include <stdio.h>

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    try
    {
        cv::imshow("view", cv_bridge::toCvShare(msg, "bgr8")->image);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("Could not convert from '%s' to 'bgr8'.", msg->encoding.c_str());
    }
}

int main(int argc, char **argv)
{
    std::cout << "kapil";
    ros::init(argc, argv, "image_listener");
    ros::NodeHandle nh;
    cv::namedWindow("view");
    cv::startWindowThread();
    image_transport::ImageTransport it(nh);
    image_transport::Subscriber sub = it.subscribe("camera/image", 1, imageCallback);
    ros::spin();
    cv::destroyWindow("view");
}
Solved: I added a waitKey call in the try block, as suggested in one of the answers:
cv::waitKey(30);
According to the comment to this answer, using cv::startWindowThread() does not always work. Maybe this is the issue in your case.
Try to add cv::waitKey(10) after cv::imshow instead. This will wait for some key press for 10 milliseconds, giving the window time to show the image.
(This always seemed to me like a dirty hack, but it is the common way to show images in OpenCV...)
I am new to OpenCV.
I want to capture images from a webcam (Intex IT-105WC).
I am using Microsoft Visual C++ 2008 Express Edition on Windows XP.
The solution builds without problems, but when I try to debug the code it prints the following (this happens while executing cvCaptureFromCAM(CV_CAP_ANY)):
Loaded C:\Program Files\Common Files\Ahead\DSFilter\NeVideo.ax, Binary was not built with debug information.
and then the program exits.
So is there a problem with my code, or is it a compatibility issue with the webcam?
#include "stdafx.h"
#include<stdio.h>
#include <cv.h>
#include <highgui.h>
void main(int argc,char *argv[])
{
int c;
IplImage* color_img;
CvCapture* cv_cap = cvCaptureFromCAM(CV_CAP_ANY);
if(!cv_cap)
{
printf( "ERROR: Capture is null!\n");
}
cvNamedWindow("Video",0); // create window
for(;;)
{
color_img = cvQueryFrame(cv_cap); // get frame
if(color_img != 0)
cvShowImage("Video", color_img); // show frame
c = cvWaitKey(10); // wait 10 ms or for key stroke
if(c == 27)
break; // if ESC, break and quit
}
/* clean up */
cvReleaseCapture( &cv_cap );
cvDestroyWindow("Video");
}
This error message seems to be related to a video codec from the Nero burning tools.
If you do not need this codec, you could unregister it and see if that solves your problem.
To do that, execute the following on the command line:
regsvr32 /u "C:\Program Files\Common Files\Ahead\DSFilter\NeVideo.ax"
You should see the message:
DllUnregisterServer in C:\Program Files\Common Files\Ahead\DSFilter\NeVideo.ax succeeded.
To undo this, execute
regsvr32 "C:\Program Files\Common Files\Ahead\DSFilter\NeVideo.ax"