FFmpeg - Interrupted System Call - dart

I am trying to use FFmpeg for HLS streaming. The code works fine in a console application on Linux and Windows. I also load this code (inside a library) dynamically using dlopen on Linux, and it also works.
The problem arises when I load it dynamically from Dart FFI code.
It seems as if Dart runs the call on another thread, with a different thread context that lacks some permission, or something along those lines.
The FFmpeg log output is:
[tcp # 0x7f00a4084f80] Starting connection attempt to 179.184.214.178 port 80
[tcp # 0x7f00a4084f80] Connection to tcp://qthttp.apple.com.edgesuite.net:80 failed: Interrupted system call
cannot open input: -4
Interrupted system call
The code is below:
AVDictionary* options = NULL;

// Open the HLS input; stream->streamContext is the AVFormatContext* for this stream.
int err = avformat_open_input(&stream->streamContext, url, nullptr, &options);
if (err < 0)
{
    std::cerr << "cannot open input: " << err << std::endl;

    // Translate the AVERROR code into a readable message.
    char errorDescription[1024];
    av_strerror(err, errorDescription, sizeof(errorDescription));
    std::cerr << errorDescription << std::endl;

    avformat_free_context(stream->streamContext);
}
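The error code -4 is AVERROR(EINTR): the blocking connect() inside FFmpeg was cut short by a signal delivered to the calling thread (the Dart VM, for instance, uses signal-based sampling for its profiler on Linux). As a hedged sketch, not a verified fix for the Dart FFI case, one common workaround is simply to retry the open while the error stays EINTR; the helper name below is hypothetical:

extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/error.h>
}
#include <cerrno>  // EINTR

// Retry avformat_open_input() while the underlying system call keeps being
// interrupted (AVERROR(EINTR) == -4 on Linux, "Interrupted system call").
static int open_input_retrying(AVFormatContext** ctx, const char* url, int maxRetries)
{
    int err = 0;
    for (int attempt = 0; attempt <= maxRetries; ++attempt) {
        err = avformat_open_input(ctx, url, nullptr, nullptr);
        if (err != AVERROR(EINTR))
            break;  // success, or a different error: stop retrying
        // On failure avformat_open_input() frees *ctx and sets it to NULL,
        // so the next attempt starts from a clean context.
    }
    return err;
}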

Related

How to get a wasm stacktrace

I'm looking for something equivalent to gdb --core for WebAssembly.
Take this example:
// crash.cpp
#include <iostream>

int main() {
    std::cout << "crashing soon..." << std::endl;
    int *a = 0;
    *a = 1;  // write through a null pointer
}
I compile this with:
$ em++ -g4 crash.cpp -o crash.html --source-map-base http://localhost:8080/
And start the server:
$ emrun --no_browser --port 8080 crash.html
So how can I get a stack trace for this core dump/crash? The console in both Chrome and Firefox just shows a JS stack trace when visiting the page, and that won't help me. Looking at Sources => Call Stack in the Chrome console just shows "Not paused" after the crash.
This is on Debian 11, emscripten 2.0.12~dfsg-2, clang-11.
The reason is that what you're doing is not an error in WebAssembly. As on many embedded platforms, writing to or reading from a null pointer is a perfectly valid operation in the WebAssembly memory model.
However, Emscripten tries to help you catch this, since it's a common mistake in C/C++: it checks the value at address zero after the program has finished executing and throws a helpful assertion if that value happens to have been overwritten. This is why you get a stack trace with only JavaScript bits in it: the check is done by JavaScript once the Wasm stack has already been exited.
If you tried a different operation that does cause an immediate abort, for example assert(false), then you would see WebAssembly and/or C/C++ frames on the stack as expected.
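For illustration, here is a minimal variant of crash.cpp that aborts inside the Wasm module (compile it with the same em++ command as above); the file name crash_abort.cpp is just a placeholder:

// crash_abort.cpp
#include <cassert>
#include <iostream>

int main() {
    std::cout << "aborting soon..." << std::endl;
    // Aborts immediately, so the browser stack trace should show
    // WebAssembly / C++ frames instead of only JavaScript ones.
    assert(false && "deliberate abort");
}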

Cross-platform way to handle std::string/std::wstring with std::filesystem::path

I have a sample piece of C++ code that throws an exception on Linux:
namespace fs = std::filesystem;
const fs::path pathDir(L"/var/media");
const fs::path pathMedia = pathDir / L"COMPACTO - Diogo Poças.mxf"; // <-- Exception thrown here
The exception being thrown is: filesystem error: Cannot convert character sequence: Invalid or incomplete multibyte or wide character
I surmise that the issue is related to the use of the ç character.
Why is this wide string (wchar_t) an "invalid or incomplete multibyte or wide character"?
Going forward, how do I make related code cross-platform so that it runs on Windows and/or Linux?
Are there helper functions I need to use?
What rules do I need to enforce from a programmer's PoV?
I've seen a response here that says "Don't use wide strings on Linux"; do the same rules apply to Windows?
Linux Environment (not forgetting the fact that I'd like to run cross-platform):
Ubuntu 18.04.3
gcc 9.2.1
C++17
Unfortunately std::filesystem was not written with operating system compatibility in mind, at least not as advertised.
For Unix-based systems, we need UTF-8 (u8"string", or just "string", depending on the compiler).
For Windows, we need UTF-16 (L"string").
In C++17 you can use std::filesystem::u8path (which for some reason is deprecated in C++20). On Windows, this converts UTF-8 to UTF-16, so you can then pass UTF-16 to the APIs.
#ifdef _WINDOWS_PLATFORM
// Windows I/O setup
_setmode(_fileno(stdin), _O_WTEXT);
_setmode(_fileno(stdout), _O_WTEXT);
#endif

fs::path path = fs::u8path(u8"ελληνικά.txt");

#ifdef _WINDOWS_PLATFORM
std::wcout << "UTF16: " << path << std::endl;
#else
std::cout << "UTF8: " << path << std::endl;
#endif
Or use your own macro to select UTF-16 for Windows (L"string") and UTF-8 for Unix-based systems (u8"string", or just "string"). Make sure UNICODE is defined for Windows.
#ifdef _WINDOWS_PLATFORM
#define _TEXT(quote) L##quote
#define _tcout std::wcout
#else
#define _TEXT(quote) u8##quote
#define _tcout std::cout
#endif
fs::path path(_TEXT("ελληνικά.txt"));
_tcout << path << std::endl;
See also
https://en.cppreference.com/w/cpp/filesystem/path/native
Note, Visual Studio has a special constructor for std::fstream which allows using a UTF-16 filename, and it's compatible with UTF-8 read/write. For example, the following code works in Visual Studio:
fs::path utf16 = fs::u8path(u8"UTF8 filename ελληνικά.txt");
std::ofstream fout(utf16);
fout << u8"UTF8 content ελληνικά";
I am not sure if that's supported on latest gcc versions running on Windows.
This looks like a GCC bug.
According to std::filesystem::path::path you should be able to call the std::filesystem::path constructor with a wide-character string, independently of the underlying platform (that's the whole point of std::filesystem).
Clang shows the correct behavior.

ffmpeg and docker result in averror_invaliddata

I am running ffmpeg within a Docker container, and I am having a problem.
I have a whittled-down debug program (listing below) that simply opens (and reads) a test.mp4. I open the file with fopen(), then read and print the first 32 bytes. The values agree and are correct whether running in Docker or locally (i.e. the file is accessible and readable in the Docker container). However, when it gets to avformat_open_input():
Running locally: works just fine (and the real program fully decodes it).
Running in the Docker container: the call to avformat_open_input() fails with AVERROR_INVALIDDATA. This is with test.mp4 in the Docker directory.
I'm a bit lost at this point. I appreciate any ideas.
Test program listing:
#include <stdio.h>
#include <stdlib.h>   // exit()
extern "C" {
#include <libavformat/avformat.h>
}

int main(int argc, char **argv)
{
    int erc = 0;
    AVFormatContext* srcFmtCtx = NULL;
    const char* srcFile = "test.mp4";

    // Simple read/echo of the first 32 bytes
    FILE *f = fopen(srcFile, "rb");
    printf("fopen() %s 0x%lx\n", srcFile, (unsigned long)f);
    for (int i = 0; i < 4; i++) {
        long val;
        erc = fread(&val, 1, sizeof(long), f);
        printf("[%d]0x%lx ", i, val);
    }
    printf("\n");
    fclose(f);

    // Open source with ffmpeg libavformat utils
    printf("avformat_open_input(): %s\n", srcFile);
    if ((erc = avformat_open_input(&srcFmtCtx, srcFile, NULL, NULL)) < 0) {
        printf("avformat_open_input(): Returned AvError: %d\n", erc);
        exit(1);
    }
    printf("avformat_open_input(): Returned normally\n");
    avformat_close_input(&srcFmtCtx);
}
It seems my problem is one of version mismatch.
My Docker container is built from a ubuntu:18.10 image (the latest available).
That image provides ffmpeg v3.3.5 (or so), libavformat v57.83.100:
avformat_version(): 3756900 Ident: Lavf57.83.100
My local ffmpeg installation is v4.0, libavformat v58.12.100
avformat_version(): 3804260 Build: 3804260 Ident: Lavf58.12.100
One important difference between those versions is that av_register_all() is deprecated in the newer libavformat and no longer needed. I based my code on the new sources and did not include that call. However, the older version of libavformat requires it, hence the failure in the Docker container but not in my local environment.
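A sketch of one way to keep a single source tree working against both libavformat majors, guarding the call on the library's own version macro (the init_libavformat helper name is made up for this example):

extern "C" {
#include <libavformat/avformat.h>  // defines LIBAVFORMAT_VERSION_MAJOR
}

static void init_libavformat(void)
{
#if LIBAVFORMAT_VERSION_MAJOR < 58
    // Older libavformat (e.g. Lavf57.83.100 in the ubuntu:18.10 image)
    // still needs explicit registration of the muxers/demuxers.
    av_register_all();
#endif
    // Lavf58+ (FFmpeg 4.0) registers everything automatically.
}

Call it once before avformat_open_input().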

OpenCV - Streaming H264 over RTSP using FFMPEG in version 3.4

I am trying to capture an RTSP stream from a VIRB 360 camera into OpenCV. The video is H264 and, according to one of the comments here, OpenCV 3.4 should be able to handle it. Here is the code:
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap("rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720&liveStreamActive=1", cv::CAP_FFMPEG);
    if (!cap.isOpened())
    {
        std::cout << "Input error\n";
        return -1;
    }

    cv::namedWindow("Video Feed", cv::WINDOW_AUTOSIZE);
    cv::Mat frame;
    for (;;)
    {
        //std::cout << "Format: " << cap.get(CV_CAP_PROP_FORMAT) << "\n";
        cap >> frame;
        cv::imshow("Video Feed", frame);
        if (cv::waitKey(10) == 27)
        {
            break;
        }
    }
    cv::destroyAllWindows();
    return 0;
}
I have compiled OpenCV with FFmpeg and GStreamer support. When I run the following GStreamer command, I am able to stream it, but with a delay of 3 seconds (not acceptable):
gst-launch-1.0 playbin uri=rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720\&liveStreamActive=1
On the other hand, I get a 0.5-second delay using an ffplay/ffmpeg command (acceptable):
ffplay -fflags nobuffer -rtsp_transport udp rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720\&liveStreamActive=1
or
ffplay -probesize 32 -sync ext rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720\&liveStreamActive=1
In the OpenCV code written above, using cv::CAP_FFMPEG flag in the line:
cv::VideoCapture cap("rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720&liveStreamActive=1", cv::CAP_FFMPEG);
gives the error:
[rtsp # 0x2312040] The profile-level-id field size is invalid (65)
[rtsp # 0x2312040] method SETUP failed: 461 Unsupported transport
Input error
If I use cv::CAP_GSTREAMER, it throws no error, but nothing happens. I believe the problem is that OpenCV is not able to handle the UDP transport layer. What are the possible solutions? Kindly provide suggestions.
Edit 1:
I was able to capture the stream by following this. I made the following changes: instead of cv::VideoCapture cap("rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720&liveStreamActive=1", cv::CAP_FFMPEG); the code now has:
#if WIN32
_putenv_s("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;udp");
#else
setenv("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;udp", 1);
#endif
auto cap = cv::VideoCapture("rtsp://192.168.0.1/livePreviewStream?maxResolutionVertical=720&liveStreamActive=1", cv::CAP_FFMPEG);
#if WIN32
_putenv_s("OPENCV_FFMPEG_CAPTURE_OPTIONS", "");
#else
unsetenv("OPENCV_FFMPEG_CAPTURE_OPTIONS");
#endif
However, it throws the following errors:
[rtsp # 0x2090580] The profile-level-id field size is invalid (65)
[rtsp # 0x2090580] Error parsing AU headers
[h264 # 0x208d240] error while decoding MB 69 40, bytestream -7
[rtsp # 0x2090580] Error parsing AU headers
[rtsp # 0x2090580] Error parsing AU headers
[h264 # 0x2316700] left block unavailable for requested intra4x4 mode -1
[h264 # 0x2316700] error while decoding MB 0 16, bytestream 112500
[rtsp # 0x2090580] Error parsing AU headers
which means the video is sometimes glitchy and the decoded frames show visible corruption.
I believe it has something to do with:
setenv("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;udp", 1);
I would appreciate any suggestions or improvements. Thank you.
Set this:
cv::VideoCapture cap;
cap.set(CV_CAP_PROP_BUFFERSIZE, 3);
I think this was answered here: OpenCV VideoCapture lag due to the capture buffer.
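A minimal sketch of how that could slot into the capture code above, using cv::CAP_PROP_BUFFERSIZE (the C++-API spelling of CV_CAP_PROP_BUFFERSIZE); whether the FFmpeg backend actually honors this property depends on the OpenCV build, so treat it as an experiment rather than a guaranteed fix:

#include <opencv2/videoio.hpp>

// Open the RTSP stream with the FFmpeg backend and cap the internal
// frame buffer at 3 frames to reduce end-to-end latency.
cv::VideoCapture openLowLatencyCapture()
{
    cv::VideoCapture cap("rtsp://192.168.0.1/livePreviewStream"
                         "?maxResolutionVertical=720&liveStreamActive=1",
                         cv::CAP_FFMPEG);
    cap.set(cv::CAP_PROP_BUFFERSIZE, 3);  // backend support varies
    return cap;
}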

webcam usage in opencv

I am new to OpenCV.
I want to capture images from a webcam (Intex IT-105WC).
I am using Microsoft Visual C++ 2008 Express Edition on Windows XP.
There is no problem building the solution, but when I try to debug the code it gives the following (this happens while executing cvCaptureFromCAM(CV_CAP_ANY)):
Loaded C:\Program Files\Common Files\Ahead\DSFilter\NeVideo.ax, Binary was not built with debug information.
and then the code exits.
So, is there a problem with my code, or is it a compatibility issue with the webcam?
#include "stdafx.h"
#include<stdio.h>
#include <cv.h>
#include <highgui.h>
void main(int argc,char *argv[])
{
int c;
IplImage* color_img;
CvCapture* cv_cap = cvCaptureFromCAM(CV_CAP_ANY);
if(!cv_cap)
{
printf( "ERROR: Capture is null!\n");
}
cvNamedWindow("Video",0); // create window
for(;;)
{
color_img = cvQueryFrame(cv_cap); // get frame
if(color_img != 0)
cvShowImage("Video", color_img); // show frame
c = cvWaitKey(10); // wait 10 ms or for key stroke
if(c == 27)
break; // if ESC, break and quit
}
/* clean up */
cvReleaseCapture( &cv_cap );
cvDestroyWindow("Video");
}
This error message seems to be related to a video codec from the Nero burning tools.
If you do not need this codec, you could just unregister it and see if that solves your problem.
To do that, execute the following on the commandline:
regsvr32 /u "C:\Program Files\Common Files\Ahead\DSFilter\NeVideo.ax"
You should see the message:
DllUnregisterServer in C:\Program Files\Common Files\Ahead\DSFilter\NeVideo.ax succeeded.
To undo this, execute
regsvr32 "C:\Program Files\Common Files\Ahead\DSFilter\NeVideo.ax"
