How to pass command line arguments to FFMpeg in iOS

This is a beginner question; I am new to iOS (I started today), so please pardon my ignorance and lack of iOS knowledge.
After building and successfully using FFmpeg on Android, I wanted to do the same for iOS.
So I built FFmpeg successfully for iOS by following this link, but after all that pain I am confused about how to actually use it: how can I pass command line arguments to the libffmpeg.a file?
I am assuming there must be a way to run the .a file as an executable, pass it command line arguments, and let FFmpeg do the magic; I did the same on Android and it worked beautifully.
I am also aware that I can take the ffmpeg.c file and use its main method, but the question remains: how do I pass those command line arguments?
Is there something I am supposed to be aware of here? Is my current approach correct, or am I falling short somewhere?
I want to mix two audio files, so the command for doing that would be ffmpeg -i firstSound.wav -i secondSound.wav -filter_complex amix=inputs=2:duration=longest finalOutput.wav. How do I do the same in iOS?
Can someone please shed some light on this?

You don't pass arguments to a .a file, as it's a static library: something you build your application with, giving you access to the functions provided by the FFmpeg library. I'm not sure what the state of play on Android is, but it's likely generating a command line executable instead.
Have a look at the FFmpeg documentation; there's probably a way to do what you want with the library, however building and running ffmpeg as a standalone, pass-in-arguments binary is unlikely to be possible.

You can do it in your main.c. Of course you wouldn't hardcode the args; these are just for illustration.
I assume you're using ffmpeg for playback, since you're playing with iframeextractor. What is the actual goal of what you're trying to do?
/* Called from main */
int main(int argc, char **argv)
{
    int flags, i;

    /* Example: hardcoding the argument list before it is parsed.
    argv[1] = "-fs";
    argv[2] = "-skipframe";
    argv[3] = "30";
    argv[4] = "-fast";
    argv[5] = "-sync";
    argv[6] = "video";
    argv[7] = "-drp";
    argv[8] = "-skipidct";
    argv[9] = "10";
    argv[10] = "-skiploop";
    argv[11] = "50";
    argv[12] = "-threads";
    argv[13] = "5";
    //argv[14] = "-an";
    argv[15] = "http://172.16.1.33:63478/hulu-f4fa0821-767a-490a-8cb5-f03788760e31/1-hulu-f4fa0821-767a-490a-8cb5-f03788760e31.mpg";
    argc += 14;
    */

    /* register all codecs, demuxers and protocols */
    avcodec_register_all();
    avdevice_register_all();
    av_register_all();

    parse_options(argc, argv, options, opt_input_file);
    /* ... rest of main ... */
}
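Applied to the amix command from the question, a minimal sketch of the same hardcoding idea (the file names are placeholders; in a real app you would build full paths from your bundle or Documents directory):

/* Sketch: hand-built argument vector for the question's amix command.
   File names are placeholders for illustration only. */
char *amix_args[] = {
    (char *)"ffmpeg",                       /* argv[0]: program name */
    (char *)"-i", (char *)"firstSound.wav",
    (char *)"-i", (char *)"secondSound.wav",
    (char *)"-filter_complex", (char *)"amix=inputs=2:duration=longest",
    (char *)"finalOutput.wav"
};
int amix_argc = sizeof(amix_args) / sizeof(amix_args[0]);  /* 8 */
/* hand amix_argc/amix_args to whatever consumes argc/argv,
   e.g. the parse_options() call shown above */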

Related

Why does my code crash when libvlc_media_new_path() is called?

After several days of trying to solve my problem myself, I would like to kindly ask for your help:
I am trying to make the libvlc / SDL 2.0 tutorial work.
I am coding in Visual Studio 2022, in an x86 C++ console project.
I have linked the libvlc library path and include path, and I have added the libvlc.lib file in my project's linker settings.
The program compiles without error and crashes when libvlc_media_new_path is called.
You can see all the different path formats I have tried in my minimal reproducible example below.
My sources:
I downloaded the vlc master from GitHub to get the headers / include directory.
I downloaded the vlc-3.0.17.4-win32 release and took libvlc.dll from there.
From libvlc.dll I created the .lib file following a Visual Studio command prompt procedure.
What I noticed is that the function libvlc_media_new_path() now only takes the path as an argument. All the examples I find on the internet pass the libvlc instance AND the path.
Thank you so much for your help!
#include <stdlib.h>
#include <stdio.h>   /* for printf */
#include "vlc/vlc.h"

int main(int argc, char* argv[]) {
    libvlc_instance_t* libvlc;
    libvlc_media_t* m;
    libvlc_media_player_t* mp;

    libvlc = libvlc_new(0, NULL);
    if (NULL == libvlc) {
        printf("LibVLC initialization failure.\n");
        return EXIT_FAILURE;
    }
    m = libvlc_media_new_path("/1.mp4");
    //m = libvlc_media_new_path("C:\\Programmieren\\PACA\\1.mp4");
    //m = libvlc_media_new_path("C:/Programmieren/PACA/1.mp4");
    //m = libvlc_media_new_path("C://Programmieren//PACA//1.mp4");
    //m = libvlc_media_new_path("C:\Programmieren\PACA\1.mp4");
    //m = libvlc_media_new_path("file:///C:/Programmieren/PACA/1.mp4");
    mp = libvlc_media_player_new_from_media(libvlc, m);
    return 0;
}
If you go to GitHub and click on the Tags link, you can get the headers for version 3.0.17.4. There you will see that libvlc_media_new_path takes an instance as its first argument; the master headers you downloaded belong to a newer API.
The other option would be to get or build the 3.0.18 DLL.
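For illustration, a minimal sketch of the corrected call once the 3.0.x headers are in place (the path is taken from the question's commented-out attempts):

/* With the 3.0.x headers, libvlc_media_new_path() takes the instance as its
   first argument; calling it through the mismatched 4.x prototype is what
   makes the 3.0.17.4 DLL crash. */
m = libvlc_media_new_path(libvlc, "C:/Programmieren/PACA/1.mp4");
if (NULL == m) {
    printf("Media creation failure.\n");
    return EXIT_FAILURE;
}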

How do I use the CLI interface of FFMpeg from a static build?

I have added this (https://github.com/kewlbear/FFmpeg-iOS-build-script) version of ffmpeg to my project, but I can't see the entry point to the library in the included headers.
How do I get access to the same text-command-based system that the standalone application has, or an equivalent?
I would also be happy if someone could point me towards documentation that lets you use FFmpeg without the command line interface.
This is what I am trying to execute (I have it working on Windows and Android using the CLI version of ffmpeg):
ffmpeg -framerate 30 -i snap%03d.jpg -itsoffset 00:00:03.23333 -itsoffset 00:00:05 -i soundEffect.WAV -c:v libx264 -vf fps=30 -pix_fmt yuv420p result.mp4
Actually, you can build the ffmpeg library including the code of the ffmpeg binary (ffmpeg.c). The only thing to take care of is renaming its function main(int argc, char **argv), for example to ffmpeg_main(int argc, char **argv); then you can call it with arguments just as if you were executing the ffmpeg binary. Note that argv[0] should contain the program name; just "ffmpeg" should work.
The same approach was used in the library VideoKit for Android.
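For illustration, a minimal sketch of that approach using the command from the question (ffmpeg_main is the renamed main() described above; the file names are placeholders for wherever your app stores them):

// Sketch: invoke the renamed ffmpeg entry point with the question's command.
extern "C" int ffmpeg_main(int argc, char **argv);

int runConversion(void) {
    char *args[] = {
        (char *)"ffmpeg",    // argv[0]: program name, as noted above
        (char *)"-framerate", (char *)"30",
        (char *)"-i", (char *)"snap%03d.jpg",
        (char *)"-itsoffset", (char *)"00:00:03.23333",
        (char *)"-itsoffset", (char *)"00:00:05",
        (char *)"-i", (char *)"soundEffect.WAV",
        (char *)"-c:v", (char *)"libx264",
        (char *)"-vf", (char *)"fps=30",
        (char *)"-pix_fmt", (char *)"yuv420p",
        (char *)"result.mp4"
    };
    return ffmpeg_main(sizeof(args) / sizeof(args[0]), args);  // argc == 18
}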
To do what you want, you have to use your compiled FFmpeg library in your code.
What you are looking for is exactly the code provided in the FFmpeg documentation: libavformat/output-example.c (that means the AVFormat and AVCodec FFmpeg libraries in general).
Stack Overflow is not a "do it for me please" platform, so I prefer explaining here what you have to do; I will try to be precise and to answer all your questions.
I assume that you already know how to link your compiled (static or shared) library to your Xcode project; this is not the topic here.
So, let's talk about this code. It creates a video (containing a randomly generated video stream and audio stream) based on a duration. You want to create a video based on a picture list and a sound file. Perfect, there are only three main modifications you have to make:
The end condition is not reaching a duration but reaching the end of your file list (the code already has a #define STREAM_NB_FRAMES you can use to iterate over all your frames).
Replace the dummy fill_yuv_image with your own method that loads and decodes an image buffer from a file.
Replace the dummy write_audio_frame with your own method that loads and decodes the audio buffer from your file.
(You can find a "how to load audio file content" example in the documentation starting at line 271, easily adaptable for video content.)
In this code, compared to your CLI command, you can see that (collected in the sketch after this list):
const char *filename; in main should be your output file, "result.mp4".
#define STREAM_FRAME_RATE 25 (replace it with 30).
For MP4 generation, video frames are encoded as H.264 by default (in this code, the GOP size is 12), so there is no need to specify libx264.
#define STREAM_PIX_FMT PIX_FMT_YUV420P matches your desired yuv420p pixel format.
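Collected in one place, a sketch of those adjustments to output-example.c (the names are the ones used in that example; in the original, filename comes from argv):

/* Adjustments to output-example.c matching the CLI command above */
#define STREAM_FRAME_RATE 30               /* was 25; matches -framerate 30 */
#define STREAM_PIX_FMT    PIX_FMT_YUV420P  /* matches -pix_fmt yuv420p */

const char *filename = "result.mp4";       /* output file from the CLI */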
Now, with these official examples and the related documentation, you can achieve what you want. Be careful: there are some differences between the FFmpeg version used in these examples and the current FFmpeg version. For example:
st = av_new_stream(oc, 1); // line 60
Could be replaced by:
st = avformat_new_stream(oc, NULL);
st->id = 1;
Or:
if (avcodec_open(c, codec) < 0) { // line 97
Could be replaced by:
if (avcodec_open2(c, codec, NULL) < 0) {
Or again:
dump_format(oc, 0, filename, 1); // line 483
Could be replaced by:
av_dump_format(oc, 0, filename, 1);
Or CODEC_ID_NONE replaced by AV_CODEC_ID_NONE... etc.
Ask your questions, but you got all the keys! :)
MobileFFMpeg is an easy to use pod for the purpose. Instructions on how to use MobileFFMpeg at: https://stackoverflow.com/a/59325680/1466453
MobileFFMpeg gives a very simple method for translating ffmpeg commands into your iOS Objective-C program.
Virtually all ffmpeg commands and switches are supported. However, you have to get the pod with the appropriate license; e.g. min-gpl will not give you the features of libiconv. libiconv is covered by the video, gpl and full-gpl licenses.
Please highlight if you have specific issues regarding the use of MobileFFMpeg.

OpenCV won't capture frames from a RTMP source, while FFmpeg does

My goal is to capture a frame from an RTMP stream every second and process it using OpenCV. I'm using FFmpeg version N-71899-g6ef3426 and OpenCV 2.4.9 with the Java interface (but I'm first experimenting with Python).
For the moment, I can only take the quick and dirty solution: capture images using FFmpeg, store them on disk, and then read those images from my OpenCV program. This is the FFmpeg command I'm using:
ffmpeg -i "rtmp://antena3fms35livefs.fplive.net:1935/antena3fms35live-live/stream-lasexta_1 live=1" -r 1 capImage%03d.jpg
This currently works for me, at least with this particular RTMP source. I would then need to read those images from my OpenCV program in a proper way. I have not actually implemented that part yet, because I'm trying to find a better solution.
I think the ideal way would be to capture the RTMP frames directly from OpenCV, but I cannot find a way to do it. This is the Python code I'm using:
cv2.namedWindow("camCapture", cv2.WINDOW_AUTOSIZE)  # cv2.CV_WINDOW_AUTOSIZE does not exist in the cv2 namespace
cap = cv2.VideoCapture()
cap.open('rtmp://antena3fms35livefs.fplive.net:1935/antena3fms35live-live/stream-lasexta_1 live=1')

if not cap.isOpened():  # cap.open without parentheses only references the method
    print "Not open"

while True:
    ok, img = cap.read()  # read() returns (success flag, frame)
    if not ok:            # testing the flag avoids the ambiguous truth value of an array
        print "No frame"
        break
    cv2.imwrite("img1.jpg", img)  # imwrite needs a file extension to choose an encoder
    cv2.imshow("camCapture", img)
    cv2.waitKey(30)
Instead of the read() function, I have also tried the grab() and retrieve() functions, without any good result. The read() function executes every time, but no frame is ever returned.
Is there any other way to do it, or is there simply no way to get frames from a stream like this directly with OpenCV 2.4.9?
I've read that OpenCV uses FFmpeg for this kind of task, but as you can see, in my case FFmpeg is able to get frames from the stream while OpenCV is not.
In case I cannot find a way to get the frames directly from OpenCV, my next idea is to somehow pipe the FFmpeg output into OpenCV, which seems harder to implement.
Any ideas? Thank you!
UPDATE 1:
I'm on Windows 8.1. Since I was running the Python script from Eclipse PyDev, this time I ran it from cmd instead, and I'm getting the following warning:
warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:545)
As far as I could read, this warning means that either the file path is wrong or the codec is not supported. The question remains the same: is OpenCV not capable of getting the frames from this source?
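On the pipe idea from the question: a minimal, untested sketch of having FFmpeg decode the RTMP stream and write raw BGR frames to stdout, which OpenCV then wraps without copying. The URL is a placeholder, the frame size must be known in advance, and on Windows popen/pclose become _popen/_pclose with mode "rb":

// Sketch: pipe raw frames from ffmpeg into OpenCV without touching disk.
#include <cstdio>
#include <vector>
#include <opencv2/opencv.hpp>

int main() {
    const int W = 640, H = 480;  // assumed stream resolution
    // -f rawvideo -pix_fmt bgr24 makes ffmpeg emit frames OpenCV can wrap directly
    FILE* pipe = popen("ffmpeg -i \"rtmp://example.com/live/stream live=1\" "
                       "-f rawvideo -pix_fmt bgr24 -an pipe:1", "r");
    if (!pipe) return -1;

    std::vector<unsigned char> buf(W * H * 3);  // one BGR frame
    while (fread(&buf[0], 1, buf.size(), pipe) == buf.size()) {
        cv::Mat frame(H, W, CV_8UC3, &buf[0]);  // wraps the buffer, no copy
        cv::imshow("camCapture", frame);
        if (cv::waitKey(1) >= 0) break;
    }
    pclose(pipe);
    return 0;
}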
Actually, I spent more than one day figuring out how to solve this issue. Finally, I solved it with the help of this link.
Here is the client side code.
#include <iostream>  // for std::cout
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int, char**) {
    cv::VideoCapture vcap;
    cv::Mat image;
    const std::string videoStreamAddress = "rtmp://192.168.173.1:1935/live/test.flv";

    if (!vcap.open(videoStreamAddress)) {
        std::cout << "Error opening video stream or file" << std::endl;
        return -1;
    }
    cv::namedWindow("Output Window");

    for (;;) {
        if (!vcap.read(image)) {
            std::cout << "No frame" << std::endl;
            cv::waitKey();
        }
        cv::imshow("Output Window", image);
        if (cv::waitKey(1) >= 0) break;
    }
}
Note: in this case I created an Android application to capture real-time video and send it to an RTMP server (Wowza) deployed on a PC, which is why I wrote this C++ implementation for real-time video processing.
python -c "import cv2; print(cv2.getBuildInformation())"
Check whether your OpenCV build includes FFmpeg. If it does, your code should be fine; if not, rebuild OpenCV with FFmpeg.
On OS X:
brew install opencv --with-ffmpeg

Calling Library in Lua

I have created a Wireshark dissector in Lua for an application over TCP. I am attempting to use zlib compression and base64 decoding. How do I actually create or call an existing C library in Lua?
The documentation I have seen just says that you can get the libraries and use either the require() call or the luaopen_ call, but not how to actually make the program find and recognize the library. All of this is being done on Windows.
You can't load an arbitrary existing C library, one that was not created for Lua, with plain Lua; it's not trivial, at least.
The *.so/*.dll must follow a specific standard, which is bluntly described in Programming in Lua §26.2 and on the lua-users wiki (code sample). A similar question is answered here.
There are two ways you could solve your problem:
Writing your own Lua zlib library wrapper, following those standards.
Taking some already finished solution:
zlib#luapower
lua-zlib
ffi
Bigger list #lua-users wiki
The same applies to base64 encoding/decoding; the only difference is that there are already plain-Lua libraries for that. Code samples and a couple of links are on the lua-users wiki.
NOTE: Lua module package managers like LuaRocks or LuaDist might save you plenty of time.
Also, simply loading a Lua module usually consists of one line:
local zlib = require("zlib")
The module is searched for in the places defined in your Lua interpreter's luaconf.h file.
For 5.1 it's:
#if defined(_WIN32)
/*
** In Windows, any exclamation mark ('!') in the path is replaced by the
** path of the directory of the executable file of the current process.
*/
#define LUA_LDIR "!\\lua\\"
#define LUA_CDIR "!\\"
#define LUA_PATH_DEFAULT \
".\\?.lua;" LUA_LDIR"?.lua;" LUA_LDIR"?\\init.lua;" \
LUA_CDIR"?.lua;" LUA_CDIR"?\\init.lua"
#define LUA_CPATH_DEFAULT \
".\\?.dll;" LUA_CDIR"?.dll;" LUA_CDIR"loadall.dll"
#else
How do I actually create or call an existing C library in Lua?
An arbitrary library, not written for use by Lua? You generally can't.
A Lua consumable "module" must be linked against the Lua API -- the same version as the host interpreter, such as Lua5.1.dll in the root of the Wireshark directory -- and expose a C-callable function matching the lua_CFunction signature. Lua can load the library and call that function, and it's up to that function to actually expose functionality to Lua using the Lua API.
Your zlib and/or base64 libraries know nothing about Lua. If you had a Lua interpreter with a built-in FFI, or you found a FFI Lua module you could load, you could probably get this to work, but it's really more trouble than it's worth. Writing a Lua module is actually super easy, and you can tailor the interface to be more idiomatic for Lua.
I don't have zlib or a base64 C library handy, so for example's sake let's say we want to let our Lua script use the MessageBox function from the user32.dll library on Windows.
#include <windows.h>
#include "lauxlib.h"

static int luaMessageBox(lua_State* L) {
    const char* message = luaL_checkstring(L, 1);
    MessageBox(NULL, message, "", MB_OK);
    return 0;
}

int __declspec(dllexport) __cdecl luaopen_messagebox(lua_State* L) {
    lua_register(L, "msgbox", luaMessageBox);
    return 0;
}
To build this, we need to link against user32.dll (which contains MessageBox) and lua5.1.dll (which contains the Lua API). You can get Lua5.1.lib from the Wireshark source. Here's using Microsoft's compiler to produce messagebox.dll:
cl /LD /Ilua-5.1.4/src messagebox.c user32.lib lua5.1.lib
Now your Lua scripts can write:
require "messagebox"
msgbox("Hello, World!")
Your only option is to use an FFI library like alien. See my answer to Disabling Desktop Composition using Lua Scripting for other FFI libraries.

C++ Eclipse OpenCV : .exe file and binaries generated, but no image displayed

Here's my code (the first DisplayImage.cpp example in the OpenCV documentation):
/*
 * DisplayImage.cpp
 *
 * Created on: Dec 25, 2011
 * Author: Arcturus
 */
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    if (argc != 2) {               // check argc before touching argv[1]
        cout << "no image data";
        return -1;
    }
    Mat image = imread(argv[1], 1);
    if (!image.data) {
        cout << "no image data";
        return -1;
    }
    namedWindow("Display Image", CV_WINDOW_AUTOSIZE);
    imshow("Display Image", image);
    waitKey(10000);
    return 0;
}
The build completes; the executable and binaries are generated.
I have my image, blackbuck.bmp, in the DisplayImage Debug folder. To run the code, I go to Run > Run Configurations, select the DisplayImage Debug exe file, key in blackbuck.bmp (I also tried an absolute path) and run it.
At the top of the console I get the message "DisplayImage Debug", and no image is displayed at all. What could be wrong here?
I am running it in Eclipse, using CDT.
Thank you for your time!
EDIT: Problem solved! I had to copy all the DLL files from the library folder into the folder where my executable was being generated. I still do not understand why, though; after all, the linker was already linking against the library folder containing all the DLLs. If someone could explain this, it would be a great help for future debugging. Thank you karl and mevotron for your time :)
EDIT 2: From the MSDN website:
"A potential disadvantage to using DLLs is that the application is not self-contained; it depends on the existence of a separate DLL module. The system terminates processes using load-time dynamic linking if they require a DLL that is not found at process startup and gives an error message to the user. The system does not terminate a process using run-time dynamic linking in this situation, but functions exported by the missing DLL are not available to the program."
I think this answers my question. Perhaps this means Eclipse uses load-time dynamic linking.
How did you compile OpenCV with MinGW (i.e., what were your BUILD_TYPE and SSE* options set to during the CMake configuration)? I ask because there is a known bug with SSE optimizations that causes highgui operations to crash in MinGW-built versions. See my other SO answer here.
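If that bug is what you are hitting, one workaround is to reconfigure with the SSE optimizations turned off and rebuild; the option names below come from older OpenCV CMake configurations and may vary by version:
cmake -D CMAKE_BUILD_TYPE=RELEASE -D ENABLE_SSE=OFF -D ENABLE_SSE2=OFF ..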
