I've created a sample library with Bazel: https://github.com/rynz/test-app
How do I go about building this library in a NodeJS C++ Addon?
Can I build a NodeJS addon with Bazel? Else, what are the steps to include a Bazel library with node-gyp?
Cheers,
Ryan
Node.js addons are encapsulated on their own side of the boundary; you have to make the connections yourself. To work with an addon you need to convert all JavaScript arguments to C/C++ values, so you can work with them.
N-API string example:
napi_value argv[1];
size_t argc = 1;
napi_get_cb_info(env, info, &argc, argv, NULL, NULL); // fetch the JS arguments (info is the napi_callback_info)

char str[LENGTH]; // destination buffer; LENGTH is whatever capacity you need
size_t copied;    // receives the number of bytes actually copied
napi_get_value_string_utf8(
env //environment
, argv[0] //napi_value representing a js string
, str //(c/c++) buffer to store the utf8 string
, sizeof(str) //dest buffer size, including the terminating NUL
, &copied //out: length of the copied string, excluding the NUL
);
//now you can work with str in c/c++
//Do your stuff here
After the addon has done its work, you need to convert the C/C++ return values back to JavaScript values.
Returning the string:
napi_value result;
napi_create_string_utf8(
env
, str //(c/c++) buffer to convert to a napi string
, NAPI_AUTO_LENGTH //buffer size; NAPI_AUTO_LENGTH if it's null-terminated
, &result //the resulting napi string
);
//now you can return result as a napi_value
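The length bookkeeping is the part that most often goes wrong: the buffer size you pass in must include room for the terminating NUL, while the copied length reported back excludes it. As a plain-C illustration of that contract (copy_utf8 is my own stand-in for demonstration, not a Node API):

```c
#include <stddef.h>
#include <string.h>

/* Mimics napi_get_value_string_utf8's copy contract: copy at most
 * bufsize - 1 bytes, always NUL-terminate, and report the number of
 * bytes copied (excluding the NUL) through `copied`. */
static void copy_utf8(const char *src, char *buf, size_t bufsize,
                      size_t *copied) {
    if (bufsize == 0) {
        if (copied) *copied = 0;
        return;
    }
    size_t n = strlen(src);
    if (n > bufsize - 1)
        n = bufsize - 1; /* truncate to fit the buffer */
    memcpy(buf, src, n);
    buf[n] = '\0';
    if (copied)
        *copied = n;
}
```

With a 4-byte buffer, copying "hello" yields "hel" and a reported length of 3; that truncate-and-terminate behavior is what the N-API docs describe for the real call.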
Check the complete docs here https://nodejs.org/api/n-api.html
I am writing a Dart package (not Flutter). I have included a few bitmap images as public assets, e.g., lib/assets/empty.png. When this package is running as a command-line app for an end-user, how can I get the file path to these assets on the user's system?
Use-case: My Dart package calls out to FFMPEG, and I need to tell FFMPEG where to find these asset files on the system that's using my package. For example, the call to FFMPEG might look like:
ffmpeg -i "path/to/lib/assets/empty.png" ...
Accessing a Dart package's assets can happen in two modalities:
Running a Dart CLI app with the dart tool and accessing a dependency's assets, or
Running an executable CLI app
The difference between these two situations is that when you're running a CLI app with the dart tool, all of your dependencies are available as structured packages in a local cache on your system. However, when you're running an executable, all relevant code is compiled into a single binary, which means you no longer have access at runtime to your dependencies' packages; you only have access to your dependencies' tree-shaken, compiled code.
Accessing assets when running with dart
The following code will resolve a package asset URI to a file system path.
import 'dart:cli';      // for waitFor
import 'dart:io';       // for File
import 'dart:isolate';  // for Isolate.resolvePackageUri

final packageUri = Uri.parse('package:your_package/your/asset/path/some_file.whatever');
final future = Isolate.resolvePackageUri(packageUri);
// waitFor is strongly discouraged in general, but it is accepted as the
// only reasonable way to load package assets outside of Flutter.
// ignore: deprecated_member_use
final absoluteUri = waitFor(future, timeout: const Duration(seconds: 5));
final file = File.fromUri(absoluteUri);
if (file.existsSync()) {
return file.path;
}
This resolution code was adapted from Tim Sneath's winmd package: https://github.com/timsneath/winmd/blob/main/lib/src/metadatastore.dart#L84-L106
Accessing assets when running an executable
When compiling a client app to an executable, that client app simply cannot access any asset files that were shipped with the dependency package. However, there is a workaround that may work for some people (it did for me): you can store Base64-encoded versions of your assets in your Dart code, within your package.
First, encode each of your assets into a Base64 string and store those strings somewhere in your Dart code.
const myAsset = "iVBORw0KGgoAAA....kJggg==";
Then, at runtime, decode the string back to bytes, and then write those bytes to a new file on the local file system. Here's the method I used in my case:
/// Writes this asset to a new file on the host's file system.
///
/// The file is written to [destinationDirectory], or the current
/// working directory, if no destination is provided.
///
/// Note: `fileName` and `base64encoded` are fields on the asset class
/// this method belongs to.
String inflateToLocalFile([Directory? destinationDirectory]) {
final directory = destinationDirectory ?? Directory.current;
final file = File(directory.path + Platform.pathSeparator + fileName);
file.createSync(recursive: true);
final decodedBytes = base64Decode(base64encoded);
file.writeAsBytesSync(decodedBytes);
return file.path;
}
This approach was suggested by @passsy.
Have a look at the dcli package.
It has a 'pack' command designed to solve exactly this problem.
It encodes assets into dart files that can be unpacked at runtime.
I'm trying to render a PDF document on Android within a Mono for Android application. I'm using the MuPDF library, which is written in C, and I have a problem invoking one C function. What I get:
System.EntryPointNotFoundException: fz_pixmap_samples
C function:
unsigned char *fz_pixmap_samples(fz_context *ctx, fz_pixmap *pix)
{
if (!pix)
return NULL;
return pix->samples;
}
My C# wrapper:
public class APV
{
    [DllImport("libmupdf.so", EntryPoint = "fz_pixmap_samples", CallingConvention = CallingConvention.Cdecl)]
    private static extern IntPtr fz_pixmap_samples(IntPtr ctx, IntPtr pix);

    public static IntPtr GetSamples(IntPtr ctx, IntPtr pix)
    {
        return fz_pixmap_samples(ctx, pix);
    }
}
The way I'm calling GetSamples:
APV.GetSamples(context, pix);
The function fz_pixmap_samples(fz_context *ctx, fz_pixmap *pix) should return a pointer to the bitmap data. I'm assuming that mapping unsigned char * to IntPtr is not correct? Could anyone help?
System.EntryPointNotFoundException: fz_pixmap_samples
means that the library does not export a function named fz_pixmap_samples. Most likely there is some name decoration, which means the function is exported under a different name.
The first thing to do is to remove the EntryPoint argument, which will allow the managed code to look for decorated names.
If that doesn't get it done, then you need to study the .so library file (e.g. with nm) to find out exactly what name is used to export the function, and use that name in your p/invoke declaration.
I know it's old, but for those still looking, here's how we solved it:
fz_pixmap_samples wasn't actually exposed (exported) in the 1.8 version of the .so files we were using. If you run nm on it, you'll see it isn't exported. That's why there is a runtime error when trying to use it.
So we had to go to the MuPDF website, get the project and source, make a change, and recompile it. I know, it's a pain. It seemed to be the only answer.
We had to go to muPDF.c inside the source/platform/android/jni folder, and in there call fz_pixmap_samples(NULL, NULL) inside one of the methods that has the JNI export call. Just calling fz_pixmap_samples(NULL, NULL) in there will expose the symbol in the .so file when you recompile it.
To recompile MuPDF, follow the instructions provided in the mupdf project for recompiling for Android. They are good instructions.
This is a beginner question, since I am new to iOS(I started it today), so please pardon my ignorance and lack of iOS knowledge.
After building and successfully using FFmpeg for Android, I wanted to do the same for iOS.
So I built FFmpeg successfully for iOS by following this link, but after all that pain I am confused about how to use FFmpeg on iOS. I mean, how can I pass command-line arguments to the libffmpeg.a file?
I am assuming that there must be a way to run the .a file as an executable, pass it command-line arguments, and let FFmpeg do the magic; I did the same on Android and it worked beautifully.
I am also aware that I can use the ffmpeg.c file and call its main method, but the question remains: how do I pass those command-line arguments?
Is there something I am supposed to be aware of here? Is my current approach correct, or am I falling short?
I wanted to mix two audio files, so the command for doing that would be ffmpeg -i firstSound.wav -i secondSound.wav -filter_complex amix=inputs=2:duration=longest finalOutput.wav. How do I do the same on iOS?
Can someone please shed some light on this?
You don't pass arguments to a .a file, as it's a library file. It's something you build your application with, giving you access to the functions provided by the ffmpeg library. I'm not sure what the state of play with Android is, but it's likely generating a command-line executable instead.
Have a look at the ffmpeg documentation; there's probably a way to do what you want with the library. However, building and running ffmpeg as a standalone, pass-in-arguments binary on iOS is unlikely to work.
You can do it in your main.c. Of course you wouldn't hardcode the arguments; these are just for illustration.
I assume you're using ffmpeg for playback, since you're playing with iframeextractor. What is the actual goal of what you're trying to do?
/* Called from main */
int main(int argc, char **argv)
{
    int flags, i;

    /*
    argv[1] = "-fs";
    argv[2] = "-skipframe";
    argv[3] = "30";
    argv[4] = "-fast";
    argv[5] = "-sync";
    argv[6] = "video";
    argv[7] = "-drp";
    argv[8] = "-skipidct";
    argv[9] = "10";
    argv[10] = "-skiploop";
    argv[11] = "50";
    argv[12] = "-threads";
    argv[13] = "5";
    // an optional "-an" flag could go in here too, shifting the URL up one slot
    argv[14] = "http://172.16.1.33:63478/hulu-f4fa0821-767a-490a-8cb5-f03788760e31/1-hulu-f4fa0821-767a-490a-8cb5-f03788760e31.mpg";
    argc += 14;
    */

    /* register all codecs, demuxers and protocols */
    avcodec_register_all();
    avdevice_register_all();
    av_register_all();

    parse_options(argc, argv, options, opt_input_file);
    /* ... */
}
In a standalone application, the following code is OK:
CvMat src_image_mat;
cvInitMatHeader(&src_image_mat, 1, src_image_data.size(),
                CV_8U, (void *)src_image_data.c_str());
m_pSrcImage = cvDecodeImage(&src_image_mat, 0);
where src_image_data contains all the bytes of a given JPEG file; after the call, m_pSrcImage is not NULL.
But when this code is run in a CGI program, cvDecodeImage returns NULL, even though src_image_data is the same as in the standalone application.
By the way: in the CGI context, the picture is uploaded by users.
Make sure the file has already been read into memory at pData, with size nSize:
CvMat cm = cvMat(1, nSize, CV_8UC1, pData);
pImage = cvDecodeImage(&cm);
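Since the same bytes decode fine in the standalone build, it's worth verifying that the CGI process really received the complete upload before blaming cvDecodeImage; a short or text-mode read of the POST body would hand the decoder invalid JPEG data. A small sketch of reading an exact byte count (read_exact is my own helper name, not part of OpenCV):

```c
#include <stdio.h>
#include <stdlib.h>

/* Reads exactly `size` bytes from `stream` into a freshly allocated buffer.
 * Returns NULL on allocation failure or on a short read, so callers can
 * detect a truncated upload before passing the bytes to cvDecodeImage. */
static unsigned char *read_exact(FILE *stream, size_t size) {
    unsigned char *buf = (unsigned char *)malloc(size);
    if (buf == NULL)
        return NULL;
    if (fread(buf, 1, size, stream) != size) {
        free(buf);
        return NULL;
    }
    return buf;
}
```

On Windows, also remember to put stdin into binary mode before reading the body; in text mode, CR/LF translation will corrupt JPEG bytes.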
I have downloaded and installed KaZip 2.0 on C++Builder 2009 (with minor changes: I only changed the type String to AnsiString). I have written:
KAZip1->FileName = "test.zip";
KAZip1->CreateZip("test.zip");
KAZip1->Active = true;
KAZip1->Entries->AddFile("pack\\text.txt","xxx.txt");
KAZip1->Active = false;
KAZip1->Close();
Now it creates a test.zip containing xxx.txt (59 bytes original, 21 bytes packed). I can open the archive in WinRAR successfully, but when I try to open xxx.txt, WinRAR says the file is corrupt. :(
What is wrong? Can somebody help me?
Extracting doesn't work either, presumably because the file is corrupt?
KAZip1->FileName = "test.zip";
KAZip1->Active = true;
KAZip1->Entries->ExtractToFile("xxx.txt","zzz.txt");
KAZip1->Active = false;
KAZip1->Close();
with little minor changes => only set
type String to AnsiString
Use RawByteString instead of AnsiString.
I have no idea how KaZip 2.0 is implemented, but in general, to make a Delphi/C++ library that was designed without Unicode support in mind work properly, you need to do two things:
Replace all Char with AnsiChar and all String with AnsiString.
Replace all Win API calls with their Ansi variants, i.e. replace AWin32Function with AWin32FunctionA.
In Delphi < 2009, Char = AnsiChar, String = AnsiString, and AWin32Function = AWin32FunctionA; but in Delphi >= 2009, by default, Char = WideChar, String = UnicodeString, and AWin32Function = AWin32FunctionW.
WinRAR could simply be failing to recognize the header. Try opening the archive in Windows or some other zip program.
with little minor changes => only set
type String to AnsiString
That doesn't always work correctly. It may compile, but that doesn't mean it will work right in D2009 or CB2009. You need to show the places where you convert Strings to AnsiStrings, especially the code dealing with buffers, streams, and I/O.
It's not surprising that your code is wrong; KaZip has no documentation.
Proper code is:
//Create a new empty zip file
KAZip1->CreateZip("test.zip");
//Open our newly created zip file so we can add files to it
KAZip1->Open("test.zip");
//Compress text.txt into xxx.txt
KAZip1->Entries->AddFile("pack\\text.txt","xxx.txt");
//Close the file stream
KAZip1->Close();