I'm trying to render a PDF document on Android within a Mono for Android application. I'm using the MuPDF library written in C and have a problem invoking one C function. Here is what I get:
System.EntryPointNotFoundException: fz_pixmap_samples
C function:
unsigned char *fz_pixmap_samples(fz_context *ctx, fz_pixmap *pix)
{
    if (!pix)
        return NULL;
    return pix->samples;
}
My C# wrapper:
public class APV
{
[DllImport("libmupdf.so", EntryPoint = "fz_pixmap_samples", CallingConvention = CallingConvention.Cdecl)]
private static extern IntPtr fz_pixmap_samples(IntPtr ctx, IntPtr pix);
public static IntPtr GetSamples(IntPtr ctx, IntPtr pix)
{
return fz_pixmap_samples(ctx, pix);
}
}
The way I'm calling GetSamples:
APV.GetSamples(context, pix);
The function fz_pixmap_samples(fz_context *ctx, fz_pixmap *pix) should return a pointer to the bitmap data. I'm assuming the mapping of unsigned char * to IntPtr is not correct? Could anyone help?
System.EntryPointNotFoundException: fz_pixmap_samples
means that the library does not export a function named fz_pixmap_samples. Most likely there is some name decoration that means that the function is exported with a different name.
The first thing to do is to remove the EntryPoint argument which will allow the managed code to look for decorated names.
If that doesn't get it done then you need to study the .so library file to find out exactly what name is used to export the function. And use that in your p/invoke declaration.
I know it's old, but for those still looking, we solved it as follows:
fz_pixmap_samples wasn't actually exposed (exported) in the 1.8 version of the .so files we were using. If you run nm on it, you'll see it isn't exported. That's why there is the runtime error when trying to use it.
So we had to go to the muPDF website, get the project and source, and make a change and recompile it. I know, it's a pain. Seemed to be the only answer.
We had to go to muPDF.c inside the source/platform/android/jni folder and, in there, call fz_pixmap_samples(NULL, NULL) inside one of the methods that has the JNI export. Just calling fz_pixmap_samples(NULL, NULL) there will expose it in the .so file when you recompile it.
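A minimal sketch of what that dummy reference might look like (the JNI function name below is purely illustrative; in practice you add the call to one of the methods already present in mupdf.c):

/* Inside platform/android/jni/mupdf.c, in any method that already has a JNI export.
   The function name here is made up for illustration. */
JNIEXPORT void JNICALL
Java_com_example_MuPDFCore_someExistingMethod(JNIEnv *env, jobject thiz)
{
    /* Dummy reference so fz_pixmap_samples is kept and exported in libmupdf.so. */
    fz_pixmap_samples(NULL, NULL);

    /* ... original body of the method ... */
}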
To recompile muPDF, follow the instructions that are provided in the mupdf project for recompiling for android. They are good instructions.
In reading the documentation for Zig, I was under the impression that Zig could compile both C and C++ code. Consequently, I thought you could import a C++ file's header via @cImport and have zig build succeed. However, I can't seem to get this to work for a C++ library integration.
I first create my project, zig init-lib, and then add my import to src/main.zig via the @cImport directive. Specifically, I @cInclude("hooks/hooks.h") the C++ header file of this library. If I attempt to zig build at this point, the build fails, unable to find the header. I fix this by modifying build.zig to lib.addIncludeDir("/usr/include/library").
Since this C++ library is now being parsed and uses the C++ standard library, the next error I get when I zig build is that the stdexcept header is not found. To fix this, I modify build.zig to lib.linkSystemLibrary("c++").
Lastly, the error I'm stuck on now is an assortment of errors in /path/to/zig-linux-x86_64-0.9.1/lib/libcxx/include/<files>: things like unknown type name '__LIBCPP_PUSH_MACROS', unknown type name 'namespace', or unknown type name 'template'.
Googling this, the only thing of partial relevance I could find was that this happens because clang interprets .h files as C files by default, and C obviously doesn't have the namespace or template keywords, but I don't know what to do with that knowledge. (Related: LLVM on MacOs - unknown type name 'template' in standard file iosfwd.)
Does anyone have any insight as to how to actually integrate with a C++ (not pure C) library through zig?
Specifically, I @cInclude("hooks/hooks.h") the C++ header file of this library.
@cImport() is for translating C header files into Zig so they can be used without writing bindings. Unfortunately, it does not support C++ headers. To use a C++ library, you'll have to write C bindings for it and then @cImport() those headers.
// src/bindings.cpp
#include <iostream>
extern "C" void doSomeCppThing(void) {
std::cout << "Hello, World!\n";
}
// src/bindings.h
void doSomeCppThing(void);
// build.zig
const std = @import("std");

pub fn build(b: *std.build.Builder) void {
    const target = b.standardTargetOptions(.{});
    const mode = b.standardReleaseOptions();

    const exe = b.addExecutable("tmp", "src/main.zig");
    exe.setTarget(target);
    exe.setBuildMode(mode);
    exe.linkLibC();
    exe.linkSystemLibrary("c++");
    exe.addIncludeDir("src");
    exe.addCSourceFile("src/bindings.cpp", &.{});
    exe.install();
}
// src/main.zig
const c = @cImport({
    @cInclude("bindings.h");
});

pub fn main() !void {
    c.doSomeCppThing();
}
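One small optional refinement (my own addition, not part of the original layout): if you also #include "bindings.h" from bindings.cpp, so the declaration and the definition can't drift apart, guard the prototype so that a C++ compiler sees it as a C symbol while Zig's @cImport still reads it as plain C:

// src/bindings.h
#ifdef __cplusplus
extern "C" {
#endif

void doSomeCppThing(void);

#ifdef __cplusplus
}
#endif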
I would like to include a Kotlin file that only performs data processing and network operations in an existing iOS project, while keeping native iOS UI code.
While I thought that this may be achievable with Kotlin/Native, the iOS samples (1,2) that I found that use Kotlin/Native seem to take over the iOS UI code as well.
Is including a Kotlin file for data transfer in iOS possible with Kotlin/Native without touching the UI code, and if so, what are the steps to do so?
Yes, it is possible in a cross-platform project to transfer data between Kotlin and native iOS UI code by using Kotlin/Native. This allows you to have a common code base for the data model in Kotlin, while, for example, continuing to use native UI code for iOS.
The original proof:
The project https://github.com/justMaku/Kotlin-Native-with-Swift pointed me in the right direction, since it shows the essential steps to do so:
In a Swift UIViewController, it calls a wrapper function that shall receive a string from a Kotlin function. The call is mediated through a C++ layer, which itself starts the Kotlin runtime, passes the request to a Kotlin function, receives the string from it, and passes it back to the Swift UIViewController, which then displays it.
On the technical level, the project contains a script that compiles the Kotlin, C++, and Kotlin/Native part into a static library, which then can be called from the native iOS project.
To get the code to run, I had (after cloning from git) to perform a "git submodule sync" before running "./setup.sh".
To transfer data with a data model based on Kotlin, I would like to have a generic function that can pass data to Kotlin, modify that data, and return the result back to the native iOS code. As a proof of principle that such a function can be built, I extended the project to not only receive a string from Kotlin, but also send one to Kotlin, append to it, and send the result back.
Extension of the project:
Since there were some roadblocks in this seemingly simple extension, I lay out the steps for anybody interested. If you follow along, you should end up with the resulting message displayed on screen.
The text may be stupid, but it tells you what happens.
The changes in ViewController.swift in the function viewDidAppear are:
let swiftMessage: String = "Hello Kotlin, this is Swift!"
let cStr = swiftMessage.cString(using: String.Encoding.utf8)
if let retVal = kotlin_wrapper(cStr) {
let string = String(cString: retVal)
...
}
You see the text that Swift sends to Kotlin via the wrapper function (in the end, the resulting 'string' variable will be displayed). One could pass the Swift String directly to the wrapper, but I wanted to highlight that the wrapper treats the input and output as C strings. Indeed, the file Kotlin Native-Bridging-Header.h inside the native iOS project now becomes:
extern const char* kotlin_wrapper(const char* swiftMessage);
On it goes to the file Launcher.cpp. Since the original file used a KString as the result value of kotlin_main, I tried for some time to convert the const char* to a KString and pass that to kotlin_main. In the end I found that it is much simpler to pass the const char* variables directly to Kotlin and do the conversion there, with the functions that Kotlin/Native gives us.
My Launcher.cpp then became more compact than the original. Here is the complete file:
#include "Memory.h"
#include "Natives.h"
#include "Runtime.h"
#include "KString.h"
#include <stdlib.h>
#include <string>
extern "C" const char* kotlin_main(const char* swiftMessageChar);
extern "C" const char* kotlin_wrapper(const char* swiftMessageChar) {
    RuntimeState* state = InitRuntime();
    if (state == nullptr) {
        return "Failed to initialize the kotlin runtime";
    }
    const char* exitMessage = kotlin_main(swiftMessageChar);
    DeinitRuntime(state);
    return exitMessage;
}
You see how the wrapper first starts the Kotlin runtime and then calls the function kotlin_main, which resides in the file kotlin.kt:
import konan.internal.ExportForCppRuntime
import kotlinx.cinterop.CPointer
import kotlinx.cinterop.ByteVar
import kotlinx.cinterop.cstr
import kotlinx.cinterop.nativeHeap
import kotlinx.cinterop.toKString
@ExportForCppRuntime
fun kotlin_main(cPtr: CPointer<ByteVar>): CPointer<ByteVar> {
    val swiftMessage = cPtr.toKString()
    val kotlinMessage = "Hello Swift, I got your message: '$swiftMessage'."
    val returnPtr = kotlinMessage.cstr.getPointer(nativeHeap)
    return returnPtr
}
The pointer is converted to a Kotlin String, and then used in the creation of the kotlinMessage (the example of a data transformation). The result message is then transformed back to a pointer, and passed through the wrapper back to the Swift UIViewController.
Where to go from here?
In principle, one could use this framework without touching the C++ layer again. Just define pack and unpack functions that pack arbitrary data types into a string and unpack the string into the respective data type on the other side. Such pack and unpack functions have to be written only once per language, and can be reused across projects if written generically enough. In practice, I would probably first rewrite the above code to pass binary data, and then write the pack and unpack functions to transform arbitrary data types to and from binary data.
You can use Kotlin as a framework if you want: the Kotlin code stays in a framework file, so you can share some common code between Android and iOS without writing your complete iOS app in Kotlin.
Use Gradle to build your Kotlin code into an Objective-C/Swift compatible framework.
In your build.gradle file:
buildscript {
    ext.kotlin_native_version = '0.5'
    repositories {
        mavenCentral()
        maven {
            url "https://dl.bintray.com/jetbrains/kotlin-native-dependencies"
        }
    }
    dependencies {
        classpath "org.jetbrains.kotlin:kotlin-native-gradle-plugin:$kotlin_native_version"
    }
}

group 'nz.salect'
version '0.1'

apply plugin: "konan"

konan.targets = ["iphone", "iphone_sim"]

konanArtifacts {
    framework('nativeLibs')
}
It will generate two .framework files, one for the simulator and the other for the actual device. Put the framework in your project and link it as you would any other third-party framework.
Cmd: ./gradlew build
Note: Every time you change your Kotlin files, rebuild and replace the framework file as well (you can create a shell script and add it to the build phases to do that automatically).
Cheers !!!
I'm trying to extract information from C++ source code.
One piece of information is a field's type.
Given source code like the snippet below, I want to extract the type of info at the point where info.call() is called.
Info info;
//skip
info.call(); //<- from here
By writing a visitor that visits IASTName nodes, I tried to extract the type info as shown below.
public class CDTVisitor extends ASTVisitor {
    public CDTVisitor(boolean visitNodes) {
        super(true);
    }

    public int visit(IASTName node){
        if(node.resolveBinding().getName().toString().equals("info"))
            System.out.println(((IField)node.getBinding()).getType());
            // this does not work properly.
            // result is "org.eclipse.cdt.internal.core.dom.parser.ProblemType@86be70a"
        return 3; // 3 == ASTVisitor.PROCESS_CONTINUE
    }
}
Assuming the code is in fact valid, a variable's type resolving to a ProblemType is an indication of a configuration problem in whatever tool or plugin is running this code, or in the project/workspace containing the code on which it is run.
In this case, the type of the variable info is Info, which is presumably a class or structure type, or a typedef. To resolve it correctly, CDT needs to be able to see the declaration of this type.
If this type is not declared in the same file that's being analyzed, but rather in a header file included by that file, CDT needs to use the project's index to find the declaration. That means:
The AST must be index-based. For example, if using ITranslationUnit.getAST to create the AST, the overload that takes an IIndex parameter must be used, and a non-null argument must be provided for it.
Since an IIndex is associated with a CDT project, the code being analyzed needs to be part of a CDT project, and the project needs to be indexed.
In order for the indexer to resolve #include directives correctly, the project's include paths need to be configured correctly, so that the indexer can actually find the right header files to parse.
Any one of these not being the case can lead to a type resolving to a ProblemType.
Self response.
The reason I couldn't get a proper binding object was the type of AST I used.
When parsing C++ source code, I should have used ICPPASTTranslationUnit.
In code not shown here, I had used IASTTranslationUnit as the type of the AST.
After switching from IASTTranslationUnit to ICPPASTTranslationUnit, the problem was solved.
Yes, I figured it out! Here is the entire code, which can index all files in the "src" folder of a C++ project and output the resolved type binding for all code expressions, including the return value of low-level APIs such as memcpy. Note that the project variable in the following code is created by programmatically importing an existing, manually configured C++ project. I often manually create an empty C++ project and programmatically import it as a general project (once imported, Eclipse will automatically detect the project type and complete the relevant CPP project configuration). This is much more convenient than creating and configuring a C++ project from scratch programmatically. When importing the project, it's better not to copy the project or its containment structures into the workspace, because this may lead to infinitely copying the same project into a subfolder (infinite folder depth).
The code works in the Eclipse 2021-12 version. I downloaded the Eclipse for C/C++ package and installed the plug-in development and JDT plugins. Then I created an Eclipse plugin project and extended the "org.eclipse.core.runtime.applications" extension point. In other words, it is an Eclipse application plugin project which can use nearly all features of Eclipse but does not start Eclipse's graphical interface (UI). You should add all CDT-related non-UI plugins as dependencies, because newer versions of Eclipse no longer automatically add missing plugins.
ICProject cproject = CoreModel.getDefault().getCModel().getCProject(project.getName());
// this code creates an index for the entire project.
IIndex index = CCorePlugin.getIndexManager().getIndex(cproject);
IFolder folder = project.getFolder("src");
IResource[] rcs = folder.members();
// iterate all source files in the src folder and visit all expressions to print the resolved type binding.
for (IResource rc : rcs) {
    if (rc instanceof IFile) {
        IFile f = (IFile) rc;
        ITranslationUnit tu = (ITranslationUnit) CoreModel.getDefault().create(f);
        index.acquireReadLock(); // we need a read-lock on the index
        ICPPASTTranslationUnit ast = null;
        try {
            ast = (ICPPASTTranslationUnit) tu.getAST(index, ITranslationUnit.AST_SKIP_INDEXED_HEADERS);
        } finally {
            index.releaseReadLock();
        }
        if (ast != null) {
            // pass true so the visitor's shouldVisit* flags are set and expressions are actually visited
            ast.accept(new ASTVisitor(true) {
                @Override
                public int visit(IASTExpression expression) {
                    // get the resolved type binding of the expression.
                    IType etp = expression.getExpressionType();
                    System.out.println("IASTExpression type:" + etp + "#expr_str:" + expression.toString());
                    return super.visit(expression);
                }
            });
        }
    }
}
I have created a Wireshark dissector in Lua for an application over TCP. I am attempting to use zlib compression and base64 decoding. How do I actually create or call an existing C library in Lua?
The documentation I have seen just says that you can get the libraries and use either the require() call or the luaopen_ call, but not how to actually make the program find and recognize the actual library. All of this is being done on Windows.
You can't load an arbitrary existing C library, one that was not created for Lua, with plain Lua. At least, it's not trivial.
The *.so/*.dll must follow a specific standard, which is covered in Programming in Lua, chapter 26.2, and on the lua-users wiki (code sample). A similar question is answered here as well.
There are two ways you could solve your problem:
Writing your own Lua zlib library wrapper, following those standards (see the sketch after this list).
Taking some already finished solution:
zlib at luapower
lua-zlib
ffi
A bigger list at the lua-users wiki
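If you go the first route and write your own wrapper, a minimal sketch of such a module might look like this (assuming Lua 5.1 and the zlib headers/libraries are available when you build the DLL; the module name zlibwrap and the entry point luaopen_zlibwrap are just illustrative, and uncompress could be wrapped the same way for decompressing data in your dissector):

/* zlibwrap.c -- minimal sketch of a Lua 5.1 module wrapping zlib's compress().
   Build as zlibwrap.dll and link against lua5.1 and zlib. */
#include <stdlib.h>
#include <lua.h>
#include <lauxlib.h>
#include <zlib.h>

/* zlibwrap.compress(s) -> compressed string, or nil plus an error message */
static int l_compress(lua_State *L) {
    size_t srcLen;
    const char *src = luaL_checklstring(L, 1, &srcLen);
    uLongf dstLen = compressBound((uLong)srcLen);
    unsigned char *dst = malloc(dstLen);
    if (dst == NULL)
        return luaL_error(L, "out of memory");
    if (compress(dst, &dstLen, (const Bytef *)src, (uLong)srcLen) != Z_OK) {
        free(dst);
        lua_pushnil(L);
        lua_pushstring(L, "compression failed");
        return 2;
    }
    lua_pushlstring(L, (const char *)dst, dstLen);
    free(dst);
    return 1;
}

static const luaL_Reg funcs[] = {
    {"compress", l_compress},
    {NULL, NULL}
};

/* The entry point Lua looks for when a script calls require("zlibwrap"). */
__declspec(dllexport) int luaopen_zlibwrap(lua_State *L) {
    luaL_register(L, "zlibwrap", funcs);  /* Lua 5.1 style registration */
    return 1;
}

From the dissector you would then write local zlibwrap = require("zlibwrap") and call zlibwrap.compress(data).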
The same applies to base64 encoding/decoding. The only difference is that there are already plain-Lua libraries for that. Code samples and a couple of links are available at the lua-users wiki.
NOTE: Lua module package managers like LuaRocks or LuaDist might save you plenty of time.
Also, simply loading a Lua module usually consists of one line:
local zlib = require("zlib")
The module would be searched for in the places defined in your Lua interpreter's luaconf.h file.
For Lua 5.1 it's:
#if defined(_WIN32)
/*
** In Windows, any exclamation mark ('!') in the path is replaced by the
** path of the directory of the executable file of the current process.
*/
#define LUA_LDIR "!\\lua\\"
#define LUA_CDIR "!\\"
#define LUA_PATH_DEFAULT \
".\\?.lua;" LUA_LDIR"?.lua;" LUA_LDIR"?\\init.lua;" \
LUA_CDIR"?.lua;" LUA_CDIR"?\\init.lua"
#define LUA_CPATH_DEFAULT \
".\\?.dll;" LUA_CDIR"?.dll;" LUA_CDIR"loadall.dll"
#else
How do I actually create or call an existing C library in Lua?
An arbitrary library, not written for use by Lua? You generally can't.
A Lua consumable "module" must be linked against the Lua API -- the same version as the host interpreter, such as Lua5.1.dll in the root of the Wireshark directory -- and expose a C-callable function matching the lua_CFunction signature. Lua can load the library and call that function, and it's up to that function to actually expose functionality to Lua using the Lua API.
Your zlib and/or base64 libraries know nothing about Lua. If you had a Lua interpreter with a built-in FFI, or you found a FFI Lua module you could load, you could probably get this to work, but it's really more trouble than it's worth. Writing a Lua module is actually super easy, and you can tailor the interface to be more idiomatic for Lua.
I don't have zlib or a base64 C library handy, so for example's sake lets say we wanted to let our Lua script use the MessageBox function from the user32.dll library in Windows.
#include <windows.h>
#include "lauxlib.h"
static int luaMessageBox (lua_State* L) {
    const char* message = luaL_checkstring(L,1);
    MessageBox(NULL, message, "", MB_OK);
    return 0;
}

int __declspec(dllexport) __cdecl luaopen_messagebox (lua_State* L) {
    lua_register(L, "msgbox", luaMessageBox);
    return 0;
}
To build this, we need to link against user32.dll (contains MessageBox) and lua5.1.dll (contains the Lua API). You can get Lua5.1.lib from the Wireshark source. Here's using Microsoft's compiler to produce messagebox.dll:
cl /LD /Ilua-5.1.4/src messagebox.c user32.lib lua5.1.lib
Now your Lua scripts can write:
require "messagebox"
msgbox("Hello, World!")
Your only option is to use a library like alien. See my answer to Disabling Desktop Composition using Lua Scripting for other FFI libraries.
I have a DLL that I have ported from VC2008 to C++ Builder XE2. The DLL is used in LabVIEW's TestStand.
TestStand, when importing the VC2008 DLL, can see the function names and their arguments. When using the C++ Builder DLL, all it sees are the function names and not the arguments. All exports are C functions and use extern "C" __declspec( dllexport ).
Is there a way to get the exports correct?
I have read that adding a TLB file will do the job. If this is true, how do I create a TLB that exports only C functions?
TestStand can read a .c/.cpp file and derive parameters from that file. You still load the DLL and select the function you want to call. You then 'verify' the parameters and select the .c/.cpp file in the dialog. TestStand will find the function with the same name and insert the parameters itself.
The file must be very specific: I had to create a dummy .c file that contained the prototypes, as TestStand could not handle the #defines for dllexport and dllimport. It likes a very specific format. For the function:
TESTAPI bool StartTest( long inNumber ) {}
where TESTAPI is either extern "C" __declspec( dllexport ) or extern "C" __declspec( dllimport ), I had to write the line below in my dummy file:
bool __declspec( dllexport ) StartTest( long inNumber ) {}
That does it.
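For reference, a complete dummy prototype file of the kind described above might look like this (the extra function names are purely illustrative; TestStand only parses the signatures, and the file is never compiled into the DLL itself):

/* teststand_protos.c -- dummy prototypes handed to TestStand's parameter verification. */
bool __declspec( dllexport ) StartTest( long inNumber ) {}
bool __declspec( dllexport ) StopTest( void ) {}
double __declspec( dllexport ) ReadMeasurement( long inChannel ) {}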
DLL function parameters cannot be determined from exports alone, unless they are being decorated by the calling convention (which is unusual to do in a DLL). If a TLB (aka a Type Library) solves the problem, then the VC2008 DLL is likely an in-process ActiveX/COM object rather than a flat C DLL. If so, then in C++Builder you can use the IDE wizards on the "File | New" menu to create an "ActiveX Library" project, then a "COM Object" to add to the library. Then you will have a TLB that you can define your object with, and the IDE will generate stub code that you can fill in with your object's implementation.
If that is not what LabVIEW is expecting, then I suggest you contact them and ask. If all it needs is a TLB with flat C functions (which is very unusual, because TLBs are object-oriented), then you can omit the "COM Object" portion and just create an "ActiveX Library" project to get a bare-bones TLB, then add your definitions to it as needed, and then add your exports to the project.
From the reference here:
Avoid using the extern "C" syntax to export symbols. The extern "C" syntax prevents the C/C++ DLL Adapter from obtaining type information for function and method parameters.
A little late to the game, but your problem may be that C++ Builder is decorating the exported function with a leading underscore. The TLIB command line utility should help prove this (assuming tlib still ships with C++Builder)
TLIB mydll.lib, mydll.lst
Look at the resulting lst file and see if it contains StartTest or _StartTest. LabVIEW is probably expecting to find a function without the underscore.
You can add a DEF file to your C++Builder project that will suppress the leading underscore. Try this:
1. Use the __cdecl calling convention instead of __stdcall.
2. Export plain "C" functions. No C++ classes or member functions.
3. Make sure you have an extern "C" {} around your function prototypes (see the sketch after this list).
4. Create a DEF file that aliases the exported functions to a Microsoft-compatible name. Alias the names so they don't contain a leading underscore. The DEF file will look like this:
   EXPORTS
   ; MSVC name = C++Builder name
   StartTest = _StartTest
   Foo = _Foo
   Bar = _Bar
5. Add the DEF file to your BCB DLL project and rebuild it.
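Putting steps 1 through 3 together, the prototypes for the exported functions in the C++Builder DLL might look something like this sketch (StartTest, Foo, and Bar are the same placeholder names used in the DEF file above):

/* exports.h -- plain C exports from the C++Builder DLL, using __cdecl */
#include <stdbool.h>

#ifdef __cplusplus
extern "C" {
#endif

bool __declspec(dllexport) __cdecl StartTest(long inNumber);
void __declspec(dllexport) __cdecl Foo(void);
void __declspec(dllexport) __cdecl Bar(void);

#ifdef __cplusplus
}
#endif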
Check out these ancient articles for more details:
http://bcbjournal.org/articles/vol4/0012/Using_Visual_C_DLLs_with_CBuilder.htm
The reverse article (creating C++Builder DLLs that get called from VC++ created applications) is buried in this archive:
http://www.frasersoft.net/program/bcbdev.zip : /articles/bcbdll.htm. It describes the DEF file trick in more detail, plus some other options.
Note that my answer is based on the way thing were in 1998 or so. They may have changed since then. If they have, then the C++Builder command line tools impdef, tlib, tdump, plus the Microsoft equivalents of those tools, should be able to show you exactly what is in your DLL vs the MSVC one.
H^2
I suggest using an ActiveX object: you can create an automation object in C++Builder, and in LabVIEW / TestStand you can import this object. If you use automation, in LabVIEW you will have the correct parameter definition. Make sure you are using a set of variable types compatible with LabVIEW / TestStand.
For example, this fragment of code is the implementation of an array passed from LabVIEW to C++:
STDMETHODIMP TCanLibraryImpl::DataDownload(VARIANT Data, long* RV)
{
    _precondition_cmodule();
    *RV = 0;
    TSafeArrayLong1 mySafeArray(Data.parray);
    int dLen = mySafeArray.BoundsLength[0];
    ...
}
In LabVIEW you will pass an array of I64 to this function.