I have a problem when I add an ANE (that I built myself) to a Flex mobile project.
The problem is that the sizes of the objects are different from before I added the ANE. I never actually call into the ANE; I've only added it to the project.
Here is an image before adding the ANE:
And an image after adding the ANE:
As you can see, the size of the whole app is different. Do you know what the problem could be?
Thanks in advance.
** Update info **
Code on iOS:
#import "FlashRuntimeExtensions.h"
FREContext eventContext;
FREObject init(FREContext ctx, void* funcData, uint32_t argc, FREObject argv[])
{
eventContext = ctx;
NSLog(@"init");
return NULL;
}
void CameraExtContextInitializer(void* extData, const uint8_t* ctxType, FREContext ctx, uint32_t* numFunctionsToTest, const FRENamedFunction** functionsToSet)
{
NSLog(@"camera ext context initializer");
*numFunctionsToTest = 1;
FRENamedFunction* func = (FRENamedFunction*) malloc(sizeof(FRENamedFunction) * *numFunctionsToTest);
func[0].name = (const uint8_t*) "init";
func[0].functionData = NULL;
func[0].function = &init;
*functionsToSet = func;
}
void CameraExtensionUniversalInitializer(void** extDataToSet, FREContextInitializer* ctxInitializerToSet, FREContextFinalizer* ctxFinalizerToSet)
{
NSLog(@"Camera extension initializer");
*extDataToSet = NULL;
*ctxInitializerToSet = &CameraExtContextInitializer;
*ctxFinalizerToSet = NULL; // no context finalizer needed for this extension
}
Code of the ActionScript library:
package com.xxx.Controller
{
import flash.events.EventDispatcher;
import flash.events.IEventDispatcher;
import flash.events.StatusEvent;
import flash.external.ExtensionContext;
public class ReaderDeviceExtensionController extends EventDispatcher
{
private static var _instance:ReaderDeviceExtensionController;
private var extContext:ExtensionContext;
public function ReaderDeviceExtensionController(enforcer:SingletonEnforcer)
{
super();
extContext = ExtensionContext.createExtensionContext( "com.xxx.Controller", "" );
if ( !extContext ) {
throw new Error( "Reader device native extension is not supported on this platform." );
}
}
public static function get instance():ReaderDeviceExtensionController {
if ( !_instance ) {
_instance = new ReaderDeviceExtensionController( new SingletonEnforcer() );
_instance.init();
}
return _instance;
}
public function dispose():void {
extContext.dispose();
}
private function init():void {
extContext.call( "init" );
}
}
}
class SingletonEnforcer {
}
Extension.xml:
<extension xmlns="http://ns.adobe.com/air/extension/3.1">
<id>com.xxx.Controller</id>
<versionNumber>0.0.1</versionNumber>
<platforms>
<platform name="iPhone-ARM">
<applicationDeployment>
<nativeLibrary>libTestSimpleAne.a</nativeLibrary>
<initializer>CameraExtensionUniversalInitializer</initializer>
</applicationDeployment>
</platform>
</platforms>
</extension>
I build the ANE with the following ADT command:
adt -package -target ane MWCameraNativeExtension.ane extension.xml -swc NativeDevicePluginForCamera.swc -platform iPhone-ARM -C ios .
Code of my application in Flex:
<?xml version="1.0" encoding="utf-8"?>
<s:View xmlns:fx="http://ns.adobe.com/mxml/2009"
xmlns:s="library://ns.adobe.com/flex/spark" title="HTMLMainContent">
<fx:Declarations>
<!-- Place non-visual elements (e.g., services, value objects) here -->
</fx:Declarations>
<s:Label x="346" y="290" fontSize="16" paddingLeft="20" text="HI HI HI"/>
</s:View>
Is this running on an iPad 3? My guess is that you are accidentally unlocking the retina resolution when you use the native extension.
The default iOS SDK that Flash Builder / the packager uses is an older version that didn't support the retina resolution of the iPad 3. So when you package an application without a native extension you get the iPad 2 resolution and iOS scales everything up. When you use a native extension and specify a newer iOS SDK, it will automatically use the native resolution of the device. (On an iPad 3 that is the difference between a 1024x768 stage that iOS pixel-doubles and a native 2048x1536 stage, so anything laid out with fixed pixel sizes suddenly looks half as big.)
Try tracing out stage.stageWidth and stage.stageHeight with and without the extension included.
Also check the Project Properties -> Flex Build Packaging -> Apple iOS -> Native Extensions -> Apple iOS SDK box. If that is populated, then you most likely have the problem I described.
Related
I am trying to use libpng 1.6 to load a PNG from a file, but execution fails with a segmentation fault.
I load the library and look up the libpng function png_image_begin_read_from_file as shown:
final dylib = ffi.DynamicLibrary.open(
"/opt/homebrew/Cellar/libpng/1.6.37/lib/libpng.dylib");
final dart_function_t png_image_begin_read_from_file = dylib
.lookup<ffi.NativeFunction<native_function_t>>(
'png_image_begin_read_from_file')
.asFunction();
Here is the full snippet to reproduce the problem.
import 'dart:ffi' as ffi;
import 'package:ffi/ffi.dart';
class png_controlp extends ffi.Opaque {}
// Fields taken from "png.h"
class PngImage extends ffi.Struct {
@ffi.Uint32()
external int version,
width,
height,
flags,
format,
colormap_entries,
warning_or_error;
@ffi.Array(64)
external ffi.Array<ffi.Char> message;
external ffi.Pointer<png_controlp> opaque;
}
// native type
typedef native_function_t = ffi.Int32 Function(
ffi.Pointer<PngImage> image, ffi.Pointer<Utf8> file_name);
// dart type
typedef dart_function_t = int Function(
ffi.Pointer<PngImage> image, ffi.Pointer<Utf8> file_name);
void main() {
final dylib = ffi.DynamicLibrary.open(
"/opt/homebrew/Cellar/libpng/1.6.37/lib/libpng.dylib");
final dart_function_t png_image_begin_read_from_file = dylib
.lookup<ffi.NativeFunction<native_function_t>>(
'png_image_begin_read_from_file')
.asFunction();
final path = "my_image.png";
ffi.Pointer<PngImage> p_pngImage = calloc<PngImage>();
p_pngImage.ref.version = 1;
final res = png_image_begin_read_from_file(p_pngImage, path.toNativeUtf8());
}
===== CRASH =====
si_signo=Segmentation fault: 11(11), si_code=2, si_addr=0x11
version=2.17.0-266.7.beta (beta) (Mon Apr 25 14:54:53 2022 +0000) on "macos_arm64"
pid=42862, thread=20995, isolate_group=main(0x123810200), isolate=main(0x122034200)
isolate_instructions=10263d8c0, vm_instructions=10263d8c0
pc 0x00000001051b6604 fp 0x000000016e1be830 png_image_free+0x24
pc 0x00000001051b66c8 fp 0x000000016e1be850 png_image_error+0x38
It looks like the Dart struct is in the wrong order - check the C header:
typedef struct
{
    png_controlp opaque;  /* Initialize to NULL, free with png_image_free */
    png_uint_32  version; /* Set to PNG_IMAGE_VERSION */
    png_uint_32  width;   /* Image width in pixels (columns) */
    png_uint_32  height;  /* Image height in pixels (rows) */
    png_uint_32  format;  /* Image format as defined below */
    png_uint_32  flags;   /* A bit mask containing informational flags */
    png_uint_32  colormap_entries; /* Number of entries in the color-map */
    ...
    png_uint_32  warning_or_error;
    char         message[64];
} png_image, *png_imagep;
So, the Dart struct should be:
class PngImage extends ffi.Struct {
external ffi.Pointer<png_controlp> opaque;
@ffi.Uint32()
external int version,
width,
height,
flags,
format,
colormap_entries,
warning_or_error;
@ffi.Array(64)
external ffi.Array<ffi.Char> message;
}
Also note the comment in the header:
* The png_image passed to the read APIs must have been initialized by setting
* the png_controlp field 'opaque' to NULL (or, safer, memset the whole thing.)
Be sure to set the opaque pointer to nullptr, though this is probably unnecessary here since calloc already zeroes the allocation for you. (Also remember that you may need another C call, png_image_free, to release what png_image_begin_read_from_file allocates, and that you should free anything you allocate with calloc.)
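If you want to double-check the exact layout the Dart struct has to mirror, a tiny C++ program compiled against the real header prints the field order and offsets (a throwaway sketch; it assumes pkg-config can locate the same libpng you load at runtime):
// layout_check.cpp -- build with: c++ layout_check.cpp $(pkg-config --cflags libpng)
// Prints the offsets of png_image so the Dart ffi.Struct can be written to match.
#include <cstddef>
#include <cstdio>
#include <png.h>

int main() {
    std::printf("sizeof(png_image)          = %zu\n", sizeof(png_image));
    std::printf("offsetof(opaque)           = %zu\n", offsetof(png_image, opaque));
    std::printf("offsetof(version)          = %zu\n", offsetof(png_image, version));
    std::printf("offsetof(warning_or_error) = %zu\n", offsetof(png_image, warning_or_error));
    std::printf("offsetof(message)          = %zu\n", offsetof(png_image, message));
    return 0;
}
With the original Dart ordering, the C side reads the opaque pointer from the first bytes of the struct, which in that layout hold version and width instead, which would explain the crash inside png_image_free.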
I've built a native Kotlin library for the Android Arm64 target; Kotlin generated the *.h and *.so files. I added these files to an Android project and tried to create kref values on the C++ side using the default functions from the generated header file:
mylib_ExportedSymbols* symbols = mylib_symbols();
auto kdouble = symbols->createNullableDouble(12345678.88);
if (kdouble.pinned != nullptr) {
auto val = *(static_cast<double*>(kdouble.pinned));
// val has a strange value, and it differs from 12345678.88
std::cout << "double = " << val << std::endl;
}
kdouble has the type:
typedef struct {
mylib_KNativePtr pinned;
} mylib_kref_kotlin_Double;
How can I read this double if the header file does not provide any other function to extract the double value from the mylib_kref_kotlin_Double type?
Thanks.
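One workaround, if you can change the Kotlin side, is to export a small helper that unboxes the value; Kotlin/Native then generates an ordinary C function for it in the same header. Below is a sketch under that assumption: the helper name unboxDouble, the generated header name mylib_api.h, and the kotlin.root path are all illustrative (the exact nesting follows your package name), not part of the original library.
// Kotlin side (added to the library and rebuilt; the name is an assumption):
//   fun unboxDouble(value: Double?): Double = value ?: 0.0
//
// The regenerated header should then contain a plain-double function for it,
// roughly: mylib_KDouble (*unboxDouble)(mylib_kref_kotlin_Double value);

#include <iostream>
#include "mylib_api.h"   // whatever name the Kotlin compiler gave the generated header

int main() {
    mylib_ExportedSymbols* symbols = mylib_symbols();
    auto kdouble = symbols->createNullableDouble(12345678.88);
    if (kdouble.pinned != nullptr) {
        // pinned is a reference to a boxed Kotlin object, not a pointer to a raw
        // double, so the value has to go back through Kotlin to be read.
        double val = symbols->kotlin.root.unboxDouble(kdouble);
        std::cout << "double = " << val << std::endl;
        // Release the Kotlin reference once it is no longer needed.
        symbols->DisposeStablePointer(kdouble.pinned);
    }
    return 0;
}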
Does MonoTouch have a simple mechanism for retrieving the device serial number (not UDID) of an iOS device? Is there a third-party library which I can use to obtain this?
In case it matters, I'm looking to use this functionality in an in-house application and am not concerned with the App Store approval process.
UPDATE: as of iOS 8, you can no longer retrieve the serial number of an iDevice.
To retrieve the iPhone serial number from MonoTouch, you can use this technique:
1. Create a static .a library in Xcode that has a function to get the serial number.
2. In MonoDevelop, create a binding project to bind your .a library into C# classes/functions (http://docs.xamarin.com/guides/ios/advanced_topics/binding_objective-c_libraries).
3. In your application, call the binding library from step 2.
In detail:
STEP 1. In my library .a, I have a class DeviceInfo; here is the implementation that gets the serial number:
#import "DeviceInfo.h"
#import <dlfcn.h>
#import <mach/port.h>
#import <mach/kern_return.h>
@implementation DeviceInfo
- (NSString *) serialNumber
{
NSString *serialNumber = nil;
void *IOKit = dlopen("/System/Library/Frameworks/IOKit.framework/IOKit", RTLD_NOW);
if (IOKit)
{
mach_port_t *kIOMasterPortDefault = dlsym(IOKit, "kIOMasterPortDefault");
CFMutableDictionaryRef (*IOServiceMatching)(const char *name) = dlsym(IOKit, "IOServiceMatching");
mach_port_t (*IOServiceGetMatchingService)(mach_port_t masterPort, CFDictionaryRef matching) = dlsym(IOKit, "IOServiceGetMatchingService");
CFTypeRef (*IORegistryEntryCreateCFProperty)(mach_port_t entry, CFStringRef key, CFAllocatorRef allocator, uint32_t options) = dlsym(IOKit, "IORegistryEntryCreateCFProperty");
kern_return_t (*IOObjectRelease)(mach_port_t object) = dlsym(IOKit, "IOObjectRelease");
if (kIOMasterPortDefault && IOServiceGetMatchingService && IORegistryEntryCreateCFProperty && IOObjectRelease)
{
mach_port_t platformExpertDevice = IOServiceGetMatchingService(*kIOMasterPortDefault, IOServiceMatching("IOPlatformExpertDevice"));
if (platformExpertDevice)
{
CFTypeRef platformSerialNumber = IORegistryEntryCreateCFProperty(platformExpertDevice, CFSTR("IOPlatformSerialNumber"), kCFAllocatorDefault, 0);
if (CFGetTypeID(platformSerialNumber) == CFStringGetTypeID())
{
serialNumber = [NSString stringWithString:(__bridge NSString*)platformSerialNumber];
CFRelease(platformSerialNumber);
}
IOObjectRelease(platformExpertDevice);
}
}
dlclose(IOKit);
}
return serialNumber;
}
@end
STEP 2. In ApiDefinition.cs of my binding library project in MonoTouch, I add this binding:
[BaseType (typeof (NSObject))]
public interface DeviceInfo {
[Export ("serialNumber")]
NSString GetSerialNumber ();
}
STEP 3. In my application, I add a reference to the binding library project from step 2, then:
using MyBindingProject;
...
string serialNumber = "";
try {
DeviceInfo nativeDeviceInfo = new DeviceInfo ();
NSString temp = nativeDeviceInfo.GetSerialNumber();
serialNumber = temp.ToString();
} catch (Exception ex) {
Console.WriteLine("Cannot get serial number {0} - {1}",ex.Message, ex.StackTrace);
}
Hope that helps. Don't hesitate to ask if you have any questions.
Is any support for the OpenCV graphics library available for Windows Phone 8 and Windows 8? I searched Google but didn't find any resources about using OpenCV with Windows Phone 8 / Windows 8. If any of you know more about this, please help me and provide some links to the library.
This is the latest information I got from the OpenCV team.
The OpenCV development team is working on a port for Windows RT. Here is the current development branch for WinRT: https://github.com/asmorkalov/opencv/tree/winrt. You can build it for ARM using Visual Studio Express for Windows 8 and the Platform SDK.
1. Open a Visual Studio development console.
2. Set up the environment for cross compilation by running "C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\x86_arm\vcvarsx86_arm.bat".
3. cd <opencv_source_dir>/platforms/winrt/
4. Run scripts/cmake_winrt.cmd
5. Run ninja
Alternatively, you can use nmake instead of ninja: edit cmake_winrt.cmd and change the project generator from -GNinja to -G "NMake Makefiles". Only the algorithmic part of the library is supported for now: no TBB, no UI, no video I/O.
Please check the URL below for more details.
http://answers.opencv.org/question/9847/opencv-for-windows-8-tablet/?answer=9851#post-id-9851
By windows-8, I guess you mean WinRT? AFAIK, there is no official port to WinRT. You need to compile it yourself, for instance as a Win8 Store DLL, so that you can reference it from a Win8 Store application.
Just start with opencv_core, then add the libs you need one by one, because not all of the components will compile (for instance, opencv_highgui is highly dependent on the Windows API, which is not fully compatible with Win8 Store apps).
You'll also need to implement yourself some Win32 functions that OpenCV uses but that are not available to a Win8 app, like GetSystemInfo(), GetTempPathA(), GetTempFileNameA() and everything related to thread local storage (TLS).
I've been able to use a small subset of OpenCV in WinRT by compiling opencv_core, opencv_imgproc and zlib as 3 separate static libs. I've added another one, called opencv_winrt, that contains only the two following files:
opencv_winrt.h
#pragma once
#include "combaseapi.h"
void WINAPI GetSystemInfo(
_Out_ LPSYSTEM_INFO lpSystemInfo
);
DWORD WINAPI GetTempPathA(
_In_ DWORD nBufferLength,
_Out_ char* lpBuffer
);
UINT WINAPI GetTempFileNameA(
_In_ const char* lpPathName,
_In_ const char* lpPrefixString,
_In_ UINT uUnique,
_Out_ char* lpTempFileName
);
DWORD WINAPI TlsAlloc();
BOOL WINAPI TlsFree(
_In_ DWORD dwTlsIndex
);
LPVOID WINAPI TlsGetValue(
_In_ DWORD dwTlsIndex
);
BOOL WINAPI TlsSetValue(
_In_ DWORD dwTlsIndex,
_In_opt_ LPVOID lpTlsValue
);
void WINAPI TlsShutdown();
# define TLS_OUT_OF_INDEXES ((DWORD)0xFFFFFFFF)
opencv_winrt.cpp
#include "opencv_winrt.h"
#include <vector>
#include <set>
#include <mutex>
#include "assert.h"
void WINAPI GetSystemInfo(LPSYSTEM_INFO lpSystemInfo)
{
GetNativeSystemInfo(lpSystemInfo);
}
DWORD WINAPI GetTempPathA(DWORD nBufferLength, char* lpBuffer)
{
return 0;
}
UINT WINAPI GetTempFileNameA(const char* lpPathName, const char* lpPrefixString, UINT uUnique, char* lpTempFileName)
{
return 0;
}
// Thread local storage.
typedef std::vector<void*> ThreadLocalData;
static __declspec(thread) ThreadLocalData* currentThreadData = nullptr;
static std::set<ThreadLocalData*> allThreadData;
static DWORD nextTlsIndex = 0;
static std::vector<DWORD> freeTlsIndices;
static std::mutex tlsAllocationLock;
DWORD WINAPI TlsAlloc()
{
std::lock_guard<std::mutex> lock(tlsAllocationLock);
// Can we reuse a previously freed TLS slot?
if (!freeTlsIndices.empty())
{
DWORD result = freeTlsIndices.back();
freeTlsIndices.pop_back();
return result;
}
// Allocate a new TLS slot.
return nextTlsIndex++;
}
_Use_decl_annotations_ BOOL WINAPI TlsFree(DWORD dwTlsIndex)
{
std::lock_guard<std::mutex> lock(tlsAllocationLock);
assert(dwTlsIndex < nextTlsIndex);
assert(find(freeTlsIndices.begin(), freeTlsIndices.end(), dwTlsIndex) == freeTlsIndices.end());
// Store this slot for reuse by TlsAlloc.
try
{
freeTlsIndices.push_back(dwTlsIndex);
}
catch (...)
{
return false;
}
// Zero the value for all threads that might be using this now freed slot.
for each (auto threadData in allThreadData)
{
if (threadData->size() > dwTlsIndex)
{
threadData->at(dwTlsIndex) = nullptr;
}
}
return true;
}
_Use_decl_annotations_ LPVOID WINAPI TlsGetValue(DWORD dwTlsIndex)
{
ThreadLocalData* threadData = currentThreadData;
if (threadData && threadData->size() > dwTlsIndex)
{
// Return the value of an allocated TLS slot.
return threadData->at(dwTlsIndex);
}
else
{
// Default value for unallocated slots.
return nullptr;
}
}
_Use_decl_annotations_ BOOL WINAPI TlsSetValue(DWORD dwTlsIndex, LPVOID lpTlsValue)
{
ThreadLocalData* threadData = currentThreadData;
if (!threadData)
{
// First time allocation of TLS data for this thread.
try
{
threadData = new ThreadLocalData(dwTlsIndex + 1, nullptr);
std::lock_guard<std::mutex> lock(tlsAllocationLock);
allThreadData.insert(threadData);
currentThreadData = threadData;
}
catch (...)
{
if (threadData)
delete threadData;
return false;
}
}
else if (threadData->size() <= dwTlsIndex)
{
// This thread already has a TLS data block, but it must be expanded to fit the specified slot.
try
{
std::lock_guard<std::mutex> lock(tlsAllocationLock);
threadData->resize(dwTlsIndex + 1, nullptr);
}
catch (...)
{
return false;
}
}
// Store the new value for this slot.
threadData->at(dwTlsIndex) = lpTlsValue;
return true;
}
// Called at thread exit to clean up TLS allocations.
void WINAPI TlsShutdown()
{
ThreadLocalData* threadData = currentThreadData;
if (threadData)
{
{
std::lock_guard<std::mutex> lock(tlsAllocationLock);
allThreadData.erase(threadData);
}
currentThreadData = nullptr;
delete threadData;
}
}
And I modified the file cvconfig.h: I commented out every #define except the PACKAGE* and VERSION ones, and added #include "opencv_winrt.h" at the end.
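For reference, the trimmed cvconfig.h ends up looking roughly like this (an illustrative sketch; the exact set of macros depends on your OpenCV revision, so treat the names and version strings below as placeholders):
/* cvconfig.h, trimmed for the WinRT static-lib build */

/* Every feature-detection define commented out, e.g.: */
/* #define HAVE_JPEG */
/* #define HAVE_PNG */
/* #define HAVE_TBB */
/* ... */

/* Package/version macros kept as generated: */
#define PACKAGE "opencv"
#define PACKAGE_NAME "OpenCV"
#define PACKAGE_STRING "OpenCV 2.4.x"
#define PACKAGE_VERSION "2.4.x"
#define VERSION "2.4.x"

/* Added at the end, so the Win32 replacements are visible everywhere cvconfig.h is included */
#include "opencv_winrt.h"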
Just a hint: there is a C# wrapper for OpenCV called EmguCV (http://www.emgu.com/wiki/index.php/Main_Page). Looking at the forum posts, I see some activity towards using it on Windows 8, but it's hard to tell whether it works now, since the posts reporting issues are quite old. I'd suggest you just give it a try and see if this C# wrapper can run on Windows Phone 8; I think it should definitely run on Windows 8.
Firstly, what I want to do is to intercept an arbitrary standard C function (like fopen, read, write, malloc, ...) of an iOS application.
I have a libtest.dylib with this code:
typedef struct interpose_s {
void *new_func;
void *orig_func;
} interpose_t;
FILE *vg_fopen(const char * __restrict, const char * __restrict);
static const interpose_t interposing_functions[] \
__attribute__ ((section("__DATA, __interpose"))) = {
{ (void *)vg_fopen, (void *)fopen },
};
FILE *vg_fopen(const char * __restrict path, const char * __restrict mode) {
printf("vg_fopen");
return fopen(path, mode);
}
After compiling the dylib, I go to the binary of the host iOS app, add an LC_LOAD_DYLIB command to the end of the load-command list, and point it to @executable_path/libtest.dylib.
What I expect is that it will override the implementation of fopen and print "vg_fopen" whenever fopen is called. However, that is not what happens, so the interposition must have failed.
I'd like to know what the reason might be. This is in-house development for learning purposes only, so please don't mention the impact or warn me about inappropriate use.
Thanks in advance.
From the dyld source:
// link any inserted libraries
// do this after linking main executable so that any dylibs pulled in by inserted
// dylibs (e.g. libSystem) will not be in front of dylibs the program uses
if ( sInsertedDylibCount > 0 ) {
for(unsigned int i=0; i < sInsertedDylibCount; ++i) {
ImageLoader* image = sAllImages[i+1];
link(image, sEnv.DYLD_BIND_AT_LAUNCH, ImageLoader::RPathChain(NULL, NULL));
// only INSERTED libraries can interpose
image->registerInterposing();
}
}
So no: only libraries inserted via DYLD_INSERT_LIBRARIES have their interposing applied. A dylib pulled in by an extra LC_LOAD_DYLIB command is an ordinary dependency, dyld never calls registerInterposing() for it, and that is why vg_fopen is never hit.
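To see the same interpose table actually fire, the dylib has to be loaded through dyld's insertion mechanism rather than through an extra load command. A minimal sketch for macOS or the iOS simulator, where you control the environment (the build line and host binary name are illustrative):
// interpose_fopen.cpp
// Build: clang++ -dynamiclib interpose_fopen.cpp -o libinterpose.dylib
// Run:   DYLD_INSERT_LIBRARIES=$PWD/libinterpose.dylib ./host_binary
#include <stdio.h>

static FILE* my_fopen(const char* path, const char* mode) {
    fprintf(stderr, "my_fopen(%s, %s)\n", path, mode);
    return fopen(path, mode);
}

// Same __DATA,__interpose section as in the question. Because the library is
// *inserted*, dyld calls registerInterposing() for it and rewires fopen.
__attribute__((used, section("__DATA,__interpose")))
static struct { const void* replacement; const void* replacee; } interposers[] = {
    { reinterpret_cast<const void*>(&my_fopen), reinterpret_cast<const void*>(&fopen) },
};
On a real device that generally means a jailbroken environment or some other way of getting dyld to insert the library; patching load commands alone never reaches the interposing path shown in the dyld source above.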