I'm writing a plugin for Unity, and I need to send a texture from iOS to Unity.
There is a UnitySendMessage function that takes a char* parameter, but I couldn't find a way to convert an id<MTLTexture> to a char*.
How can I send an id<MTLTexture> from iOS and receive it in Unity?
My current code:
//ios side ...
id<MTLTexture> _texture = CVMetalTextureGetTexture(texture);
UnitySendMessage(CALLBACK_OBJECT, CALLBACK_TEXTURE_READY,_texture);//error
//...
//unity side
private void OnTextureReady(string texture_str)
{
IntPtr texture = new IntPtr(Int32.Parse(texture_str));
int width = 256;
int height = 256;
rawImage.texture = Texture2D.CreateExternalTexture(width, height,
TextureFormat.ARGB32, false, false, texture);
}
The iOS plugin documentation says that you can only pass strings using UnitySendMessage.
The workaround would be to create a mapping from strings to texture objects on the Objective-C side, pass the string key via UnitySendMessage, and then retrieve the texture pointer with a custom DllImport function.
Declare your map:
// class field
{
    NSMutableDictionary<NSString *, id<MTLTexture>> *_textures;
}

// in the initializer
_textures = [NSMutableDictionary new];

// in the method that receives the texture
NSString *textureName = @"cookies";
_textures[textureName] = texture; // keep the MTLTexture alive for later
UnitySendMessage(CALLBACK_OBJECT, CALLBACK_TEXTURE_READY, [textureName UTF8String]); // UnitySendMessage expects C strings
On the C# side, CreateExternalTexture requires a pointer to the texture object as an IntPtr. To obtain it, you can declare a DllImport function that takes a texture name and returns an IntPtr:
[DllImport("__Internal")]
static extern IntPtr GetMetalTexturePointerByName(string textureName);
and implement it on the iOS side like so:
return plugin->_textures[textureName];
I'm not sure whether this works in terms of what CreateExternalTexture expects, though.
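Fleshed out, the native function could look something like this (untested sketch; plugin stands for whatever instance owns the _textures dictionary, as in the snippet above):
#import <Metal/Metal.h>
// extern "C" is only needed if this lives in an Objective-C++ (.mm) file
extern "C" uintptr_t GetMetalTexturePointerByName(const char *textureName)
{
    NSString *key = [NSString stringWithUTF8String:textureName];
    id<MTLTexture> metalTexture = plugin->_textures[key]; // assumes ARC; the dictionary keeps the texture alive
    return (uintptr_t)(__bridge void *)metalTexture;      // arrives in C# as an IntPtr
}
On the C# side, OnTextureReady would then call GetMetalTexturePointerByName(texture_str) and pass the result to Texture2D.CreateExternalTexture.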
See also this post, where someone is doing something similar (but in reverse):
Convert uintptr_t to id<MTLTexture>
From here, we know that if the malloc_logger global function is defined, it will be called whenever there is a malloc or free operation. I want to use it to record memory allocations in my app like this:
typedef void(malloc_logger_t)(uint32_t type,
uintptr_t arg1,
uintptr_t arg2,
uintptr_t arg3,
uintptr_t result,
uint32_t num_hot_frames_to_skip);
extern malloc_logger_t *malloc_logger;
void my_malloc_stack_logger(uint32_t type, uintptr_t arg1, uintptr_t arg2, uintptr_t arg3, uintptr_t result, uint32_t num_hot_frames_to_skip);
malloc_logger = my_malloc_stack_logger;
void my_malloc_stack_logger(uint32_t type, uintptr_t arg1, uintptr_t arg2, uintptr_t arg3, uintptr_t result, uint32_t num_hot_frames_to_skip)
{
// do my work
}
In my_malloc_stack_logger, I can directly get the allocated size and address. But how about object types? I want to record the class name if it is an NSObject instance. Is it possible to get this information?
After playing around with the hook, it looks like what you want to achieve is not quite possible.
The first problem is that if you try to read a class name from within this function (by calling any of object_getClassName, class_getName, or NSStringFromClass), that call on its own tends to trigger new allocations, apparently because some Cocoa classes load lazily. I noticed, however, that requesting all classes with objc_getClassList makes a lot of preliminary allocations, which helps avoid them later on. So my idea is to cache all class names before subscribing to the allocation hook and to refer to the cached values when needed. For the storage I used Apple's CFMutableDictionary:
#include <objc/runtime.h>
#include <pthread.h>
#include <CoreFoundation/CoreFoundation.h>

CFMutableDictionaryRef objc_class_records;
pthread_mutex_t objc_class_records_mutex = PTHREAD_MUTEX_INITIALIZER;

void refresh_objc_class_list(void) {
    pthread_mutex_lock(&objc_class_records_mutex);
    if (objc_class_records) {
        CFRelease(objc_class_records);
    }
    // Keys are raw Class pointers (no callbacks), values are retained CFStrings
    objc_class_records = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, NULL, &kCFTypeDictionaryValueCallBacks);
    // The buffer needs to accommodate at least 26665 classes
    static const unsigned buffer_length = 100000;
    Class registered_classes[buffer_length];
    const int count = objc_getClassList(registered_classes, buffer_length);
    for (unsigned i = 0; i < (unsigned)count && i < buffer_length; ++i) {
        const Class cls = registered_classes[i];
        const CFStringRef class_name = CFStringCreateWithCString(kCFAllocatorDefault, class_getName(cls), kCFStringEncodingUTF8);
        CFDictionarySetValue(objc_class_records, cls, class_name);
        CFRelease(class_name);
    }
    pthread_mutex_unlock(&objc_class_records_mutex);
}
Be advised that you don't want this function to be called while the malloc logger is enabled (and especially not from within the hook itself).
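For example, the intended setup order is simply (sketch):
refresh_objc_class_list();        // safe: the hook is not installed yet, so allocations here don't matter
malloc_logger = my_malloc_logger; // from this point on, nothing called from inside the hook may allocate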
Now you need to obtain a Class instance from the Objective-C objects. Depending on the type of allocation, the pointer argument arrives in either the fifth or the third parameter:
void my_malloc_logger(uint32_t type, uintptr_t param0, uintptr_t param1, uintptr_t param2,
                      uintptr_t param3, uint32_t frames_to_skip) {
    // MALLOC_OP_* and the output helpers are defined in the linked repository
    void *ptr = NULL;
    unsigned size = 0;
    switch (type) {
        case MALLOC_OP_MALLOC:
        case MALLOC_OP_CALLOC:
            ptr = (void *)param3;
            size = (unsigned)param1;
            break;
        case MALLOC_OP_REALLOC:
            ptr = (void *)param3;
            size = (unsigned)param2;
            break;
        case MALLOC_OP_FREE:
            ptr = (void *)param1;
            break;
    }
    id objc_ptr = (id)ptr;
    Class objc_class = object_getClass(objc_ptr);
    if (!objc_class) {
        return;
    }
    CFStringRef class_name = NULL;
    const bool found = CFDictionaryGetValueIfPresent(objc_class_records, objc_class, (const void **)&class_name);
    if (found) {
        static const unsigned name_max_length = 256;
        char c_class_name[name_max_length];
        if (CFStringGetCString(class_name, c_class_name, name_max_length, kCFStringEncodingUTF8)) {
            const char *alloc_name = alloc_type_name(type);
            nomalloc_printf_sync("%7s: Pointer: %p; Size: %u; Obj-C class: \"%s\"\n",
                                 alloc_name, objc_ptr, size, c_class_name);
        }
    }
}
And now, why it won't work as expected:
object_getClass is not able to tell whether a pointer belongs to a Cocoa class at the time of allocation (it will, however, find the class once the object has already been allocated and initialized, e.g. right before deallocation). Thus, the following code:
[NSObject new];
will produce output similar to this:
CALLOC: Pointer: 0x600000600080; Size: 16
FREE: Pointer: 0x600000600080; Size: 0; Obj-C class: "NSObject"
Most of the standard Cocoa classes are in fact so-called class clusters: under the hood, the actual allocation happens for an instance of a private class (which is not always recognisable from its public interface), so this information is incomplete and sometimes misleading.
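For example (illustrative; the exact private class names vary between OS versions):
NSString *string = [NSString stringWithFormat:@"%d", 42];
NSLog(@"%@", NSStringFromClass([string class])); // typically a private subclass such as __NSCFString, not NSString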
There are also many other factors that need to be taken into account (which I didn't cover here because they're beyond the question asked): the way you output data to standard output must not itself cause allocations; the logging needs synchronisation, since allocations happen constantly from any number of threads; and if you want to enable/disable recording of the Objective-C classes (or update the cache occasionally), access to the storage also needs to be synchronised.
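For the last point, a minimal sketch of how the lookup could be guarded with the same mutex (trylock, so the hook never blocks inside the allocator):
// inside my_malloc_logger, in place of the plain dictionary lookup
CFStringRef class_name = NULL;
bool found = false;
if (pthread_mutex_trylock(&objc_class_records_mutex) == 0) {
    found = CFDictionaryGetValueIfPresent(objc_class_records, objc_class, (const void **)&class_name);
    pthread_mutex_unlock(&objc_class_records_mutex);
}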
Having said that, if you are satisfied with what can be done within these limits, feel free to refer to the repository I made, where this approach is already implemented in the form of a static library.
See update 1 below for my guess as to why the error is happening
I'm trying to develop an application with some C#/WPF and C++. I am having a problem on the C++ side on a part of the code that involves optimizing an object using GNU Scientific Library (GSL) optimization functions. I will avoid including any of the C#/WPF/GSL code in order to keep this question more generic and because the problem is within my C++ code.
For the minimal, complete and verifiable example below, here is what I have. I have a class Foo. And a class Optimizer. An object of class Optimizer is a member of class Foo, so that objects of Foo can optimize themselves when it is required.
The way GSL optimization functions take in external parameters is through a void pointer. I first define a struct Params to hold all the required parameters. Then I create a Params object and convert it into a void pointer. A copy of this data is made with memcpy_s, and a member void pointer optimParamsPtr of the Optimizer class points to it, so the optimizer can access the parameters when it is run later. When optimParamsPtr is accessed by CostFn(), I get the following error:
Managed Debugging Assistant 'FatalExecutionEngineError' : 'The runtime
has encountered a fatal error. The address of the error was at
0x6f25e01e, on thread 0x431c. The error code is 0xc0000005. This error
may be a bug in the CLR or in the unsafe or non-verifiable portions of
user code. Common sources of this bug include user marshaling errors
for COM-interop or PInvoke, which may corrupt the stack.'
Just to check the validity of the void pointer I made, I call CostFn() at line 81 with the void* passed as an argument to InitOptimizer(), and everything works. But at line 85, when the same CostFn() is called with optimParamsPtr pointing to the data copied by memcpy_s, I get the error. So I am guessing something is going wrong in the memcpy_s step. Does anyone have any ideas as to what?
#include "pch.h"
#include <iostream>
using namespace System;
using namespace System::Runtime::InteropServices;
using namespace std;
// An optimizer for various kinds of objects
class Optimizer // GSL requires this to be an unmanaged class
{
public:
double InitOptimizer(int ptrID, void *optimParams, size_t optimParamsSize);
void FreeOptimizer();
void * optimParamsPtr;
private:
double cost = 0;
};
ref class Foo // A class whose objects can be optimized
{
private:
int a; // An internal variable that can be changed to optimize the object
Optimizer *fooOptimizer; // Optimizer for a Foo object
public:
Foo(int val) // Constructor
{
a = val;
fooOptimizer = new Optimizer;
}
~Foo()
{
if (fooOptimizer != NULL)
{
delete fooOptimizer;
}
}
void SetA(int val) // Mutator
{
a = val;
}
int GetA() // Accessor
{
return a;
}
double Optimize(int ptrID); // Optimize object
// ptrID is a variable just to change behavior of Optimize() and show what works and what doesn't
};
ref struct Params // Parameters required by the cost function
{
int cost_scaling;
Foo ^ FooObj;
};
double CostFn(void *params) // GSL requires cost function to be of this type and cannot be a member of a class
{
// Cast void * to Params type
GCHandle h = GCHandle::FromIntPtr(IntPtr(params));
Params ^ paramsArg = safe_cast<Params^>(h.Target);
h.Free(); // Deallocate
// Return the cost
int val = paramsArg->FooObj->GetA();
return (double)(paramsArg->cost_scaling * val);
}
double Optimizer::InitOptimizer(int ptrID, void *optimParamsArg, size_t optimParamsSizeArg)
{
optimParamsPtr = ::operator new(optimParamsSizeArg);
memcpy_s(optimParamsPtr, optimParamsSizeArg, optimParamsArg, optimParamsSizeArg);
double ret_val;
// Here is where the GSL stuff would be. But I replace that with a call to CostFn to show the error
if (ptrID == 1)
{
ret_val = CostFn(optimParamsArg); // Works
}
else
{
ret_val = CostFn(optimParamsPtr); // Doesn't work
}
return ret_val;
}
// Release memory used by unmanaged variables in Optimizer
void Optimizer::FreeOptimizer()
{
if (optimParamsPtr != NULL)
{
delete optimParamsPtr;
}
}
double Foo::Optimize(int ptrID)
{
// Create and initialize params object
Params^ paramsArg = gcnew Params;
paramsArg->cost_scaling = 11;
paramsArg->FooObj = this;
// Convert Params type object to void *
void * paramsArgVPtr = GCHandle::ToIntPtr(GCHandle::Alloc(paramsArg)).ToPointer();
size_t paramsArgSize = sizeof(paramsArg); // size of memory block in bytes pointed to by void pointer
double result = 0;
// Initialize optimizer
result = fooOptimizer->InitOptimizer(ptrID, paramsArgVPtr, paramsArgSize);
// Here is where the loop that does the optimization will be. Removed from this example for simplicity.
return result;
}
int main()
{
Foo Foo1(2);
std::cout << Foo1.Optimize(1) << endl; // Use orig void * arg in line 81 and it works
std::cout << Foo1.Optimize(2) << endl; // Use memcpy_s-ed new void * public member of Optimizer in line 85 and it doesn't work
}
Just to reiterate: I need to copy the params to a member of the optimizer because the optimizer will run throughout the lifetime of the Foo object. So the data needs to exist as long as the Optimizer object exists, not just within the scope of Foo::Optimize().
/clr support needs to be selected in the project properties for the code to compile. I'm running on an x64 solution platform.
Update 1: While trying to debug this, I got suspicious of the way I get the size of paramsArg at line 109. It looks like I am getting the size of paramsArg as the size of the int cost_scaling plus the size of the memory storing the address of FooObj, instead of the size of the memory storing FooObj itself. I realized this after stumbling across this answer to another post. I confirmed it by checking the value of paramsArgSize after adding some dummy double members to the Foo class; as expected, the value of paramsArgSize doesn't change. I suppose this explains why I get the error. A solution would be to write code that correctly calculates the size of a Foo class object and use that as paramsArgSize instead of relying on sizeof. But that is turning out to be too complicated and is probably another question in itself. For example, how do you get the size of a ref class object? Anyway, hopefully someone will find this helpful.
I'm working on a (Universal Windows) C++/CX DirectX project, which builds into a DLL used in a C# UWP project.
I'm using the DirectX Toolkit to load textures.
I already use it to create a texture from a file, but now I need it to create a texture from a byte array that was sent from the UWP project.
But when trying to use CreateWICTextureFromMemory(), the HRESULT says 0x88982F50: "The component cannot be found".
All I can find about this problem indicates that the bytes are not a correct image, but I tested it in the UWP project: there I get the byte array from Bing Maps (it's a static map image), and I could make a working image from those bytes.
Does anyone know what I'm doing wrong?
UWP C# download code (to get the bytes):
private async Task DownloadTexture()
{
byte[] buffer = null;
try
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(_url);
WebResponse response = await request.GetResponseAsync();
using (Stream stream = response.GetResponseStream())
using (MemoryStream ms = new MemoryStream())
{
stream.CopyTo(ms);
buffer = ms.ToArray();
}
}
catch (Exception exception)
{
Logger.Error($"Could not Download Texture: {exception}");
}
_track3D.SetImage(out buffer[0], (ulong)buffer.Length);
}
DirectX C++ code (that fails):
void Track3D::SetImage(uint8_t* ddsData, size_t ddsDataSize)
{
HRESULT result = CreateWICTextureFromMemory(_d3dDevice.Get(), _d3dContext.Get(), ddsData, ddsDataSize, nullptr, _Terrain.ReleaseAndGetAddressOf());
//here it goes wrong
//TODO: use the texture
}
UWP C# test code that works (displays image):
private async void setImage(byte[] buffer) //test
{
try
{
BitmapImage bmpImage = new BitmapImage();
using (InMemoryRandomAccessStream stream = new InMemoryRandomAccessStream())
{
await stream.WriteAsync(buffer.AsBuffer());
stream.Seek(0);
await bmpImage.SetSourceAsync(stream);
}
Image image = new Image();
image.Source = bmpImage;
((Grid)Content).Children.Add(image);
}
catch (Exception exception)
{
Logger.Error($"{exception}");
}
}
EDIT:
OK, it turns out the first byte in the buffer is different in the C++ code than it was when sent from UWP. When I change that first byte to the correct value in the C++ code (as a test), the texture is correctly created.
Which raises the question, why did the value of the first byte change?
(Or what did I do wrong?)
As requested, the SetImage() interop declaration looks like this in C#:
[MethodImpl]
public void __ITrack3DPublicNonVirtuals.SetImage(out byte ddsData, [In] ulong ddsDataSize);
(Also, I just realised the parameter names still have 'dds' in them; sorry about that, I will change it in my code as it is misleading.)
0x88982F50: “The component cannot be found”
This is WINCODEC_ERR_COMPONENTNOTFOUND, which happens whenever WIC can't determine which codec to use for a file or binary blob. Your problem is that your transfer of the data from managed to native code is wrong.
Your interop method is set to:
[MethodImpl]
public void __ITrack3DPublicNonVirtuals.SetImage(out byte ddsData, [In] ulong ddsDataSize);
With the C++ method signature being:
void Track3D::SetImage(uint8_t* ddsData, size_t ddsDataSize)
Because of the out, your first parameter is being passed as a safe array with the length in the first element.
Instead you should use:
SetImage([In] byte ddsData, [In] ulong ddsDataSize); // C#
void Track3D::SetImage(const uint8_t* ddsData, size_t ddsDataSize); // C++.
Is it possible to access OpenGL ES on iOS from RoboVM without using LibGDX? If so, are there any useful references?
The only thing I can find is this super-simple demo from over 2 years ago: http://robovm.com/ios-opengles-in-java-on-robovm/
But it doesn't provide any functions besides glClearColor and glClear.
The Apple GLKit framework seems to be implemented, though. I just can't find all the actual glWhatever(...) functions...
Yes, it is possible. You need two things for this: 1. Access to the OpenGL ES functions (like glClear(...), etc.) and 2. a UIView in your app that can draw the GL image.
Turns out the second point is very easy. You can either use a GLKView (requires iOS 5.0) or a CAEAGLLayer (requires iOS 2.0) if you're feeling nostalgic. For both, there are tons of tutorials online on how to use them in Objective-C, which can readily be translated to RoboVM. So, I won't spend too much time on this point here.
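For reference, the typical GLKView setup in Objective-C boils down to something like this (sketch; assumes you're inside a view controller, and the RoboVM translation is mechanical):
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
GLKView *view = [[GLKView alloc] initWithFrame:self.view.bounds context:context];
view.delegate = self; // the delegate's glkView:drawInRect: is where the GL drawing happens
[EAGLContext setCurrentContext:context];
[self.view addSubview:view];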
Access to the OpenGL ES functions is a little more difficult, as RoboVM doesn't ship with the definitions file out of the box. So we'll have to build our own using Bro. It turns out that once you wrap your head around how Bro handles C strings, variable pointers, IntBuffers and such (which is actually quite beautiful!), it's really pretty straightforward. The super-simple demo I linked to in the original question is the right starting point.
In the interest of brevity, let me post here just a very abridged version of the file I wrote to illustrate the way the different data types can be handled:
import java.nio.Buffer;
import java.nio.IntBuffer;
import org.robovm.rt.bro.Bro;
import org.robovm.rt.bro.Struct;
import org.robovm.rt.bro.annotation.Bridge;
import org.robovm.rt.bro.annotation.Library;
import org.robovm.rt.bro.ptr.BytePtr;
import org.robovm.rt.bro.ptr.BytePtr.BytePtrPtr;
import org.robovm.rt.bro.ptr.IntPtr;
#Library("OpenGLES")
public class GLES20 {
public static final int GL_DEPTH_BUFFER_BIT = 0x00000100;
public static final int GL_STENCIL_BUFFER_BIT = 0x00000400;
public static final int GL_COLOR_BUFFER_BIT = 0x00004000;
public static final int GL_FALSE = 0;
public static final int GL_TRUE = 1;
private static final int MAX_INFO_LOG_LENGTH = 10*1024;
private static final ThreadLocal<IntPtr> SINGLE_VALUE =
new ThreadLocal<IntPtr>() {
@Override
protected IntPtr initialValue() {
return Struct.allocate(IntPtr.class, 1);
}
};
private static final ThreadLocal<BytePtr> INFO_LOG =
new ThreadLocal<BytePtr>() {
@Override
protected BytePtr initialValue() {
return Struct.allocate(BytePtr.class, MAX_INFO_LOG_LENGTH);
}
};
static {
Bro.bind(GLES20.class);
}
@Bridge
public static native void glClearColor(float red, float green, float blue, float alpha);
@Bridge
public static native void glClear(int mask);
@Bridge
public static native void glGetIntegerv(int pname, IntPtr params);
// DO NOT CALL THE NEXT METHOD WITH A pname THAT RETURNS MORE THAN ONE VALUE!!!
public static int glGetIntegerv(int pname) {
IntPtr params = SINGLE_VALUE.get();
glGetIntegerv(pname, params);
return params.get();
}
@Bridge
private static native int glGetUniformLocation(int program, BytePtr name);
public static int glGetUniformLocation(int program, String name) {
return glGetUniformLocation(program, BytePtr.toBytePtrAsciiZ(name));
}
@Bridge
public static native int glGenFramebuffers(int n, IntPtr framebuffers);
public static int glGenFramebuffer() {
IntPtr framebuffers = SINGLE_VALUE.get();
glGenFramebuffers(1, framebuffers);
return framebuffers.get();
}
@Bridge
private static native void glShaderSource(int shader, int count, BytePtrPtr string, IntPtr length);
public static void glShaderSource(int shader, String code) {
glShaderSource(shader, 1, new BytePtrPtr().set(BytePtr.toBytePtrAsciiZ(code)), null);
}
@Bridge
private static native void glGetShaderInfoLog(int shader, int maxLength, IntPtr length, BytePtr infoLog);
public static String glGetShaderInfoLog(int shader) {
BytePtr infoLog = INFO_LOG.get();
glGetShaderInfoLog(shader, MAX_INFO_LOG_LENGTH, null, infoLog);
return infoLog.toStringAsciiZ();
}
@Bridge
public static native void glGetShaderPrecisionFormat(int shaderType, int precisionType, IntBuffer range, IntBuffer precision);
@Bridge
public static native void glTexImage2D(int target, int level, int internalformat, int width, int height, int border, int format, int type, IntBuffer data);
@Bridge
private static native void glVertexAttribPointer(int index, int size, int type, int normalized, int stride, Buffer pointer);
public static void glVertexAttribPointer(int index, int size, int type, boolean normalized, int stride, Buffer pointer) {
glVertexAttribPointer(index, size, type, normalized ? GL_TRUE : GL_FALSE, stride, pointer);
}
}
Note how most methods are exposed via just trivial @Bridge-annotated native definitions, but for some it's convenient to define a wrapper method in Java that, for example, converts a String to a char* or unpacks a result from an IntPtr.
I didn't post my whole library file, since it is still very incomplete and it'll just make it harder to find the examples of how different parameter types are handled.
To save yourself some work, you can copy the GL constant definitions from libGDX's GL20.java. And the OpenGL ES docs are a great reference for the calling signature of the methods (the data types GLenum and GLbitfield correspond to a Java int).
You can then call the gl-methods statically by prepending GLES20. (just like on Android), e.g.:
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
Turns out Bro is so smart that you don't even need to include the <framework>OpenGLES</framework> tag in robovm.xml any more, like you would with libGDX.
And - What do you know? - my app starts about 3 times as quickly as it did when it was still using libGDX. And it fixed another issue I had (see LibGDX displays black screen while app is paused but still visible (e.g. during in-app purchase password dialog) on iOS). "Yay!" for getting rid of unnecessary baggage.
The one thing that makes life a little annoying is that if you mess up the call signature of a method or the memory allocation, your app will simply crash with a very unhelpful "Terminated due to signal 11" message in the IDE-console that contains no information about where the app died.
I would like to get the int value of my extern const by its name.
For example in my .h file:
extern const int MY_INT_CONST;
In my .m file:
const int MY_INT_CONST = 0;
What I want:
- (void) method {
int i = [self getMyConstantFromString:@"MY_INT_CONST"];
}
How can I do that?
I searched the runtime API and did not find anything.
There's no simple way to do this. Neither the language nor the runtime provide a facility for this.
It can be done using the API of the dynamic loader to look up a symbol's address by its name.
// Near top of file
#include <dlfcn.h>
// elsewhere
int* pointer = dlsym(RTLD_SELF, "MY_INT_CONST");
if (pointer)
{
int value = *pointer;
// use value...
}
Note, that's a C-style string that's passed to dlsym(). If you have an NSString, you can use -UTF8String to get a C-style string.
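For example, a small wrapper (hypothetical helper name) could look like this:
#include <dlfcn.h>
// Looks up an int constant in the current image by name; returns NO if the symbol isn't found.
static BOOL getIntConstantNamed(NSString *name, int *outValue)
{
    int *pointer = dlsym(RTLD_SELF, name.UTF8String);
    if (pointer == NULL) {
        return NO;
    }
    *outValue = *pointer;
    return YES;
}
// Usage: int value; if (getIntConstantNamed(@"MY_INT_CONST", &value)) { /* use value */ }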
There is no need for [getMyConstantFromString:@"MY_INT_CONST"];
just use the constant directly, as follows:
- (void) method {
int i = MY_INT_CONST;
}