JNA 2 Dimensional Arrays
My C function is
void func(void** bufs, int numBufs);
The C code is expecting an array of pointers to arrays of bytes. func() knows the length of each of the byte arrays and fills them with data.
What is the JNA signature for this?
I have wrestled with this seemingly simple problem for two days and not cracked it.
On the Java side I have DirectBuffer bufs[] and the intention is for the C function to populate bufs[] with data.
I had expected that I could declare the JNA signature as
public static native boolean func(Pointer[] bufs, int numBufs);
and then construct a Java array of Pointers, each Pointer being new Pointer(db.address()).
But whilst I can construct the Java array of Pointers, I get the error:
java.lang.IllegalArgumentException: class [Lcom.sun.jna.Pointer; is not a supported argument type (in method func in class SomeLib)
I have experimented at length and am getting nowhere. I have looked at all the JNA examples on Stack Overflow, but none quite fit.
I am using JNA via Maven:
<dependency>
    <groupId>net.java.dev.jna</groupId>
    <artifactId>jna</artifactId>
    <version>5.3.1</version>
</dependency>
Any help gratefully appreciated.
You're close. Unfortunately the mapping of non-primitive arrays to C only works one way... you can map retrieved memory to the Java array, but not send a Java array to C (except for primitives).
The reason for this restriction is that in C, arrays are contiguous blocks of memory. To access the second element of an array, you just offset by a number of bytes equal to the size of the first element. But the argument you pass to a C function is simply a pointer to the beginning of the array.
So your mapping of func() should use a Pointer for the array argument.
You didn't describe how you constructed your Pointer[] array, but allocating each element with a new Pointer() call will produce pointers scattered across native memory rather than a contiguous block.
There are basically two approaches to ensuring you have contiguous memory, depending on the level of abstraction you want.
One low-level approach is to create a Memory object, allocating enough room for your Pointer array (probably new Memory(Native.POINTER_SIZE * numBufs)) and then using setPointer() with the appropriate multiple of Native.POINTER_SIZE offset to map your array to the Memory object. Then pass the Memory object to the C function.
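As a rough sketch of that low-level approach (assuming bufs[] here already holds the per-buffer Pointers you built from each DirectBuffer's address, and that func is mapped with a single Pointer argument as noted above):
// Requires com.sun.jna.Memory, com.sun.jna.Native and com.sun.jna.Pointer
Memory pointerArray = new Memory((long) Native.POINTER_SIZE * numBufs);
for (int i = 0; i < numBufs; i++) {
    // Write the address of the i-th buffer into the i-th pointer-sized slot
    pointerArray.setPointer((long) Native.POINTER_SIZE * i, bufs[i]);
}
// The native side receives this block as void** (a pointer to the first pointer)
func(pointerArray, numBufs);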
A higher level approach is to wrap the Pointer in a JNA Structure, using the Structure.toArray() method to do that contiguous array allocation for you. So you could have this:
@Structure.FieldOrder("bar")  // JNA 5.x: declare the field order
public class Foo extends Structure {
    public Pointer bar;
}
And then create the array:
Foo[] array = (Foo[]) new Foo().toArray(numBufs);
At this point you have a (contiguous) block of native memory mapped to JNA's Pointer type, one pointer-sized slot per structure. Now you just need to point each element's Pointer field at the corresponding buffer:
for (int i = 0; i < numBufs; i++) {
    // Assign the Pointer to the corresponding DirectBuffer
    array[i].bar = bufs[i];
    // Sync the Java-side field into the structure's native memory,
    // since we will pass a raw Pointer rather than the Structure itself
    array[i].write();
}
Then you should be able to pass the array to C by passing a pointer to its first element:
func(array[0].getPointer(), numBufs);
Note that array[0].getPointer() is the address of the contiguous block of pointers (what the C side sees as void**), whereas array[0].bar would only be the address of the first buffer.
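For completeness, with either approach the JNA mapping itself would declare the first argument as a single Pointer rather than a Pointer[] (this exact signature is an assumption based on the advice above):
public static native boolean func(Pointer bufs, int numBufs);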
Simple question: is there a way to find out how much memory is taken up by a particular struct?
Ideally I would like it printed to the console.
Edit: Krumelur came up with a simple solution using the sizeof function.
Unfortunately, it does not seem to work well with arrays. The following code
println("Size of int \(123) is: \(sizeofValue(123))")
println("Size of array \([0]) is: \(sizeofValue([0]))")
println("Size of array \([0, 1, 8, 20]) is: \(sizeofValue([0, 1, 8, 20]))")
produces this output:
Size of int 123 is: 8
Size of array [0] is: 8
Size of array [0, 1, 8, 20] is: 8
So arrays of different sizes give the same result, which is surely incorrect (at least for my purpose).
The sizeof(T) operator is available in Swift. It returns the size taken up by the specified type or variable, just like in C.
Unlike C, however, there is no concept of a stack-allocated (static) array. An array is a pointer to an object, meaning that its size will always be the size of a pointer (the same as for heap-allocated arrays in C). To get the size of an array's contents, you have to do something like
array.count * sizeof(Telement)
but even that is only true if Telement is not an object that allocates heap memory.
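For example, a sketch of that calculation for the array from the question, using the same Swift 1.x era syntax as above (for a value type such as Int this counts only the element storage, not any heap bookkeeping):
let numbers = [0, 1, 8, 20]
let elementBytes = numbers.count * sizeof(Int)   // 4 * 8 = 32 bytes of element storage
println("Element storage of \(numbers) is: \(elementBytes)")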
This appears to be supported within the Swift standard library now.
Docs
MemoryLayout.size(ofValue: self)
As Declan McKenna pointed out, MemoryLayout.size is now part of the Swift standard library.
You use this in one of two ways: either you get the size of a type via <> bracket syntax, or of a value by calling it as a function:
let someInt: Int = 42
let a = MemoryLayout.size(ofValue: someInt)
let b = MemoryLayout<Int>.size
/* a and b are equal */
Suppose you have an array arr of type T, you can get the allocated size as follows:
let size = arr.capacity * MemoryLayout<T>.size
Note that you should use arr.capacity, not arr.count. Capacity refers to how much memory has been reserved for the array, even if all the values haven't been written to.
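As an illustration of count vs. capacity (the exact capacity value is up to the standard library, so the numbers here are only indicative):
var arr = [Int]()
arr.reserveCapacity(1000)
arr.append(1)
print(arr.count)                               // 1
print(arr.capacity)                            // at least 1000
print(arr.capacity * MemoryLayout<Int>.size)   // bytes currently reserved for elements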
Another thing to note is that memory allocation can be a bit tricky. If you allocate a large block of memory and never write to it, the operating system might not report your application as actually using that memory. The operating system might let you malloc an enormous block, far larger than your physical memory, but unless you actually write to it, it never really gets allocated.
The method described in this answer is for getting the hypothetical maximum amount allocated by the array.
This question is specifically about hash tables, but it might also cover other data structures such as linked lists or trees.
For instance, if you have a struct as follows:
struct Data
{
    int value1;
    int value2;
    int value3;
};
And each integer is 4-byte aligned and stored in memory sequentially, are the key and value of a hash table stored sequentially as well? If you consider the following:
std::map<int, std::string> list;
list[0] = "first";
Is that first element represented like this?
struct ListNode
{
    int key;
    std::string value;
};
And if the key and value are 4-byte aligned and stored sequentially, does it matter where the next pair is stored?
What about a node in a linked list?
Just trying to visualize this conceptually, and also see if the same guidelines for memory storage also apply for open-addressing hashing (the load is under 1) vs. chained hashing (load doesn't matter).
It's highly implementation-specific. And by that I am not only referring to the compiler, CPU architecture and ABI, but also the implementation of the hash table.
Some hash tables use a struct that contains a key and a value next to each other, much like you have guessed. Others have one array of keys and one array of values, so that values[i] is the associated value for the key at keys[i]. This is independent of the "open addressing vs. separate chaining" question.
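For illustration only (this is not any particular library's layout), the two schemes might look roughly like this in C++:
#include <string>
#include <vector>

// Scheme 1: each table slot holds the key and value next to each other
struct Entry {
    int key;
    std::string value;
    bool occupied = false;   // used by open addressing to mark free slots
};
std::vector<Entry> table1;

// Scheme 2: parallel arrays; values2[i] is the value for keys2[i]
std::vector<int> keys2;
std::vector<std::string> values2;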
A hash table is a data structure in itself. Here is a visualization:
http://en.wikipedia.org/wiki/Hash_table
http://en.wikipedia.org/wiki/Hash_function
Using a hash function (language-specific), the keys are turned into array indices, and the values are placed at those positions (in an array).
Linked lists I'm not as sure about, but I would bet the nodes are stored sequentially if they are created sequentially. Obviously, if what a node holds grows in size, the node would need to be moved and the pointer to it updated to the new location.
Usually, when the value is not that big (an int, say), it's best to store it together with the key (which by default shouldn't be too big either); otherwise only a pointer to the value is kept.
The simplest representation of a hash table is an array (the table).
A hash function generates a number between 0 and the size of the array. That number is the index for the item.
There is more to it than this, but that's the general concept, and it explains why lookups are so fast.
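A toy sketch of that idea (no collision handling, purely to show the key-to-index step):
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

int main() {
    const std::size_t tableSize = 16;
    std::vector<std::string> table(tableSize);

    int key = 42;
    std::size_t index = std::hash<int>{}(key) % tableSize;  // hash the key into a slot index
    table[index] = "value for 42";                          // store the value in that slot

    // A lookup repeats the same computation, which is why it is O(1) on average
    std::string found = table[std::hash<int>{}(key) % tableSize];
}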
If we type
MyObject *obj = [[MyObject alloc] init];
"obj" is a pointer to the memory address.
...When we create an int, we type:
int x = 10;
Why don't we type the following?
int *x = 10;
The question is: why do we need a pointer for objects, but not for int, float, and so on?
Efficiency.
Moving an int from one place to another is easy. Moving an object needs a little bit more work from the CPU. Moving the address of an object is as easy as moving an int.
In plain C, it is common to handle pointers to structs for the same reason. C makes it easy with the -> operator.
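A small plain-C sketch of that pattern (the names are made up for illustration):
// Passing a struct by pointer avoids copying the whole struct;
// the -> operator accesses a member through the pointer.
struct Point { int x; int y; };

void moveRight(struct Point *p) {
    p->x += 1;   /* shorthand for (*p).x += 1 */
}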
There are languages where you can create objects “without a pointer”, on the stack. C++, for example. One great thing about having objects on the stack is that they get automatically deallocated when the scope ends, which helps with memory management. It’s also faster.
One bad thing about having objects on the stack is that they get automatically deallocated when the scope ends and the stack disappears. And since objects are usually longer-lived than local variables, you would have to copy the object’s memory somewhere. It’s entirely possible, but it complicates matters.
And it’s not just the memory lifecycle that’s complicated with stack-based objects. Consider assignment, foo = bar, for two object types. If objects are always pointers (Class*), you have just assigned a pointer and now have two pointers to the same object; easy. If foo is stack-based (Class), the assignment semantics start to get blurry: you could well end up with a copy of the original object.
Introducing a rule that all objects are allocated on the heap (“with pointers”) is a great simplification. And as it happens, the speed difference doesn’t matter that much, and the compiler can now also automatically insert code to deallocate heap-based objects after they go out of scope, so it’s generally a win-win situation.
You can have pointers to int and float as well.
Objects are created on the heap. To access one, you need its address; that's why object variables are of pointer type.
Because that is the nature of an object.
Objective-C is directly derived from C. That is why objects are referred to via pointers: a pointer is just a variable, the size of a memory address, that holds such an address.
In the end, an object in memory is not much different from a struct in memory.
However, when working in Objective-C it is advisable to think of these variables as references to objects rather than pointers to an object's memory area. Think of them the way Java does, and do not spend much thought on how the system manages the references. There are far more important things to think about, such as alloc/retain vs. release/autorelease, or following the much easier ARC rules, respectively.
BTW:
MyObject obj;
That would declare an object, not a pointer. It is not possible in Objective-C (AFAIK) and certainly not reasonable. But if it were possible and reasonable, that is what the syntax would look like.
int *x;
That does create a pointer to an int. To use it, you would have to allocate memory and assign its address to x. Rarely needed in Objective-C either, but quite useful in standard C.
It's the difference between objects being on the stack or the heap.
With int x = 10, x is now on the stack. int *x = 10 is simply wrong (well, most likely not what you want), since it declares a pointer to address 10, whatever that may be. What you would want is int *x = malloc(sizeof(int)); as CodaFi suggested; that allocates a block on the heap the size of an int.
With MyObject *obj = [[MyObject alloc] init];, alloc is allocating your object on the heap for you behind the scenes, but it's basically the same principle.
I have to store a TList of something that can easily be implemented as a record in Delphi (five simple fields). However, it's not clear to me what happens when I do TList<TMyRecordType>.Add(R).
Since R is a local variable in the procedure in which I create my TList, I assume that the memory for it will be released when the function returns. Does this leave an invalid record pointer in the list? Or does the list know to copy-on-assign? If the former, I assume I would have to manually manage the memory for R with New() and Dispose(); is that correct?
Alternatively, I can "promote" my record type to a class type by simply declaring the fields public (without even bothering with making them formal properties). Is that considered OK, or ought I to take the time to build out the class with private fields and public properties?
Simplified: records are blobs of data and are passed around by value - i.e. by copying them - by default. TList<T> stores values in an array of type T. So, TList<TMyRecordType>.Add(R) will copy the value R into the array at position Count, and increment the Count by one. No need to worry about allocation or deallocation of memory.
More complex issues that you usually don't need to worry about: if your record contains fields of a string type, an interface type, a dynamic array, or a record which itself contains fields of one of these types, then it's not just a simple copy of the data; instead, CopyRecord from System.pas is used, which ensures that reference counts are updated correctly. But usually you don't need to worry about this detail unless you are using Move to shift the bits around yourself, or doing similar low-level operations.
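A minimal Delphi sketch of that behaviour (the record and field names are made up for illustration):
uses
  Generics.Collections;

type
  TMyRecordType = record
    A, B: Integer;
    Name: string;
  end;

procedure AddOne(List: TList<TMyRecordType>);
var
  R: TMyRecordType;
begin
  R.A := 1;
  R.B := 2;
  R.Name := 'example';
  List.Add(R);  // the value of R is copied into the list's internal array
end;            // R going out of scope here does not affect the copy held by the list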
I'd like to understand what happens when the size of a dynamic array is increased.
My understanding so far:
Existing array elements will remain unchanged.
New array elements are initialised to 0
All array elements are contiguous in memory.
When the array size is increased, will the extra memory be tacked onto the existing memory block, or will the existing elements be copied to an entirely new memory block?
Does changing the size of a dynamic array have consequences for pointers referencing existing array elements?
Thanks,
[edit] Incorrect assumption struck out. (New array elements are initialised to 0)
Existing array elements will remain unchanged: yes
New array elements are initialized to 0: yes (see update). (Originally I wrote: no, unless it is an array of compiler-managed types such as string, another array, or a variant.)
All array elements are contiguous in memory: yes
When the array size is increased, the array will be copied. From the doc:
...memory for a dynamic array is reallocated when you assign a value to the array or pass it to the SetLength procedure.
So yes, increasing the size of a dynamic array does have consequences for pointers referencing existing array elements.
If you want to keep references to existing elements, use their index in the array (0-based).
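For example, a sketch of why a raw pointer can go stale after SetLength while an index stays valid:
procedure Demo;
var
  A: array of Integer;
  P: PInteger;
begin
  SetLength(A, 4);
  A[0] := 42;
  P := @A[0];          // points into the array's current memory block
  SetLength(A, 1024);  // may reallocate and move the elements elsewhere
  // P may now be a dangling pointer; the index is still valid:
  WriteLn(A[0]);       // still 42
  P := @A[0];          // re-derive the address from the index after resizing
end;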
Update
Comments by Rob and David prompted me to check the initialization of dynamic arrays in Delphi 5 (as I have that readily available anyway). First I used some code to create various types of dynamic arrays and inspected them in the debugger. They were all properly initialized, but that could still have been a result of prior initialization of the memory location where they were allocated. So I checked the RTL. It turns out D5 already has the FillChar statement in the DynArraySetLength method that Rob pointed to:
// Set the new memory to all zero bits
FillChar((PChar(p) + elSize * oldLength)^, elSize * (newLength - oldLength), 0);
In practice Embarcadero will always zero initialise new elements simply because to do otherwise would break so much code.
In fact it's a shame that they don't officially guarantee the zero allocation because it is so useful. The point is that often at the call site when writing SetLength you don't know whether you are growing or shrinking the array. But the implementation of SetLength does know – clearly it has to. So it really makes sense to have a well-defined action on any new elements.
What's more, if they want people to be able to switch easily between managed and native worlds then zero allocation is desirable since that's what fits with the managed code.