Binding error with OpenGL buffers and Direct State Access (DSA)

I get this error from OpenGL when I call glNamedBufferStorage():
GL_INVALID_OPERATION error generated. Buffer must be bound.
Normally I shouldn't have to use glBindBuffer() with direct state access, right?
Here is my GL call sequence:
glCreateBuffers(1, &m_identifier);
...
glNamedBufferStorage(m_identifier, static_cast< GLsizeiptr >(bytes + offset), data, GL_DYNAMIC_STORAGE_BIT);
...
glNamedBufferSubData(m_identifier, static_cast< GLintptr >(offset), static_cast< GLsizeiptr >(bytes), data);
I only use DSA functions, so I don't understand why I'm getting this error.

My bad, I forgot this little one: glGetBufferParameteriv(), which in DSA is replaced by glGetNamedBufferParameteriv(). It was wrapped in a method of my class, so I didn't spot it.


ClearUnorderedAccessViewFloat on a Buffer

On an RGBA texture this works as expected: each component of the FLOAT[4] argument gets cast to the corresponding component of the texture's DXGI_FORMAT.
However, with a buffer this doesn't work, and some rubbish derived from the first component of the FLOAT[4] argument is written to the buffer.
Admittedly, this makes sense, since a buffer UAV has no DXGI_FORMAT to specify which cast should happen.
The docs say:
If you want to clear the UAV to a specific bit pattern, consider using ID3D12GraphicsCommandList::ClearUnorderedAccessViewUint.
so you can just use it as follows:
float fill_value = ...;
auto Values = std::array<UINT, 4>{ *((UINT*)&fill_value), 0, 0, 0 };
commandList->ClearUnorderedAccessViewUint(
    ViewGPUHandleInCurrentHeap,
    ViewCPUHandle,
    pResource,
    Values.data(),
    0,
    nullptr);
I believe the debug layer should raise an error when using ClearUnorderedAccessViewFloat on a buffer.
Edit: it actually does, I just missed it:
D3D12 ERROR: ID3D12CommandList::ClearUnorderedAccessViewUint: ClearUnorderedAccessView* methods are not compatible with Structured Buffers. StructuredByteStride is set to 4 for resource 0x0000023899A09A40'. [ RESOURCE_MANIPULATION ERROR #1156: CLEARUNORDEREDACCESSVIEW_INCOMPATIBLE_WITH_STRUCTURED_BUFFERS]

Choosing between buffers in a Metal shader

I'm struggling with porting my OpenGL application to Metal. In my old app, I bound two buffers, one with vertices and their colours and one with vertices and their texture coordinates, and switched between the two based on some app logic. Now, in Metal, I've started from the Hello Triangle example, where I tried running this vertex shader:
vertex RasterizerData
vertexShader(uint vertexID [[vertex_id]],
             constant AAPLVertex1 *vertices1 [[buffer(AAPLVertexInputIndexVertices1)]],
             constant AAPLVertex2 *vertices2 [[buffer(AAPLVertexInputIndexVertices2)]],
             constant bool &useFirstBuffer [[buffer(AAPLVertexInputIndexUseFirstBuffer)]])
{
    float2 pixelSpacePosition;
    if (useFirstBuffer) {
        pixelSpacePosition = vertices1[vertexID].position.xy;
    } else {
        pixelSpacePosition = vertices2[vertexID].position.xy;
    }
    ...
and this Objective-C code:
bool useFirstBuffer = true;
[renderEncoder setVertexBytes:&useFirstBuffer
                       length:sizeof(bool)
                      atIndex:AAPLVertexInputIndexUseFirstBuffer];
[renderEncoder setVertexBytes:triangleVertices
                       length:sizeof(triangleVertices)
                      atIndex:AAPLVertexInputIndexVertices1];
(where AAPLVertexInputIndexVertices1 = 0, AAPLVertexInputIndexVertices2 = 1 and AAPLVertexInputIndexUseFirstBuffer = 3), which should mean that vertices2 is never accessed. Still, I get the error: failed assertion 'Vertex Function(vertexShader): missing buffer binding at index 1 for vertices2[0].'
Everything works if I replace if (useFirstBuffer) with if (true) in the Metal code. What is wrong?
When the conditional is hard-coded, the compiler is smart enough to eliminate the branch that references the absent buffer (via dead-code elimination), but when the conditional must be evaluated at runtime, the compiler doesn't know the branch is never taken.
Since every buffer parameter declared by the shader must be bound, leaving the unreferenced buffer unbound trips the validation layer. To work around this, you can bind a few "dummy" bytes at the Vertices2 slot (using -setVertexBytes:length:atIndex:) whenever you don't follow that path. The buffers don't need to have the same length, since the dummy buffer will never actually be accessed.
In the atIndex arguments, the calling code passes AAPLVertexInputIndexUseFirstBuffer and AAPLVertexInputIndexVertices1, while the Metal code declares AAPLVertexInputIndexVertices1 and AAPLVertexInputIndexVertices2 in its buffer() attributes. It looks like you need to use AAPLVertexInputIndexVertices1 instead of AAPLVertexInputIndexUseFirstBuffer in your calling code.

JNA pointer to pointer mapping

I am working on a Java binding for the excellent libvips.
Using this function, all is fine:
VipsImage *in;
in = vips_image_new_from_file( "test.jpg", NULL );
vips_image_write_to_file( in, "out.jpg", NULL );
So I mapped it in Java as:
Pointer vips_image_new_from_file(String filename, String params);
But I have a problem when a function takes a pointer-to-pointer parameter, like this:
VipsImage *in;
VipsImage *out;
vips_invert( in, &out, NULL );
vips_image_write_to_file( out, "out.jpg", NULL );
I have tried:
int vips_invert(Pointer in, PointerByReference out, String params);
Pointer in = vips_image_new_from_file("file.png",null);
PointerByReference ptr1 = new PointerByReference();
vips_invert(in, ptr1, null);
vips_image_write_to_file( ptr1.getValue(), "fileout.png", null);
But it doesn't work. The ptr1.getValue() does not contain the expected result.
How can I do it?
Thanks
I'm the libvips maintainer, and a Java binding would be great!
But I think you might be taking the wrong approach. You seem to be attempting a straight wrap of the C API, but that's going to be tricky to do well, since it makes use of a lot of C-isms that don't map well to Java. For example, in C you can write:
VipsImage *image;

if (!(image = vips_image_new_from_file("somefile.jpg",
    "shrink", 2,
    "autorotate", TRUE,
    NULL)))
    error ...;
i.e. the final NULL marks the end of a varargs name/value list. Here I'm asking the JPEG loader to do a x2 shrink during load, and to apply any Orientation tag it finds in the EXIF data.
libvips has a lower-level API based on GObject which is much easier to bind to. There's some discussion and example code in this issue, where someone is making a C# binding using P/Invoke:
https://github.com/jcupitt/libvips/issues/558
The code for the C++ and PHP bindings might be a useful reference:
https://github.com/jcupitt/libvips/tree/master/cplusplus
https://github.com/jcupitt/php-vips-ext
That's a PHP binding for the entire library in 1800 lines of C.
I'd be very happy to help if I can. Open an issue on the libvips tracker:
https://github.com/jcupitt/libvips/issues

How to get the size of a user defined struct? (sizeof)

I've got a structure with C representation:
#[repr(C)]
struct Scard_IO_Request {
    proto: u32,
    pciLength: u32,
}
When I want to ask for its size (like sizeof() in C) using:
mem::sizeof<Scard_IO_Request>();
I get a compilation error:
"error: `sizeof` is a reserved keyword"
Why can't I use this sizeof function like in C? Is there an alternative?
For two reasons:
There is no such function as "sizeof", so the compiler is going to have a rather difficult time calling it.
That's not how you invoke generic functions.
If you check the documentation for mem::size_of (which you can find even if you search for "sizeof"), you will see that it includes a runnable example which shows you how to call it. For posterity, the example in question is:
fn main() {
    use std::mem;

    assert_eq!(4, mem::size_of::<i32>());
}
In your specific case, you'd get the size of that structure using
mem::size_of::<Scard_IO_Request>()

Confusion passing data to pthread_create()... how does it work?

Please take a look at the pthread_create() prototype:
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine) (void *), void *arg);
The last argument is a void pointer. But looking at some code on the internet, I see developers doing:
long t;
pthread_create( &thread, NULL, function, (void*)t);
and it works! I mean, they are not doing:
pthread_create( &thread, NULL, function, (void*)&t);
in other words, the address of "t" is not being taken.
However, if I change the datatype to "int" instead of "long", it does not work.
I believed the address should always be passed, so do you have any idea why long works without taking its address?
Thank you guys!
The parameter being passed to the thread function is a void*. In the general case, that pointer can point to a block of data for the function to use.
However, remember that the pointer itself is a value. It's common to simply use that value as the data for the thread function when the amount of data you're passing is small enough to fit in a void*, namely when all you need to pass is a single integer value. That's what's happening in this case:
long t;
t = /* some value to pass to the thread */;
pthread_create( &thread, NULL, function, (void*)t);
One advantage of this is that you don't have lifetime issues to deal with for the thread data. As for why int doesn't work the same way: on most 64-bit platforms int (32 bits) and void* (64 bits) have different sizes, so the cast triggers a compiler diagnostic, whereas long is usually pointer-sized there.
