presentRenderbuffer triggers EXC_BAD_ACCESS on iOS

I've found that when I use texture units above GL_TEXTURE18 on iOS (tested on iOS 10), presentRenderbuffer triggers an EXC_BAD_ACCESS. Is there any reason for that? Can I not use texture units up to GL_TEXTURE31?

The GL_TEXTUREx constants are just defined enumeration values. The GPU is what determines the actual number of supported texture units, and it is your responsibility to check that limit.
You can query it with glGetIntegerv, something like:
GLint max_combined_texture_image_units;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &max_combined_texture_image_units);
Do note that these defines/enumerations are there just to help you; their existence does not mean they are actually valid or supported. The OpenGL API is mostly designed around passing integer values (typedef uint32_t GLenum;), so as far as the API goes you may replace GL_TEXTURE0 with 1200 or any other value, but you do need to ensure that the value is actually valid.
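For example, here is a minimal defensive sketch (not from the question; the unit index and texture name are illustrative placeholders) that only selects a texture unit after validating it against the queried limit:
GLint max_units = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &max_units);

int unit = 18;                 // the unit you intend to use
GLuint textureID = 0;          // illustrative; your real texture name here
if (unit < max_units) {
    glActiveTexture(GL_TEXTURE0 + unit); // GL_TEXTUREn is defined as GL_TEXTURE0 + n
    glBindTexture(GL_TEXTURE_2D, textureID);
} else {
    // This GPU exposes fewer units; fall back or report an error instead of crashing.
}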

Related

vkGetMemoryFdKHR returns the same fd?

On Win32:
I'm sure that the same memory object returns the same handle no matter how many times getMemoryWin32HandleKHR is executed, and that different memory objects yield different handles.
This is consistent with Vulkan's official explanation of how Vulkan shares memory.
It doesn't seem to work that way on Linux.
In my program, getMemoryWin32HandleKHR works normally: it returns a different handle for each different memory object, and the same memory object returns the same handle.
But with getMemoryFdKHR, different memory objects return the same fd, or executing getMemoryFdKHR twice on the same memory returns two different fds.
This causes my device memory allocation to fail during the subsequent import.
I don't understand why this happens.
Thanks!
#ifdef WIN32
texGl.handle = device.getMemoryWin32HandleKHR({ info.memory, vk::ExternalMemoryHandleTypeFlagBits::eOpaqueWin32 });
#else
VkDeviceMemory memory = VkDeviceMemory(info.memory);
int file_descriptor = -1;
VkMemoryGetFdInfoKHR get_fd_info{
    VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR, nullptr, memory,
    VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT
};
VkResult result = vkGetMemoryFdKHR(device, &get_fd_info, &file_descriptor);
assert(result == VK_SUCCESS);
texGl.handle = file_descriptor;
// texGl.handle = device.getMemoryFdKHR({ info.memory, vk::ExternalMemoryHandleTypeFlagBits::eOpaqueFd });
#endif
Win32 is normal. Linux is bad: the import below fails, and vkAllocateMemory returns VK_ERROR_OUT_OF_DEVICE_MEMORY.
#ifdef _WIN32
VkImportMemoryWin32HandleInfoKHR import_allocate_info{
    VK_STRUCTURE_TYPE_IMPORT_MEMORY_WIN32_HANDLE_INFO_KHR, nullptr,
    VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT, sharedHandle, nullptr };
#elif __linux__
VkImportMemoryFdInfoKHR import_allocate_info{
    VK_STRUCTURE_TYPE_IMPORT_MEMORY_FD_INFO_KHR, nullptr,
    VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
    sharedHandle };
#endif
VkMemoryAllocateInfo allocate_info{
    VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO, // sType
    &import_allocate_info,                  // pNext
    aligned_data_size_,                     // allocationSize
    memory_index };
VkDeviceMemory device_memory = VK_NULL_HANDLE;
VkResult result = vkAllocateMemory(m_device, &allocate_info, nullptr, &device_memory);
NVVK_CHECK(result);
I think it has something to do with the fd.
In one of my tests, if I get the fd twice and use the second one, vkAllocateMemory happens to work, but I think that is wrong.
The fd obtained this way differs from the previous one, because every call returns a different fd.
That makes the fds impossible to tell apart, and when a later fd is passed to vkAllocateMemory it still eventually fails, so this workaround cannot be used.
I still think it should behave the same way as Win32: vkAllocateMemory should succeed with the fd obtained the first time.
Thanks very much!
The Vulkan specifications for the Win32 handle and POSIX file descriptor interfaces explicitly state different things about their importing behavior.
For HANDLEs:
Importing memory object payloads from Windows handles does not transfer ownership of the handle to the Vulkan implementation. For handle types defined as NT handles, the application must release handle ownership using the CloseHandle system call when the handle is no longer needed.
For FDs:
Importing memory from a file descriptor transfers ownership of the file descriptor from the application to the Vulkan implementation. The application must not perform any operations on the file descriptor after a successful import.
So HANDLE importation leaves the HANDLE in a valid state, still referencing the memory object. File descriptor importation claims ownership of the FD, leaving it in a state where you cannot use it.
What this means is that the FD may have been released by the internal implementation. If that is the case, later calls to create a new FD may use the same FD index as a previous call.
The safest way to use both of these APIs is to have the Win32 version emulate the functionality of the FD version. Don't try to do any kind of comparison of handles; if you need comparison logic, you'll have to implement it yourself. When you import a HANDLE, close it immediately afterwards.
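To illustrate the ownership rule, here is a minimal sketch (not the asker's code; the helper name and parameters are illustrative) of an import that treats the fd as consumed on success, per the spec text quoted above:
#include <vulkan/vulkan.h>
#include <unistd.h> // close()

// Sketch: import an opaque fd. On VK_SUCCESS the implementation owns the fd;
// do not close(), compare, or reuse it afterwards. On failure, ownership was
// not transferred, so the application still has to close it.
VkResult importOpaqueFd(VkDevice device, int fd, VkDeviceSize size,
                        uint32_t memoryTypeIndex, VkDeviceMemory* outMemory)
{
    VkImportMemoryFdInfoKHR importInfo{
        VK_STRUCTURE_TYPE_IMPORT_MEMORY_FD_INFO_KHR, nullptr,
        VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT, fd };
    VkMemoryAllocateInfo allocInfo{
        VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO, &importInfo,
        size, memoryTypeIndex };
    VkResult result = vkAllocateMemory(device, &allocInfo, nullptr, outMemory);
    if (result != VK_SUCCESS) {
        close(fd); // import failed, the fd is still ours to clean up
    }
    return result;
}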

What is ID3D12GraphicsCommandList::DiscardResource?

What exactly should I expect to happen when using DiscardResource?
What's the difference between discarding and destroying/deleting a resource?
When is a good time/use-case to discard a resource?
Unfortunately Microsoft doesn't seem to say much about it other than it "discards a resource".
TL;DR: It is a rarely used function that provides a driver hint related to handling clear-compression structures. You are unlikely to use it except based on specific performance advice.
DiscardResource is the DirectX 12 version of the Direct3D 11.1 DiscardView method. See Microsoft Docs.
The primary use of these methods is to optimize performance on tile-based deferred rendering graphics parts by discarding the render target after present. This is a hint to the driver that the contents of the render target are no longer relevant to the operation of the program, so it can avoid some internal clearing operations on the next use.
For DirectX 11, the DirectX 11 App template uses DiscardView because it makes use of DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL:
void DX::DeviceResources::Present()
{
    // The first argument instructs DXGI to block until VSync, putting the application
    // to sleep until the next VSync. This ensures we don't waste any cycles rendering
    // frames that will never be displayed to the screen.
    DXGI_PRESENT_PARAMETERS parameters = { 0 };
    HRESULT hr = m_swapChain->Present1(1, 0, &parameters);

    // Discard the contents of the render target.
    // This is a valid operation only when the existing contents will be entirely
    // overwritten. If dirty or scroll rects are used, this call should be removed.
    m_d3dContext->DiscardView1(m_d3dRenderTargetView.Get(), nullptr, 0);

    // Discard the contents of the depth stencil.
    m_d3dContext->DiscardView1(m_d3dDepthStencilView.Get(), nullptr, 0);

    // If the device was removed either by a disconnection or a driver upgrade, we
    // must recreate all device resources.
    if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
    {
        HandleDeviceLost();
    }
    else
    {
        DX::ThrowIfFailed(hr);
    }
}
The DirectX 12 App template doesn't need those explicit calls because it uses DXGI_SWAP_EFFECT_FLIP_DISCARD.
If you are wondering why the DirectX 11 app doesn't just use DXGI_SWAP_EFFECT_FLIP_DISCARD, it probably should. The DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL swap effect was the only one supported by Windows 8.x for Windows Store apps, which is when DiscardView was introduced. For Windows 10 / DirectX 12 / UWP, it's probably better to always use DXGI_SWAP_EFFECT_FLIP_DISCARD unless you specifically don't want the backbuffer discarded.
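For illustration, a hypothetical swap-chain description fragment opting into the DISCARD flip model (the field values are just a typical-setup assumption, not from the templates):
// With FLIP_DISCARD, DXGI discards the backbuffer contents for you after
// Present, so no explicit DiscardView/DiscardResource call is needed.
DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {};
swapChainDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
swapChainDesc.SampleDesc.Count = 1;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.BufferCount = 2;
swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD; // instead of FLIP_SEQUENTIAL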
It is also useful for multi-GPU SLI / Crossfire configurations, since the clearing operation can require synchronization between the GPUs. See this GDC 2015 talk.
There are also other scenario-specific usages. For example, if doing deferred rendering for the G-buffer where you know every single pixel will be overwritten, you can use DiscardResource instead of doing ClearRenderTargetView / ClearDepthStencilView.
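As a sketch of that G-buffer scenario (the command list and resource names are assumptions, not from the answer), the call itself looks like this; passing nullptr for the region discards the entire resource:
// Hint that the previous contents of a G-buffer target are irrelevant because
// every pixel will be rewritten this frame. Render targets must be in the
// D3D12_RESOURCE_STATE_RENDER_TARGET state when discarded.
commandList->DiscardResource(gBufferAlbedo.Get(), nullptr); // nullptr = whole resource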

Passing Data through the Stack

I wanted to see if you could pass a struct through the stack, and I managed to read a local variable of one void function from another void function.
Do you think there is any use for that, and is there any chance you can get corrupted data between the two function calls?
Here's the code in C (I know it's dirty):
#include <stdio.h>

typedef struct pouet
{
    int a, b, c;
    char d;
    char *e;
} Pouet;

void test1()
{
    Pouet p1;
    p1.a = 1;
    p1.b = 2;
    p1.c = 3;
    p1.d = 'a';
    p1.e = "1234567890";
    printf("Declared struct : %d %d %d %c \'%s\'\n", p1.a, p1.b, p1.c, p1.d, p1.e);
}

void test2()
{
    Pouet p2;
    printf("Element of struct undeclared : %d %d %d %c \'%s\'\n", p2.a, p2.b, p2.c, p2.d, p2.e);
    p2.a++;
}

int main()
{
    test1();
    test2();
    test2();
    return 0;
}
Output is:
Declared struct : 1 2 3 a '1234567890'
Element of struct undeclared : 1 2 3 a '1234567890'
Element of struct undeclared : 2 2 3 a '1234567890'
Contrary to the opinion of the majority, I think it can work out in most of the cases (not that you should rely on it, though).
Let's check it out. First you call test1, and it gets a new stack frame: the stack pointer which signifies the top of the stack goes up. On that stack frame, besides other things, memory for your struct (exactly the size of sizeof(struct pouet)) is reserved and then initialized. What happens when test1 returns? Does its stack frame, along with your memory, get destroyed?
Quite the opposite. It stays on the stack. However, the stack pointer drops below it, back into the calling function. You see, this is quite a simple operation; it's just a matter of changing the stack pointer's value. I doubt there is any technology that clears a stack frame when it is disposed. It's just too costly a thing to do!
What happens then? Well, you call test2. All it stores on the stack is just another instance of struct pouet, which means that its stack frame will most probably be exactly the same size as that of test1. This also means that test2 will reserve the memory that previously contained your initialized struct pouet for its own variable Pouet p2, since both variables should most probably have the same positions relative to the beginning of the stack frame. Which in turn means that it will be initialized to the same value.
However, this setup is not something to be relied upon. Even with concerns about non-standardized behaviour aside, it's bound to be broken by something as simple as a call to a different function between the calls to test1 and test2 (as sketched below), or test1 and test2 having stack frames of different sizes.
Also, you should take compiler optimizations into account, which could break things too. However, the more similar your functions are, the less chances there are that they will receive different optimization treatment.
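A quick way to convince yourself of that fragility is a hypothetical modification of the program above (clobber is an illustrative addition, not part of the original question):
#include <string.h>

// Spoiler function: its own locals reuse and overwrite the stack memory
// where test1()'s Pouet used to live.
void clobber(void)
{
    char buf[64];
    memset(buf, 0, sizeof buf);
}

int main()
{
    test1();
    clobber(); // now test2() will most likely print different values, or even crash
    test2();
    return 0;
}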
Of course there's a chance you can get corrupted data; you're using undefined behavior.
What you have is undefined behavior.
printf("Element of struct undeclared : %d %d %d %c \'%s\'\n", p2.a, p2.b, p2.c, p2.d, p2.e);
The scope of the variable p2 is local to the function test2(), and as soon as you exit the function the variable is no longer valid.
You are accessing uninitialized variables, which leads to undefined behavior.
The output you see is not guaranteed at all times or on all platforms, so you need to get rid of the undefined behavior in your code.
The data may or may not appear in test2. It depends on exactly how the program was compiled. It's more likely to work in a toy example like yours than in a real program, and it's more likely to work if you turn off compiler optimizations.
The language definition says that the local variable ceases to exist at the end of the function. Attempting to read the address where you think it was stored may or may not produce a result; it could even crash the program, or make it execute some completely unexpected code. It's undefined behavior.
For example, the compiler might decide to put a variable in registers in one function but not in the other, breaking the alignment of variables on the stack. It can even do that with a big struct, splitting it into several registers and some stack — as long as you don't take the address of the struct it doesn't need to exist as an addressable chunk of memory. The compiler might write a stack canary on top of one of the variables. These are just possibilities at the top of my head.
C lets you see a lot behind the scenes. A lot of what you see behind the scenes can completely change from one production compilation or run to the next.
Understanding what's going on here is useful as a debugging skill, to understand where values that you see in a debugger might be coming from. As a programming technique, this is useless since you aren't making the computer accomplish any particular result.
Just because this works with one compiler doesn't mean it will with all. How uninitialized variables are handled is undefined, and one compiler could very well initialize pointers to null, etc., without breaking any rules.
So don't do this or rely on it. I have actually seen code that depended on behavior in MySQL that was a bug. When that was fixed in later versions, the program stopped working. My thoughts about the designer of that system I'll keep to myself.
In short, never rely on functionality that is not defined. If you knowingly use it for a specific purpose, are prepared for a compiler update etc. to break it, and keep an eye out for this at all times, it might be something you can explain and live with. But most of the time this is far from a good idea.

Set an initial focal distance on iOS

I'm working on an iOS-app where one of the features is scanning QR-codes. For this I'm using the excellent library, ZBar. The scanning works fine and is generally really quick. However when you use smaller QR-codes it takes a bit longer to scan, mostly due to the fact that the autofocus needs some time to adjust. I was experimenting and noticed that the focus could be locked using the following code:
AVCaptureDevice *cameraDevice = readerView.device;
if ([cameraDevice lockForConfiguration:nil]) {
    [cameraDevice setFocusMode:AVCaptureFocusModeLocked];
    [cameraDevice unlockForConfiguration];
}
When this code is used after a successful scan, the coming scans are really quick. That made me wonder, could I somehow lock the focus before even scanning one code? The app will only scan rather small QR-codes so there will never be a need for focusing on something far away. Sure, I could implement something like tap to focus, but preferably I would like to avoid that extra step.
Is there a way to achieve this? Or are there maybe another way of speeding things up when dealing with smaller QR-codes?
// Alexander
In iOS7 this is now possible!
Apple has added the property autoFocusRangeRestriction to the AVCaptureDevice class. This property is of the enum AVCaptureAutoFocusRangeRestriction which has three different values:
AVCaptureAutoFocusRangeRestrictionNone - Default, no restrictions
AVCaptureAutoFocusRangeRestrictionNear - The subject that matters is close to the camera
AVCaptureAutoFocusRangeRestrictionFar - The subject that matters is far from the camera
To check if this is available, we should first check whether the property autoFocusRangeRestrictionSupported is true. And since it's only supported in iOS7 and onwards, we should also use respondsToSelector so we don't get an exception on earlier iOS versions.
So the resulting code should look something like this:
AVCaptureDevice *cameraDevice = zbarReaderView.device;
if ([cameraDevice respondsToSelector:@selector(isAutoFocusRangeRestrictionSupported)] && cameraDevice.autoFocusRangeRestrictionSupported) {
    // We are on an iOS version that supports AutoFocusRangeRestriction
    // and the device supports it: set the focus range to "near".
    if ([cameraDevice lockForConfiguration:nil]) {
        cameraDevice.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
        [cameraDevice unlockForConfiguration];
    }
}
This seems to somewhat speed up the scanning of small QR-codes according to my initial tests :)
Update - iOS8
With iOS8, Apple has given us lots of new camera APIs to play with. One of these new methods is this one:
- (void)setFocusModeLockedWithLensPosition:(float)lensPosition completionHandler:(void (^)(CMTime syncTime))handler
This method locks focus by moving the lens to a position between 0.0 and 1.0. I played around with the method, locking the lens at close values. However, in general it caused more problems than it solved. You had to keep the QR-codes/barcodes at a very specific distance, which could cause issues when you had codes of different sizes.
But. I think I have found a pretty good alternative to locking focus altogether. When the user presses the scan button, I lock the lens to a close distance, and when scanning is finished I switch the camera back to auto focus. This gives us the benefits of keeping auto focus on, but forces the camera to begin at a close distance where a QR-code/barcode is likely to be found. This in combination with:
cameraDevice.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
And:
cameraDevice.focusPointOfInterest = CGPointMake(0.5,0.5);
Results in a pretty snappy scanner.
I also built a custom scanner with the APIs introduced in iOS7, instead of using ZBar. Mostly because the ZBar libs are quite outdated, and just as when the iPhone 5 introduced ARMv7s, I now had to recompile them again for ARM64.
// Alexander
iOS 8 recently added this configuration! It is almost like they read Stack Overflow.
/*!
@method setFocusModeLockedWithLensPosition:completionHandler:
@abstract
    Sets focusMode to AVCaptureFocusModeLocked and locks lensPosition at an explicit value.

@param lensPosition
    The lens position, as described in the documentation for the lensPosition property. A value of AVCaptureLensPositionCurrent can be used
    to indicate that the caller does not wish to specify a value for lensPosition.

@param handler
    A block to be called when lensPosition has been set to the value specified and focusMode is set to AVCaptureFocusModeLocked. If
    setFocusModeLockedWithLensPosition:completionHandler: is called multiple times, the completion handlers will be called in FIFO order.
    The block receives a timestamp which matches that of the first buffer to which all settings have been applied. Note that the timestamp
    is synchronized to the device clock, and thus must be converted to the master clock prior to comparison with the timestamps of buffers
    delivered via an AVCaptureVideoDataOutput. The client may pass nil for the handler parameter if knowledge of the operation's completion
    is not required.

@discussion
    This is the only way of setting lensPosition.
    This method throws an NSRangeException if lensPosition is set to an unsupported level.
    This method throws an NSGenericException if called without first obtaining exclusive access to the receiver using lockForConfiguration:.
*/
- (void)setFocusModeLockedWithLensPosition:(float)lensPosition completionHandler:(void (^)(CMTime syncTime))handler NS_AVAILABLE_IOS(8_0);
EDIT: this is a method of AVCaptureDevice

Should I clean OpenGL state when working with renderbuffers and framebuffers?

I am writing a simple OOP wrapper around OpenGL ES. While writing the renderbuffer and framebuffer wrappers, I have to bind the buffer in order to work with it:
- (void) setupSomething
{
    …
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, myBufferID);
    …
}
Now what if this setup code is called in a context where there’s already some other render buffer bound? My simple version mentioned above would have the nasty side effect of switching the current buffer, which sounds quite fragile. I figured I should write the code more defensively:
- (void) setupSomething
{
    // Store the current state
    GLint previousRenderBuffer = 0;
    glGetIntegerv(GL_RENDERBUFFER_BINDING_OES, &previousRenderBuffer);

    // Do whatever I want to do
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, myBufferID);
    …

    // Restore the previous state
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, previousRenderBuffer);
}
My questions are: is it really necessary/wise/customary to save the previous state like this, and if yes, is there some kind of glPushSomething that would do it for me?
When working with a graphics API like OpenGL, it's usually a good idea to minimize the number of API calls. Some calls can be quite expensive. I'm not sure about glBindRenderbuffer, though: it could be as cheap as just storing one int, or it could be a complex state-switching operation. You'd better handle it yourself, either by keeping your own 'stack' of buffers (there is no glPushAttrib or anything like it for renderbuffers in OpenGL ES) or, better in my humble opinion, by avoiding such situations: always make sure you finish your work with the renderbuffer you have bound before switching to another buffer.
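If you do decide to save and restore, one way to keep it tidy is a scope guard. A minimal C++ sketch of the defensive pattern from the question (the class name is illustrative, and the file would need to be compiled as Objective-C++):
// RAII guard: saves the current renderbuffer binding and restores it when the
// scope ends, so setup code can't leak a binding change to its caller.
struct ScopedRenderbufferBinding {
    GLint previous = 0;
    explicit ScopedRenderbufferBinding(GLuint newBuffer) {
        glGetIntegerv(GL_RENDERBUFFER_BINDING_OES, &previous);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, newBuffer);
    }
    ~ScopedRenderbufferBinding() {
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, (GLuint)previous);
    }
};

// Usage inside setupSomething:
// ScopedRenderbufferBinding guard(myBufferID);
// ... work with myBufferID; the old binding is restored automatically ...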
