What's the max size glRenderbufferStorageMultisampleAPPLE supports? - iOS

I mean, what are the maximum width and height the function supports? Thanks!
I tried twice the height of the screen and got error 0x8CDD (GL_FRAMEBUFFER_UNSUPPORTED), which means it is not supported.

When in doubt, always read the extension specification... in this case: GL_APPLE_framebuffer_multisample.
If you read the extension specification, it points you to GL_MAX_RENDERBUFFER_SIZE and an additional implementation-defined limit specific to the extension itself: GL_MAX_SAMPLES_APPLE.
In short, width and height cannot exceed the value of GL_MAX_RENDERBUFFER_SIZE and the number of samples cannot exceed GL_MAX_SAMPLES_APPLE. So you should query these values at run-time and act accordingly.
GLint max_rb_size, max_samples_apple;   /* GLint, not GLuint: glGetIntegerv writes signed integers */
glGetIntegerv (GL_MAX_RENDERBUFFER_SIZE, &max_rb_size);
glGetIntegerv (GL_MAX_SAMPLES_APPLE, &max_samples_apple);
This ought to answer your question; since the limits are implementation-specific, querying them is the best you can do. For what it's worth, GLES2 only requires MAX_RENDERBUFFER_SIZE to be at least 1x1 (no joke), and Apple's extension only requires 1 sample to be supported. Neither of these required minimum values is particularly useful, so you will have to query at run-time to find out what a real system supports :)
OpenGL ES 2.0.25 Specification - 6.2 State Tables - p. 154
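As a minimal sketch of how you might apply those limits (the requested dimensions, the sample count of 4 and the GL_RGBA8_OES format are just illustrative assumptions, and a renderbuffer is assumed to be bound):

GLint max_rb_size = 0, max_samples = 0;
glGetIntegerv (GL_MAX_RENDERBUFFER_SIZE, &max_rb_size);
glGetIntegerv (GL_MAX_SAMPLES_APPLE, &max_samples);

GLint width = 640, height = 1920;   /* e.g. twice the screen height */
GLint samples = 4;

/* clamp the request to what the implementation actually supports */
if (width > max_rb_size) width = max_rb_size;
if (height > max_rb_size) height = max_rb_size;
if (samples > max_samples) samples = max_samples;

glRenderbufferStorageMultisampleAPPLE (GL_RENDERBUFFER, samples,
                                       GL_RGBA8_OES, width, height);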

How do I choose a pixel format when creating a new Texture2D?

I'm using the SharpDX Toolkit, and I'm trying to create a Texture2D programmatically, so I can manually specify all the pixel values. And I'm not sure what pixel format to create it with.
SharpDX doesn't even document the toolkit's PixelFormat type (they have documentation for another PixelFormat class but it's for WIC, not the toolkit). I did find the DirectX enum it wraps, DXGI_FORMAT, but its documentation doesn't give any useful guidance on how I would choose a format.
I'm used to plain old 32-bit bitmap formats with 8 bits per color channel plus 8-bit alpha, which is plenty good enough for me. So I'm guessing the simplest choices will be R8G8B8A8 or B8G8R8A8. Does it matter which I choose? Will they both be fully supported on all hardware?
And even once I've chosen one of those, I then need to further specify whether it's SInt, SNorm, Typeless, UInt, UNorm, or UNormSRgb. I don't need the sRGB colorspace. I don't understand what Typeless is supposed to be for. UInt seems like the simplest -- just a plain old unsigned byte -- but it turns out it doesn't work; I don't get an error, but my texture won't draw anything to the screen. UNorm works, but there's nothing in the documentation that explains why UInt doesn't. So now I'm paranoid that UNorm might not work on some other video card.
Here's the code I've got, if anyone wants to see it. Download the SharpDX full package, open the SharpDXToolkitSamples project, go to the SpriteBatchAndFont.WinRTXaml project, open the SpriteBatchAndFontGame class, and add code where indicated:
// Add new field to the class:
private Texture2D _newTexture;
// Add at the end of the LoadContent method:
_newTexture = Texture2D.New(GraphicsDevice, 8, 8, PixelFormat.R8G8B8A8.UNorm);
var colorData = new Color[_newTexture.Width*_newTexture.Height];
_newTexture.GetData(colorData);
for (var i = 0; i < colorData.Length; ++i)
colorData[i] = (i%3 == 0) ? Color.Red : Color.Transparent;
_newTexture.SetData(colorData);
// Add inside the Draw method, just before the call to spriteBatch.End():
spriteBatch.Draw(_newTexture, new Vector2(0, 0), Color.White);
This draws a small rectangle with diagonal lines in the top left of the screen. It works on the laptop I'm testing it on, but I have no idea how to know whether that means it's going to work everywhere, nor do I have any idea whether it's going to be the most performant.
What pixel format should I use to make sure my app will work on all hardware, and to get the best performance?
The formats in the SharpDX Toolkit map to the underlying DirectX/DXGI formats, so you can, as usual with Microsoft products, get your info from MSDN:
DXGI_FORMAT enumeration (Windows)
32-bit textures are a common choice for most texture scenarios and perform well even on older hardware. UNorm means, as already answered in the comments, "unsigned normalized": the integer byte values 0..255 are mapped to the range 0.0 .. 1.0, which is the usual way color data in textures is accessed.
If you look at the Hardware Support for Direct3D 10Level9 Formats (Windows) page you will see that DXGI_FORMAT_R8G8B8A8_UNORM as well as DXGI_FORMAT_B8G8R8A8_UNORM are supported on DirectX 9 hardware, so you will not run into compatibility problems with either of them.
Performance depends on how your device is initialized (RGBA or BGRA?) and on what hardware (i.e. which DX feature level is supported) and OS you are running your software on. You will have to run your own tests to find out (though in the case of these common and similar formats the difference should be a single-digit percentage at most).
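If you'd rather verify support at run-time than rely on the tables, the device can be asked directly. A minimal sketch at the plain Direct3D 11 level (in C, not the SharpDX Toolkit; 'device' is assumed to be an already-created ID3D11Device):

#define COBJMACROS
#include <d3d11.h>

/* Returns nonzero if 'format' can be used for 2D textures on this device. */
int SupportsTexture2D(ID3D11Device *device, DXGI_FORMAT format)
{
    UINT support = 0;
    if (FAILED(ID3D11Device_CheckFormatSupport(device, format, &support)))
        return 0;
    return (support & D3D11_FORMAT_SUPPORT_TEXTURE2D) != 0;
}

/* e.g. SupportsTexture2D(device, DXGI_FORMAT_R8G8B8A8_UNORM) */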

epoll_create and epoll_wait

I was wondering about the parameters of two epoll APIs.
epoll_create (int size) - in this API, size is described as the size of the event pool. But it seems that having more events than the size still works (I set the size to 2 and forced the event pool to hold 3 events... and it still works!?). So I was wondering what this parameter actually means, and I'm curious about its maximum value.
epoll_wait (int maxevents) - for this API, the definition of maxevents is straightforward. However, I can find little information or advice on how to determine this parameter. I expect it to depend on the size of the epoll event pool. Any suggestions or advice would be great. Thank you!
1.
"man epoll_create"
DESCRIPTION
...
The size is not the maximum size of the backing store but just a hint
to the kernel about how to dimension internal structures. (Nowadays,
size is unused; see NOTES below.)
NOTES
Since Linux 2.6.8, the size argument is unused, but must be greater
than zero. (The kernel dynamically sizes the required data structures
without needing this initial hint.)
2.
Just determine a reasonable number yourself, but be aware that
choosing a small number may cost a little efficiency:
the smaller the number assigned to maxevents, the more often you may have to call epoll_wait() to consume all the events already queued on the epoll instance.
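To illustrate the typical pattern (a minimal sketch; the descriptor being watched and the batch size of 64 are just placeholder choices):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/epoll.h>

#define MAX_EVENTS 64   /* maxevents: a batch size per call, not a hard limit */

int main(void)
{
    int fd = STDIN_FILENO;               /* placeholder descriptor to watch */
    int epfd = epoll_create(1);          /* size is ignored since 2.6.8; must be > 0 */
    if (epfd == -1) { perror("epoll_create"); exit(1); }

    struct epoll_event ev;
    ev.events = EPOLLIN;
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        /* at most MAX_EVENTS events are returned per call; anything still
           queued is simply delivered by the next epoll_wait() */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            /* handle events[i].data.fd here */
        }
    }
}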

What are the alignment restrictions on the new Haswell AVX "gather" instructions?

I'm looking at the AVX programming reference. The new Haswell instructions include some eagerly awaited "gather" loads. However, I can't figure out what the alignment restrictions are on the indexed data items. Section 2.5 "Memory alignment" of the reference seems like it ought to list the various VGATHER* instructions in one of tables 2.4 or 2.5... but it doesn't.
Background: while gather instructions' supported data sizes are 4 and 8 bytes, my application could benefit from gather-loading adjacent 16-bit data value pairs into DWORDs. Odd indices with a 2-byte scale will produce 2-byte-aligned 4-byte loads, and it's not clear to me from the manual whether this will fault or otherwise fail to work as intended (I rather suspect I'm out of luck, given that all the instructions supporting unaligned accesses seem to have a 'U' in them).
This is the first time I've heard about AVX2, but I'm guessing the memory alignment restrictions won't differ from the current implementation of AVX on Sandy Bridge with the new VEX coding scheme, i.e. no alignment is required unless you explicitly use an aligned VMOV instruction with an 'A' in the name (VMOVDQA and friends). Most instructions allow access with any byte-granularity alignment.
In fact, see section 2.5, page 35 of the Intel(R) Advanced Vector Extensions Programming Reference, which states exactly this.
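For what it's worth, here is a small sketch of exactly the 2-byte-aligned gather case described above, written with the AVX2 intrinsics (this assumes an AVX2-capable CPU and compiler, e.g. gcc -mavx2; the data and indices are made up):

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* an array of 16-bit values; gathering DWORDs at odd indices with
       scale = 2 produces 4-byte loads that are only 2-byte aligned */
    static uint16_t data[64];
    for (int i = 0; i < 64; i++) data[i] = (uint16_t)i;

    __m256i idx = _mm256_setr_epi32(1, 3, 5, 7, 9, 11, 13, 15);

    /* each lane loads the DWORD at (char *)data + idx * 2, i.e. a pair
       of adjacent 16-bit values starting at an odd element */
    __m256i v = _mm256_i32gather_epi32((const int *)data, idx, 2);

    int out[8];
    _mm256_storeu_si256((__m256i *)out, v);
    for (int i = 0; i < 8; i++)
        printf("lane %d: 0x%08x\n", i, out[i]);
    return 0;
}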

DirectX Anisotropic Filtering Option Levels and Future Proofing It

OK, so I have a configuration program for a 3D DirectX application. When querying the AF level I get 16. If 16 is available, will 8, 4 and 2 always be available? And when/if levels go up to 32/64 in the future, will these lower values still be available?
Will this always be the case? Can I always divide the max AF level by two until I arrive at 2 to obtain all the possible AF levels? Lastly, if that is not the case, is there a way to query DirectX, much in the same way you can check whether the hardware supports a multisample level, to see if it supports a selected anisotropic filtering level?
Yes, the query is for the maximum anisotropy, not the only value you can set it to; any level up to that maximum is valid. Also, there's nothing I can find in the docs that says the anisotropy you set must be a power of two (though admittedly, I only did a quick search).
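As a concrete illustration, this is roughly what the query looks like in plain Direct3D 9 (a sketch in C; 'd3d' is assumed to be an IDirect3D9 you have already created):

#define COBJMACROS
#include <d3d9.h>

/* Returns the maximum anisotropy level, or 1 if anisotropic filtering
   is not supported for minification at all. */
DWORD MaxAnisotropyLevel(IDirect3D9 *d3d)
{
    D3DCAPS9 caps;
    if (FAILED(IDirect3D9_GetDeviceCaps(d3d, D3DADAPTER_DEFAULT,
                                        D3DDEVTYPE_HAL, &caps)))
        return 1;
    if (!(caps.TextureFilterCaps & D3DPTFILTERCAPS_MINFANISOTROPIC))
        return 1;
    /* any D3DSAMP_MAXANISOTROPY value in [1, caps.MaxAnisotropy] is valid */
    return caps.MaxAnisotropy;
}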

LuaJit increase stack/heap size

I keep getting an out-of-memory error in LuaJIT. How do I increase the stack or heap size?
Thanks
I haven't used LuaJIT myself, other than with toy examples. But since no one else has provided any answers yet...
From skimming the documentation, LuaJIT depends on the Coco extensions to the standard coroutine library. One of the changes introduced by Coco is that the functions that create a new coroutine now take an optional argument that specifies the stack size.
Quoting the Coco docs:
coro = coroutine.create(f [, cstacksize])
func = coroutine.wrap(f [, cstacksize])
The optional argument cstacksize specifies the size of the C stack to allocate for the coroutine:
- A default stack size is used if cstacksize is not given or is nil or zero.
- No C stack is allocated if cstacksize is -1.
- Any other value is rounded up to the minimum size (i.e. use 1 to get the minimum size).
There is also the new function coroutine.cstacksize([newdefault]) that sets the default C stack size, as well as some corresponding changes to the C API.
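To tie this together, here's a minimal embedding-side sketch (C, relying only on the standard Lua C API plus the Coco-extended coroutine.create quoted above; the 1 MB stack size is an arbitrary assumption):

#include <stdio.h>
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    /* under LuaJIT/Coco, coroutine.create accepts an optional second
       argument: the C stack size for the new coroutine (see above) */
    if (luaL_dostring(L,
            "local co = coroutine.create(function ()\n"
            "  -- deeply recursive work goes here\n"
            "end, 1024 * 1024)\n"
            "print(coroutine.resume(co))\n"))
        fprintf(stderr, "%s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}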
Furthermore, there are numerous compile-time configuration options in the LuaJIT version of luaconf.h. There may be something in there that sets the default. On Windows, there is also a link-time setting for the executable's basic stack, set by MSVC's LINK.EXE via the STACKSIZE statement in the application's .DEF file.
