I would like to display two primitives:
The first displaying a texture with antialiasing
The second displaying a texture without antialiasing
Here is my code to setup the texture before each primitive render:
device.SetTexture(0, texture);
if (antialiasing)
{
    device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.Linear);
    device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.Linear);
}
else
{
    device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.None);
    device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.None);
}
It works, but if I run in DirectX debug mode I get an exception: *D3DERR_UNSUPPORTEDTEXTUREFILTER: Unsupported texture filter*
I'm using SlimDX, but I think this code would fail the same way with the C++ API.
Related
I am in the process of migrating a small iPad application from OpenGL ES 2.0 to OpenGL ES 3.0. In the app, I use a subclass of GLKView to handle all my drawing, though the only GLKit features I use are:
self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3]; // Or 2
self.drawableDepthFormat = GLKViewDrawableDepthFormatNone;
self.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
self.drawableMultisample = GLKViewDrawableMultisample4X;
self.drawableStencilFormat = GLKViewDrawableStencilFormatNone;
self.enableSetNeedsDisplay = YES;
// ... gl code following
My -drawRect method looks like this:
glClearColor(1, 1, 1, 1);
glClear(GL_COLOR_BUFFER_BIT);
[_currentProgram use]; // Use program
glUniformMatrix4fv([_currentProgram getUniformLocation:@"modelViewProjectionMatrix"], 1, 0, modelViewProjectionMatrix.m);
// ...
if (isES3) {
    glBindVertexArray(vertexArray);
} else {
    glBindVertexArrayOES(vertexArray);
}
glDrawArrays(GL_TRIANGLE_STRIP, 0, verticiesLength);
I do not yet have an OpenGL ES 3.0-capable device, so all of my OpenGL ES 3.0 testing is being done in the iOS Simulator. OpenGL ES 2.0 testing is done on-device and in-simulator.
As expected, in ES2 the screen is cleared to white immediately on startup (-drawRect having been called once, with no vertices to draw yet). However, when I switch to ES3, the context is successfully created and no gl calls fail, yet the screen does not clear as it should; it just appears black. Fishing around for what was going wrong, I decided to remove multi-sampling:
self.drawableMultisample = GLKViewDrawableMultisampleNone;
And it worked! (Albeit without antialiasing.) My question, therefore: are there any known issues with GLKit multi-sampling under OpenGL ES 3.0 in the iOS Simulator (iPad, iPad Retina, and iPad Retina (64-bit))? My laptop has more than enough free memory to cope with the multi-sampling.
OpenGL is a very asynchronous API. For example, if you call glClear, you should not expect the screen to be cleared when the call returns. You can only reliably look at the result of the rendering you produced after you finished rendering the frame, and it is displayed (typically by swapping the buffers when using double buffered rendering).
So what you're observing most likely does not mean anything. Does everything look fine at the end of the frame? If yes, there's no reason to be worried.
The difference is most likely caused by the different rendering process used when multisampling is enabled. In that case, rendering goes into a higher-resolution buffer first, and is only downsampled into the actual framebuffer at the end of the frame.
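For illustration, under OpenGL ES 3.0 that final downsample is a framebuffer blit. Here is a minimal sketch of the resolve step; GLKit performs the equivalent internally, and the FBO names here are illustrative, not GLKit symbols:

static void resolveMultisample(GLuint msaaFBO, GLuint resolveFBO, GLint w, GLint h)
{
    // Read from the multisampled buffer, draw into the single-sample one.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);
    // GL_NEAREST is required when resolving a multisampled color buffer.
    glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);
}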
When you deploy, choose the device type to be iPad, not Universal, in the General > Deployment Info menu; that will fix it.
I'm working on an app based on Apple's GLPaint sample code. I've changed the clear color to transparent black and have added an opacity slider, however when I mix colors together with a low opacity setting they don't mix the way I'm expecting. They seem to mix the way light mixes, not the way paint mixes. Here is an example of what I mean:
The "Desired Result" was obtained by using glReadPixels to render each color separately and merge it with the previous rendered image (i.e. using apple's default blending).
However, mixing each frame with the previous is too time consuming to be done on the fly, how can I get OpenGL to blend the colors properly? I've been researching online for quite a while and have yet to find a solution that works for me, please let me know if you need any other info to help!
From the looks of it, with your current setup there is no easy solution. For what you are trying to do, you need custom shaders, which is not possible using GLKit alone.
Luckily you can mix GLKit and OpenGL ES.
My recommendation would be to:
Stop using GLKit for everything except setting up your rendering
surface with GLKView (which is tedious without GLKit).
Use an OpenGl program with custom shaders to draw to a texture that is backing an FBO.
Use a second program with custom shaders that does post processing (after drawing above texture to a quad which is then rendered to the screen).
A good starting point would be to load up the OpenGL template that comes with Xcode and start modifying it. Be warned: if you don't understand shaders, the code there will make little sense. It draws two cubes, one using GLKit and one without, using custom shaders.
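To make the structure concrete, here is a minimal sketch of the resulting frame; the FBO, texture, and program names are placeholders for your own objects, not GLKit API:

// Pass 1: draw the scene into the texture-backed offscreen FBO.
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFBO);    // placeholder name
glViewport(0, 0, texWidth, texHeight);
glUseProgram(sceneProgram);                         // your custom scene shaders
// ... scene draw calls ...

// Pass 2: post-process that texture onto the screen.
[((GLKView *)self.view) bindDrawable];              // back to the GLKView drawable
glUseProgram(postProgram);                          // your post-processing shaders
glBindTexture(GL_TEXTURE_2D, offscreenTexture);     // texture backing the FBO
// ... draw a fullscreen quad ...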
References to start learning:
Intro to shaders
Rendering to a Texture
Shader Toy - This should help you experiment with your post-processing fragment shader.
GLEssentials example - This shows how to render to a texture using OpenGL (a bit outdated).
Finally, if you are really serious about using OpenGL ES to its full potential, you really should invest the time to read through the OpenGL ES 2.0 Programming Guide. Even though it is six years old, it is still relevant and the only book I've found that explains all the concepts correctly.
Your "Current Result" is additive color, which is how OpenGL is supposed to work. To work like mixing paint would be substractive color. You don't have control over this with OpenGL ES 1.1, but you could write a custom fragment shader for OpenGL ES 2.0 that would do substractive color. If you are blending textures images from iOS, you need to know if the image data has been premultiplied by alpha or not, in order to do blending. OpenGL ES expects the non-premultiplied format.
You need to put that code in the function that is called on color change, and each time you need to set the blend function.
CGFloat red, green, blue;
// Set red, green, blue to the desired color combination.
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  // blend premultiplied colors
glColor4f(red * kBrushOpacity,
          green * kBrushOpacity,
          blue * kBrushOpacity,
          kBrushOpacity);
To do more with glBlendFunc, use this link.
Please let me know whether it works; it works for me.
The platform is iPhone, OpenGL ES 2.0.
The framework already creates a main FBO with a renderbuffer as its color attachment.
I also have my own FBO with a 2D texture as its color attachment.
I want to copy the main FBO's contents into my FBO.
I tried the common glCopyTexImage2D approach, but it's too slow on my device (iPad 1).
So I wonder if a faster solution is out there.
If the main FBO used a 2D texture as its color attachment, I know I could just draw a fullscreen quad with that texture into my FBO; but how do I draw its renderbuffer into my FBO? I googled for quite a while but found no specific answer.
RenderBuffers are almost useless on most embedded systems. All you can do with them is read from them with glReadPixels(), which is too slow.
You should use a texture attachment, as you said, then render with that texture. This article will help:
http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES
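For reference, creating a texture-attached FBO in ES 2.0 looks roughly like this (a minimal sketch with error checking omitted; the function name is illustrative):

static GLuint createTextureFBO(GLsizei w, GLsizei h, GLuint *outTexture)
{
    GLuint texture, fbo;

    // Allocate an empty RGBA texture to receive the rendering.
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Attach the texture as the FBO's color attachment.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture, 0);

    *outTexture = texture;
    return fbo;
}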
I am running the boilerplate OpenGL example code that Xcode creates for an OpenGL project for iOS. This sets up a simple ViewController and uses GLKit to handle the rest of the work.
All the update/draw functionality of the application is in C++. It is cross platform.
There is a lot of framebuffer creation going on. The draw phase renders to a few frame buffers and then tries to set it back to the default framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
This generates a GL_INVALID_ENUM error, and only on iOS.
I am completely stumped as to why. The code runs fine on all major platforms except iOS. I'm wanting to blame GLKit. Any examples of iOS OpenGL setup that do not use GLKit?
UPDATE
The following snippet of code lets me see the default framebuffer that GLKit is using. For some reason it comes out as "2". Sure enough, if I use "2" in all my glBindFramebuffer calls, it works. This is very frustrating.
[view bindDrawable];
GLint defaultFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &defaultFBO);
LOGI("DEFAULT FBO: %d", defaultFBO);
What reason on earth would cause GLKit not to generate its internal framebuffer as 0? This is the semantic all other implementations of OpenGL use: 0 is the default FBO.
On iOS there is no default framebuffer. See Framebuffer Objects are the Only Rendering Target on iOS. I don't know much about GLKit, but on iOS, to render something on screen you need to create a framebuffer, attach a renderbuffer to it, and inform the Core Animation layer that this renderbuffer will be the "screen" or "default framebuffer" to draw to. Meaning: everything you draw to this framebuffer will appear on screen. See Rendering to a Core Animation Layer.
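In outline, the non-GLKit setup looks like this (a sketch assuming 'context' is your EAGLContext and the view's backing layer is a CAEAGLLayer):

GLuint framebuffer, colorRenderbuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
// Core Animation allocates the renderbuffer's storage and ties it to the
// layer, so whatever is rendered into it can be presented on screen.
[context renderbufferStorage:GL_RENDERBUFFER
                 fromDrawable:(CAEAGLLayer *)self.view.layer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRenderbuffer);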
I feel it's necessary to point out here that the call glBindFramebuffer(GL_FRAMEBUFFER, 0); does not return rendering to the main framebuffer, although it would appear to work on machines that run Windows, Unix (Mac), or Linux. Desktops and laptops have no concept of a main default system buffer; this idea started with handheld devices. When you make an OpenGL bind call with zero as the parameter, you are setting that function to NULL. It's how you disable the function. It's the same with glBindTexture(GL_TEXTURE_2D, 0);
It is possible that on some handheld devices the driver automatically activates the main system framebuffer when you set the framebuffer to NULL without activating another. This would be a choice made by the manufacturer and is not something you should count on; it is not part of the OpenGL ES spec. For desktops and laptops, this is absolutely necessary, since disabling the framebuffer is required to return to normal OpenGL rendering.
On an iOS device, you should make the following call:
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
provided that you named your system framebuffer 'viewFramebuffer'. Look through your initialization code for the following call:
glGenFramebuffers(1, &viewFramebuffer);
Whatever name you wrote there is what you bind to when returning to your main system buffer.
If you are using GLKit then you can use the following call:
[((GLKView *) self.view) bindDrawable];
The 'self.view' may be slightly different depending on your particular startup code.
Also, for iOS, you could use glBindFramebuffer(GL_FRAMEBUFFER, 2); but this is likely not going to be consistent across future devices released by Apple. They may change the default value of '2' to '3' or something else in the future, so you'd want to use the actual name instead of an integer value.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    // Use the secondary/offscreen framebuffer to store the render result.
    [self.shader drawOffscreenOnFBO];
    // Back to the default framebuffer of the GLKView.
    [((GLKView *) self.view) bindDrawable];
    // Draw on the main screen.
    [self.shader drawinmainscreen];
}
Reference: http://districtf13.blogspot.com/
Is it possible to bind multiple framebuffers and renderbuffers in OpenGL ES? I'm rendering into an offscreen framebuffer/renderbuffer and would prefer to just use my existing render code.
Here's what I'm currently doing:
// create/bind framebuffer and renderbuffer (for screen display)
// render all content
// create/bind framebuffer2 and renderbuffer2 (for off-screen rendering)
// render all content again (would like to skip this)
Here's what I'd like to do:
// create/bind framebuffer and renderbuffer (for screen display)
// create/bind framebuffer2 and renderbuffer2 (for off-screen rendering)
// render all content (only once)
Is this possible?
You cannot render into multiple framebuffers at once. You might be able to use MRTs to render into multiple render targets (textures/renderbuffers) that belong to the same FBO by outputting multiple colors in the fragment shader, but not into multiple FBOs, like an offscreen FBO and the default framebuffer. And if I'm informed correctly, ES doesn't support MRTs at the moment anyway.
But in your case you still don't need to render the scene twice. If you need it in an offscreen renderbuffer anyway, why don't you just use a texture instead of a renderbuffer to hold the offscreen data (shouldn't make a difference). This way you can just render the scene once into the offscreen buffer (texture) and then display this texture to the screen framebuffer by drawing a simple textured quad with a simple pass-through fragment shader.
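Per frame, that amounts to something like the following sketch; the helper functions and FBO/texture names are placeholders for your existing code, not specific API:

// Render the scene once, into the texture-backed offscreen FBO.
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFBO);
drawScene();                                     // existing render code

// Display the result by drawing a textured fullscreen quad on screen.
glBindFramebuffer(GL_FRAMEBUFFER, screenFBO);    // the on-screen framebuffer
glBindTexture(GL_TEXTURE_2D, offscreenTexture);  // texture attached above
drawFullscreenQuad();                            // pass-through shader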
Though, in OpenGL ES it may make a difference whether you use a renderbuffer or a texture to hold the offscreen data, as ES doesn't have glGetTexImage. So if you need to copy the offscreen data to the CPU, you won't get around glReadPixels and therefore need a renderbuffer. But in this case you still don't need to render the scene twice. You just have to introduce another FBO with a texture attached: render the scene once into the texture using this FBO, then render this texture into both the offscreen FBO and the screen framebuffer. This might still be faster than drawing the whole scene twice, though only profiling can tell you.
But if you need to copy the data to the CPU for processing, you can also just copy it directly from the screen framebuffer and don't need an offscreen FBO. And if you need the offscreen data only for GPU-based processing, then a texture is better than a renderbuffer anyway. So it might be useful to consider whether you actually need an additional offscreen buffer at all, if it only contains the same data as the screen framebuffer. That might render the whole problem obsolete.
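If CPU access is the goal, the readback itself is short. A sketch, reading RGBA8 data from whichever framebuffer is currently bound:

static GLubyte *readBackPixels(GLint w, GLint h)
{
    // Allocate client memory for w * h RGBA8 pixels; caller frees.
    GLubyte *pixels = (GLubyte *)malloc((size_t)w * (size_t)h * 4);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return pixels;
}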