Artifacts when mapping screen coordinates to texture texels - DirectX

Alright guys, so my issue is very peculiar. I am mapping a texture onto a quad. The texture contains font glyphs generated with FreeType. When I render it to the screen it has artifacts. "Hola, elienay, y Khaleexy" is the rendered quad/texture.
However, when I go into shader debug mode and look at the texture to see why it has artifacts, I do not get any. The quad is rendered perfectly and the back buffer holds the correct final colors without a single artifact; see below.
Alright, so you guys see what I'm talking about? The quad is being rendered and mapped to the texture perfectly, but it seems the issue appears when the application presents the back buffer to the screen. Any ideas on what this could be caused by and how to fix it?
Below are a couple more screenshots with different random characters, showing that the artifacts are present only when the scene is not being rendered in debug mode.

As per the first comment, the swap chain needs to be created at the size of the window's client rectangle, not the entire window including the title bar and borders. When the swap chain ends up a few pixels too large, Windows has to rescale it by a very small factor, which means you lose the 1:1 pixel mapping you would expect.
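For what it's worth, the same 1:1 back-buffer rule shows up in the WebGL questions later on this page, where the equivalent mistake is letting the canvas drawing buffer differ from its client size. Here is a hedged sketch of that principle in WebGL/TypeScript terms (the helper name is made up; the D3D fix itself is simply passing the client-rect width and height when creating the swap chain):

```typescript
// Hypothetical helper illustrating the same principle in WebGL terms: keep the
// drawing buffer (the "swap chain") exactly the size of the element's client
// area, otherwise the browser rescales it and crisp text picks up artifacts.
function matchBackBufferToClientArea(canvas: HTMLCanvasElement): void {
  const dpr = window.devicePixelRatio || 1;
  const width = Math.round(canvas.clientWidth * dpr);   // client area only,
  const height = Math.round(canvas.clientHeight * dpr); // no borders/title bar
  if (canvas.width !== width || canvas.height !== height) {
    canvas.width = width;
    canvas.height = height;
  }
}
```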

Is there a setting to prevent jaggy sides?

I searched about these jaggy sides and learned that multisampling/antialiasing is enabled in WebGL by default, but it still seems too jaggy to me. Is there another setting which makes the sides look smoother than this?
Also, can you tell me whether this picture is normal? I am looking from farther away than in the first one. This is MUCH more jaggy.
I am working on rendering to a texture. In the jaggy example, I was rendering to a 512*512 texture but my canvas was 400*300. When I changed my canvas to 512*512 to match the texture being rendered to, the jagginess disappeared and the sides became much smoother. When I set the texture size to 1024*1024 it got even better. It seems that the texture size should be the same as the canvas size, and both should be a power of two, because when I set both to 400*300 the cube became jaggy again. I do not know the reason, though; I suppose the texture cannot be sampled properly if the sizes do not match.
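What the matching sizes buy you is a roughly 1:1 mapping between render-target texels and canvas pixels, so the full-screen sampling pass does not have to stretch the texture. A minimal sketch, assuming a WebGL 1 context `gl`; the function name is made up:

```typescript
// Hypothetical helper: size the render-target texture to the canvas so the
// later full-screen sampling pass is ~1:1 (no stretching, no extra aliasing).
function createCanvasSizedTarget(gl: WebGLRenderingContext,
                                 canvas: HTMLCanvasElement) {
  const width = canvas.width;   // drawing-buffer size, not CSS size
  const height = canvas.height;

  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, tex, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);

  return { fbo, tex, width, height };
}
```

When rendering into it, also set gl.viewport(0, 0, width, height) so the drawn area matches the texture. Note that non-power-of-two sizes work in WebGL 1 only with CLAMP_TO_EDGE wrapping and no mipmaps, which is likely why the power-of-two sizes behaved better for you.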

Modify part of a Texture Map that is in use in OpenGL ES 2.0

I have a texture which I use as a texture map. It's a 2048 by 2048 texture divided into squares of 256 pixels each, so I have 64 "slots". This map can be empty, partly filled, or full. On screen I am drawing simple squares, each mapped to one slot of the sprite map.
The problem is that I have to update this map from time to time, whenever the asset for a slot becomes available. These assets are downloaded from the internet, but the initial information arrives in advance, so I can tell how many slots I will use and check local storage to see which ones are already available to be drawn at the start.
For example: my info says there will be 10 squares, and 5 of these are available locally, so when the sprite map is initialized those squares are already filled and ready to be drawn. On screen I will show 10 squares; 5 of them will use the image stored in the texture map for those slots, and the remaining 5 are drawn with a placeholder image. As a new asset for a slot is downloaded, I want to update my sprite map (which is bound and used for drawing) with the new corresponding texture; once the sprite map has been updated, I set a flag which tells my renderer to start drawing from that slot instead of the placeholder image.
From what I have read, there are 3 ways to update a sprite map.
1) Upload a whole new texture with glTexImage2D: I am currently using this approach. I create another "updater" texture and then simply swap it in, but I frequently run into memory warnings.
2) Modify the texture with glTexSubImage2D: I can't get this to work; I keep getting memory access errors or black textures. I believe it's either because I'm calling it from a different thread or because I'm modifying a texture that is in use.
3) Use framebuffer objects: I could try this, but I am not certain whether I can draw into my texture while it is already being used.
What is the correct way of solving this?
This is meant to be used on an iPhone so resources are limited.
Edit: I found a post which talks about something related here.
Unfortunately, I don't think it's focused on modifying a texture that is currently being used.
"the thread is not the same"
The OpenGL ES API is not multi-threaded at all. Update your texture from the main thread (the one that owns the GL context).
Because your texture data has to be uploaded to the GPU anyway, glTexSubImage2D is the fastest and simplest path. Keep going in this direction :) (a sketch follows below).
Rendering into a framebuffer (with your texture attached) is very fast for data that is already on the GPU, which is not your case. And yes, you can draw into a framebuffer bound to a texture (i.e. a framebuffer which uses the texture as its color attachment).
Just one constraint: you can't read and write the same texture in one draw call (the texture attached to the current framebuffer can't also be bound to a texture unit).
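For illustration, here is the glTexSubImage2D path sketched in WebGL/TypeScript terms (the WebGL call maps one-to-one onto the GL ES 2.0 function). The 2048x2048 atlas with 256x256 slots comes from the question; the row-major slot layout and the names are assumptions:

```typescript
// Hypothetical helper: update one 256x256 slot of a 2048x2048 atlas in place.
// slotIndex is 0..63, assumed to be laid out in row-major order.
function updateSlot(gl: WebGLRenderingContext,
                    atlas: WebGLTexture,
                    slotIndex: number,
                    pixels: Uint8Array /* 256*256*4 RGBA bytes */): void {
  const SLOT = 256;
  const SLOTS_PER_ROW = 2048 / SLOT; // 8 slots per row
  const x = (slotIndex % SLOTS_PER_ROW) * SLOT;
  const y = Math.floor(slotIndex / SLOTS_PER_ROW) * SLOT;

  gl.bindTexture(gl.TEXTURE_2D, atlas);
  // Only the slot's rectangle is re-uploaded; the rest of the atlas is untouched.
  gl.texSubImage2D(gl.TEXTURE_2D, 0, x, y, SLOT, SLOT,
                   gl.RGBA, gl.UNSIGNED_BYTE, pixels);
}
```

The important parts are that only the slot's rectangle is re-uploaded, and that the call runs on the thread that owns the GL context, between frames rather than in the middle of a draw that samples the atlas.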

ActionScript3.0 - How to get color (uint) of pixel at coordinates? (Stage3D, Flare3D)

The question is in the title: how do I get the color (uint) of a pixel at given coordinates with Stage3D (Flare3D)?
I am using the Flare3D library to render a 3D scene on an iPad 2. I need to get color values at 768 different coordinates every time the screen is redrawn. Previously, on the regular (2D) stage, I could just draw the display into 1x1 bitmaps translated to the specified coordinates, but that does not work with Stage3D. Also, I am a bit worried whether it will kill performance, since I really need to do it as often as possible - ideally every time the screen is drawn.
It would be really nice if the currently displayed screen were available as a bitmap somewhere, so I could access it like a simple array... but yeah, I am not holding my breath :)
Since Stage3D renders to the back buffer and you can't access it directly, you need to additionally render to BitmapData using the Context3D.drawToBitmapData() method. Rendering to a bitmap is very slow, especially if the viewport is large. As you only need those 768 pixels, you could use Context3D.setScissorRectangle to render the scene 768 times with a 1x1 scissor rectangle positioned at each needed coordinate. I haven't tested that myself, so I don't know whether rendering the scene ~768 times ends up slower than rendering it once, but you may want to try it. :)

Three.js - What's the most effective way to render a WebGLRenderTarget texture?

Problem
I've gotten to a point in my project where I'm rendering to WebGLRenderTargets and using them as textures in my main scene. It works, but it seems like I'm making it do a lot more work than it needs to. My generated textures only need to be 64x64, but because I'm using my main renderer (window width by window height) for both, it seems to be rendering the WebGLRenderTargets at a much larger resolution than necessary.
I may be wrong, but I believe this increases both the processing required to draw to each RenderTarget and the processing required to then draw that large texture to the mesh.
I've tried using a second renderer, but I seem to get this error when trying to use a WebGLRenderTarget in renderer A after drawing to it from renderer B:
WebGL: INVALID_OPERATION: bindTexture: object not from this context
Example
For reference, you can see my abstracted page here (warning: due to the very issue I'm asking about, this page may lag for you). I'm running a simplex noise function on a plane in my secondary scene and chopping it up into sections using camera placement, then applying the segments to tile pieces via WebGLRenderTarget so that they form seamless but individual pieces.
Question
Am I correct in my assumption that using the same renderer size is much less efficient than rendering to a smaller renderer would be? And if so, what do you think the best solution for this would be? Is there currently a way to achieve this optimization?
The size parameters passed to renderer.setSize() are used by the renderer to set the viewport only when it renders to the screen.
When the renderer renders to an offscreen render target, the size of the texture rendered to is given by the parameters renderTarget.width and renderTarget.height.
So the answer to your question is that it is OK to use the same renderer for both; there is no inefficiency.
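As a concrete (hedged) sketch of what the answer describes, assuming a reasonably recent three.js; the scene and camera variables are placeholders:

```typescript
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight); // screen viewport only

// The offscreen pass renders at the target's own 64x64 size,
// regardless of the screen size passed to setSize().
const target = new THREE.WebGLRenderTarget(64, 64);

// Material in the main scene that samples the offscreen result.
const tileMaterial = new THREE.MeshBasicMaterial({ map: target.texture });

// offscreenScene/offscreenCamera and mainScene/mainCamera are assumed to be
// set up elsewhere; tileMaterial is used on meshes in mainScene.
function renderFrame(offscreenScene: THREE.Scene, offscreenCamera: THREE.Camera,
                     mainScene: THREE.Scene, mainCamera: THREE.Camera): void {
  renderer.setRenderTarget(target);        // draw 64x64 pixels into the target
  renderer.render(offscreenScene, offscreenCamera);
  renderer.setRenderTarget(null);          // back to the default framebuffer
  renderer.render(mainScene, mainCamera);  // main pass samples target.texture
}
```

The single renderer is reused for both passes; only the render target's own dimensions determine how many pixels the offscreen pass produces.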

WebGL z-buffer artifacts?

We are working on a Three.js based WebGL project and have trouble understanding how transparency is handled in WebGL. The image shows a double-sided surface drawn with alpha = 0.7, which behaves correctly on its right side. However, closer to the middle, strange artifacts appear, and on the left side the transparency does not seem to work at all.
http://emilaxelsson.se/sandbox/vis1/alpha.png
The problem can also be seen here:
http://emilaxelsson.se/sandbox/vis1/
Has anyone seen anything similar before? What could the reason be?
Your problem is that transparent objects need to be sorted and rendered in back-to-front order (if you change the opacity of your mesh from 0.7 (transparent) to 1.0 (opaque), you can see that the z-buffer itself works just fine).
See:
http://www.opengl.org/wiki/Transparency_Sorting
http://www.opengl.org/archives/resources/faq/technical/transparency.htm (15.050)
In your case it might be less trivial to solve, since I assume that you only have one mesh.
Edit: just to summarize the discussion below. It is possible to achieve correct rendering of such a double-sided transparent mesh. To do this, you need to create 6 versions of the mesh, corresponding to the 6 sides of a cube. Each version needs its triangles sorted in back-to-front order along that cube side's viewing direction (front, back, left, right, top, bottom).
When rendering, choose the correct mesh based on the camera's viewing direction and render only that single mesh.
The easy solution for your case (based on the picture you attached), without resorting to expensive sorting and multiple meshes, is to disable the depth test and enable face culling. That produces acceptable results as long as you do not have any opaque objects in front of the mesh.
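In three.js terms, that workaround is just a couple of material flags. A minimal sketch (standard three.js properties; the geometry and color are placeholders):

```typescript
import * as THREE from 'three';

// The workaround from the answer: no depth test, back faces culled.
// Acceptable as long as no opaque object has to occlude this mesh.
const material = new THREE.MeshLambertMaterial({
  color: 0x3366cc,
  transparent: true,
  opacity: 0.7,
  depthTest: false,       // skip the z-buffer test for this material
  side: THREE.FrontSide,  // the default: back faces are culled
});

// Placeholder geometry standing in for the question's double-sided surface.
const surface = new THREE.Mesh(new THREE.PlaneGeometry(10, 10, 50, 50), material);
```

A common variant is to keep the depth test but set depthWrite: false instead, which still lets opaque objects occlude the transparent surface.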

Resources