When rendering a mesh in WebGL with regl, I noticed visual artifacts start to appear as the mesh's coordinates grow larger.
See the live example at https://jsfiddle.net/yxzuo2aq/2/ or the screenshot below.
Visually, nothing should change as you increase x, since the camera and perspective are updated accordingly, but as x goes above 1000 the artifacts start to appear.
I assume this has something to do with WebGL's floating-point precision?
Is the best way to avoid this kind of artifact to rescale or recenter the mesh's coordinates prior to rendering?
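If the cause is indeed 32-bit float precision on the GPU, one common workaround is to recenter the vertex data around the origin and move the large offset into the model matrix, where it can be folded into the view transform. Here is a minimal sketch in TypeScript; the flat `positions` array and the regl setup around it are assumptions, not taken from the fiddle:

```typescript
// Recenter vertex positions around their centroid before upload so the GPU
// works with small 32-bit values; the removed offset goes into the model matrix.
function recenter(positions: Float32Array): { positions: Float32Array; offset: [number, number, number] } {
  const n = positions.length / 3;
  let cx = 0, cy = 0, cz = 0;
  for (let i = 0; i < n; i++) {
    cx += positions[3 * i];
    cy += positions[3 * i + 1];
    cz += positions[3 * i + 2];
  }
  cx /= n; cy /= n; cz /= n;
  const out = new Float32Array(positions.length);
  for (let i = 0; i < n; i++) {
    out[3 * i] = positions[3 * i] - cx;
    out[3 * i + 1] = positions[3 * i + 1] - cy;
    out[3 * i + 2] = positions[3 * i + 2] - cz;
  }
  // Translate by `offset` in the model matrix instead of baking it into vertices.
  return { positions: out, offset: [cx, cy, cz] };
}
```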
Here's an example of the artifact I have an issue with (a picture is hopefully worth a thousand words; I've scaled both images to make the detail more apparent):
64x64 native iOS crop (using setScale(2.0)), scaled to 128x128 with nearest neighbour in an external application:
64x64 crop scaled to 128x128 with nearest neighbour in an external application; here the rotation is applied before the scale transform:
From the images, the first is the undesirable one and the second is what I'd like. The first shows that the "scaled pixels" are themselves rotated. In my case (even though each pixel is actually a block of pixels), I want the rotation to happen before the scaling (if there is some way to affect the order of the transformations).
I'm basically trying to get a chunkier/pixelated look (common enough), but I want all transformations to sit on this new pixel grid (rotations are especially ugly). I've seen some applications do this; are they just creating their own game engine, or am I missing something in SpriteKit? (A sketch of the idea I'm after follows.)
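One general way to get "rotate first, pixelate second" is to render the rotated sprite into a low-resolution target and then upscale that target with nearest-neighbour filtering. This is not SpriteKit code; it's a minimal browser Canvas2D sketch of the idea, and `spriteImage` and the `#view` canvas are assumed placeholders:

```typescript
declare const spriteImage: HTMLImageElement; // hypothetical 32x32 source sprite

// Draw the rotated sprite into a small offscreen canvas first...
const low = document.createElement('canvas');
low.width = 64;
low.height = 64;
const lctx = low.getContext('2d')!;
lctx.translate(32, 32);                 // rotate about the centre of the low-res target
lctx.rotate(Math.PI / 8);               // the rotation happens on the coarse pixel grid
lctx.drawImage(spriteImage, -16, -16);

// ...then upscale that canvas with smoothing disabled, so the scale
// transform is applied after the rotation and the big pixels stay axis-aligned.
const screen = (document.getElementById('view') as HTMLCanvasElement).getContext('2d')!;
screen.imageSmoothingEnabled = false;   // nearest-neighbour upscale
screen.drawImage(low, 0, 0, 128, 128);
```

The equivalent in a scene graph is to parent the rotating node under a low-resolution render target and apply the scale to the rendered result, not to the node.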
I think I've covered all the required details; if I missed any, ask.
I have a set of intensity-based volume data, stored as a 3-dimensional array V(i,j,k) of roughly 1k by 1k by 100 pixels (X,Y,Z). The volume contains a 12-bit object; the region outside the object is set to 0. The spatial dimensions of each voxel are 1 by 1 by 10, with the large dimension along the axis that has the smaller number of pixels (Z).
We are using DirectX to display this volume. There are 3 problems.
(1) No one in this location has any substantial experience with DirectX. (I've ordered a book...)
(2) When we display the volume, stairstep artifacts appear in the Z direction.
(3) Those stairstep artifacts appear to interact with the camera lighting to create an alternating pattern of dark and bright steps.
The most obvious step was to change the interpolation. We've tried POINT, LINEAR, and ANISOTROPIC; none of these helped with the stairstep pattern. The next obvious steps are to try antialiasing and to reduce the diffuse and specular contributions to the lighting model. Unfortunately, antialiasing is likely to be slow, and changing the lighting model removes information about surface orientation. In particular, I suspect we need to antialias in only one direction.
I can simply rebin the volume and smooth the resulting object, but that increases memory usage and slows down the rendering.
I suspect that we're missing something completely obvious. So, what method in DirectX would be recommended to remove stairstep artifacts from volume rendering of data with highly anisotropic voxels? Considerations include memory usage and speed. We already display blurred data when interacting with the data set, so low memory usage may be more useful than speed.
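To illustrate the "antialias in only one direction" idea: one option is to pre-filter the volume along Z (the anisotropic axis) before upload, which smooths the steps without rebinning to a finer grid. This is a generic TypeScript sketch, not DirectX code, and the flat memory layout of `vol` is an assumption:

```typescript
// Pre-filter the volume along Z only, approximating one-directional
// antialiasing. `vol` is a flat array indexed as vol[(k * ny + j) * nx + i].
function blurZ(vol: Float32Array, nx: number, ny: number, nz: number): Float32Array {
  const out = new Float32Array(vol.length);
  const w = [0.25, 0.5, 0.25]; // small Gaussian-like kernel along Z
  for (let k = 0; k < nz; k++) {
    for (let j = 0; j < ny; j++) {
      for (let i = 0; i < nx; i++) {
        let acc = 0;
        for (let t = -1; t <= 1; t++) {
          const kk = Math.min(nz - 1, Math.max(0, k + t)); // clamp at volume borders
          acc += w[t + 1] * vol[(kk * ny + j) * nx + i];
        }
        out[(k * ny + j) * nx + i] = acc;
      }
    }
  }
  return out;
}
```

Because the filter runs only along the 10-unit axis, it costs no extra memory beyond the output buffer and leaves the in-plane resolution untouched.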
I uploaded my 2D FFT magnitude image here:
If you take a look at it, at high frequencies (right, left, top, and bottom), only around the x and y axes, there are some points with high power (yellow). These points shouldn't be in the resulting FFT2, since I know the original height image is isotropic, so the 2D FFT should look something like the example below (note the high frequencies):
Now, the question is, what could be the possible reasons for such behavior at high frequencies?
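For reference, one classic source of exactly this cross of high-power points along the frequency axes is spectral leakage from non-periodic image borders, which the windowing step mentioned below is meant to suppress. A minimal sketch of a 2D Hann window in TypeScript (the poster's actual processing is in Matlab; the row-major image array here is a hypothetical stand-in):

```typescript
// Multiply the image by a separable 2D Hann window before the FFT so the
// borders taper to zero and don't inject energy along the frequency axes.
function hannWindow2D(img: Float32Array, w: number, h: number): Float32Array {
  const out = new Float32Array(img.length);
  const wx = new Float32Array(w);
  const wy = new Float32Array(h);
  for (let x = 0; x < w; x++) wx[x] = 0.5 * (1 - Math.cos((2 * Math.PI * x) / (w - 1)));
  for (let y = 0; y < h; y++) wy[y] = 0.5 * (1 - Math.cos((2 * Math.PI * y) / (h - 1)));
  for (let y = 0; y < h; y++)
    for (let x = 0; x < w; x++)
      out[y * w + x] = img[y * w + x] * wx[x] * wy[y];
  return out;
}
```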
Added:
Here is the magnitude power spectrum before windowing:
https://dl.dropboxusercontent.com/u/82779497/nowin.png
Here is the original image, which is a height profile recorded by a profilometer:
https://dl.dropboxusercontent.com/u/82779497/asph5.jpg
By the way, I export the data as a .txt file from the profilometer software into Matlab.
The profilometer we use for capturing the surface image uses a fringe-projection method, which produces artifacts along the stripes projected onto the surface. So the problem lies with the device we are capturing images with.
Thanks for the comments, Eddy.
I'm working on an OpenGL app that uses one particularly large texture, 2250x1000. Unfortunately, OpenGL ES 2.0 doesn't support textures larger than 2048x2048 on my device, so when I try to draw my texture it appears black. I need a way to load and draw the texture in two segments (left and right). I've seen a few questions that touch on libpng, but I really just need a straightforward solution for drawing large textures in OpenGL ES.
First of all, the maximum texture size depends on the device; I believe the iPad 3 supports 4096x4096, but never mind that. There is no way to push all that data as-is into a single texture on most devices.

First ask yourself whether you really need such a large texture: will it really make a difference if you resample it down to 2048x_? If the answer is no, you will need to break it up at some point. You could cut it in half along the width and append the cut part to the bottom, resulting in a 1125x2000 texture, or simply create two or more textures and push the relevant parts of the image into each.

In either case you may have trouble with texture coordinates, but that depends heavily on what you are trying to do and what is on the texture (a single image or parts of a sophisticated model; a color map or data you cannot interpolate; created at load time or modified as you go...). Some more info could help us solve your situation more specifically.
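To make the two-segment approach concrete, here is a rough sketch of the splitting step. It uses WebGL/TypeScript because WebGL mirrors the ES 2.0 API; the same texImage2D and clamping logic applies on iOS, and the image source and sizes are placeholders:

```typescript
// Split a 2250x1000 image into two halves that each fit under the 2048 limit
// and upload each half as its own texture.
function splitUpload(gl: WebGLRenderingContext, img: HTMLImageElement): WebGLTexture[] {
  const halves: WebGLTexture[] = [];
  const w = Math.ceil(img.width / 2); // 1125 per half
  for (let s = 0; s < 2; s++) {
    // Shift the source image so the s-th half lands in the canvas viewport.
    const cnv = document.createElement('canvas');
    cnv.width = w;
    cnv.height = img.height;
    cnv.getContext('2d')!.drawImage(img, -s * w, 0);

    const tex = gl.createTexture()!;
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, cnv);
    // Non-power-of-two texture: no mipmaps, clamp both axes.
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    halves.push(tex);
  }
  return halves;
}
```

Each half is then drawn on its own quad with u running from 0 to 1; the seam stays invisible as long as both quads share the boundary vertices and CLAMP_TO_EDGE is set.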
I'm drawing planets in OpenGL ES, and running into some interesting performance issues. The general question is: how best to render "hugely detailed" textures on a sphere?
(the sphere is guaranteed; I'm interested in sphere-specific optimizations)
Base case:
Window is approx. 2048 x 1536 (e.g. iPad3)
Texture map for the globe is 24,000 x 12,000 pixels (an area half the size of the USA fills the full width of the screen)
Globe is displayed at everything from zoomed in (USA fills screen) to zoomed out (whole globe visible)
I need a MINIMUM of 3 texture layers (1 for the planet surface, 1 for day/night differences, 1 for the user interface (highlighting different regions))
Some of the layers are animated (i.e. they have to load and drop their texture at runtime, rapidly)
Limitations:
top-end tablets are limited to 4096x4096 textures
top-end tablets are limited to 8 simultaneous texture units
Problems:
In total, it's naively 288 million pixels of texture data per layer (24,000 x 12,000), or over 860 million across the three layers
Splitting into smaller textures doesn't work well, because devices have only 8 texture units. With a single layer I could split across all 8 units and keep every texture under 4096x4096, but that allows only one layer
Rendering the layers as separate geometry works poorly, because they need to be blended in the fragment shader
...at the moment, the only idea I have that sounds viable (sketched in code after this list) is:
split the sphere into NxM "pieces of sphere" and render each one as separate geometry
use mipmaps to render low-res textures when zoomed out
...rely on simple culling to cut out most of them when zoomed in, and mipmapping to use small(er) textures when they can't be culled
...but it seems there ought to be an easier way / better options?
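As a sketch of the per-tile idea above (TypeScript purely for illustration; the Tile shape and the linear-size inputs are assumptions): back-facing tiles can be culled with one dot product, and a per-tile mip level falls out of comparing the tile's texel width to its projected on-screen width:

```typescript
// A tile of the sphere: the unit normal at its centre plus its texture handle.
interface Tile { centerDir: [number, number, number]; texture: unknown; }

// Cull tiles on the far side of the globe: a tile can only be visible if its
// centre normal has a component pointing back toward the viewer.
// `viewDir` points from the camera toward the globe.
function visibleTiles(tiles: Tile[], viewDir: [number, number, number]): Tile[] {
  return tiles.filter(t =>
    t.centerDir[0] * viewDir[0] +
    t.centerDir[1] * viewDir[1] +
    t.centerDir[2] * viewDir[2] < 0);
}

// Pick a mip level from the ratio of texel width to on-screen pixel width:
// each mip level halves the texture, so the level is the log2 of the ratio.
function mipLevel(tileTexelWidth: number, tilePixelWidth: number): number {
  return Math.max(0, Math.round(Math.log2(tileTexelWidth / Math.max(1, tilePixelWidth))));
}
```

For example, a tile with a 4096-texel-wide texture that covers 512 screen pixels would render from mip level 3, so only a 512x512 image ever needs to be resident for it.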
It seems there is no way to fit such huge textures into the memory of a mobile GPU, not even the iPad 3's.
So you have to stream the texture data. The technique you need is called a clipmap (popularized by id Software with its MegaTexture technology).
Please read about it here; the page links to documents describing the technique: http://en.wikipedia.org/wiki/Clipmap
This is not easily done in ES, as there is no virtual texturing extension (yet). You basically need to implement virtual texturing yourself (some ES devices implement ARB_texture_array) and stream in the lowest resolution possible (view-dependent) for your sphere. That way it is possible to do it all in a fragment shader; no geometry subdivision is required. See this presentation (and the paper) for details on how this can be implemented.
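The core of such a virtual-texturing scheme is an indirection lookup: a page table maps each virtual page to its slot in a resident physical atlas. A minimal CPU-side sketch in TypeScript (in a real implementation this arithmetic runs in the fragment shader, and all names here are illustrative):

```typescript
// Map a virtual UV to a physical atlas UV via a page table.
// `pageTable` maps a virtual page (px, py) to its slot in the physical atlas.
function physicalUV(
  u: number, v: number,
  pagesX: number, pagesY: number,
  pageTable: Map<string, { sx: number; sy: number }>,
  atlasPagesX: number, atlasPagesY: number
): [number, number] | null {
  const px = Math.min(pagesX - 1, Math.floor(u * pagesX));
  const py = Math.min(pagesY - 1, Math.floor(v * pagesY));
  const slot = pageTable.get(`${px},${py}`);
  if (!slot) return null;        // page not resident: fall back to a coarser mip
  const fu = u * pagesX - px;    // fractional position inside the page
  const fv = v * pagesY - py;
  return [(slot.sx + fu) / atlasPagesX, (slot.sy + fv) / atlasPagesY];
}
```

A fragment shader would do the same lookup with the page table stored in a small indirection texture, streaming pages in and out as the view changes.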
If you do the math, it is simply impossible to stream roughly 1.1 GB (24,000 x 12,000 pixels x 4 bytes per pixel ≈ 1.15e9 bytes) in real time. And it would be wasteful, too, as the user will never get to see it all at the same time.