I have been trying to make an ImageViewer in OpenGL, but I don't know how to hide specific parts of my vectors/textures in OpenGL.
The ImageViewer should be an exact copy of a UIScrollView with paging enabled, where the images fill the whole screen.
The neat thing about UIScrollView is that you can set its frame and its content size separately, so when an image slides out of the frame, you can no longer see it.
I need some guidelines so I can continue researching what to do.
Maybe you can set up your fragment shader to make pixels invisible when they are out of range.
You know the positions of the 4 vertices (top-left, top-right, bottom-left, bottom-right) and the position of the texture. You can then upload a uniform vec4 to the fragment shader containing the minimum and maximum x and y extents of the window, and calculate whether a pixel is inside or outside that area. If inside: the actual color; if outside: gl_FragColor = vec4(1, 1, 1, 0);
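For illustration, here is a minimal sketch of what such a fragment shader could look like, embedded as a source string the way iOS OpenGL ES code often does. The names uTexture, uBounds, vTexCoord and vPosition are made up; vPosition would have to be a varying passed from your vertex shader in the same coordinate space as uBounds.

static NSString *const kClipFragmentShader = @""
    "precision mediump float;\n"
    "uniform sampler2D uTexture;\n"
    "uniform vec4 uBounds;        // minX, minY, maxX, maxY\n"
    "varying vec2 vTexCoord;\n"
    "varying vec2 vPosition;      // interpolated vertex position\n"
    "void main() {\n"
    "    if (vPosition.x < uBounds.x || vPosition.x > uBounds.z ||\n"
    "        vPosition.y < uBounds.y || vPosition.y > uBounds.w) {\n"
    "        gl_FragColor = vec4(1.0, 1.0, 1.0, 0.0);  // outside: transparent\n"
    "    } else {\n"
    "        gl_FragColor = texture2D(uTexture, vTexCoord);  // inside: actual color\n"
    "    }\n"
    "}\n";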
Is this any help?
I found a solution myself.
I put a UIScrollView over the whole screen and use its content offset to move the texture coordinates.
I don't know if this is the optimal solution, but I don't have any performance problems, so it is good enough for now.
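In case it is useful to anyone, the wiring is roughly this; textureOffset and glView are placeholders from my own code, not anything standard:

- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    // Convert the scroll offset into a 0..1 texture-coordinate offset.
    CGFloat u = scrollView.contentOffset.x / scrollView.contentSize.width;
    CGFloat v = scrollView.contentOffset.y / scrollView.contentSize.height;
    self.textureOffset = CGPointMake(u, v);   // later added to the quad's texture coordinates
    [self.glView setNeedsDisplay];            // redraw with the new offset
}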
If you have better solutions, feel free to suggest them.
I searched about these jagged edges and learned that multisampling/antialiasing is enabled in WebGL by default, but the result still looks too jagged to me. Is there another setting that makes the edges look smoother than this?
Also, can you tell me whether this picture is normal? I am viewing from farther away than in the first one, and it is MUCH more jagged.
I am working on rendering to a texture. In the jaggy example I was rendering to a 512*512 texture, but my canvas was 400*300. When I changed my canvas to 512*512, matching the texture being rendered to, the jagginess disappeared and the edges became much smoother. When I set the texture size to 1024*1024 it looked even better. It seems the texture size should be the same as the canvas size, and both must be a power of two, because when I set both to 400*300 the cube became jaggy again. I do not know the reason, though; I suppose the texture cannot be sampled properly if the sizes do not match.
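A sketch of the idea, written here in OpenGL ES / Objective-C for consistency with the rest of this page (the equivalent gl.* calls exist in WebGL): size the framebuffer texture to the drawable, and use clamp-to-edge wrapping, since non-power-of-two textures in ES 2.0 / WebGL 1 cannot use the default GL_REPEAT wrap mode. glView is assumed to be a GLKView.

// Allocate the color texture at the same pixel size as the drawable,
// so the rendered result is never resampled when drawn to screen.
GLsizei width  = (GLsizei)glView.drawableWidth;
GLsizei height = (GLsizei)glView.drawableHeight;

GLuint colorTexture, fbo;
glGenTextures(1, &colorTexture);
glBindTexture(GL_TEXTURE_2D, colorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);  // required for NPOT sizes
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTexture, 0);
glViewport(0, 0, width, height);  // render area matches the texture and the canvas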
I am making a 2D platformer and decided to use multiple tile map nodes as my backgrounds. Even with a single tile map, I get these vertical or horizontal lines that appear and disappear as I move the player around the screen. See image below:
My tiles are 256x256 and I'm storing them in a tile set .sks file. I'm not exactly sure why I'm getting this or how to get rid of it, and it is quite annoying. I'm wondering if others experience this as well.
I'm considering not using the tile maps, but I would prefer to use them if I can.
Thanks for any help with this!!!
I had the same issue and was able to solve it by "extruding" the tiled image a couple of pixels. This provides a little cushion of pixels to fall back on when the floating-point issue occurs, instead of displaying nothing (hence the gap). This video sums it up pretty well.
Unity: extruding tile map images
If you're using TexturePacker to generate your sprite atlases, there is an option to add this automatically, without having to do it to your tile images yourself.
Hope that helps!
Sort of like the "extruding" suggested by @cheaze, I simply make the tile size in the drawing code a tiny amount larger than the required tile size. This means the assets themselves do not have to be changed.
E.g. if your assets are sized 256 x 256 and all of your calculations are based on that, draw the textures as 256.02 x 256.02 pixels in size:
[SKSpriteNode spriteNodeWithTexture:texture size:CGSizeMake(256.02, 256.02)];
Adding only .02 of a pixel per side overlaps your tiles automatically and removes the line glitches, depending on your camera speed and frame rate.
If the problem is really bad, you can even go so far as to add half a pixel (+0.5) or an entire pixel to remove the glitches, yet the user will not be able to see the difference. (Since a one pixel difference on a retina screen is hard to distinguish).
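For context, here is roughly how that plays out if you lay the tiles out yourself; rows, cols, tileTextures and background are placeholders for whatever your scene uses. The tiles sit on a 256-point grid, but each sprite is drawn 0.02 points larger, so neighbours overlap slightly:

static const CGFloat kTileSize = 256.0;
static const CGFloat kOverlap  = 0.02;   // the tiny fudge described above

for (NSInteger row = 0; row < rows; row++) {
    for (NSInteger col = 0; col < cols; col++) {
        SKTexture *texture = tileTextures[row][col];       // placeholder lookup
        SKSpriteNode *tile =
            [SKSpriteNode spriteNodeWithTexture:texture
                                           size:CGSizeMake(kTileSize + kOverlap,
                                                           kTileSize + kOverlap)];
        tile.position = CGPointMake(col * kTileSize, row * kTileSize);
        [background addChild:tile];                        // background is the parent node
    }
}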
I was wondering what would be the best way to trim the "canvas" of a UIImage (pretty much like any image editor out there allows).
Now, the previous example is not a single UIImage; it's actually 2 UIViews. So clipping the superview against the blue box would do the trick, but I am looking for the best possible way to do this, given that there could be several blue boxes in the "canvas".
Is there a faster way than going through every pixel?
Thanks!
Thinking about it algorithmically, I would say no. You need to find the pixels that extend furthest to the left, right, top and bottom. Unless you look at every pixel from each direction, you could miss non-transparent pixels.
You could speed things up if you map your image into memory and then index into that memory directly, rather than using a high-level function that fetches individual pixels. I would suggest searching from the top down (which gives sequential memory accesses) until you find a non-transparent pixel, then searching from the end of the image backwards, which gives you the bottom-most pixel.
You would then want to limit your search from each side to only look starting at the first non-transparent pixel from the top and ending at the last non-transparent pixel on the bottom.
For anything other than a very large image this should take a fraction of a second.
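Here is a sketch of the top-down pass (the other three directions follow the same pattern). image stands for the UIImage being trimmed, and it is drawn into a bitmap with a known RGBA layout so the alpha byte can be tested directly:

CGImageRef cgImage = image.CGImage;
size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bytesPerRow = width * 4;
uint8_t *pixels = calloc(height, bytesPerRow);

// Draw the image into a bitmap we control, so the byte layout is known (RGBA).
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                         colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

NSInteger topRow = -1;                                    // first row containing content
for (size_t y = 0; y < height && topRow < 0; y++) {
    for (size_t x = 0; x < width; x++) {
        if (pixels[y * bytesPerRow + x * 4 + 3] != 0) {   // alpha byte of the RGBA pixel
            topRow = (NSInteger)y;
            break;
        }
    }
}

CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(pixels);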
OK, I was being dumb. The union of the subviews is all I really needed, so it's just a simple loop over the subviews, taking the CGRectUnion of their frames.
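In code that amounts to (a sketch; canvasView stands for whatever view holds the boxes):

CGRect contentBounds = CGRectNull;
for (UIView *box in canvasView.subviews) {
    contentBounds = CGRectUnion(contentBounds, box.frame);
}
// contentBounds is now the smallest rect enclosing every box.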
To best illustrate the issue I'm having, I created a short screen grab. Watch it here: http://cl.ly/1o3p3x2e2J1a1d3d2N1Q
Basically, as the stars are animated from right to left across the screen, they dim and brighten on their own. I'm not intending for this to happen. When you zoom in, the issue disappears.
My hunch is that this has to do with the size of the objects being drawn and the pixel boundaries. Is this correct? What is the best way to go about fixing this issue?
Thanks!
---Edit---
Here's how I'm loading the texture: http://pastebin.com/RDc8x7Te
And, here's how I'm setting up OpenGL ES: http://pastebin.com/SpvAqPqA
You use nearest and linear filtering for scaling textures, neither of which is very accurate here. You might want to use linear for both, or better, build mipmaps. Also, in case you use an orthographic view, try aligning your geometry to pixel boundaries.
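Something along these lines, for example; starTexture is a placeholder for however you create the texture, and note that in OpenGL ES 2.0 a texture must be power-of-two sized to have mipmaps:

glBindTexture(GL_TEXTURE_2D, starTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);   // build the mipmap chain so downscaled stars are averaged, not point-sampled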
Greetings,
I'm working on an application inspired by the "ZoomingPDFViewer" example that comes with the iOS SDK. At some point I found the following bit of code:
// to handle the interaction between CATiledLayer and high resolution
// screens, we need to manually set the tiling view's
// contentScaleFactor to 1.0. (If we omitted this, it would be 2.0
// on high resolution screens, which would cause the CATiledLayer
// to ask us for tiles of the wrong scales.)
pageContentView.contentScaleFactor = 1.0;
I tried to learn more about contentScaleFactor and what it does. After reading everything in Apple's documentation that mentions it and searching Google, I never found a definitive answer to what it actually does.
Here are a few things I'm curious about:
It seems that contentScaleFactor has some kind of effect on the graphics context when a UIView's/CALayer's contents are drawn. This seems to be relevant to high-resolution displays (like the Retina display). What kind of effect does contentScaleFactor really have, and on what?
When using a UIScrollView and setting it up to zoom, say, my contentView, all subviews of contentView are scaled too. How does this work? Which properties does UIScrollView modify so that even video players scale up and become blurry?
TL;DR: How does UIScrollView's zooming feature work "under the hood"? I want to understand how it works so I can write proper code.
Any hints and explanations are highly appreciated! :)
Coordinates are expressed in points, not pixels. contentScaleFactor defines the relation between points and pixels: if it is 1, points and pixels are the same, but if it is 2 (as on Retina displays), every point corresponds to two pixels in each direction.
In normal drawing, working with points means you don't have to worry about resolution: on an iPhone 3GS (scale factor 1) and an iPhone 4 (scale factor 2 and 2x resolution) you can use the same coordinates and drawing code. However, if you are drawing an image (directly, as a texture, ...) and just using normal coordinates (points), you cannot trust that the pixel-to-point mapping is 1 to 1. If you do, every pixel of the image will correspond to 1 point, which is 4 pixels when the scale factor is 2 (2 in x, 2 in y), so images can become a bit blurred.
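As a quick illustration of that relation (view here is just any UIView):

CGFloat scale = view.contentScaleFactor;        // 1.0 on older devices, 2.0 on Retina
CGSize pointSize = view.bounds.size;            // e.g. 100 x 100 points
CGSize pixelSize = CGSizeMake(pointSize.width  * scale,
                              pointSize.height * scale);
// 100 x 100 pixels at scale 1.0, but 200 x 200 pixels at scale 2.0, which is
// why a 100 x 100 pixel image stretched over a 100 x 100 point rect looks
// blurred on a Retina display.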
Working with CATiledLayer you can get unexpected results with a scale factor of 2. I guess that having the UIView at contentScaleFactor == 2 and the layer at contentsScale == 2 confuses the system and sometimes multiplies the scale. Maybe something similar happens with UIScrollView.
Hope this clarifies it a bit.
Apple has a section about this on its "Supporting High-Resolution Screens" page in the iOS developer documentation.
The page says:
Updating Your Custom Drawing Code

When you do any custom drawing in your application, most of the time you should not need to care about the resolution of the underlying screen. The native drawing technologies automatically ensure that the coordinates you specify in the logical coordinate space map correctly to pixels on the underlying screen. Sometimes, however, you might need to know what the current scale factor is in order to render your content correctly. For those situations, UIKit, Core Animation, and other system frameworks provide the help you need to do your drawing correctly.

Creating High-Resolution Bitmap Images Programmatically

If you currently use the UIGraphicsBeginImageContext function to create bitmaps, you may want to adjust your code to take scale factors into account. The UIGraphicsBeginImageContext function always creates images with a scale factor of 1.0. If the underlying device has a high-resolution screen, an image created with this function might not appear as smooth when rendered. To create an image with a scale factor other than 1.0, use UIGraphicsBeginImageContextWithOptions instead. The process for using this function is the same as for the UIGraphicsBeginImageContext function:

Call UIGraphicsBeginImageContextWithOptions to create a bitmap context (with the appropriate scale factor) and push it on the graphics stack.
Use UIKit or Core Graphics routines to draw the content of the image.
Call UIGraphicsGetImageFromCurrentImageContext to get the bitmap's contents.
Call UIGraphicsEndImageContext to pop the context from the stack.

For example, the following code snippet creates a bitmap that is 200 x 200 pixels. (The number of pixels is determined by multiplying the size of the image by the scale factor.)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0,100.0), NO, 2.0);
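Putting the four quoted steps together, a minimal sketch might look like this (the red square is just placeholder drawing):

UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0, 100.0), NO, 2.0);   // 1. push a 2x bitmap context

[[UIColor redColor] setFill];                                                // 2. draw with UIKit routines
UIRectFill(CGRectMake(10.0, 10.0, 80.0, 80.0));

UIImage *image = UIGraphicsGetImageFromCurrentImageContext();                // 3. image backed by a 200 x 200 px bitmap
UIGraphicsEndImageContext();                                                 // 4. pop the context from the stack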
See it here: Supporting High-Resolution Screens