How to adjust GLCamera to show the entire GLScene - Delphi

I have a GLScene object of varying (but known) size. It is completely surrounded by a TGLDummyCube.
I want to position the GLCamera (with CameraStyle set to csPerspective) so that the object is completely visible on screen. I have this basically working: the object is visible, but sometimes the camera is too far away, and sometimes the object is larger than the screen and gets clipped.
How can I do that? I suppose that this can be done by a clever combination of camera distance and focal length, but I have not been successful so far.
This seems to be different in GLScene compared to OpenGL. I'm using GLScene and Delphi 2007.

Although varying the camera distance and focal length will change the object's visual size, it has the drawback of also changing the perspective, leading to a somewhat distorted view. I suggest using the camera's SceneScale property instead.
Unfortunately, I have no general formula to calculate the correct value for it. In my case I have to scale to a cube of varying size while the window size of the viewer stays constant. So at design time I placed two DummyCubes at the position of the target cube, each sized to fit either the width or the height of the viewer, with appropriate values for SceneScale, camera distance, and FocalLength. At runtime I calculate the new SceneScale from the ratio of the target cube size to the DummyCube sizes. This works quite well in my case.
Edit: Here is the code I use for the calculations.
ZoomRefX and ZoomRefY are those DummyCubes
TargetDimX and TargetDimY give the size of the current object
DesignWidth and DesignHeight are the size of MyGLView at design time
DesignSceneScale is the camera's SceneScale at design time
The calculation code:
ScaleX := (ZoomRefX.CubeSize * MyGLView.Width) / (DesignWidth * TargetDimX);
ScaleY := (ZoomRefY.CubeSize * MyGLView.Height) / (DesignHeight * TargetDimY);
NewSceneScale := Min(ScaleX, ScaleY) * DesignSceneScale; // Min is in the Math unit
The DummyCubes ZoomRefX and ZoomRefY are sized so that they leave a small margin to either the left/right or the top/bottom edges of the viewing window. They are both positioned so that their front faces match, and the target object is positioned so that its front face matches theirs.
The formulas above allow the window size to differ from its design-time size, though I haven't actually tested that feature.
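One detail the snippet leaves out is applying the result. A minimal sketch, assuming the viewer's camera is called GLCamera1 (the name is an assumption):
GLCamera1.SceneScale := NewSceneScale; // apply the computed scale
MyGLView.Invalidate;                   // repaint the viewer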

@Andreas, if you've been playing with SceneScale (as you mentioned in the comments), that means you are looking for a proper way to fit an object within the camera view, either by changing the camera distance/focal length or by resizing the object. If so, the easiest way to resize a single object to fit the screen is to use its BoundingSphereRadius property like this:
ResizeMultiplier := 2; //play with it, it depends on your camera params
GLFreeForm1.Scale.Scale(ResizeMultiplier / GLFreeForm1.BoundingSphereRadius);
You can add a TGLDummyCube as the root object for all other scene objects and then resize that dummy cube with the method mentioned above.
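A minimal sketch of that idea, assuming a TGLDummyCube named RootCube parents the whole scene (the name, and that its bounding sphere covers the children here, are assumptions):
// Scale everything in the scene through its root dummy cube
RootCube.Scale.Scale(ResizeMultiplier / RootCube.BoundingSphereRadius);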

Related

Get SCNMaterial's frame

I have an iPhone 3D model in my SceneKit application, it has a material which I get. It is called iScreen (the image that "is on" the screen of the iPhone):
var iScreen: SCNMaterial!
iScreen = iphone.geometry?.materialWithName("Screen")!
I decided to somehow project a webView there instead of an image.
Therefore I need the frame / position / size of the screen where iScreen "draws" to set the UIWebView's frame. Is that possible?
Note
Of course I tried position, frame, size, etc., but none of those were available :/
You will first have to know which SCNGeometryElement the material applies to (an SCNGeometry is made of one or several SCNGeometryElements).
That geometry element is essentially a list of indices to retrieve vertex data contained in the geometry's SCNGeometrySources. Here you are interested in the position source (it gives you the 3D coordinates of the vertices).
By iterating over these positions you'll be able to find the element's width and height.
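A minimal sketch of that iteration (Swift of the same vintage as the question; it assumes positions are three 32-bit floats per vertex and, for brevity, walks the whole position source rather than only the indices of one element):
let source = iphone.geometry!
    .geometrySourcesForSemantic(SCNGeometrySourceSemanticVertex)[0] as! SCNGeometrySource

var minX = Float.infinity, maxX = -Float.infinity
var minY = Float.infinity, maxY = -Float.infinity

// Each position starts at dataOffset; consecutive positions are dataStride bytes apart
for i in 0..<source.vectorCount {
    var v = [Float](count: 3, repeatedValue: 0)
    let range = NSRange(location: i * source.dataStride + source.dataOffset,
                        length: 3 * source.bytesPerComponent)
    source.data.getBytes(&v, range: range)
    minX = min(minX, v[0]); maxX = max(maxX, v[0])
    minY = min(minY, v[1]); maxY = max(maxY, v[1])
}

let width = maxX - minX  // in the geometry's local units
let height = maxY - minY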

OpenGL ImageViewer

I have been trying to make an ImageViewer in OpenGL, but I don't know how to hide specific parts of my vectors/textures in OpenGL.
The ImageViewer should be an exact copy of a UIScrollView with paging enabled, where the images fill the whole screen.
The neat thing about UIScrollView is that you can set its frame and its content size, so when an image slides out of the frame, you can no longer see it.
I need some guidelines so I can continue researching what to do.
Maybe you can set up your fragment shader to make pixels invisible when they are out of range.
You know the positions of the four vertices (top-left, top-right, bottom-left, bottom-right) and the texture coordinates. You can then upload a uniform vec4 to the fragment shader containing the minimum and maximum x and y extents of the visible window. For each pixel you calculate whether it is inside or outside that area: if inside, output the actual color; if outside, gl_FragColor = vec4(1.0, 1.0, 1.0, 0.0);
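A minimal fragment-shader sketch of that idea (GLSL ES; the uniform and varying names are assumptions, with uClipRect holding min x/y in its xy components and max x/y in zw, in window coordinates):
uniform sampler2D uTexture;
uniform vec4 uClipRect; // (minX, minY, maxX, maxY) in window coordinates
varying vec2 vTexCoord;

void main()
{
    // gl_FragCoord is in window space, so compare it against the clip rect
    if (gl_FragCoord.x < uClipRect.x || gl_FragCoord.x > uClipRect.z ||
        gl_FragCoord.y < uClipRect.y || gl_FragCoord.y > uClipRect.w) {
        gl_FragColor = vec4(1.0, 1.0, 1.0, 0.0); // fully transparent
    } else {
        gl_FragColor = texture2D(uTexture, vTexCoord);
    }
}
If blending isn't enabled, discard is the more robust way to drop the out-of-range fragments.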
Is this any help?
I found a solution myself.
I put a UIScrollView that fills the whole screen and use its content offset to move the texture coordinates.
I don't know if this is the optimal solution, but I don't have any performance problems etc. So it is good enough for now.
If you have better solutions, feel free to suggest them.

Corona SDK - object:scale() scales object but width does not change?

I've scaled an object by 0.99 every frame for a certain amount of time, then I scale it by 1/0.99 for the same amount of time. Due to rounding errors, the object ends up bigger/smaller instead of the same size. To fix this, I save the original width and height in variables and set object.contentWidth and object.contentHeight to these variables whenever necessary. However, the object continues to grow or shrink and is never reset. When I print the original width and height variables, the content widths and heights, and the regular widths and heights, they are all the same value, as if the object had never been scaled.
I assume the problem here is me misunderstanding the Corona SDK object functions and properties, so I didn't post any code. If it's not a misunderstanding, I'll post a simplified version of my code here; just let me know.
I assume you're trying to scale back to the original size. In that case, set the scale back to 1.0 absolutely rather than repeatedly multiplying by 1/0.99:
Use the function object:scale() for relative scaling.
Use the properties object.xScale and object.yScale for absolute scaling.
Try it with absolute scaling.
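A minimal sketch of the difference, with obj standing in for your display object (the shrink loop is illustrative):
local obj = display.newRect(100, 100, 50, 50)

-- Relative: each call multiplies the current scale, so rounding errors accumulate
for i = 1, 60 do
    obj:scale(0.99, 0.99)
end

-- Absolute: xScale/yScale are factors of the original size,
-- so resetting them to 1 restores the exact original dimensions
obj.xScale = 1
obj.yScale = 1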

What is the secret behind "contentScaleFactor" of UIView when used with CATiledLayer?

Greetings,
I'm working on an application inspired by the "ZoomingPDFViewer" example that comes with the iOS SDK. At some point I found the following bit of code:
// to handle the interaction between CATiledLayer and high resolution
// screens, we need to manually set the tiling view's
// contentScaleFactor to 1.0. (If we omitted this, it would be 2.0
// on high resolution screens, which would cause the CATiledLayer
// to ask us for tiles of the wrong scales.)
pageContentView.contentScaleFactor = 1.0;
I tried to learn more about contentScaleFactor and what it does. After reading everything in Apple's documentation that mentions it, I searched Google but never found a definitive answer to what it actually does.
Here are a few things I'm curious about:
It seems that contentScaleFactor has some kind of effect on the graphics context when a UIView's/CALayer's contents are being drawn. This seems to be relevant to high resolution displays (like the Retina Display). What kind of effect does contentScaleFactor really have and on what?
When using a UIScrollView and setting it up to zoom, let's say, my contentView; all subviews of contentView are being scaled, too. How does this work? Which properties does UIScrollView modify to make even video players become blurry and scale up?
TL;DR: How does UIScrollView's zooming feature work "under the hood"? I want to understand how it works so I can write proper code.
Any hints and explanations are highly appreciated! :)
Coordinates are expressed in points, not pixels. contentScaleFactor defines the relation between points and pixels: if it is 1, points and pixels are the same; if it is 2 (as on Retina displays), every point corresponds to two pixels in each direction.
For normal drawing, working in points means that you don't have to worry about resolution: on the iPhone 3GS (scale factor 1) and the iPhone 4 (scale factor 2 and double the resolution) you can use the same coordinates and drawing code. However, if you are drawing an image (directly, as a texture, ...) and just using point coordinates, you can't trust that the pixel-to-point mapping is 1:1. If you do, every pixel of the image will correspond to one point but four pixels if the scale factor is 2 (2 in the x direction, 2 in y), so images can become a bit blurred.
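A minimal sketch of that point-to-pixel relation (Objective-C; view is illustrative):
CGFloat scale = view.contentScaleFactor;  // 1.0, or 2.0 on Retina
CGSize points = view.bounds.size;         // logical size, in points
CGSize pixels = CGSizeMake(points.width * scale,
                           points.height * scale); // backing-store size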
Working with CATiledLayer you can get some unexpected results with scale factor 2. I guess that having the UIView at contentScaleFactor == 2 and the layer at contentsScale == 2 confuses the system and sometimes multiplies the scales. Maybe something similar happens with UIScrollView.
Hope this clarifies it a bit
Apple has a section about this on its "Supporting High-Resolution Screens" page in the iOS dev documentations.
The page says:
Updating Your Custom Drawing Code
When you do any custom drawing in your application, most of the time
you should not need to care about the resolution of the underlying
screen. The native drawing technologies automatically ensure that the
coordinates you specify in the logical coordinate space map correctly
to pixels on the underlying screen. Sometimes, however, you might need
to know what the current scale factor is in order to render your
content correctly. For those situations, UIKit, Core Animation, and
other system frameworks provide the help you need to do your drawing
correctly.
Creating High-Resolution Bitmap Images Programmatically
If you currently use the UIGraphicsBeginImageContext function to create
bitmaps, you may want to adjust your code to take scale factors into
account. The UIGraphicsBeginImageContext function always creates
images with a scale factor of 1.0. If the underlying device has a
high-resolution screen, an image created with this function might not
appear as smooth when rendered. To create an image with a scale factor
other than 1.0, use the UIGraphicsBeginImageContextWithOptions function
instead. The process for using this function is the same as for the
UIGraphicsBeginImageContext function:
1. Call UIGraphicsBeginImageContextWithOptions to create a bitmap context (with the appropriate scale factor) and push it on the graphics stack.
2. Use UIKit or Core Graphics routines to draw the content of the image.
3. Call UIGraphicsGetImageFromCurrentImageContext to get the bitmap’s contents.
4. Call UIGraphicsEndImageContext to pop the context from the stack.
For example, the following code snippet
creates a bitmap that is 200 x 200 pixels. (The number of pixels is
determined by multiplying the size of the image by the scale
factor.)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0,100.0), NO, 2.0);
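For completeness, a minimal sketch carrying that snippet through all four steps (Objective-C; the red fill is just illustrative):
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0, 100.0), NO, 2.0);
[[UIColor redColor] setFill];
UIRectFill(CGRectMake(0.0, 0.0, 100.0, 100.0)); // drawn in points; the backing store is 200 x 200 pixels
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();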
See it here: Supporting High-Resolution Screens

How to scale on-screen pixels?

I have written a 2D Jump&Run engine resulting in a 320x224 (320x240) image. To maintain the old-school "pixely" feel, I would like to scale the resulting image by 2, 3, or 4, according to the resolution of the user.
I don't want to scale each and every sprite, but the resulting image!
Thanks in advance :)
Bob's answer is correct about changing the filtering mode to TextureFilter.Point to keep things nice and pixelated.
But possibly a better method than scaling each sprite (as you'd also have to scale the position of each sprite) is to just pass a matrix to SpriteBatch.Begin, like so:
sb.Begin(/* first three parameters */, Matrix.CreateScale(4f));
That will give you the scaling you want without having to modify all your draw calls.
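For reference, in XNA 4.0 the equivalent full overload would look something like this (a sketch; PointClamp keeps the pixelated look):
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  SamplerState.PointClamp, null, null, null,
                  Matrix.CreateScale(4f));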
However it is worth noting that, if you use floating-point offsets in your game, you will end up with things not aligned to pixel boundaries after you scale up (with either method).
There are two solutions to this. The first is to have a function like this:
public static Vector2 Floor(Vector2 v)
{
    return new Vector2((float)Math.Floor(v.X), (float)Math.Floor(v.Y));
}
And then pass your position through that function every time you draw a sprite. Although this might not work if your sprites use any rotation or offsets. And again you'll be back to modifying every single draw call.
The "correct" way to do this, if you want a plain point-wise scale-up of your whole scene, is to draw your scene to a render target at the original size. And then draw your render target to screen, scaled up (with TextureFilter.Point).
The function you want to look at is GraphicsDevice.SetRenderTarget. This MSDN article might be worth reading, and if you're on or moving to XNA 4.0, so might this one.
I couldn't find a simpler XNA sample for this quickly, but the Bloom Postprocess sample uses a render target that it then applies a blur shader to. You could simply ignore the shader entirely and just do the scale-up.
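Putting those pieces together, a minimal XNA 4.0 sketch of the render-target approach (sizes and field names are illustrative):
// Created once, e.g. in LoadContent: an off-screen buffer at native size
RenderTarget2D sceneTarget = new RenderTarget2D(GraphicsDevice, 320, 224);

// 1) Draw the scene at its original 320x224 resolution
GraphicsDevice.SetRenderTarget(sceneTarget);
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin();
// ... draw all sprites in 320x224 coordinates ...
spriteBatch.End();

// 2) Draw the buffer to the back buffer, scaled up with point sampling
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                  SamplerState.PointClamp, null, null);
spriteBatch.Draw(sceneTarget, new Rectangle(0, 0, 320 * 3, 224 * 3), Color.White);
spriteBatch.End();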
You could use a pixelation effect: draw to a RenderTarget2D, then draw the result to the screen using a pixel shader. There's a tool called Shazzam Shader Editor that lets you try out pixel shaders, and it includes one that does pixelation:
http://shazzam-tool.com/
This may not be what you wanted, but it could be good for allowing a high-resolution mode and for having the same effect no matter what resolution was used...
I'm not exactly sure what you mean by "resulting in ... an image", but if you mean your end result is a texture, then you can draw that to the screen and set a scale:
spriteBatch.Draw(texture, position, source, color, rotation, origin, scale, effects, depth);
Just replace the scale with whatever number you want (2, 3, or 4). I do something similar but scale per sprite and not the resulting image. If you mean something else let me know and I'll try to help.
XNA filters (smooths) scaled sprites by default. If you want to retain the pixelated goodness, you'll need to draw in immediate sort mode and set some additional sampler states:
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
GraphicsDevice.SamplerStates[0].MagFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MinFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MipFilter = TextureFilter.Point;
It's the Point TextureFilter you want here: it gives nearest-neighbour sampling, while None is only meaningful for the mip filter (it disables mipmapping).
