ThreeJS orthographic camera: Adjusting size of scene to window

In ThreeJS, I'm using the OrthographicCamera because I have several objects which should be shown from the front, without any perspective distortion. This works fine, with one issue: the group of objects I created should always roughly fill the screen, without ending up much smaller or larger. Using, e.g.,
var factor = 4.5;
this.camera = new THREE.OrthographicCamera(-window.innerWidth / factor, window.innerWidth / factor, window.innerHeight / factor, -window.innerHeight / factor, 1, 1000);
... I can get it to run fine at 1920x1080 resolution, but when I then resize the window to something smaller and reload the page, the objects become too large, as if their size were fixed. (When using the CombinedCamera with perspective, the resizing is dynamic and works well.) How can I fix this using the OrthographicCamera? Or is there a trick to turn off perspective in the CombinedCamera?
Thanks!

I am not sure if this will help with your particular issue or not, or if there is a better or more standard way to handle it, but this is what I use in my orthographic code so far. It doesn't work perfectly by any means, but you can try it out. Post back and let me know if you figure out anything better.
$(window).on('resize', function () {
    // notify the renderer of the size change
    renderer.setSize(window.innerWidth, window.innerHeight);
    // update the camera
    camera.left = -window.innerWidth / camFactor;
    camera.right = window.innerWidth / camFactor;
    camera.top = window.innerHeight / camFactor;
    camera.bottom = -window.innerHeight / camFactor;
    camera.updateProjectionMatrix();
});
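An alternative worth considering (a minimal sketch, not from the original answer): instead of deriving the frustum from pixel dimensions, fix the world-space height the camera should show and let the width follow the window's aspect ratio, so the scene fills the same fraction of the window at any resolution. viewSize is an assumed name for that world-space height.
// Minimal sketch (assumes a Three.js renderer and jQuery, as in the answer above).
// "viewSize" is an assumed name: how many world units are visible vertically.
var viewSize = 400;
var aspect = window.innerWidth / window.innerHeight;
var camera = new THREE.OrthographicCamera(
    -aspect * viewSize / 2, aspect * viewSize / 2, // left, right
    viewSize / 2, -viewSize / 2,                   // top, bottom
    1, 1000);                                      // near, far

$(window).on('resize', function () {
    renderer.setSize(window.innerWidth, window.innerHeight);
    aspect = window.innerWidth / window.innerHeight;
    camera.left = -aspect * viewSize / 2;  // only the horizontal extent changes;
    camera.right = aspect * viewSize / 2;  // top/bottom stay fixed in world units
    camera.updateProjectionMatrix();
});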

Related

How can I adjust this positioning to follow a "zoom-to-point" effect?

This is my 3rd day of googling and tinkering with my code, and I'm hopeless, I honestly need help, so please bear with me here.
Explanation
I'll start by explaining the basic concept. There is a 'character' in the middle of the screen, always perfectly centered. Under that layer is a big square, which I will refer to as the canvas. As you move around, the game is actually just simulating the movement - a 'virtual camera', if you will - so the character doesn't actually move; the canvas just moves behind it.
My Problem
In this game, the camera needs to be able to 'zoom' in and out. To do this, I adjust the size of the canvas and everything else depending on the zoom value (1 is the default). The width and height of the canvas are always 8000 * the zoom value. This is fine; however, I am having major difficulty positioning the canvas so that it remains in the same relative position as you scale in and out - or rather, a 'zoom to point' / 'zoom from point' effect. I simply cannot work out the math for this.
The Code
local oldScale = g.scale;
local newScale = n.scaleTest.Value;
g.scale = newScale;
g.x = g.x + (g.x * newScale);
g.y = g.y + (g.y * newScale);
g.scale represents the zoom value; everything is adjusted to this size.
g.x and g.y are the coordinates of the canvas. The math in the code is completely incorrect.
n.scaleTest.Value is my temporary variable for the newly set scale. It can change at any time.
Thoughts / Other Important Information
I can retrieve the AbsolutePosition and AbsoluteSize of the canvas as well. I have considered implementing the distance from the canvas's edges to the character into my calculations, but I am not completely sure how to do this.
Thank you for any help, I would sincerely appreciate it, I am very stuck and don't know who to ask.
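For what it's worth, here is the standard 'zoom to point' math (an illustrative sketch, not an answer from the original thread): move the canvas origin toward or away from the zoom point by the ratio of the new and old scales, so the content under that point stays put on screen. Shown in JavaScript; g, oldScale and newScale are named after the question, while zx and zy are assumed names for the screen point to zoom around (here, the centered character):
// Illustrative sketch of zoom-to-point (not from the original thread).
// g      : the canvas, with x, y, scale fields (name from the question)
// zx, zy : screen point that should stay fixed while zooming
//          (assumed: the screen center, since the character is centered)
function zoomToPoint(g, oldScale, newScale, zx, zy) {
    var ratio = newScale / oldScale;
    g.x = zx - (zx - g.x) * ratio; // move the origin so the point under (zx, zy)
    g.y = zy - (zy - g.y) * ratio; // maps back to (zx, zy) at the new scale
    g.scale = newScale;
}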

What is the right way to zoom the camera out from a scene view?

I have a scene where my gameplay happens. I'm looking for a way to slowly 'zoom out' so that more and more space becomes visible as time passes. But I want the HUD and some other elements to stay the same size. It's like the mouse-wheel functionality in top-down games.
I tried to do it by sticking everything into display groups and transitioning their xScale and yScale. It works visually, of course, but the game functionality screws up. For example, I have a spawner that spawns some objects; its graphics get scaled, but the spawner still spawns the objects from its original position, not the new scaled position.
Do I need to keep track of every X, Y position I'm using during gameplay and transition them too? (Which is probably possible, but too hard for me, since I use a lot of touch events for aiming, path creation, etc.) Or is there an easier way to achieve this? Please please say yes :P
I'm looking forward to your answers - thanks in advance! :)
The problem is that you are scaling your image, but not your position coordinates. You need to convert from 'original coordinates' to 'scaled coordinates': if you scale your map to half size, you should scale the move amounts by half too.
For example, let's assume your scale factor is 0.5 and you have an image:
local scaleFactor = 0.5
image.xScale = scaleFactor
image.yScale = scaleFactor
Then you would need to scale your move amounts by the same factor:
thingThatIsMoved.x = thingThatIsMoved.x + (moveAmount * scaleFactor)
thingThatIsMoved.y = thingThatIsMoved.y + (moveAmount * scaleFactor)
I hope this helps. I am assuming you have a moveAmount variable and are updating the position in the enterFrame() event.
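Going the other way (which the asker needs for touch events) is just the inverse: subtract the scaled group's on-screen position and divide by the scale factor. An illustrative sketch, shown in JavaScript since the math is identical in Lua; groupX and groupY are assumed names for the scaled group's position on screen:
// Illustrative: convert a touch/screen position back to unscaled coordinates.
// groupX, groupY are assumed names for the scaled group's on-screen position.
function screenToWorld(touchX, touchY, scaleFactor, groupX, groupY) {
    return {
        x: (touchX - groupX) / scaleFactor,
        y: (touchY - groupY) / scaleFactor
    };
}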

iOS drag on UIView, cloth effect

Since I saw this menu drag concept, I have really been interested to find out how to accomplish it.
So I am wondering how I would go about dragging with a cloth-effect in a UIView?
I know how to drag items, but how do you give them the ripple effect?
(Better image: http://dribbble.com/shots/899177-Slide-Concept/attachments/98219)
In short: it’s really, really hard. The old Classics app achieved something along those lines using a series of pre-rendered smooth paper images under a simple transform of their view content, but as you can see from those screenshots (and the one below—note that the text at the bottom is still making a straight line, since it’s getting a basic perspective transform), the effect was fairly limited.
The effect shown in that Dribbble design is much more complicated, since it’s actually doing a scrunching-up warp of the view’s content, not just skewing it as Classics did; the only way I can think of to do that exact effect on iOS at present would be to drop into OpenGL and distort the content with a mesh there.
A simpler option would be to use UIPageViewController, which will at least get you the nice iBooks-style curling paper effect—it ain't fabric, but it's a lot easier than the GL option.
There is an open source reimplementation of this already.
This blog post: Mesh Transforms covers the private CAMeshTransform. Rather than treating a CALayer as a simple quad, it allows CALayers to be turned into a mesh of connected faces. This class is how Apple has been able to implement the page curl and iBooks page turning effects.
However, the API doesn't tolerate malformed input at all well and Apple has kept it a private API.
If you keep reading that blog post though you'll come to this section just after the bit about it being private API.
In the spirit of CAMeshTransform I created a BCMeshTransform which copies almost every feature of the original class.
...
Without direct, public access to Core Animation render server I was forced to use OpenGL for my implementation. This is not a perfect solution as it introduces some drawbacks the original class didn’t have, but it seems to be the only currently available option.
In effect he renders the content view into an OpenGL texture and then displays that. This lets him mess around with it however he likes.
Including like this...
I encourage you to check out the demo app I made for BCMeshTransformView. It contains a few ideas of how a mesh transform can be used to enrich interaction, like my very simple, but functional take on that famous Dribbble.
What famous Dribbble? The one linked in the question above; the demo app includes a simple but functional take on it.
Open source project: https://github.com/Ciechan/BCMeshTransformView
Example Implementation of the curtain effect: BCCurtainDemoViewController.m
How does it work?
It sets the BCMeshTransformView up with some lighting and perspective.
// From: https://github.com/Ciechan/BCMeshTransformView/blob/master/Demo/BCMeshTransformViewDemo/BCCurtainDemoViewController.m#L59
self.transformView.diffuseLightFactor = 0.5;
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0/2000.0; // the classic Core Animation perspective trick: a small negative m34 adds depth foreshortening
self.transformView.supplementaryTransform = perspective;
Then, using a UIPanGestureRecognizer, it tracks the touches and uses this method to build a new mesh transform every time the user's finger moves.
// From: https://github.com/Ciechan/BCMeshTransformView/blob/master/Demo/BCMeshTransformViewDemo/BCCurtainDemoViewController.m#L91
self.transformView.meshTransform = [BCMutableMeshTransform curtainMeshTransformAtPoint:CGPointMake(point.x + self.surplus, point.y) boundsSize:self.transformView.bounds.size];
// From: https://github.com/Ciechan/BCMeshTransformView/blob/master/Demo/BCMeshTransformViewDemo/BCMeshTransform%2BDemoTransforms.m#L14
+ (instancetype)curtainMeshTransformAtPoint:(CGPoint)point boundsSize:(CGSize)boundsSize
{
    const float Frills = 3;
    // clamp the drag point to the view's right edge
    point.x = MIN(point.x, boundsSize.width);
    // start from a flat 20 x 50 identity mesh covering the view
    BCMutableMeshTransform *transform = [BCMutableMeshTransform identityMeshTransformWithNumberOfRows:20 numberOfColumns:50];
    // drag point normalized into the mesh's 0..1 coordinate space
    CGPoint np = CGPointMake(point.x/boundsSize.width, point.y/boundsSize.height);
    [transform mapVerticesUsingBlock:^BCMeshVertex(BCMeshVertex vertex, NSUInteger vertexIndex) {
        float dy = vertex.to.y - np.y;
        // bell-shaped falloff: rows level with the finger are gathered fully
        // toward it, rows further away a little less, curving the curtain's edge
        float bend = 0.25f * (1.0f - expf(-dy * dy * 10.0f));
        float x = vertex.to.x;
        // sinusoidal ripples ("frills") whose depth grows as the curtain is gathered
        vertex.to.z = 0.1 + 0.1f * sin(-1.4f * cos(x * x * Frills * 2.0 * M_PI)) * (1.0 - np.x);
        vertex.to.x = (vertex.to.x) * np.x + vertex.to.x * bend * (1.0 - np.x);
        return vertex;
    }];
    return transform;
}

Scaling entire screen in XNA

Using XNA, I'm trying to make an adventure game engine that lets you make games that look like they fell out of the early 90s, like Day of the Tentacle and Sam & Max Hit the Road. Thus, I want the game to actually run at 320x240 (I know, it should probably be 320x200, but shh), but it should scale up depending on user settings.
It works kind of okay right now, but I'm running into some problems where I actually want it to look more pixellated than it currently does.
Here's what I'm doing right now:
In the game initialization:
public Game() {
    graphics = new GraphicsDeviceManager(this);
    graphics.PreferredBackBufferWidth = 640;
    graphics.PreferredBackBufferHeight = 480;
    graphics.PreferMultiSampling = false;
    Scale = graphics.PreferredBackBufferWidth / 320;
}
Scale is a public static variable that I can check anytime to see how much I should scale my game relative to 320x240.
In my drawing function:
spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.NonPremultiplied, SamplerState.PointClamp, DepthStencilState.Default, RasterizerState.CullNone, null, Matrix.CreateScale(Game.Scale));
This way, everything is drawn at 320x240 and blown up to fit the current resolution (640x480 by default). And of course I do math to convert the actual coordinates of the mouse into 320x240 coordinates, and so forth.
This is great and all, but now I'm getting to the point where I want to start scaling my sprites, to have them walk into the distance and so forth.
Look at the images below. The upper-left image is a piece of a screenshot from when the game is running at 640x480. The image to the right of it is how it "should" look, at 320x240. The bottom row of images is just the top row blown up to 300% (in Photoshop, not in-engine) so you can see what I'm talking about.
In the 640x480 image, you can see different "line thicknesses;" the thicker lines are how it should really look (one pixel = 2x2, because this is running at 640x480), but the thinner lines (1x1 pixel) also appear, when they shouldn't, due to scaling (see the images on the right).
Basically I'm trying to emulate a 320x240 display but blown up to any resolution using XNA, and matrix transformations aren't doing the trick. Is there any way I could go about doing this?
Render everything at the native resolution to a RenderTarget instead of the back buffer:
SpriteBatch targetBatch = new SpriteBatch(GraphicsDevice);
RenderTarget2D target = new RenderTarget2D(GraphicsDevice, 320, 240);
GraphicsDevice.SetRenderTarget(target);
//perform draw calls
Then render this target (your whole screen) to the back buffer:
//set rendering back to the back buffer
GraphicsDevice.SetRenderTarget(null);
//render the target to the back buffer; PointClamp keeps the blown-up pixels
//crisp (the parameterless Begin() defaults to linear filtering, which blurs them),
//and the size is read from PresentationParameters, which describes the back
//buffer, rather than DisplayMode, which describes the whole display
targetBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, SamplerState.PointClamp,
    DepthStencilState.None, RasterizerState.CullNone);
targetBatch.Draw(target,
    new Rectangle(0, 0,
        GraphicsDevice.PresentationParameters.BackBufferWidth,
        GraphicsDevice.PresentationParameters.BackBufferHeight),
    Color.White);
targetBatch.End();

Direct3D Camera aspect ratio/scaling problem

I'm using SlimDX/C# to write a Direct3D application. I configured the camera in the textbook way:
private float cameraZ = 5.0f;
camera = new Camera();
camera.FieldOfView =(float)(Math.PI/2);
camera.NearPlane = 0.5f;
camera.FarPlane = 1000.0f;
camera.Location = new Vector3(0.0f, 0.0f, cameraZ);
camera.Target = Vector3.Zero;
camera.AspectRatio = (float)InitialWidth / InitialHeight;
The drawing and rotation code uses the usual Matrix.RotationYawPitchRoll and mesh.DrawSubset(0). Everything else appears normal.
My problem is that my 3D mesh (a thin square box), when viewed from the side, appears thicker standing vertically than lying horizontally. I've tried changing the AspectRatio to 1, and it's worse. Through trial and error, I found that it looks much more normal when the AspectRatio is around 2.6. Why is that, and what could be wrong?
I've figured out the problem and the answer already.
Apparently I scale the mesh to match the aspect ratio, and I was applying Matrix.Scaling after Matrix.RotationYawPitchRoll. When I rotated the mesh while it was facing forward, I realized it looked the same whether vertical or horizontal: the scaling was stretching it sideways in world space, no matter how it was rotated. Swapping the two matrices fixed my problem.
Thanks anyway
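To see why the order matters: scaling applied after a rotation stretches along world axes, not along the mesh's own axes. An illustrative sketch in plain JavaScript (not SlimDX; note that D3D/SlimDX uses row vectors, so the multiplication order there reads the other way round):
// With column vectors, (R * S) * v scales v in the mesh's local space first,
// then rotates; (S * R) * v rotates first, then stretches along world axes.
function rot(a) { return [[Math.cos(a), -Math.sin(a)], [Math.sin(a), Math.cos(a)]]; }
function scale(sx, sy) { return [[sx, 0], [0, sy]]; }
function mul(m, n) { // 2x2 matrix product m * n
    return [
        [m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
        [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]]
    ];
}
function apply(m, v) { return [m[0][0]*v[0] + m[0][1]*v[1], m[1][0]*v[0] + m[1][1]*v[1]]; }

var R = rot(Math.PI / 2); // rotate 90 degrees
var S = scale(2, 1);      // non-uniform scale, e.g. aspect-ratio correction
var v = [1, 0];           // a point on the mesh's local x axis

console.log(apply(mul(R, S), v)); // ~[0, 2]: scaled along local x, then rotated
console.log(apply(mul(S, R), v)); // ~[0, 1]: rotated first; the world-x stretch misses it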
