Currently I have the camera following an image, but I've now decided I want to display some text at the top of the screen.
I have found that with the following code, the text moves around the screen, since the on-screen location of (20, 20) keeps changing (which makes sense: the camera is following an object, while the position (20, 20) is static).
spriteBatch.DrawString(font, "test", new Vector2(20, 20), Color.White);
The camera is being updated with the following code.
_viewMatrix = Matrix.CreateTranslation(new Vector3(-this.Position.X, -this.Position.Y, 0)) *
Matrix.CreateRotationZ(this.Rotation) *
Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) *
Matrix.CreateTranslation(new Vector3(viewPort.Width * 0.5f, viewPort.Height * 0.5f, 0));
It's late and I feel I'm missing something obvious, but if I want to always display "Test" on the screen regardless of where the camera is, how do you go about it?
Simple: start another sprite batch (i.e. call Begin again) without passing a view matrix.
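For example, a minimal sketch of the two passes (assuming XNA 4.0's Begin overload that takes a transform matrix, and the _viewMatrix field from your camera code):
// World pass: everything drawn here is transformed by the camera.
spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, _viewMatrix);
// ... draw the image the camera follows, and the rest of the world ...
spriteBatch.End();

// HUD pass: no matrix, so (20, 20) is a fixed screen position.
spriteBatch.Begin();
spriteBatch.DrawString(font, "test", new Vector2(20, 20), Color.White);
spriteBatch.End();
Anything drawn in the second batch stays put no matter where the camera moves.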
I'm currently working on a little project where I render a cubemap with WebGL and then add sound with the Web Audio API.
Since the project is very large, I'll just explain what I am looking for. When I load an audio file, the sound gets visualized (it looks like a cube). The audio listener is ALWAYS at position (0, 0, 0). What I have done so far is create a camera (using the gl-matrix library) with lookAt and perspective, so that when I rotate the camera away from the audio-emitting cube, the audio should sound different.
How am I doing this?
Every frame I set the orientation of the PannerNode (panner.setOrientation) to the up vector of the camera. Here is the per-frame update method (for the sound):
update(camera) {
    this._upVec = vec3.copy(this._upVec, camera.upVector);
    //vec3.transformMat4(this._upVec, this._upVec, camera.vpMatrix);
    //vec3.normalize(this._upVec, this._upVec);
    this._sound.panner.setOrientation(this._upVec[0], this._upVec[1], this._upVec[2]);
}
And here is the updateViewProjMatrix method from my Camera class, where I update the orientation of the listener:
updateViewProjMatrix() {
    let gl = Core.mGetGL();
    this._frontVector[0] = Math.cos(this._yaw) * Math.cos(this._pitch);
    this._frontVector[1] = Math.sin(this._pitch);
    this._frontVector[2] = Math.sin(this._yaw) * Math.cos(this._pitch);
    vec3.normalize(this._lookAtVector, this._frontVector);
    vec3.add(this._lookAtVector, this._lookAtVector, this._positionVector);
    mat4.lookAt(this._viewMatrix, this._positionVector, this._lookAtVector, this._upVector);
    mat4.perspective(this._projMatrix, this._fov * Math.PI / 180, gl.canvas.clientWidth / gl.canvas.clientHeight, this._nearPlane, this._farPlane);
    mat4.multiply(this._vpMatrix, this._projMatrix, this._viewMatrix);
    Core.getAudioContext().listener.setOrientation(this._lookAtVector[0], this._lookAtVector[1], this._lookAtVector[2], 0, 1, 0);
}
Is this the right way? I can hear that the sound changes when I rotate the camera, but I am not sure. And do I have to multiply the resulting up vector by the current view-projection matrix?
I'm programmatically using SKTileMapNode. The code is C# (Xamarin.iOS) but should be readable by every Swift/ObjC developer.
The problem is that the sorting of tiles in isometric projection seems to be incorrect and I cannot see why. To test, the map has 1 row and 5 columns.
See the screenshot:
The tiles at 0|0 and 2|0 are in front of the others. The pyramid-styled tile at 4|0, however, is drawn correctly in front of the one at 3|0.
I'm using two simple tiles:
The first one has a resolution of 133x83 px and the second one is 132x131 px.
This is what it looks like in Tiled and what I am trying to reproduce:
The tile map is set up and added to the scene using the following code:
var tileDef1 = new SKTileDefinition (SKTexture.FromImageNamed ("landscapeTiles_014"));
var tileDef2 = new SKTileDefinition (SKTexture.FromImageNamed ("landscapeTiles_036"));
var tileGroup1 = new SKTileGroup (tileDef1);
var tileGroup2 = new SKTileGroup (tileDef2);
var tileSet = new SKTileSet (new [] { tileGroup1, tileGroup2 }, SKTileSetType.Isometric);
var tileMap = SKTileMapNode.Create(tileSet, 5, 2, new CGSize (128, 64));
tileMap.Position = new CGPoint (0, 0);
tileMap.SetTileGroup (tileGroup1, 0, 0);
tileMap.SetTileGroup (tileGroup2, 1, 0);
tileMap.SetTileGroup (tileGroup1, 2, 0);
tileMap.SetTileGroup (tileGroup2, 3, 0);
tileMap.SetTileGroup (tileGroup2, 4, 0);
tileMap.AnchorPoint = new CGPoint (0, 0);
Add (tileMap);
I first suspected an incorrect tile size. The tile size used to initialise the tile map (128|64) is the size of the diamond-shaped base of the tile. For a flat tile, this is identical to the texture size; for tiles with a height, it differs. However, changing the tile size affects the alignment of the tiles, and the size I'm using is the same as in Tiled, which gives the correct result, so that cannot be the culprit.
What am I doing wrong, or where is my thinking wrong?
I don't have an answer for the main issue, but the tile size you use to initialise the map (128|64) is the size of the base tile, used to convert isometric coordinates into the actual orthogonal space.
If you change this size in Tiled, it'll affect the size of the positioning grid. You can see a similar effect in Xcode's built-in tile map editor (or, I guess, in any other isometric tile map editor).
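For illustration, here is a rough sketch of that conversion in C# (the helper name is mine; SKTileMapNode performs the equivalent mapping internally):
// Maps isometric tile coordinates to orthogonal scene coordinates.
// tileWidth/tileHeight are the base diamond's size (128 and 64 here),
// not the texture size; a taller texture simply extends upwards.
static CGPoint IsoToScene(int col, int row, double tileWidth, double tileHeight)
{
    var x = (col - row) * tileWidth * 0.5;
    var y = (col + row) * tileHeight * 0.5;
    return new CGPoint(x, y);
}
This is why the 128|64 base size, rather than the 133x83 or 132x131 texture sizes, is what has to match Tiled's settings.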
I have been coding hard on a Direct3D9-based game. Everything went well until I hit a big problem. I created a class that wraps the process of loading a mesh from a .x file. I successfully loaded a cube with only one face visible. In theory, that face should look like a square, but it is actually rendered as a rectangle. I am quite sure that there is something wrong with the D3DPRESENT_PARAMETERS structure. Below are only the most important lines of my application's initialization.
First part to be created is the focus window:
HWND hWnd = CreateWindowEx(0UL, L"NewFrontiers3DWindowClass", Title.c_str(), WS_POPUP | WS_EX_TOPMOST, 0, 0, 1280, 1024, nullptr, (HMENU)false, hInstance, nullptr);
Then I fill out the D3DPRESENT_PARAMETERS structure.
D3DDISPLAYMODE D3DMM;
SecureZeroMemory(&D3DMM, sizeof(D3DDISPLAYMODE));
if(FAILED(hr = Direct3D9->GetAdapterDisplayMode(Adapter, &D3DMM)))
{
// Error is processed here
}
PresP.BackBufferWidth = D3DMM.Width;
PresP.BackBufferHeight = D3DMM.Height;
PresP.BackBufferFormat = BackBufferFormat;
PresP.BackBufferCount = 1U;
PresP.MultiSampleType = D3DMULTISAMPLE_NONE;
PresP.MultiSampleQuality = 0UL;
PresP.SwapEffect = D3DSWAPEFFECT_DISCARD;
PresP.hDeviceWindow = hWnd;
PresP.Windowed = false;
PresP.EnableAutoDepthStencil = EnableAutoDepthStencil;
PresP.AutoDepthStencilFormat = AutoDepthStencilFormat;
PresP.Flags = D3DPRESENTFLAG_DISCARD_DEPTHSTENCIL;
PresP.FullScreen_RefreshRateInHz = D3DMM.RefreshRate;
PresP.PresentationInterval = PresentationInterval;
Then the Direct3D9 device is created, followed by the SetRenderState functions.
Next, the viewport is assigned.
D3DVIEWPORT9 D3D9Viewport;
SecureZeroMemory(&D3D9Viewport, sizeof(D3DVIEWPORT9));
D3D9Viewport.X = 0UL;
D3D9Viewport.Y = 0UL;
D3D9Viewport.Width = (DWORD)D3DMM.Width;
D3D9Viewport.Height = (DWORD)D3DMM.Height;
D3D9Viewport.MinZ = 0.0f;
D3D9Viewport.MaxZ = 1.0f;
if(FAILED(Direct3D9Device->SetViewport(&D3D9Viewport)))
{
// Error is processed here
}
After this initialization, I globally declare some parameters that will be used later.
D3DXVECTOR3 EyePt(0.0f, 0.0f, -5.0f), Up(0.0f, 1.0f, 0.0f), LookAt(0.0f, 0.0f, 0.0f);
D3DXMATRIX View, Proj, World;
The update function looks like this:
Mesh.Render(Direct3D9Device);
D3DXMatrixLookAtLH(&View, &EyePt, &LookAt, &Up);
Direct3D9Device->SetTransform(D3DTS_VIEW, &View);
D3DXMatrixPerspectiveFovLH(&Proj, D3DX_PI/4, 1.0f, 1.0f, 1000.f);
Direct3D9Device->SetTransform(D3DTS_PROJECTION, &Proj);
D3DXMatrixTranslation(&World, 0.0f, 0.0f, 0.0f);
Direct3D9Device->SetTransform(D3DTS_WORLD, &World);
The device is not a null pointer.
I recently realized that there is no difference between declaring and setting up a viewport and not doing so.
If there is anybody who can point me to the right answer, please help me solve this annoying problem.
If you don't set any transformation matrices, then the identity transformation is applied to your mesh, and the face of the cube will be stretched to the same shape as the viewport. If your viewport isn't square (e.g. it's the same size as the screen), then your cube's face won't be square either.
You can use a square viewport to work around this problem, but that will limit your rendering to just that square on the screen. If you want to render to the entire screen, you'll need to set a suitable projection matrix. You can calculate a normal perspective matrix using D3DXMatrixPerspectiveFovLH. If you want an orthographic projection, where everything is the same size regardless of the distance from the camera, then use D3DXMatrixOrthoLH to calculate the projection matrix. Note that if you use your viewport's width and height with the latter function, it will shrink your cube: a unit-size cube will be rendered as a single pixel on the screen. You can either use a world or view transform to scale it up again, or use something like width/height and 1 as your width and height parameters to D3DXMatrixOrthoLH.
If you go with D3DXMatrixPerspectiveFovLH then you want something like this:
D3DXMatrixPerspectiveFovLH(&Proj, D3DX_PI/4,
                           (float)D3DMM.Width / (float)D3DMM.Height,
                           1.0f, 1000.f);
I think your problem is not in the D3DPRESENT_PARAMETERS but in your projection matrix. If you use D3DXMatrixPerspectiveFovLH, check that the aspect ratio is 1280 / 1024 = 1.25f.
I'm pretty new to the SDK, so forgive me. I want an object to float/transition from the bottom of the screen to the top and keep going until it is off the screen. How do I do that without hard-coding the values, given that all screens have different heights?
First place your object on the bottom of the screen:
-- if starting outside (just below) the screen:
object.y = (display.contentHeight + display.screenOriginY * -2) + object.contentHeight * 0.5
-- if starting at the bottom of the screen:
object.y = (display.contentHeight + display.screenOriginY * -2) - object.contentHeight * 0.5
Then perform transition.to:
-- move past the top edge so the object ends up fully off-screen
transition.to(object, { time = 500, y = display.screenOriginY - object.contentHeight * 0.5 })
I wrote it from memory, so it may not work with a straight copy + paste, but the idea stays the same.
object - this is the object you want to move
display.screenOriginY - this is the distance from the top of the actual screen to the top of the content area (more info here: https://docs.coronalabs.com/api/library/display/screenOriginY.html )
You may also need to read about transitions: http://docs.coronalabs.com/api/library/transition/to.html
I'm trying to draw an image on the screen using a render target.
I used this code:
_renderTarget = new RenderTarget2D(
this._graphicsDevice,
this._graphicsDevice.PresentationParameters.BackBufferWidth,
this._graphicsDevice.PresentationParameters.BackBufferHeight,
false,
this._graphicsDevice.PresentationParameters.BackBufferFormat,
DepthFormat.None, 0, RenderTargetUsage.PreserveContents);
_graphicsDevice.SetRenderTarget(_renderTarget);
_spriteBatch.Begin();
_spriteBatch.Draw(texture, drawPoint, null, Color.Red, 0.0f,
    new Vector2(texture.Width / 2, texture.Height / 2),
    0.5f, SpriteEffects.None, 0.0f);
_spriteBatch.End();
_graphicsDevice.SetRenderTarget(null);
But the resulting image is always black!
Could you help me figure out what I'm missing? Thanks.
From the code shown, _spriteBatch.Draw is only rendering content to _renderTarget.
Next you need to render the resulting RenderTarget2D to your screen so you can see it.
You already have _graphicsDevice.SetRenderTarget(null) in place. You then just need to make a separate SpriteBatch.Draw call passing in your _renderTarget.
You can do this because RenderTarget2D extends Texture2D.
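Sketched out, the whole thing might look like this inside Draw (the Clear calls and their colors are my additions; adjust as needed):
// First pass: render the texture into the render target.
_graphicsDevice.SetRenderTarget(_renderTarget);
_graphicsDevice.Clear(Color.Transparent);
_spriteBatch.Begin();
_spriteBatch.Draw(texture, drawPoint, null, Color.Red, 0.0f,
    new Vector2(texture.Width / 2, texture.Height / 2),
    0.5f, SpriteEffects.None, 0.0f);
_spriteBatch.End();

// Second pass: draw the render target itself to the back buffer.
// This works because RenderTarget2D extends Texture2D.
_graphicsDevice.SetRenderTarget(null);
_graphicsDevice.Clear(Color.CornflowerBlue);
_spriteBatch.Begin();
_spriteBatch.Draw(_renderTarget, Vector2.Zero, Color.White);
_spriteBatch.End();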