I am developing a game in Xamarin and am using MonoGame. With the exact same code I get two different results when I play the game on Android and on iOS. Here is a picture comparing the two:
The upper line is from iOS and the one under it is from Android. Does anyone know why we get such different results when we draw the lines with the exact same code? I have a Line class that draws a line; its Draw method is below.
public void Draw (SpriteBatch spritebatch)
{
    // Angle of the line from PointA to PointB.
    float angle = (float)Math.Atan2 (PointB.Y - PointA.Y,
                                     PointB.X - PointA.X);

    // Length of the line; used as the horizontal scale factor.
    int length = (int)Vector2.Distance (PointA, PointB);

    // Source rectangle; note its origin is PointA's screen position, not (0, 0).
    LineRect = new Rectangle ((int)PointA.X,
                              (int)PointA.Y,
                              Texture.Width,
                              Texture.Height);

    spritebatch.Draw (Texture,
                      PointA,
                      LineRect,
                      Color,
                      angle,
                      Vector2.Zero,
                      new Vector2 (length, Width), // stretch to length x thickness
                      SpriteEffects.None,
                      0);
}
If I set Texture.Width = 1, Texture.Height = 2, and the width of the line to 3, it works perfectly on both Android and iOS.
But if I want a thicker line, with Texture.Width and Texture.Height at 3 and the line width at 12, it works like a charm on Android but gets buggy on iOS.
Texture.Width = 2, Texture.Height = 1, and Width = 13 works on iOS.
So the question is: how can different Texture.Height, Texture.Width, and Width values break the line, and why doesn't the same code give the same output on iOS and Android when it's just XNA?
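For comparison, here is a minimal sketch of a variant that sidesteps texture-size combinations entirely by stretching a single 1x1 white pixel. The `pixel` texture is an assumption (not one of the original assets); `PointA`, `PointB`, `Color`, and `Width` are the same fields used above:

// Hedged sketch: draw a line by scaling a 1x1 white texture.
// Assumes `pixel` is a 1x1 white Texture2D created once at load time:
//   pixel = new Texture2D(GraphicsDevice, 1, 1);
//   pixel.SetData(new[] { Color.White });
public void DrawLine (SpriteBatch spriteBatch, Texture2D pixel)
{
    float angle = (float)Math.Atan2 (PointB.Y - PointA.Y, PointB.X - PointA.X);
    float length = Vector2.Distance (PointA, PointB);

    spriteBatch.Draw (pixel,
                      PointA,
                      null,                         // no source rectangle needed
                      Color,
                      angle,
                      new Vector2 (0f, 0.5f),       // rotate around the line's left-centre
                      new Vector2 (length, Width),  // stretch to length x thickness
                      SpriteEffects.None,
                      0);
}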
Above is an example of my problem. I have two alpha masks that are exactly the same: just a white circular gradient on a transparent background.
I am drawing to a RenderTarget2D that is rendered above the scene to create lighting. It is cleared to a semi-transparent black, and then the alpha masks are drawn in the correct positions to appear like lights.
On their own they work fine, but if two clash, like the "torch" below against the blue glowing mushrooms, you can see the bounding-box transparency is overwriting the already drawn orange glow.
Here is my approach:
This is creating the render target:
RenderTarget2D = new RenderTarget2D(Global.GraphicsDevice, Global.Resolution.X+4, Global.Resolution.Y+4);
SpriteBatch = new SpriteBatch(Global.GraphicsDevice);
This is drawing to the render target:
private void UpdateRenderTarget()
{
    Global.GraphicsDevice.SetRenderTarget(RenderTarget2D);
    Global.GraphicsDevice.Clear(ClearColor);

    // Draw each light texture with its own blend state.
    float i = 0;
    foreach (DrawableTexture item in DrawableTextures)
    {
        i += 0.1f; // layer depth, incremented per light
        item.Update?.Invoke(item);

        SpriteBatch.Begin(SpriteSortMode.Immediate, item.Blend,
                          SamplerState.PointClamp, DepthStencilState.Default,
                          RasterizerState.CullNone);
        SpriteBatch.Draw(
            item.Texture,
            (item.Position - Position) + (item.Texture.Size() / 2 * (1 - item.Scale)),
            null,
            item.Color,
            0,
            Vector2.Zero,
            item.Scale,
            SpriteEffects.None,
            i);
        SpriteBatch.End();
    }

    Global.GraphicsDevice.SetRenderTarget(null);
}
I have heard about depth stencils, etc., and I feel like I have tried so many combinations of things, but I am still getting the issue. I haven't had any trouble with this while building all the other graphics in my game.
Any help is greatly appreciated, thanks! :)
Ah, this turned out to be a problem with the BlendState itself rather than the SpriteBatch. I had created a custom "Multiply" BlendState, which I picked up online, and that was causing the issue.
"What's causing the problem?" was the real question here.
This was the solution that gets my effect without the overlapping:
public static BlendState Lighting = new BlendState
{
    // Add the light's colour on top of what is already there,
    // so overlapping lights accumulate instead of overwriting.
    ColorSourceBlend = Blend.One,
    ColorDestinationBlend = Blend.One,

    // Scale the existing (darkness) alpha down by the source brightness,
    // thinning the darkness wherever lights are drawn.
    AlphaSourceBlend = Blend.Zero,
    AlphaDestinationBlend = Blend.InverseSourceColor
};
This allows the textures to overlap and also "subtracts" from the "darkness" layer. It would be easier to see if the darkness were more opaque.
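For reference, a sketch of how this state plugs into the draw loop from the question; the parameters other than the blend state simply mirror the original Begin call:

// Hedged usage sketch: pass the custom state where item.Blend was used.
SpriteBatch.Begin(SpriteSortMode.Immediate, Lighting,
                  SamplerState.PointClamp, DepthStencilState.Default,
                  RasterizerState.CullNone);
// ... draw the light masks ...
SpriteBatch.End();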
I have answered this just in case someone else mistakes a blend-state problem for a problem with the SpriteBatch itself.
I have tried creating a game in Unity and building it for iOS. On every Apple device there was no problem except the iPhone X. A screenshot is below. The UI was covered by the iPhone X notch, and when the character moved to the left or right side it was cut in half. Is there a solution or a plugin we can use to fix the issue? Is there a Unity setting or an Xcode setting for that? Thank you.
About the iPhone X notch, you can use this:
Screen.safeArea
It is a convenient way to determine the screen's actual "safe area". Read more about it in this thread.
About the character being cut in half, this is probably something you need to take care of manually based on your game logic. Using Screen.width, you should be able to either adjust the camera (zoom out) or limit the character's movement so that it does not get past the screen edge.
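For example, here is a minimal sketch of clamping a character to the safe area's horizontal bounds; the `mainCamera` field and the component itself are assumptions, not part of your project:

using UnityEngine;

// Hedged sketch: keep a world-space character inside the screen's safe area.
public class ClampToSafeArea : MonoBehaviour
{
    public Camera mainCamera; // assumed to be the rendering camera

    void LateUpdate()
    {
        Rect safe = Screen.safeArea;

        // Current position in screen space.
        Vector3 screenPos = mainCamera.WorldToScreenPoint(transform.position);

        // Clamp to the safe area's horizontal bounds.
        screenPos.x = Mathf.Clamp(screenPos.x, safe.xMin, safe.xMax);

        transform.position = mainCamera.ScreenToWorldPoint(screenPos);
    }
}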
For the iPhone X and other notched phones you can use the generic Screen.safeArea provided by Unity 2017.2.1+. Attach the script below to a full-screen UI panel (anchors 0,0 to 1,1; pivot 0.5,0.5) and it will shape itself to the safe area of the screen.
It is also recommended to set your Canvas to "Scale With Screen Size" with "Match (Width-Height)" = 0.5.
using UnityEngine;

public class SafeArea : MonoBehaviour
{
    RectTransform Panel;
    Rect LastSafeArea = new Rect (0, 0, 0, 0);

    void Awake ()
    {
        Panel = GetComponent<RectTransform> ();
        Refresh ();
    }

    void Update ()
    {
        Refresh ();
    }

    void Refresh ()
    {
        Rect safeArea = GetSafeArea ();

        // Only re-anchor when the safe area actually changes
        // (e.g. on device rotation).
        if (safeArea != LastSafeArea)
            ApplySafeArea (safeArea);
    }

    Rect GetSafeArea ()
    {
        return Screen.safeArea;
    }

    void ApplySafeArea (Rect r)
    {
        LastSafeArea = r;

        // Convert the safe area's pixel rect into normalised
        // anchor coordinates (0..1) and apply it to the panel.
        Vector2 anchorMin = r.position;
        Vector2 anchorMax = r.position + r.size;
        anchorMin.x /= Screen.width;
        anchorMin.y /= Screen.height;
        anchorMax.x /= Screen.width;
        anchorMax.y /= Screen.height;

        Panel.anchorMin = anchorMin;
        Panel.anchorMax = anchorMax;
    }
}
For a more in-depth breakdown, I've written a detailed article with screenshots here: https://connect.unity.com/p/updating-your-gui-for-the-iphone-x-and-other-notched-devices. Hope it helps!
Sorry for the late answer, but I adjusted the camera's viewport rectangle for iOS devices and it works correctly. Check whether it works for your camera as well.
I tried Screen.safeArea and other script-based solutions, which did not seem to work.
#if UNITY_IPHONE
// Inset the camera's viewport so nothing renders under the notch.
mainCamera.rect = new Rect(0.06f, 0.06f, 0.88f, 1);
#endif
I'm programmatically using SKTileMapNode. The code is C# (Xamarin.iOS) but should be readable by every Swift/ObjC developer.
The problem is that the sorting of tiles in isometric projection seems to be incorrect and I cannot see why. To test, the map has 1 row and 5 columns.
See the screenshot:
The tiles at 0|0 and 2|0 are in front of the others. The pyramid-styled tile at 4|0, however, is drawn correctly in front of the one at 3|0.
I'm using two simple tiles:
The first one has a resolution of 133x83 px and the second one is 132x131 px.
This is what it looks like in Tiled and what I am trying to reproduce:
The tile map is setup and added to the scene using the following code:
// Two tile definitions: a flat tile and a taller "pyramid" tile.
var tileDef1 = new SKTileDefinition (SKTexture.FromImageNamed ("landscapeTiles_014"));
var tileDef2 = new SKTileDefinition (SKTexture.FromImageNamed ("landscapeTiles_036"));
var tileGroup1 = new SKTileGroup (tileDef1);
var tileGroup2 = new SKTileGroup (tileDef2);

// Isometric tile set; 128x64 is the size of the diamond-shaped tile base.
var tileSet = new SKTileSet (new [] { tileGroup1, tileGroup2 }, SKTileSetType.Isometric);
var tileMap = SKTileMapNode.Create (tileSet, 5, 2, new CGSize (128, 64));
tileMap.Position = new CGPoint (0, 0);

// Place the tiles along row 0.
tileMap.SetTileGroup (tileGroup1, 0, 0);
tileMap.SetTileGroup (tileGroup2, 1, 0);
tileMap.SetTileGroup (tileGroup1, 2, 0);
tileMap.SetTileGroup (tileGroup2, 3, 0);
tileMap.SetTileGroup (tileGroup2, 4, 0);

tileMap.AnchorPoint = new CGPoint (0, 0);
Add (tileMap);
I first suspected an incorrect tile size. The tile size used to initialise the tile map (128|64) is the size of the diamond-shaped base of the tile. For a flat tile, this is identical to the texture size; for tiles with a height, it differs. However, changing the tile size affects the alignment of the tiles, and the size I'm using is the same as in Tiled, where it gives the correct result, so that cannot be the culprit.
What am I doing wrong, or where is my thinking wrong?
I don't have an answer for the main issue, but the tile size you use to initialise the map (128|64) is the size of the base tile, used to convert isometric coordinates into the actual orthogonal space.
If you change this size in Tiled, it will affect the size of the positioning grid. You can see a similar effect in Xcode's built-in tile map editor (or, I'd guess, in any other isometric tile map editor).
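To illustrate the conversion that the base tile size drives, here is the usual isometric-to-orthogonal mapping (a generic sketch, not SpriteKit's actual implementation):

// Hedged sketch: map a (column, row) cell to orthogonal screen space
// using the base tile size (128x64 in the question).
static CGPoint IsoToScreen (int column, int row, nfloat tileWidth, nfloat tileHeight)
{
    // Each column step moves half a tile right and half a tile up;
    // each row step moves half a tile left and half a tile up.
    var x = (column - row) * tileWidth / 2;
    var y = (column + row) * tileHeight / 2;
    return new CGPoint (x, y);
}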
I'm creating a custom popover background, so I subclassed the UIPopoverBackgroundView abstract class. While writing the layout function I came across a problem with placing and rotating the arrow for the background.
The first picture shows the arrow at the desired position. In the following code I calculated the origin I wanted, but the rotation seems to have translated the new position of the image off to the side by about 11 points. As you can see, I created a hack solution where I shifted the arrow over by 11 points. But that still doesn't cover up the fact that I have a gaping hole in my math skills. If someone would be so kind as to explain what's going on here, I'd be eternally grateful. What would also be nice is a solution that doesn't involve magic numbers, so that I can apply it to the up, down, and right arrow cases.
#define ARROW_BASE_WIDTH 42.0
#define ARROW_HEIGHT 22.0

case UIPopoverArrowDirectionRight:
{
    // Make room for the arrow on the right-hand side.
    width -= ARROW_HEIGHT;

    float arrowCenterY = self.frame.size.height/2 - ARROW_HEIGHT/2 + self.arrowOffset;

    // The frame is laid out un-rotated, then rotated 90 degrees.
    _arrowView.frame = CGRectMake(width,
                                  arrowCenterY,
                                  ARROW_BASE_WIDTH,
                                  ARROW_HEIGHT);
    rotation = CGAffineTransformMakeRotation(M_PI_2);
    // Hack: shift the arrow over 11 points to close the gap.
    //rotation = CGAffineTransformTranslate(rotation, 0, 11);

    _borderImageView.frame = CGRectMake(left, top, width, height);
    [_arrowView setTransform:rotation];
}
break;
Well, if the rotation is applied about the center of the arrow view (as it is), that leaves a gap of (ARROW_BASE_WIDTH - ARROW_HEIGHT) / 2 on the post-rotation left of the arrow, which is what you have to compensate for. With your constants that is (42 - 22) / 2 = 10 points, which matches the roughly 11-point shift you found. By offsetting the center of the arrow view by this amount, it comes back into alignment.
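One magic-number-free way around it: a view's frame is unreliable once a transform is set, so position the arrow via bounds and center instead, which are both well-defined under a transform. A sketch in Xamarin.iOS C#, with constant and field names assumed to mirror the question:

// Hedged sketch: place a right-pointing popover arrow without magic numbers.
const float ArrowBaseWidth = 42f;
const float ArrowHeight = 22f;

// Lay out by bounds + center instead of frame.
arrowView.Bounds = new CGRect (0, 0, ArrowBaseWidth, ArrowHeight);
arrowView.Center = new CGPoint (
    width + ArrowHeight / 2f,               // horizontal middle of the arrow slot
    Frame.Size.Height / 2f + ArrowOffset);  // vertical middle, honouring the offset
arrowView.Transform = CGAffineTransform.MakeRotation ((float)Math.PI / 2);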
So I have an XNA application set up. The camera is in first-person mode, and the user can move around using the keyboard and reposition the camera target with the mouse. I have been able to load 3D models fine, and they appear on screen no problem. But whenever I try to draw any primitive (textured or not), it does not show up anywhere on the screen, no matter how I position the camera.
In Initialize(), I have:
quad = new Quad(Vector3.Zero, Vector3.UnitZ, Vector3.Up, 2, 2);
quadVertexDecl = new VertexDeclaration(this.GraphicsDevice, VertexPositionNormalTexture.VertexElements);
In LoadContent(), I have:
quadTexture = Content.Load<Texture2D>(@"Textures\brickWall");
quadEffect = new BasicEffect(this.GraphicsDevice, null);
quadEffect.AmbientLightColor = new Vector3(0.8f, 0.8f, 0.8f);
quadEffect.LightingEnabled = true;
quadEffect.World = Matrix.Identity;
quadEffect.View = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
quadEffect.Projection = this.Projection;
quadEffect.TextureEnabled = true;
quadEffect.Texture = quadTexture;
And in Draw() I have:
this.GraphicsDevice.VertexDeclaration = quadVertexDecl;
quadEffect.Begin();
foreach (EffectPass pass in quadEffect.CurrentTechnique.Passes)
{
    pass.Begin();
    // 4 vertices, 2 triangles (6 indices) for the quad.
    GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
        PrimitiveType.TriangleList,
        quad.Vertices, 0, 4,
        quad.Indexes, 0, 2);
    pass.End();
}
quadEffect.End();
I think I'm doing something wrong in the quadEffect properties, but I'm not quite sure what.
I can't run this code on the computer here at work, as I don't have Game Studio installed. But for reference, check out the 3D Audio sample on the Creators Club website. That project has a "QuadDrawer" which demonstrates how to draw a textured quad at any position in the world. It's a pretty nice solution for what it seems you want to do :-)
http://creators.xna.com/en-US/sample/3daudio
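Until you can compare against the sample, two things might be worth ruling out; both are guesses, not confirmed causes. Your View matrix is built once in LoadContent, so it goes stale as the camera moves, and a quad whose winding order faces away from the camera gets culled:

// Hedged sketch: refresh the view matrix every frame in Draw()
// so the quad follows the moving camera.
quadEffect.View = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);

// Temporarily disable back-face culling (XNA 3.x render-state API)
// to rule out a winding-order problem.
GraphicsDevice.RenderState.CullMode = CullMode.None;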