I need help drawing a map created with Tiled (.tmx).
Version info:
flame: 1.5.0
flame_tiled: 1.9.0
What I want is to draw the background first, then the player, then the foreground.
I have 4 layers for now:
foreground (tile layer) (top layer).
spawn (object layer).
housing (tile layer).
land (tile layer).
Drawing the background, player, and foreground already works with this code, but I need to save the map data as 2 files.
final currentMap = await TiledComponent.load(
  '$mapName.tmx',
  Vector2.all(16),
);
add(currentMap); // background layers (land, housing)

final spawnPointObject = currentMap.tileMap.getLayer<ObjectGroup>('spawn');
for (final spawnPoint in spawnPointObject!.objects) {
  // spawn the player centered on the spawn object
  final positions = Vector2(
    spawnPoint.x + (spawnPoint.width / 2),
    spawnPoint.y + (spawnPoint.height / 2),
  );
  switch (spawnPoint.class_) {
    case 'player':
      _player = MyPlayer(
        anchor: Anchor.center,
        current: 'idle',
        position: positions,
        size: Vector2.all(16),
        name: name,
      );
      add(_player);
      break;
  }
}

// second map file containing only the foreground layer, added last so it is drawn on top of the player
final currentForeground = await TiledComponent.load(
  '${mapName}_foreground.tmx',
  Vector2.all(16),
);
add(currentForeground);
I can draw from an object layer, but it takes so many cases that it will be hard to update later.
So, is there any way to draw only 1 layer with flame_tiled?
This is a sample image; I want my player to be drawn behind the roof during play.
image
- Already tried with an object layer, drawing based on object id one by one, but it takes too much effort.
- Tried with 2 map files, but it's still hard to maintain (this is what I use now).
My personal conclusion about this problem is that flame_tiled is not flexible enough, so the best thing it can do for you is to parse map files. If you need flexible rendering, you are going to have to implement it on your side, because flame_tiled renders everything as one big flat batch of sprites.
You can probably do a fast hack by rendering the RenderableTiledMap twice. In the first pass you disable the "roof" layers (see the "setLayerVisibility" function), render everything into a Picture / Image, and wrap it into a component with "ground" priority.
Then you enable the "roof" layers and disable the "ground" ones, do the same rendering into another Picture / Image, and wrap it into another component with "roof" priority.
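For illustration, here is a rough sketch of the simplest form of that idea: two TiledComponent instances of the same .tmx with different layers hidden and different priorities (skipping the Picture / Image caching step). It assumes RenderableTiledMap has the setLayerVisibility method mentioned above (check the exact signature in your flame_tiled version) and that the layer ids match the question's map:

final ground = await TiledComponent.load('$mapName.tmx', Vector2.all(16));
final roof = await TiledComponent.load('$mapName.tmx', Vector2.all(16));

// Layer ids are illustrative: 0 = land, 1 = housing, 3 = foreground.
ground.tileMap.setLayerVisibility(3, visible: false); // hide the roof in the ground copy
roof.tileMap.setLayerVisibility(0, visible: false);   // hide everything but the roof
roof.tileMap.setLayerVisibility(1, visible: false);

ground.priority = 0;  // drawn first
_player.priority = 1; // player above the ground layers
roof.priority = 2;    // roof drawn on top of the player

addAll([ground, _player, roof]);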
Trying to solve this problem, I have made two solutions. One is simpler; the other is more complicated and still in the development / debug stage:
https://pub.dev/packages/flame_tiled_utils - with this you can render every map tile as a component into a separate layer with a given priority. Exactly what you want, but you need to create some additional classes to describe your map's tile types.
https://github.com/ASGAlex/flame_spatial_grid - allows you to do the same, but at a better abstraction level. It also helps avoid the problems of the previous library (slow rendering on large maps). But it is still in heavy development; sometimes I break something, sometimes I fix it...
Sorry for such a "longread" answer =)
Related
I'm on Yosemite 10.10.5 and Xcode 7, using Swift to make a game targeting iOS 8 and above.
EDIT: More details that might be useful: This is a 2D puzzle/arcade game where the player moves stones around to match them up. There is no 3D rendering at all. Drawing is already too slow and I haven't even gotten to explosions with debris yet. There is also a level fade-in, which is very concerning. But this is all on the simulator so far. I don't have an actual iPhone to test with yet, and I'm betting the actual device will be at least a little faster.
I have my own Draw2D class, which is a type of UIView, set up as in this tutorial. I have a single NSTimer which initiates the following chain of calls in Draw2D:
[setNeedsDisplay]; // which calls drawRect, which is the master draw function of Draw2D
drawRect(rect: CGRect)
{
    scr_step(); // the master update function, which loops thru all objects and calls their individual update functions. I put it here so that updating and drawing are always in sync
    CNT = UIGraphicsGetCurrentContext(); // get the current drawing context
    switch (Realm) // based on what realm I'm in, call the draw function for that realm
    {
    case rlm.intro: scr_draw_intro();
    case rlm.mm: scr_draw_mm();
    case rlm.level: scr_draw_level(); // this in particular loops thru all objects and calls their individual draw functions
    default: return;
    }
    var i = AARR.count - 1; // loop thru my own animation objects and draw them too; note it's iterating backwards because sometimes they destroy themselves
    while (i >= 0)
    {
        let A = AARR[i];
        A.scr_draw();
        i -= 1;
    }
}
And all the drawing works fine, but slow.
The problem is now I want to optimize drawing. I want to draw only in the dirty rectangles that need drawing, not the whole screen, which is what setNeedsDisplay is doing.
I could not find any tutorials or good example code for this. The closest I found was Apple's documentation here, but it does not explain, among other things, how to get a list of all dirty rectangles so far. It also does not explicitly state whether the list of dirty rectangles is automatically cleared at the end of each call to drawRect.
It also does not explain whether I have to manually clip all drawing based on the rectangles. I found conflicting info about that around the web; apparently different iOS versions do it differently. In particular, if I'm gonna hafta manually clip things then I don't see the point of Apple's core function in the first place. I could just maintain my own list of rectangles and manually compare each drawing destination rectangle to the dirty rectangle to see if I should draw anything. That would be a huge pain, however, because I have a background picture in each level and I would hafta draw a piece of it behind every moving object. What I'm really hoping for is the proper way to use setNeedsDisplayInRect to let the core framework do automatic clipping for everything that gets drawn on the next draw cycle, so that it automatically draws only that piece of the background plus the moving object on top.
So I tried some experiments: First in my array of stones:
func scr_draw_stone()
{
    // the following 3 lines are new, I added them to try to draw in only dirty rectangles
    if (xvp != xv || yvp != yv) // if the stone's coordinates have changed from its previous coordinates
    {
        MyD.setNeedsDisplayInRect(CGRectMake(x, y, MyD.swc, MyD.shc)); // MyD.swc is Draw2D's current square width in points, maintained to softcode things for different screen sizes.
    }
    MyD.img_stone?.drawInRect(CGRectMake(x, y, MyD.swc, MyD.shc)); // draw the plain stone
    img?.drawInRect(CGRectMake(x, y, MyD.swc, MyD.shc)); // draw the stone's icon
}
This did not seem to change anything. Things were drawing just as slow as before. So then I put it in brackets:
[MyD.setNeedsDisplayInRect(CGRectMake(x, y, MyD.swc, MyD.shc))];
I have no idea what the brackets do, but my original setNeedsDisplay was in brackets just like they said to do in the tutorial. So I tried it in my stone object, but it had no effect either.
So what do I need to do to make setNeedsDisplayInRect work properly?
Right now, I suspect there's some conditional check I need in my master draw function, something like:
if (ListOfDirtyRectangles.count == 0)
{
    [setNeedsDisplay]; // just redraw the whole view
}
else
{
    [setNeedsDisplayInRect(ListOfDirtyRectangles)];
}
However, I don't know the name of the built-in list of dirty rectangles. I found this saying the method name is getRectsBeingDrawn, but that is for Mac OS X. It doesn't exist in iOS.
Can anyone help me out? Am I on the right track with this? I'm still fairly new to Macs and iOS.
You should really avoid overriding drawRect if at all possible. Existing views/technologies take advantage of any hardware capabilities to make things a lot faster than manually drawing in a graphics context could, including buffering the contents of views, using the GPU, etc. This is repeated many times in the "View Programming Guide for iOS".
If you have a background and other objects on top of that, you should probably use separate views or layers for those rather than redraw them.
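For example, the static background could live in its own UIImageView underneath the custom view, so Core Animation just composites it and your drawing code never touches it again. A sketch (assuming this is set up from a view controller, that Draw2D keeps UIView's init(frame:), and with "level_bg" as a placeholder asset name):

let backgroundView = UIImageView(image: UIImage(named: "level_bg")) // drawn once, then only composited
backgroundView.frame = view.bounds
view.addSubview(backgroundView)

let drawView = Draw2D(frame: view.bounds) // your existing view now only draws the moving pieces
drawView.opaque = false                   // let the background show through
drawView.backgroundColor = UIColor.clearColor()
view.addSubview(drawView)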
You may also consider technologies such as SpriteKit, SceneKit, OpenGL ES, etc.
Beyond that, I'm not quite sure I understand your question. When you call setNeedsDisplayInRect, it will add that rect to those that need to be redrawn (possibly merging with rectangles that are already in the list). drawRect: will then be called a bit later to draw those rectangles one at a time.
The whole point of the setNeedsDisplayInRect / drawRect: separation is to make sure multiple requests to redraw a given part of the view are merged together, and drawing only happens once per redraw cycle.
You should not call your scr_step method in drawRect:, as it may be called multiple times in a single redraw cycle. This is clearly stated in the "View Programming Guide for iOS" (emphasis mine):
The implementation of your drawRect: method should do exactly one thing: draw your content. This method is not the place to be updating your application’s data structures or performing any tasks not related to drawing. It should configure the drawing environment, draw your content, and exit as quickly as possible. And if your drawRect: method might be called frequently, you should do everything you can to optimize your drawing code and draw as little as possible each time the method is called.
Regarding clipping, the documentation of drawRect states that:
You should limit any drawing to the rectangle specified in the rect parameter. In addition, if the opaque property of your view is set to YES, your drawRect: method must totally fill the specified rectangle with opaque content.
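Applied to your code, a drawRect: that respects the rect parameter might look roughly like this (a sketch: img_background and levelObjects stand in for your own members, and obj.frame is whatever bounding rectangle you track per object):

override func drawRect(rect: CGRect)
{
    CNT = UIGraphicsGetCurrentContext()
    CGContextClipToRect(CNT, rect) // never paint outside the dirty rect

    // redraw only the slice of the background that falls inside `rect`
    img_background?.drawInRect(bounds)

    // and only the objects whose frames intersect `rect`
    for obj in levelObjects where CGRectIntersectsRect(obj.frame, rect)
    {
        obj.scr_draw()
    }
}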
Not having any idea what your view shows, what the various methods you call do, or what actually takes time, it's difficult to provide much more insight into what you could do. Provide more details about your actual needs, and we may be able to help.
I'm working on a graphing application which I wrote using Core Graphics. I have a buffer which accumulates data, and I render it on the screen. It's super slow and I want to avoid going to OpenGL if possible. According to the profiler, drawing my graph data is what's killing me (it consists of a number of points which are converted to a path, followed by the calls AddPath and DrawPath).
This is what I want to do; my question is how best to implement it using layers / views / etc.:
I have a grid and some text. I want this to be rendered in a CALayer (or some other layer/view?) and only update when required (the graph is rescaled).
Only a portion of the data needs to be refreshed. I want to take the previous screen buffer, erase a rectangle's worth of data (or cover it with a white box) and then draw only the portion of the graphs that have changed.
I then want to merge the background layer with the foreground graphs to generate the composite image. This requires the graph layer to have a transparent background so as not to obscure the grid.
I've looked at using CALayer as a sublayer, but it doesn't seem to provide a simple way to draw a line. CAShapeLayer seems a bit better, but it looks like it can only draw a single line. I want the grid to be composed of multiple lines.
What's the best approach and combination of objects to allow me to do this?
Thanks,
Reza
I'd have a CGLayerRef that was used for drawing the path into. For each new point I'd draw just the new segment. When the graph got to full width I'd create a new CGLayerRef and start drawing the new line segments into that.
What happens to the previous layer as it's drawn over by the new layer depends on how your graph is displayed, but you could clear the section which is now underneath the new layer (using CGContextSetBlendMode(context, kCGBlendModeClear);) or you could choose to blend them together in some other way.
Drawing the layers each time you make a change to the lines they contain is relatively cheap compared to drawing all of the line segments themselves.
Technically, there would also be CALayers used to manage the drawing of the CGLayerRefs to the screen (via the drawLayer:inContext: delegate relationship), but all of the line drawing is done using the CGLayerRef's context, and then the CGLayerRef is drawn as a whole into the CALayer's context (CGContextDrawLayerInRect(context, frame, backingCGLayer);).
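A rough sketch of that setup, using the pre-Swift 3 Core Graphics names that match the question's era (graphLayer and appendSegment are illustrative; assume this lives in the view that owns the graph):

var graphLayer: CGLayer? // holds everything drawn so far

func appendSegment(from: CGPoint, to: CGPoint, screenContext: CGContext?)
{
    if graphLayer == nil {
        graphLayer = CGLayerCreateWithContext(screenContext, bounds.size, nil)
    }
    // draw only the newest segment into the layer's own context
    let layerContext = CGLayerGetContext(graphLayer)
    CGContextMoveToPoint(layerContext, from.x, from.y)
    CGContextAddLineToPoint(layerContext, to.x, to.y)
    CGContextStrokePath(layerContext)
}

// later, in drawLayer:inContext: (or drawRect:), blit the accumulated layer in one call:
// CGContextDrawLayerInRect(context, bounds, graphLayer)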
I currently have a free hand drawing iPad app, that adds lines to a mutable path via quad curves in the touches methods then calls setNeedsDisplayInRect on the new area.
Problem is when the drawing (path) gets rather large, it takes longer to redraw, and begins to bog down. As well as whenever the user changes the brush size or color, it applies this to overlapping parts of the previously drawn path on redraw.
To counter this, I call renderInContext in a background thread in touchesEnded, and merge this with another UIImage in an imageview behind the draw view. Then clear the draw view.
This also helps so when the user hits save, the drawing is usually already rendered in a single UIImage - ready to go.
This works fine on other devices, but on the iPad 3 retina display the performance is really awful, and it tends to crash whenever the user lifts his finger multiple times while drawing quickly.
I am seeking any type of advice on best practice for handling this type of situation. Aside from adding additional views to render from in the background, to prevent the main and background threads from accessing the same view at the same time - which sounds rather hackish - I feel like I'm beating a dead horse.
In my current app, I made a working implementation that works fine on iPad 2 as well as 3, regardless of path length or number of paths. It seems that the graphics card is better at drawing lots of small paths than a few large paths, and either one is faster than rendering an image into a context. So what I do is, even if the user is continuously drawing, break the path into many smaller paths and add those to an array. This approach gives me one advantage, and one disadvantage.
Advantage: The ability to zoom and redraw the image crisply
Disadvantage: Can't do pixel perfect erasing
As far as multiple colors, I made a subclass of UIBezierPath that includes a color property. Since colors are now serializable via NSCoding, they are easily saveable. In addition, I have a "stroke" object, which holds all of the paths the user created in one continuous stroke. This way I can handle undo / redo correctly.
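For reference, the shape of that could be something like this (names are illustrative, not the actual classes):

class ColoredPath: UIBezierPath {
    var color = UIColor.blackColor() // serialized alongside the path via NSCoding if needed
}

class Stroke {
    var segments = [ColoredPath]() // one finger-down..finger-up, broken into many short paths
}

// drawing walks the strokes instead of stroking one huge path:
// for stroke in strokes { for path in stroke.segments { path.color.setStroke(); path.stroke() } }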
Hope this info helps.
I have written a 2D Jump&Run Engine resulting in a 320x224 (320x240) image. To maintain the old-school "pixely" feel, I would like to scale the resulting image by 2, 3, or 4, according to the resolution of the user.
I don't want to scale each and every sprite, but the resulting image!
Thanks in advance :)
Bob's answer is correct about changing the filtering mode to TextureFilter.Point to keep things nice and pixelated.
But possibly a better method than scaling each sprite (as you'd also have to scale the position of each sprite) is to just pass a matrix to SpriteBatch.Begin, like so:
sb.Begin(/* first three parameters */, Matrix.CreateScale(4f));
That will give you the scaling you want without having to modify all your draw calls.
However it is worth noting that, if you use floating-point offsets in your game, you will end up with things not aligned to pixel boundaries after you scale up (with either method).
There are two solutions to this. The first is to have a function like this:
public static Vector2 Floor(Vector2 v)
{
    return new Vector2((float)Math.Floor(v.X), (float)Math.Floor(v.Y));
}
And then pass your position through that function every time you draw a sprite. Although this might not work if your sprites use any rotation or offsets. And again you'll be back to modifying every single draw call.
The "correct" way to do this, if you want a plain point-wise scale-up of your whole scene, is to draw your scene to a render target at the original size. And then draw your render target to screen, scaled up (with TextureFilter.Point).
The function you want to look at is GraphicsDevice.SetRenderTarget. This MSDN article might be worth reading. If you're on or moving to XNA 4.0, this might be worth reading.
I couldn't find a simpler XNA sample for this quickly, but the Bloom Postprocess sample uses a render target that it then applies a blur shader to. You could simply ignore the shader entirely and just do the scale-up.
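A minimal sketch of that render-target approach, in XNA 4.0 terms (320x224 and the 3x scale are the question's numbers; sceneTarget is an illustrative field name):

// created once, e.g. in LoadContent
RenderTarget2D sceneTarget = new RenderTarget2D(GraphicsDevice, 320, 224);

// in Draw:
GraphicsDevice.SetRenderTarget(sceneTarget); // 1) draw the scene at its native size
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin();
// ... all normal sprite drawing, in 320x224 coordinates ...
spriteBatch.End();

GraphicsDevice.SetRenderTarget(null);        // 2) back to the backbuffer
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                  SamplerState.PointClamp, null, null); // point filtering keeps it pixelated
spriteBatch.Draw(sceneTarget, new Rectangle(0, 0, 320 * 3, 224 * 3), Color.White);
spriteBatch.End();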
You could use a pixelation effect. Draw to a RenderTarget2D, then draw the result to the screen using a Pixel Shader. There's a tool called Shazzam Shader Editor that lets you try out pixel shaders, and it includes one that does pixelation:
http://shazzam-tool.com/
This may not be what you wanted, but it could be good for allowing a high-resolution mode and for having the same effect no matter what resolution was used...
I'm not exactly sure what you mean by "resulting in ... an image" but if you mean your end result is a texture then you can draw that to the screen and set a scale:
spriteBatch.Draw(texture, position, source, color, rotation, origin, scale, effects, depth);
Just replace the scale with whatever number you want (2, 3, or 4). I do something similar but scale per sprite and not the resulting image. If you mean something else let me know and I'll try to help.
XNA defaults to anti-aliasing the scaled image. If you want to retain the pixelated goodness you'll need to draw in immediate sort mode and set some additional parameters:
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
GraphicsDevice.SamplerStates[0].MagFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MinFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MipFilter = TextureFilter.Point;
It's either the Point or the None TextureFilter. I'm at work so I'm trying to remember off the top of my head. I'll confirm one way or the other later today.
I am using XNA for a 2D project. I have a problem and I don't know which way to solve it. I have a texture (an image) that is drawn to the screen for example:
|+++|+++|
|---|---|
|+++|+++|
Now I want to be able to destroy part of that structure/image so that it looks like:
|+++|
|---|---|
|+++|+++|
so that collision will now also work for the new image.
Which way would be better to solve this problem:
1. Swap the whole texture with another texture that is transparent in the places where it is destroyed.
2. Use some trickery with spriteBatch.Draw(sourceRectangle, destinationRectangle) to get the desired rectangles drawn, and also do collision checking with this somehow.
3. Split the texture into 4 smaller textures, each of which will be responsible for its own drawing/collision detection.
4. Use some other smart-ass way I don't know about.
Any help would be appreciated. Let me know if you need more clarification/examples.
EDIT: To clarify I'll provide an example of usage for this.
Imagine a 4x4 piece of wall that when shot at, a little 1x1 part of it is destroyed.
I'll take the third option:
3 - Split the texture into 4 smaller textures, each of which will be responsible for its own drawing/collision detection.
It's not hard to do. Basically it's just the same as a TileSet struct. However, you'll need to change your code to fit this approach.
Read a little about Tiles on: http://www-cs-students.stanford.edu/~amitp/gameprog.html#tiles
Many sites and books talk about tiles and how to use them to build game worlds. But you can apply this logic to anything where the whole is composed of little parts.
Let me quick note the other options:
1 - Swap the whole texture with another texture that is transparent in the places where it is destroyed.
No... having a different image for every different position is bad. What if you need to change the texture? Will you remake every image again?
2 - Use some trickery with spriteBatch.Draw(sourceRectangle, destinationRectangle) to get the desired rectangles drawn, and also do collision checking with this somehow.
Unfortunately that doesn't work, because spriteBatch.Draw only works with Rectangles :(
4 - Use some other smart-ass way I don't know about.
I can't imagine any magic for this. Maybe you could use another image as a mask, but that's extremely processing-expensive.
Check out this article at Ziggyware. It is about Deformable Terrain, and might be what you are looking for. Essentially, the technique involves setting the pixels you want to hide to transparent.
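In XNA that boils down to rewriting the texture's pixel data, along these lines (a sketch: wall is the wall's Texture2D and hole is the Rectangle to knock out, in texture coordinates; do this when the wall is hit, not every frame):

Color[] pixels = new Color[wall.Width * wall.Height];
wall.GetData(pixels);
for (int y = hole.Top; y < hole.Bottom; y++)
    for (int x = hole.Left; x < hole.Right; x++)
        pixels[y * wall.Width + x] = Color.Transparent; // XNA 4.0; 3.1 uses Color.TransparentWhite/Black
wall.SetData(pixels);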
Option #3 will work.
A more robust system (if you don't want to be limited to boxes) would use per-pixel collision detection. The process basically works as follows:
Calculate a bounding box (or circle) for each object
Check to see if two objects overlap
For each overlap, blit the sprites onto a hidden surface, comparing pixel values as you go. If a pixel is already set when you try to draw the pixel from the second sprite, you have a collision.
Here's a good XNA example (another Ziggyware article, actually): 2D Per Pixel Collision Detection
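The core of that check, as in the linked sample (using color data fetched once per texture with GetData rather than an actual hidden-surface blit), looks roughly like this:

static bool IntersectPixels(Rectangle rectA, Color[] dataA,
                            Rectangle rectB, Color[] dataB)
{
    // only scan the region where the two bounding boxes overlap
    int top = Math.Max(rectA.Top, rectB.Top);
    int bottom = Math.Min(rectA.Bottom, rectB.Bottom);
    int left = Math.Max(rectA.Left, rectB.Left);
    int right = Math.Min(rectA.Right, rectB.Right);

    for (int y = top; y < bottom; y++)
    {
        for (int x = left; x < right; x++)
        {
            Color a = dataA[(x - rectA.Left) + (y - rectA.Top) * rectA.Width];
            Color b = dataB[(x - rectB.Left) + (y - rectB.Top) * rectB.Width];
            if (a.A != 0 && b.A != 0) // both pixels non-transparent -> collision
                return true;
        }
    }
    return false;
}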
Some more links:
Can someone explain per-pixel collision detection
XNA 2-d per-pixel collision
I ended up choosing option 3.
Basically I have a Tile class that contains a texture and a dimension. Dimension n means that there are n*n subtiles within that tile. I also have an array that keeps track of which subtiles are destroyed or not. My class looks like this in pseudo code:
class Tile
    texture
    dimension
    int[,] subtiles; // 0 or 1 for each subtile

    public Tile() // constructor
        subtiles = new int[dimension, dimension];
        initialize_subtiles_to(1);

    public Draw() // this is how we know which ones to draw
        // iterate over subtiles
        for(int i ...)
            for(int j ...)
                if(subtiles[i, j] == 1)
                    Vector2 draw_pos = Vector2(i * tilewidth,
                                               j * tileheight)
                    spritebatch.Draw(texture, draw_pos)
In a similar fashion I have a collision method that will check for collision:
public bool collides(Rectangle rect)
    // iterate over subtiles
    for i ...
        for j ...
            if(subtiles[i, j] == 0) continue;
            subtile_rect = // figure out the rect for this subtile
            if(subtile_rect.intersects(rect))
                return true;
    return false;
And so on. You can imagine how to "destroy" certain subtiles by setting their respective value to 0, and how to check if the whole tile is destroyed.
Granted with this technique, the subtiles will all have the same texture. So far I can't think of a simpler solution.