Drawing in Metal using Mesh - iOS

I want to create the feature shown in the picture below. The numbers indicate the touch order on the screen, and the dots indicate the touch positions. I want to recreate the same effect.
We can do this using the normal indexed-primitive drawing method, but I want to know whether it is possible to create this effect using MTKMesh. Please suggest some ideas for a better way to do this.

You probably shouldn't use an MTKMesh in this case. After all, if you have all of the vertex and index data, you can just place it directly in one or more MTLBuffer objects and use those to draw. Using MetalKit means you'll need to create all kinds of intermediate objects (an MDLVertexDescriptor, an MTKMeshBufferAllocator, one or more mesh buffers, a submesh, and an MDLMesh) only to turn around and iterate over all of those superfluous objects to get back to the underlying Metal buffers. MTKMesh exists to make it easy to import 3D content from model files via Model I/O.
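To make the plain-MTLBuffer route concrete: the data is just an ordered vertex array plus an index list connecting the dots in touch order. Here is a minimal sketch with hypothetical sample coordinates (in Metal's normalized device coordinates); the actual Metal calls need a live MTLDevice, so they are left as comments.

```swift
// Hypothetical vertex type for a recorded touch point (the numbered dots).
struct TouchVertex {
    var x: Float
    var y: Float
}

// Touch points in the order they were tapped (assumed sample data).
let touches: [TouchVertex] = [
    TouchVertex(x: -0.5, y:  0.5),   // touch 1
    TouchVertex(x:  0.5, y:  0.5),   // touch 2
    TouchVertex(x:  0.0, y: -0.5),   // touch 3
]

// Index list that connects the dots in touch order as a line strip.
let indices: [UInt16] = (0..<UInt16(touches.count)).map { $0 }

// With a real MTLDevice you would upload these directly, no MTKMesh needed:
//   let vertexBuffer = device.makeBuffer(bytes: touches,
//       length: touches.count * MemoryLayout<TouchVertex>.stride)
//   let indexBuffer = device.makeBuffer(bytes: indices,
//       length: indices.count * MemoryLayout<UInt16>.stride)
// and then draw with:
//   encoder.drawIndexedPrimitives(type: .lineStrip,
//       indexCount: indices.count, indexType: .uint16,
//       indexBuffer: indexBuffer, indexBufferOffset: 0)
```

Appending a new touch means appending one vertex and one index and re-uploading; no Model I/O objects are involved at any point.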

Related

Tile Game Map Storage in SpriteKit

I'm making a tile-based adventure game in SpriteKit and I'm trying to figure out a good way to store my maps. A typical map might be 100x100 tiles. Currently I have a very small 8x16 map which I'm storing in a two-dimensional Swift array. However, using plain arrays seems like bad practice as the map size increases. What would be the best way to store this map data?
There is nothing wrong with using two-dimensional arrays; in fact, if you use arrays, you can save them to plists to make things easier for you.
I would personally write my own class that wraps the 2D array so that it suits my needs (e.g. adding a column adds an entry to every row).
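The wrapper idea could be sketched like this; the type and method names are illustrative, not from any framework.

```swift
// Hypothetical wrapper around a 2D tile array, keeping rows consistent.
struct TileMap {
    private(set) var tiles: [[Int]]   // tiles[row][column]; 0 = empty

    init(rows: Int, columns: Int, fill: Int = 0) {
        tiles = Array(repeating: Array(repeating: fill, count: columns),
                      count: rows)
    }

    var rowCount: Int { return tiles.count }
    var columnCount: Int { return tiles.first?.count ?? 0 }

    subscript(row: Int, column: Int) -> Int {
        get { return tiles[row][column] }
        set { tiles[row][column] = newValue }
    }

    // Adding a column appends an entry to every row, so the grid
    // never becomes ragged.
    mutating func addColumn(fill: Int = 0) {
        for row in tiles.indices {
            tiles[row].append(fill)
        }
    }
}
```

Because the backing store is a plain array of arrays of a plist-friendly type, the `tiles` property can still be serialized to a plist directly.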
I don't know if this suits your needs, but you could use the editor Tiled, which gives you a visual way to create your maps.
Maps are saved as .tmx files (basically XML files). You can then import them into your game using one of the listed solutions, or even create your own importer pretty easily (see the second answer in the given link).
This approach makes creating and modifying maps easier, but you do have to use an external tool.

OpenCV with Blender or OpenGL?

I am new to OpenCV and did a project with it.
My project tracks an object with a stereo camera, so I can find where the object is, and I want to visualize it (in Blender, with OpenGL, or with something else). My situation is that I have 3D points in a YML file and I want to render them. I don't know what to use; can anyone help?
It's possible to do this in Blender, but for your simple purpose OpenGL should be enough. To get started with modern OpenGL, check this list of contents: link
In OpenGL, before drawing anything you must "send" data (vertices) to your GPU. Part of this process uses a Vertex Buffer Object (VBO); it's quite simple once you've programmed one yourself. When you create a VBO, you specify what kind of data you have: STATIC or DYNAMIC. Dynamic means that you will change the data over time; the position of each vertex might change. And that is what you want.

Keeping state in a Picasso Transformation

I need to apply different border thicknesses in a transformation. That is, the left side could be 10dp and the top 8dp, etc. for each cell in a GridView.
I have a Transformation with local variables for the thicknesses which I apply in transform using Canvas drawing primitives. This all works and the drawing is happening.
My question: because each transformation has different parameters, I have to create a new transformation for each cell (in my adapter), set its properties, and pass it to the Picasso builder.
I read elsewhere that transformations should not be created multiple times and can be re-used, but that's not really possible in my scenario since each transformation has different state.
Am I doing this right, and/or what's the best way to achieve what I'm trying to do?
Thanks.
If the values are truly dynamic you will have to create a new instance for every call. It's not the end of the world to do this; it's only a single, small allocation. Most transformations are completely stateless, which is why re-using the same instance usually makes sense.
You could also pool these objects, but it's needlessly complicated: you'd have to deal with request joining, canceling, and the asynchronous nature of how they're used. Unless it becomes a problem, just pay the cost of the allocation.
If the range of values is limited or you are using the same values over and over you could cache those instances in a map.
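Picasso itself is a Java library, but the caching idea is language-agnostic. Here is a sketch of it in Swift, using a hypothetical `BorderTransformation` type whose only state is its thicknesses: instances are memoized by a key built from their parameters, so repeated values reuse one object.

```swift
// Hypothetical transformation whose state is its four border thicknesses.
final class BorderTransformation {
    let left: Int
    let top: Int
    let right: Int
    let bottom: Int

    init(left: Int, top: Int, right: Int, bottom: Int) {
        self.left = left
        self.top = top
        self.right = right
        self.bottom = bottom
    }
}

// Cache keyed by the parameter values, mirroring the "cache those
// instances in a map" suggestion.
final class TransformationCache {
    private var cache: [String: BorderTransformation] = [:]

    func transformation(left: Int, top: Int,
                        right: Int, bottom: Int) -> BorderTransformation {
        let key = "\(left)-\(top)-\(right)-\(bottom)"
        if let cached = cache[key] {
            return cached            // same parameters -> same instance
        }
        let fresh = BorderTransformation(left: left, top: top,
                                         right: right, bottom: bottom)
        cache[key] = fresh
        return fresh
    }
}
```

In the Java/Picasso version the same key string would also serve as the `Transformation.key()` value Picasso uses for its own result caching.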

Reusing a VertexBuffer or make new VertexBuffer object?

I'm trying to render bitmap fonts in directX10 at the moment, and I want to do this as efficiently as possible. I'm having a hard time getting a start on my design because of this question though.
So should I reuse a single VertexBuffer or make multiple VertexBuffer objects?
Currently I allocate one dynamic VertexBuffer per Quad object in my program. This way I don't have to map/unmap a VertexBuffer if nothing moves on screen. For fonts I could take a similar approach and allocate one buffer per text box, or something similar.
After searching, I read about reusing a single VertexBuffer for all objects. Vertex caching also came up. What are the advantages and disadvantages of this, and is it faster than my previous method?
Lastly, is there any other method I should look into for rendering many 2D quads on the screen?
Thank you in advance.
Using a single dynamic Vertex Buffer with the proper combinations of DISCARD and NO_OVERWRITE is the best way to handle this kind of dynamic submission. The driver will perform buffer renaming with DISCARD to minimize GPU stalls.
This is the mechanism used by SpriteBatch/SpriteFont and PrimitiveBatch in the DirectX Tool Kit. You can check that source for details, and if really needed you could adapt it to Direct3D 10.x. Of course, moving to Direct3D 11 is probably the better choice.
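The DISCARD/NO_OVERWRITE pattern boils down to a cursor advancing through one fixed-size dynamic buffer. Here is a sketch of just that bookkeeping logic (written in Swift for illustration; the names are hypothetical, and a real D3D implementation would issue the actual `Map` call with the returned mode):

```swift
// The two map modes relevant to dynamic-buffer sub-allocation.
enum MapMode { case discard, noOverwrite }

// Hypothetical cursor over one dynamic vertex buffer, deciding per
// reservation whether to map with DISCARD or NO_OVERWRITE.
struct DynamicBufferCursor {
    let capacity: Int            // total buffer size, in bytes
    private(set) var offset = 0  // next free byte

    init(capacity: Int) { self.capacity = capacity }

    mutating func reserve(bytes: Int) -> (mode: MapMode, offset: Int) {
        precondition(bytes <= capacity, "request exceeds the whole buffer")
        if offset == 0 || offset + bytes > capacity {
            // Start of the buffer, or wrap-around: DISCARD lets the driver
            // hand back a renamed buffer instead of stalling the GPU.
            offset = bytes
            return (.discard, 0)
        }
        // Still room after the cursor: NO_OVERWRITE promises we won't
        // touch bytes the GPU may still be reading from earlier draws.
        let start = offset
        offset += bytes
        return (.noOverwrite, start)
    }
}
```

Each batch of quads reserves its bytes, writes vertices at the returned offset, and draws; only the wrap-around (or the first write) pays the rename, which is why one shared buffer beats one buffer per quad.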

Creating a Geometric Path from 2D points without a graphics context. Possible in iOS?

I need a way to test some heavy mathematical functionality in my code and have reached the point where I need to verify that it is working properly. I would like to be able to create a path based on an array of points and use this path for testing without a graphics context.
As an example, Java has classes such as Path2D that are completely independent of any kind of context or view unless you actually need to display the information in some graphics context.
It looks like Apple doesn't provide any methods that allow you to create, manipulate, and change arbitrary geometric shapes, but I wanted to come here and make sure.
CGPath and UIBezierPath can both be created without a current context. But how useful they are depends on what you want to do, because their purpose is really drawing; in particular, it isn't easy to get the points back out of a path once they have been added.
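If drawing is never needed, one option is to skip CGPath entirely and keep your own ordered point list, which mirrors the purely geometric part of what Java's Path2D offers. A minimal, hypothetical sketch (all names are illustrative, and only polyline length and bounds are shown):

```swift
// Plain value types, independent of CoreGraphics or any view, so the math
// can be unit-tested with no graphics context at all.
struct Point2D {
    var x: Double
    var y: Double
}

struct GeometricPath {
    private(set) var points: [Point2D] = []

    mutating func addPoint(_ p: Point2D) { points.append(p) }

    // Total length of the polyline through the points, in order.
    var length: Double {
        guard points.count > 1 else { return 0 }
        return zip(points, points.dropFirst()).reduce(0) { total, pair in
            let (a, b) = pair
            let dx = b.x - a.x
            let dy = b.y - a.y
            return total + (dx * dx + dy * dy).squareRoot()
        }
    }

    // Axis-aligned bounding box as (minX, minY, maxX, maxY); nil when empty.
    var bounds: (Double, Double, Double, Double)? {
        guard let first = points.first else { return nil }
        var (minX, minY, maxX, maxY) = (first.x, first.y, first.x, first.y)
        for p in points.dropFirst() {
            minX = min(minX, p.x); minY = min(minY, p.y)
            maxX = max(maxX, p.x); maxY = max(maxY, p.y)
        }
        return (minX, minY, maxX, maxY)
    }
}
```

Unlike CGPath, the points stay directly accessible, and if you later need to draw, the same array can be fed into a UIBezierPath on the rendering side.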
