iOS Cocos2D optimization [closed] - ios

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I'm building a game that reads a two-dimensional array to create its map, but the walls are all separate from the corners and floors: each wall, each corner, and each floor is an individual image, and this is consuming a lot of CPU. I really want the map to feel random, though, and that's why I'm using a separate image for each corner and wall.
I was thinking that maybe I could generate a texture by merging two or more different textures, to improve performance.
Does anyone know how I could do that? Or is there another solution? Would converting the images to PVR make any difference?
Thanks

For starters, you should use a texture atlas, created with a tool like TexturePacker, grouping as many of your images as possible onto a single atlas. Basically, you load it once and create as many sprites from it as you want without having to reload anything. Using PVR will speed up loading and reduce your bundle size.
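To make the idea concrete, here is a minimal sketch (plain Python, not the Cocos2D or TexturePacker API) of what an atlas fundamentally is: many small tiles packed into one texture, with each tile identified by a sub-rectangle. The function name and the 32x32 tile size are illustrative assumptions.

```python
def build_atlas_frames(tile_names, tile_w=32, tile_h=32, atlas_w=256):
    """Pack fixed-size tiles left-to-right, top-to-bottom into one atlas,
    and return each tile's pixel rect (x, y, w, h) inside that atlas."""
    cols = atlas_w // tile_w
    frames = {}
    for i, name in enumerate(tile_names):
        col, row = i % cols, i // cols
        frames[name] = (col * tile_w, row * tile_h, tile_w, tile_h)
    return frames

frames = build_atlas_frames(["wall", "corner", "floor"])
print(frames["corner"])  # (32, 0, 32, 32): second tile in the top row
```

A tool like TexturePacker does this packing offline (with smarter placement) and emits the rect table as a .plist, so at runtime the GPU binds one texture for every tile sprite.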
Secondly, especially for the map background, you should use a CCSpriteBatchNode that you init with the above sprite sheet. Then, when you create a tile, just create the sprite and add it to the batch node. Add the batch node to your scene. The benefit of this is that regardless of the number of sprites (tiles) contained in the batch node, this will all be drawn in a single GL call. Now, that is where you will gain the most benefit from a performance standpoint.
Finally, don't rely on the FPS information when running in the simulator. The simulator does not use the host's GPU, and its performance is well below what you get on a device. So before posting a question about performance, make certain you measure on a device.


Paint Bucket in iOS [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I'm stuck on a problem and needed some help or guide for a possible solution.
Basically in my application there will be a map with several zones.
The user can select any of these zones; at that moment the zone is filled with a color.
Imagine a map like this one: I need to be able to change the color of only one country.
Something like what happens in coloring-book apps (https://itunes.apple.com/pt/app/colorfly-best-coloring-book/id1020187921?mt=8), or the Paint Bucket command in Photoshop.
Any idea how to achieve something like this on iOS?
Thanks in advance
The paint bucket technique you're looking for is a set of graphics algorithms usually called "flood fill". There are different approaches to the implementation depending on the circumstances and performance needs. (There is more at that wikipedia link.)
I have no experience with it, but here is a library from GitHub that purports to implement this for iOS given a UIImage object: https://github.com/Chintan-Dave/UIImageScanlineFloodfill
Re: your question about doing this without user touch: yes, you'll want to keep a map of countries to (x,y) points so you can re-flood countries when required. That said, the intricacies of the country borders might make an algorithmic fill inexact without more careful normalization of the original source. If your overall map only contains a small set of possible states, there are other ways of achieving this goal, like keeping a complete set of possible images (created in e.g. Photoshop) and switching them out, or keeping a set of per-country "overlay" images that you swap in as needed. (But if the flood fill is accurate on that source image, and performant for your needs, then great.)
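For reference, the core flood-fill algorithm is small. Here is a minimal sketch in Python (the linked library works on a UIImage's pixel buffer; this operates on a plain 2D grid of color values, which is the same idea). It uses an iterative BFS rather than recursion so large regions don't overflow the stack.

```python
from collections import deque

def flood_fill(grid, x, y, new_color):
    """Replace the 4-connected region containing grid[y][x] with new_color.
    grid is a list of rows of color values; modified in place."""
    target = grid[y][x]
    if target == new_color:
        return grid  # nothing to do, and avoids an infinite loop
    h, w = len(grid), len(grid[0])
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if 0 <= cx < w and 0 <= cy < h and grid[cy][cx] == target:
            grid[cy][cx] = new_color       # recolor, then spread to neighbors
            queue.extend([(cx + 1, cy), (cx - 1, cy),
                          (cx, cy + 1), (cx, cy - 1)])
    return grid

region = [[0, 0, 1],
          [0, 1, 1],
          [1, 1, 0]]
flood_fill(region, 0, 0, 7)
print(region)  # [[7, 7, 1], [7, 1, 1], [1, 1, 0]] - the 1s act as a border
```

The scanline variant the linked library implements is an optimization of exactly this loop: it fills whole horizontal runs per queue entry instead of single pixels.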

Create linear motion with Swift in iOS [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
What I'm trying to do seems very simple, yet I'm struggling to make it work. I'm creating a simple SpriteKit game, similar to FlappyBird. The main character stays vertically stationary while the user controls its horizontal motion. That part is no problem. At the same time, other game elements should be moving vertically at a constant speed. I need to detect contact, but do not need to react with physics to the contact (the moving elements either disappear on contact, or the contact causes game to end).
I've tried using physicsBody.velocity, but the results are erratic. Conceptually, this is my desired approach because I want to control the velocity, speeding or slowing as the game progresses.
I've also tried using Actions, and this works OK, but it's challenging to create constant motion, and it's difficult to imagine how to speed up and slow down the motion with Actions. My best results are with SKAction.sequence, but I have difficulty coordinating multiple elements to move in sync.
Any clues would be most appreciated.
You can create a host node – world: SKNode – to control all the moving objects; just add them as children. Then use that node to rule the world: set world.speed = 0.0 to pause all SKActions, or world.speed = 2.0 to run them twice as fast.
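The reason this solves the coordination problem is that one multiplier on the parent scales every child's motion at once. Outside SpriteKit, the bookkeeping can be sketched in a few lines (plain Python, not the SKNode API; the class and field names are illustrative):

```python
# Sketch of the "host node" idea: children never know about game speed;
# the parent applies one multiplier to all of them, so they stay in sync.
class World:
    def __init__(self):
        self.speed = 1.0          # analogous to SKNode.speed
        self.children = []        # list of (position, velocity) pairs

    def update(self, dt):
        self.children = [(pos + vel * dt * self.speed, vel)
                         for pos, vel in self.children]

world = World()
world.children = [(0.0, 10.0), (100.0, 10.0)]   # two obstacles, same velocity
world.speed = 2.0                               # twice as fast
world.update(1.0)
print(world.children[0][0])  # 20.0
```

Setting the multiplier to 0.0 freezes everything, which is also a convenient pause mechanism.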

Need some tips which localization method to use for my NAO robot (robotic) [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
For university I'm working on a project in which I have to teach a robot (a NAO robot) to play nine men's morris. Unfortunately I'm fairly new to the area of robotics and I need some tips on how to solve some problems. Currently I'm working on the localization/orientation of the robot, and I'm wondering which approach to localization would fit my project best.
A short explanation of the project:
The robot has a fixed starting position and has to walk around on a board which measures about 3x3 meters (I will post a picture of the board when I reach 10 reputation). There are no obstacles on the field except the game tokens, and the game lines are marked in yellow on the board. For orientation I use the robot's two cameras.
I found some approaches like
Monte Carlo Localization
SLAM (Simultaneous Localization and Mapping)
but these approaches seem quite complex for a beginner like me, and I would really appreciate it if someone has some good ideas for a simpler way to solve this problem. Functionality has a far higher priority for me than performance.
I have only vague knowledge of the nine men's morris game as such, but I will try to give you my simpler idea.
First things first, you need a map of your board. This should be easy in your case, because your environment is static. There are a few techniques for mapping your board; for your case I would suggest a metric map, i.e. an occupancy grid. Assign coordinates to each cell in the grid. This will be helpful for robot navigation.
As you have mentioned, your robot starts from a fixed position. On start-up, initialize your robot with this reference location and orientation (with respect to the X-Y axes of the grid; maybe you don't even need the cameras, I am not sure!). By initialization I mean: mark your position on the grid.
Use dead reckoning for localization, and keep updating the position and orientation of your robot as it moves across the board. I would hope that your robot gets some feedback from its servos, like the number of rotations and so forth. Do the math and update your robot's position coordinates as it moves into different cells of the grid.
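The dead-reckoning math itself is just trigonometry. A minimal sketch (Python; the function names, the per-step odometry inputs, and the 25 cm cell size are illustrative assumptions, not the NAOqi API):

```python
import math

def dead_reckon(pose, distance, dtheta):
    """pose = (x, y, theta in radians). Return the updated pose after
    turning by dtheta and then walking `distance` along the new heading."""
    x, y, theta = pose
    theta += dtheta
    x += distance * math.cos(theta)
    y += distance * math.sin(theta)
    return (x, y, theta)

def to_cell(pose, cell_size=0.25):
    """Map a metric pose onto an occupancy-grid cell (25 cm cells assumed)."""
    return (int(pose[0] // cell_size), int(pose[1] // cell_size))

pose = (0.0, 0.0, 0.0)                       # the fixed starting position
pose = dead_reckon(pose, 1.0, 0.0)           # walk 1 m straight ahead
pose = dead_reckon(pose, 0.5, math.pi / 2)   # turn left, walk 0.5 m
print(to_cell(pose))  # (4, 2)
```

Be aware that dead reckoning drifts: each step's error accumulates, which is why the camera (e.g. spotting the yellow lines) is useful for periodically correcting the estimate.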
You can use the A* algorithm to find a path for your robot. You need to do the path planning before you navigate. You also have to mark the game tokens on the grid, to avoid collisions when planning the path.
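A minimal A* over such an occupancy grid can look like this (Python sketch; 0 = free cell, 1 = occupied, e.g. by a game token; 4-connected moves with a Manhattan-distance heuristic):

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on an occupancy grid from start to goal (row, col)
    cells, or None if the goal is unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start), 0, start, [start])]  # (f = g + h, g, cell, path)
    seen = set()
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_set,
                               (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall of tokens forces a detour
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
</```

Re-run the planner whenever a token is placed or moved, since the occupancy grid changes.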

ios8 w/swift iPad only UX/UI design [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I'm implementing an iPad app, iOS 8.1+, in Swift/Metal, landscape only.
The main view has 3 containers, with a left pullout/slideout/drawer of icons for launching subprocesses.
The left slideout will contain only icons, no text, for things like database access, microphone, stencil overlay, video recording, AirPlay, iTunes, Dropbox, user config, etc.
The 3 main containers:
view 1 will hold a 3D rendered model; this will take up 75% of the space horizontally/vertically
view 2 will hold a 2D side projection of the rendered model in view 1 (a side or top view)
view 3 will hold either a detailed subview of something that is chosen in view 1 or view 2, a PDF document, or a web container
I am concerned about threading, as this app will be asynchronously pulling in large amounts of data, rendering via GPU buffers, and then pushing the results via AirPlay to a video screen.
That being said, there is no "Metal view container", but there are GLKView and SceneKit for 3D/2D.
Do I need to define 3 generic container views and build them up? Or is there another way to chop up the existing GL view for Metal?
Does anyone have such a Metal container already built?
Thanks for any positive help.
No, GLKView and GLKViewController are not meant to work with Metal, even though both render on the GPU. If you use Metal, you must create your own Metal view and Metal view controller. This is because OpenGL drawing goes through CAEAGLLayer while Metal drawing goes through CAMetalLayer. I don't know if anyone has done this yet. Most likely Apple will provide these classes in the next iteration of the SDK.
For the 3 containers, you can create 3 separate layers, but it's more efficient to manually tell Metal to draw 3 separate sections. However, this is not an easy task.
I don't think you have to worry about threading as long as you don't touch Metal buffer data during execution. Metal's buffer data is not copied when passed to the GPU (though you could copy it); OpenGL buffer data is copied.

iOS fastest way to draw many rectangles [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I want to display audio meters on the iPad, consisting of many small green, red, or black rectangles. They don't need to be fancy, but there may be a lot of them. I am looking for the best technique to draw them quickly. Which of the following techniques is better: a texture atlas in CALayers, OpenGL ES, or something else?
Thank you for your answers before the question was closed for being too broad. Unfortunately I couldn't make the question narrower, because I didn't know which technology to use. If I had known the answer, I could have made the question very narrow.
The fastest drawing would be to use OpenGL ES in a custom view.
An alternative method would be to use a texture atlas in CALayers. You could draw 9 sets of your boxes into a single image to start with (0-8 boxes on), and then create the 300 CALayers on screen all using that as their content. During each frame, you switch each layer to point at the part of the texture atlas it needs to use. I've never done this with 300 layers before, so I don't know if that may become a problem - I've only done it with a half dozen or so digits that were updating every frame, but that worked really well. See this blog post for more info:
http://supermegaultragroovy.com/2012/11/19/pragma-mark-calayer-texture-atlases/
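The per-frame "switch each layer to point at part of the atlas" step boils down to computing CALayer's contentsRect, which is expressed in the unit coordinate space (0..1). With the 9 meter states drawn side by side in one strip image, the math is just this (Python sketch of the arithmetic; the function name is illustrative):

```python
def contents_rect(frame_index, frame_count=9):
    """Normalized (x, y, width, height) of frame `frame_index` in a
    horizontal strip of `frame_count` equally sized frames - the value
    you would assign to a CALayer's contentsRect."""
    w = 1.0 / frame_count
    return (frame_index * w, 0.0, w, 1.0)

print(contents_rect(4))  # frame 4 of 9: x = 4/9, width = 1/9
```

Because changing contentsRect only re-points the layer at already-uploaded texture data, no redrawing happens on the CPU, which is why this approach stays fast even when many layers update per frame.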
The best way to draw something repeatedly is to avoid drawing it if it is already on the screen. Audio meters tend to update frequently, but because audio signals are relatively smooth, most of their area stays the same between updates, so you should track what's drawn and draw only the differences.
For example, if you drew a signal meter with fifty green squares in the previous update, and now you need to draw forty-eight green squares, you should redraw only the two squares that differ from the previous update. This should save you a lot of Quartz calls.
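The diff computation for a bar-style meter is tiny, since the lit squares always form a prefix. A sketch (Python; the function name is illustrative):

```python
def squares_to_redraw(prev_lit, new_lit):
    """Indices of the squares whose lit/unlit state changed between two
    updates of a bar meter - the only squares that need a draw call."""
    lo, hi = sorted((prev_lit, new_lit))
    return list(range(lo, hi))

print(squares_to_redraw(50, 48))  # [48, 49] - 2 draws instead of 50
```

Whether a changed square is drawn lit or unlit follows from the direction of the change (new_lit > prev_lit means the new squares turn on).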
Postpone rendering until it's absolutely necessary; i.e., assuming you're drawing with Core Graphics, use paths, and only stroke/fill the path once you have added all the rectangles to it.
