iOS 8 w/ Swift, iPad-only UX/UI design [closed]

I'm implementing an iPad app for iOS 8.1+ in Swift/Metal, landscape only.
The main view has 3 containers plus a left pullout/slideout/drawer of icons for launching subprocesses.
The left slideout will contain only icons, no text, for things like database access, microphone, stencil overlay, video recording, AirPlay, iTunes, Dropbox, user config, etc.
The 3 main containers:
View 1 will hold a 3D rendered model; this will take up 75% of the screen horizontally/vertically.
View 2 will hold a 2D projection of the rendered model in view 1 (i.e. a side or top view).
View 3 will hold either a detailed subview of something chosen in view 1 or view 2, or a PDF document, or a web container.
I am concerned about threading, as this app will be asynchronously pulling in large amounts of data, rendering via GPU buffers, and then pushing the results via AirPlay to a video screen.
That being said, there is no "Metal view container", but there are GLKView and SceneKit for 3D/2D.
Do I need to define 3 generic container views and build them up? Or is there another way to chop up the existing GL view for Metal?
Does anyone have such a Metal container already built?
Thanks for any help.

No, GLKView and GLKViewController are not meant to work with Metal, even though both execute on the GPU. If you are using Metal, you must create your own Metal view and Metal view controller, because OpenGL ES draws into a CAEAGLLayer while Metal draws into a CAMetalLayer. I don't know whether anyone has built these yet; most likely Apple will add such classes in the next iteration of the SDK.
For the 3 containers, you can create 3 separate layers, but it's more efficient to tell Metal yourself to draw into 3 separate sections. However, that is not an easy task.
I don't think you have to worry about threading as long as you don't touch Metal buffer data while it is being used by the GPU. Metal buffer data is not copied when passed to the GPU (though you can copy it yourself); OpenGL buffer data is copied.
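For reference, a minimal sketch of the kind of Metal-backed view described above. It is written against the current Swift Metal API (the Swift 1.x spellings at the time differed slightly), and the class is illustrative rather than anything Apple ships:

```swift
import UIKit
import Metal
import QuartzCore

// Minimal Metal-backed view: back the view with a CAMetalLayer instead of a CAEAGLLayer.
class MetalView: UIView {
    // Make CAMetalLayer the view's backing layer.
    override class var layerClass: AnyClass { return CAMetalLayer.self }

    var metalLayer: CAMetalLayer { return layer as! CAMetalLayer }

    let device = MTLCreateSystemDefaultDevice()!
    lazy var commandQueue: MTLCommandQueue = self.device.makeCommandQueue()!

    override init(frame: CGRect) {
        super.init(frame: frame)
        metalLayer.device = device
        metalLayer.pixelFormat = .bgra8Unorm
        metalLayer.framebufferOnly = true
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // Clears the drawable each frame; real rendering would encode draw calls here.
    func render() {
        guard let drawable = metalLayer.nextDrawable() else { return }

        let pass = MTLRenderPassDescriptor()
        pass.colorAttachments[0].texture = drawable.texture
        pass.colorAttachments[0].loadAction = .clear
        pass.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 1)
        pass.colorAttachments[0].storeAction = .store

        let commandBuffer = commandQueue.makeCommandBuffer()!
        let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass)!
        encoder.endEncoding()
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}
```

Driving render() from a CADisplayLink gives you the per-frame callback that GLKViewController would otherwise provide.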

Related

How to read a video file from disk in real time in iOS [closed]

I would like to read a movie file frame by frame, run some image processing algorithms on each frame, and display the resultant frames sequentially in an iOS app. There are mechanisms for doing this with the live camera feed, such as using an AVCaptureSession, and I would like to do something similar with movies already saved to disk.
I am trying to do this with an AVAssetReader and an AVAssetReaderTrackOutput; however, the documentation clearly states that "AVAssetReader is not intended for use with real-time sources, and its performance is not guaranteed for real-time operations".
So my question is: what is the right way to read movie frames in real-time?
Edit:
To further clarify what I mean by "real-time": it is possible to capture frames from the camera feed, run computer vision algorithms on each frame (e.g. object detection, filtering, etc.), and display the processed frames at a reasonable frame rate (30-60 FPS). I would like to do exactly the same thing, except that the input source is a video already saved on disk.
I don't want to read the entire movie, process the whole thing, and display the result only once the entire processing pipeline is finished (I would consider that non-real time). I want the processing to be done frame by frame, and in order to do that the file has to be read in frame by frame in real time.
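For concreteness, the AVAssetReader/AVAssetReaderTrackOutput setup described above looks roughly like this (a sketch only; the pixel format choice and the lack of error handling are simplifications):

```swift
import AVFoundation
import CoreMedia
import CoreVideo

// Sketch: pull decoded frames one at a time from a movie file on disk.
func readFrames(from url: URL) throws {
    let asset = AVAsset(url: url)
    guard let videoTrack = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let outputSettings: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]
    let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: outputSettings)
    reader.add(output)
    reader.startReading()

    // copyNextSampleBuffer() returns nil when the track is exhausted or reading fails.
    while let sampleBuffer = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            // Run the per-frame image processing here, then hand the result off for display.
            _ = pixelBuffer
        }
    }
}
```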
To play back and process a video in real time, you can use the AVPlayer class. The simplest way to live-process video frames is through a custom video composition set on the AVPlayerItem.
You might want to check out this sample project from Apple where they highlight HDR parts in a video using Core Image filters. It shows the whole setup required for real-time processing and playback.
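A minimal sketch of that approach, using AVVideoComposition's Core Image handler (the specific filter here is just a placeholder for whatever per-frame processing you need):

```swift
import AVFoundation
import CoreImage

// Sketch: play a movie from disk while filtering every frame with Core Image.
func makeFilteredPlayer(for url: URL) -> AVPlayer {
    let asset = AVAsset(url: url)

    // The handler is invoked once per frame during playback, off the main thread.
    let composition = AVVideoComposition(asset: asset) { request in
        let filtered = request.sourceImage.applyingFilter("CIPhotoEffectNoir", parameters: [:])
        request.finish(with: filtered, context: nil)
    }

    let item = AVPlayerItem(asset: asset)
    item.videoComposition = composition
    return AVPlayer(playerItem: item)
}
```

Present the returned AVPlayer with an AVPlayerLayer (or AVPlayerViewController) and call play(); the composition handler then runs for each frame as it is displayed.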

Paint Bucket in iOS [closed]

I'm stuck on a problem and need some help or guidance toward a possible solution.
Basically, in my application there will be a map with several zones.
The user can select any of these areas, and at that moment the area is filled with a color.
Imagine a map like this one; I need to be able to change the color of only one country.
Something like what coloring-book apps do (https://itunes.apple.com/pt/app/colorfly-best-coloring-book/id1020187921?mt=8), or the Paint Bucket command in Photoshop.
Any idea how to achieve something like this on iOS?
Thanks in advance.
The paint bucket technique you're looking for is a set of graphics algorithms usually called "flood fill". There are different approaches to the implementation depending on the circumstances and performance needs. (There is more at that Wikipedia link.)
I have no experience with it, but here is a library from GitHub that purports to implement this for iOS given a UIImage object: https://github.com/Chintan-Dave/UIImageScanlineFloodfill
Re: your question about doing this without user touch: yes, you'll want to keep a map of countries to (x,y) points so you can re-flood countries when required. That said, the intricacies of the country borders might make an algorithmic fill inexact without more careful normalization of the original source. If your overall map only contains a small set of possible states, there are other ways of achieving this goal, like keeping a complete set of possible images (created in, e.g., Photoshop) and switching them out, or keeping a set of per-country "overlay" images that you swap in as needed. (But if the flood fill is accurate on that source image, and performant for your needs, then great.)
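That library aside, the core of a flood fill is small. Here is a minimal sketch over raw RGBA pixel data (the function and parameter names are made up for illustration; a production version would also handle color tolerance):

```swift
// Sketch: stack-based flood fill over a packed 32-bit-per-pixel buffer.
func floodFill(pixels: inout [UInt32], width: Int, height: Int,
               startX: Int, startY: Int, fillColor: UInt32) {
    guard startX >= 0, startX < width, startY >= 0, startY < height else { return }

    let targetColor = pixels[startY * width + startX]
    guard targetColor != fillColor else { return }

    var stack = [startY * width + startX]
    while let index = stack.popLast() {
        guard pixels[index] == targetColor else { continue }
        pixels[index] = fillColor   // recolor this pixel, then visit its 4 neighbours

        let x = index % width
        let y = index / width
        if x > 0          { stack.append(index - 1) }
        if x < width - 1  { stack.append(index + 1) }
        if y > 0          { stack.append(index - width) }
        if y < height - 1 { stack.append(index + width) }
    }
}
```

To use this on a UIImage you would draw the image into a CGContext, run the fill over the context's pixel buffer, and create a new image from the result.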

iOS fastest way to draw many rectangles [closed]

I want to display audio meters on the iPad consisting of many small green, red or black rectangles. They don't need to be fancy, but there may be a lot of them. I am looking for the best technique to draw them quickly. Which technique is better: a texture atlas in CALayers, OpenGL ES, or something else?
Thank you for your answers before the question was closed for being too broad. Unfortunately I couldn't make the question narrower because I didn't know which technology to use; if I had known the answer, I could have made the question very narrow.
The fastest drawing would be to use OpenGL ES in a custom view.
An alternative method would be to use a texture atlas in CALayers. You could draw 9 sets of your boxes into a single image to start with (0-8 boxes on), and then create the 300 CALayers on screen all using that as their content. During each frame, you switch each layer to point at the part of the texture atlas it needs to use. I've never done this with 300 layers before, so I don't know if that may become a problem - I've only done it with a half dozen or so digits that were updating every frame, but that worked really well. See this blog post for more info:
http://supermegaultragroovy.com/2012/11/19/pragma-mark-calayer-texture-atlases/
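A rough sketch of that contentsRect technique, assuming a horizontal strip atlas of 9 meter states (the layout numbers and names are placeholders):

```swift
import UIKit

// Sketch: many CALayers sharing one atlas bitmap; each layer shows one slice of it.
// Assumes `atlas` is a horizontal strip of 9 equally sized meter images (0-8 boxes lit).
func makeMeterLayers(atlas: UIImage, count: Int, meterSize: CGSize) -> [CALayer] {
    return (0..<count).map { i in
        let layer = CALayer()
        layer.frame = CGRect(x: CGFloat(i) * (meterSize.width + 2), y: 0,
                             width: meterSize.width, height: meterSize.height)
        layer.contents = atlas.cgImage                 // every layer shares the same bitmap
        layer.contentsRect = CGRect(x: 0, y: 0, width: 1.0 / 9.0, height: 1.0)
        return layer
    }
}

// Per frame, point a layer at a different slice of the atlas. contentsRect is in unit
// coordinates, so no new drawing happens - only which part of the bitmap is shown changes.
func show(state: Int, in layer: CALayer) {
    layer.contentsRect = CGRect(x: CGFloat(state) / 9.0, y: 0, width: 1.0 / 9.0, height: 1.0)
}
```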
The best way to draw something repeatedly is to avoid drawing it at all if it is already on the screen. Audio meters tend to update frequently, but most of their area stays the same from frame to frame because audio signals are relatively smooth, so you should track what has already been drawn and draw only the differences.
For example, if the previous update drew a meter with fifty green squares and now you need to draw forty-eight, you should redraw only the two squares that differ from the previous update. This should save you a lot of Quartz calls.
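Sketched with UIKit, under the assumption of a custom meter view that redraws in draw(_:) (the property and constant names are made up):

```swift
import UIKit

// Sketch: invalidate only the strip of squares whose on/off state actually changed.
class MeterView: UIView {
    private let squareHeight: CGFloat = 6

    var level: Int = 0 {        // number of squares currently lit
        didSet {
            guard level != oldValue else { return }
            let lo = min(level, oldValue)
            let hi = max(level, oldValue)
            // Rect covering only the squares between the old and new level.
            let dirty = CGRect(x: 0,
                               y: CGFloat(lo) * squareHeight,
                               width: bounds.width,
                               height: CGFloat(hi - lo) * squareHeight)
            setNeedsDisplay(dirty)   // draw(_:) is asked to redraw only this rect
        }
    }
}
```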
Postpone rendering until it's absolutely necessary; i.e., assuming you're drawing with Core Graphics, use paths, and only stroke/fill the path once you have added all the rectangles to it.
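A sketch of that batching idea inside draw(_:), with one path (and one fill call) per color rather than one per rectangle (the rectangle arrays are placeholders):

```swift
import UIKit

// Sketch: accumulate all rectangles of a color into a single path, then fill once.
class MeterGridView: UIView {
    var greenRects: [CGRect] = []
    var redRects: [CGRect] = []

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        fill(rects: greenRects, color: .green, in: context)
        fill(rects: redRects, color: .red, in: context)
    }

    private func fill(rects: [CGRect], color: UIColor, in context: CGContext) {
        let path = CGMutablePath()
        for r in rects { path.addRect(r) }   // build the whole path first
        context.addPath(path)
        context.setFillColor(color.cgColor)
        context.fillPath()                   // one fill call for all rectangles of this color
    }
}
```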

iOS Cocos2D optimization [closed]

I'm building a game that reads a two-dimensional array to create the map, but the walls are all separate from the corners and floors: each wall, each corner, and each floor is an individual image, and this is consuming a lot of CPU. I really want the map to feel random, which is why I'm using a separate image for each corner and wall.
I was thinking that maybe I could generate a single texture by merging 2 or more different textures, to improve performance.
Does anyone know how I could do that? Or is there another solution? Would converting the images to PVR make any difference?
Thanks
For starters, you should use a texture atlas, created with a tool like TexturePacker, grouping as many of your images as possible onto a single atlas. Basically, you load it once and create as many sprites from it as you want without having to reload. Using PVR will speed up the load and reduce your bundle size.
Secondly, especially for the map background, you should use a CCSpriteBatchNode that you initialize with the above sprite sheet. Then, when you create a tile, just create the sprite and add it to the batch node, and add the batch node to your scene. The benefit is that regardless of the number of sprites (tiles) contained in the batch node, they will all be drawn in a single GL call. That is where you will gain the most from a performance standpoint.
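If it helps to see the shape of it, here is a rough Swift-syntax sketch of that batch-node setup. cocos2d of that era is an Objective-C framework, so the bridged names used below (CCSpriteBatchNode(file:capacity:), CCSprite(spriteFrameName:), CCSpriteFrameCache) are assumptions that may need adjusting for your cocos2d version, and the file names, frame names, and tile size are placeholders:

```swift
// Rough sketch only: assumes cocos2d's Objective-C classes are bridged into Swift.
func buildMapNode(tileIDs: [[Int]], frameNames: [Int: String]) -> CCSpriteBatchNode {
    // Load the atlas metadata once (the plist produced by TexturePacker).
    CCSpriteFrameCache.sharedSpriteFrameCache().addSpriteFramesWithFile("tiles.plist")

    // Every sprite added to this node shares one texture and is drawn in a single GL call.
    let batch = CCSpriteBatchNode(file: "tiles.png", capacity: 512)

    for (row, columns) in tileIDs.enumerated() {
        for (col, id) in columns.enumerated() {
            let tile = CCSprite(spriteFrameName: frameNames[id] ?? "floor.png")
            tile.position = CGPoint(x: CGFloat(col) * 32, y: CGFloat(row) * 32)
            batch.addChild(tile)
        }
    }
    return batch   // add this single node to the scene instead of hundreds of separate sprites
}
```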
Finally, don't rely on the FPS information when running in the simulator. The simulator does not make use of the host's GPU, and its performance is well below what you get on a device. So before posting a question about performance, make certain you measure on a device.

Extracting images from PSD for use in iOS app [closed]

I'm not sure if this is the best forum for this, because it's not a programming question per se, but here goes.
I am the developer for an iOS application, and we contracted the design out to a third party. They delivered a massive Photoshop file with all of the individual pieces of artwork done on individual layers, at double resolution. To get the artwork into Xcode, my workflow is as follows:
1. Show only the layers containing a particular unit of artwork
2. Select all
3. Copy Merged
4. Create a new image (fortunately, the dimensions are taken care of automatically)
5. Paste
6. Deselect the pasted layer and delete Background, to preserve transparency
7. Save the image as x.psd
8. Save a copy as x@2x.png
9. Set the image size to 50% of the original dimensions
10. Save a copy as x.png
11. Discard changes
This app is pretty large, so it's quite tedious to do this process for every little image. I'm not very Photoshop savvy, so I'm wondering if there is a better way. It seems to me that it should be easy enough to combine steps 3-11 into one macro or script or something. The only thing that changes in each iteration over these steps is the output name. Any suggestions?
The normal workflow is exactly as you described. You can write a Photoshop script to do the layer exporting, and Apple provides an Automator tool that will allow you to resize those graphics from 2x down by 50%. There is a great tutorial here. This can help get your graphics to scale quickly.
There are solutions to automate what you're trying to accomplish. This video tutorial shows how to take your PSD or PNG and port it into an Xcode project with all of the layers properly placed in a view for you, and how to create view controllers and segues.
Disclaimer - I am associated with the JUMPSTART Platform as mentioned in the video.
You can script Photoshop with JavaScript, and I've written scripts in the past to perform similar series of steps; it wasn't too hard to figure out, even for someone like me who'd never written any JavaScript before. Photoshop also has 'Actions', which are like macros, and you can probably do something simple like this with Actions as well, but it's not something I've personally tried. Check out the Adobe docs on scripting Photoshop: Adobe Photoshop Scripting.
