I have been doing research on the best way to implement undo/redo functionality for a painting app. I am using OpenGL ES 2.0 on iOS. The most popular approach seems to be to save a list of commands and VBOs and re-generate the painting to its previous state (the Memento design pattern). The other approach is to take graphical snapshots after each drawing action and revert to these snapshots on undo.
I have a problem with both approaches:
1) Memento - after a long list of actions, especially computationally intensive ones like flood fills, undo/redo will get very slow and processor-intensive.
2) Snapshots - after a long list of actions these snapshots will start to take up a lot of memory, especially if stored uncompressed.
I was wondering if anybody has found a solution that works well for this situation, or perhaps somebody here has an idea how to optimize the above approaches.
Thanks.
I don't think there's a way around limiting the number of steps that are undoable. You will always need some amount of memory to capture either the previous state, or the state change, for each undoable operation.
The Command pattern actually seems like a much more natural fit than the Memento pattern for handling undo/redo. Using it, you only store information about the specific change made by each operation. That can still be substantial depending on the operation, but it can be much more targeted than blindly saving entire object states with a Memento.
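A rough sketch of what that could look like for a painting app (all type names here are made up for illustration; this is a minimal sketch, not a drop-in implementation):

```swift
import UIKit

// A minimal sketch of the Command pattern for a painting app.
// Canvas, DrawCommand, and StrokeCommand are hypothetical names.
protocol Canvas {
    func clear()
    func renderStroke(points: [CGPoint], color: UIColor, width: CGFloat)
}

protocol DrawCommand {
    func apply(to canvas: Canvas)      // perform (or re-perform) the action
}

struct StrokeCommand: DrawCommand {
    let points: [CGPoint]
    let color: UIColor
    let width: CGFloat

    func apply(to canvas: Canvas) {
        canvas.renderStroke(points: points, color: color, width: width)
    }
}

final class CommandHistory {
    private var undoStack: [DrawCommand] = []
    private var redoStack: [DrawCommand] = []

    func perform(_ command: DrawCommand, on canvas: Canvas) {
        command.apply(to: canvas)
        undoStack.append(command)
        redoStack.removeAll()          // a new action invalidates any pending redos
    }

    // Naive undo: clear the canvas and replay everything except the last command.
    // This replay-from-scratch step is exactly what gets slow after many actions.
    func undo(on canvas: Canvas) {
        guard let last = undoStack.popLast() else { return }
        redoStack.append(last)
        canvas.clear()
        undoStack.forEach { $0.apply(to: canvas) }
    }

    func redo(on canvas: Canvas) {
        guard let next = redoStack.popLast() else { return }
        next.apply(to: canvas)
        undoStack.append(next)
    }
}
```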
I have decided to try a hybrid approach, where I save a bitmap snapshot every 10-15 actions and replay the recorded drawing commands to restore the individual actions performed after the most recent snapshot. A more in-depth answer is offered here: https://stackoverflow.com/a/3944758/2303367
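A minimal sketch of that hybrid idea, reusing the DrawCommand and Canvas types from the snippet above; the checkpoint interval and the Snapshotting protocol are my own assumptions, not something the linked answer prescribes:

```swift
import UIKit

// Hybrid undo: keep a bitmap checkpoint every N commands and replay only the
// commands recorded since the most recent checkpoint. Names are hypothetical.
final class HybridHistory {
    private struct Checkpoint {
        let snapshot: UIImage        // e.g. captured from the framebuffer
        let commandIndex: Int        // number of commands covered by the snapshot
    }

    private let checkpointInterval = 12     // assumed "every 10-15 actions"
    private var commands: [DrawCommand] = []
    private var checkpoints: [Checkpoint] = []

    func record(_ command: DrawCommand, canvas: Canvas & Snapshotting) {
        commands.append(command)
        if commands.count % checkpointInterval == 0 {
            checkpoints.append(Checkpoint(snapshot: canvas.captureSnapshot(),
                                          commandIndex: commands.count))
        }
    }

    func undo(on canvas: Canvas & Snapshotting) {
        guard !commands.isEmpty else { return }
        commands.removeLast()

        // Drop checkpoints that now lie beyond the end of the command list.
        while let last = checkpoints.last, last.commandIndex > commands.count {
            checkpoints.removeLast()
        }

        // Restore from the latest valid checkpoint, then replay only the tail.
        if let checkpoint = checkpoints.last {
            canvas.restore(from: checkpoint.snapshot)
            commands[checkpoint.commandIndex...].forEach { $0.apply(to: canvas) }
        } else {
            canvas.clear()
            commands.forEach { $0.apply(to: canvas) }
        }
    }
}

// Assumed capability of the canvas for taking and restoring bitmap snapshots.
protocol Snapshotting {
    func captureSnapshot() -> UIImage
    func restore(from snapshot: UIImage)
}
```

The trade-off is bounded in both directions: at most `checkpointInterval` commands are ever replayed on an undo, and memory grows by one bitmap per interval rather than one per action.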
I am looking for the best tool to achieve something like this (this is Blender's game engine: no real reflections, etc.) in a WebGL viewer.
http://youtu.be/9-n12ZH5O6k
The idea is to prepare several basic scenes like this and then let the user upload his design and have it previewed on a car (or on other, far more basic objects).
While p3d is nice, I don't think it does the job; there's no API for these cases yet. What are some options to pull this off? The requirement would be a library without too large a footprint, since the feature/product is planned for the Asian market, so internet speed has to be considered.
You should look into three.js or babylon.js, maybe? You won't build an app like that with a snap of the fingers, so read the tutorials as well, but these libraries will ease your task considerably.
I'm working on a little app that uses some GPUImage filters. After playing with it for a while I discovered that if I stack too many of those filters, I run into performance issues.
So now I have two questions:
1. How do I properly stack filters?
The way I am doing it currently is to loop over my GPUImageFilters, let each produce an output, and then use that output as the input for the next filter.
I found out that there are FilterGroups and I tried to use them, but they no longer calculated my output image correctly and also produced some crashes that were not always reproducible.
So what is the way to go here? I guess that with my approach I'm wasting a lot of time converting from UIImage to whatever GPUImage uses internally? Any ideas on how to improve that?
2. Is there a way to do it in the background?
I read that GPUImage is not thread-safe. But how would you, for example, filter one image with, say, 100 filters and display the results in a UICollectionView? Any chance to do this off the main thread? Currently I am doing the processing on the main thread and just wait a short amount of time between each filtering pass to give the UI some breathing room...
Any way to improve that?
Cheers,
Georg
If you're going to and from UIImages at each step, you're doing this about the slowest way possible. There's huge overhead in extracting the image data from a UIImage and converting it back again, because you're making a round trip through Core Graphics and going to and from the CPU.
Instead, if you want to perform multiple operations on a single image, you need to chain your filters using -addTarget:. Adding a filter as a target of another will cause its output to be fed on the GPU from one stage to the next. This avoids the horribly expensive UIImage-and-back you're doing now.
At the last stage, if you are displaying to the screen, I'd direct your last target to a GPUImageView. Otherwise, you can extract your UIImage from that final stage.
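To make that concrete, a chained setup might look roughly like this (Swift against GPUImage's Objective-C API; the filter choices and the "photo" asset name are just placeholders, and exact bridged signatures can vary by version):

```swift
import UIKit
import GPUImage

// Chain filters with addTarget: so intermediate results stay on the GPU,
// instead of round-tripping through UIImage between every stage.
let inputImage = UIImage(named: "photo")!        // placeholder asset name

let source = GPUImagePicture(image: inputImage)

let contrast = GPUImageContrastFilter()
contrast.contrast = 1.5

let sepia = GPUImageSepiaFilter()

source.addTarget(contrast)                       // source -> contrast
contrast.addTarget(sepia)                        // contrast -> sepia

// Either display the final stage in a GPUImageView (make it the last target)...
// sepia.addTarget(someGPUImageView)

// ...or extract a UIImage once, from the final filter only.
sepia.useNextFrameForImageCapture()
source.processImage()
let result = sepia.imageFromCurrentFramebuffer()
```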
GPUImage cannot currently be used in an iOS background process, due to its need for OpenGL ES access. However, that restriction was loosened in iOS 8, so it may be possible to do this now; I haven't taken the time to check on this.
If you want to filter one image using many filters, you can take one image input and target it at multiple filters simultaneously. Upon processing that image, all of the filters targeted at it will execute. The GPU is the limiting part of that process, and I already use a multithreaded internal dispatch queue, so you don't need to worry about running this on multiple threads yourself.
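A sketch of that fan-out arrangement (again Swift over the Objective-C API, with placeholder filter choices; verify the capture calls against the GPUImage version you use):

```swift
import UIKit
import GPUImage

// Target a single image source at several filters at once; each filter renders
// its own output from the same uploaded texture.
let source = GPUImagePicture(image: UIImage(named: "photo")!)   // placeholder asset

let filters: [GPUImageOutput & GPUImageInput] = [
    GPUImageSepiaFilter(),
    GPUImageGrayscaleFilter(),
    GPUImagePixellateFilter()
]

for filter in filters {
    source.addTarget(filter)
    filter.useNextFrameForImageCapture()    // ask each filter to keep its result
}

source.processImage()

// One UIImage per filter, e.g. to back a UICollectionView data source.
let thumbnails = filters.compactMap { $0.imageFromCurrentFramebuffer() }
```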
I saw the presentation "High-Performance Software Rasterization on GPUs" at High-Performance Graphics, and I was very impressed by the work/analysis/comparison.
http://www.highperformancegraphics.org/previous/www_2011/media/Papers/HPG2011_Papers_Laine.pdf
http://research.nvidia.com/sites/default/files/publications/laine2011hpg_paper.pdf
My background is CUDA; I started learning OpenGL two years ago to develop the 3D interface of EMM-Check, a field-of-view analysis program that checks whether a vehicle fulfills a specific standard or not. Essentially, you load a vehicle (or different parts of one), then you can move it as a whole or part by part, add mirrors/cameras, analyze the point of view and the shadows from the driver's point of view, etc.
We are dealing with some transparent elements (mainly the fields of view, but the vehicles themselves might be transparent too), so I wrote a rough algorithm to sort the elements to be rendered on the fly (at the primitive level, a kind of painter's algorithm), but of course there are cases in which it easily fails, although it is good enough for most cases.
For this reason I started googling and found many techniques, like (dual) depth peeling, A/R/K/F-buffers, etc.
But it looks like all of them suffer at high resolutions and/or with large numbers of triangles.
Since we also deal with millions of triangles (up to 10 million, more or less), I was looking for something else and ended up at software renderers: compared to the hardware ones, they offer full programmability, but they are slower.
So I wonder whether it might be possible to implement something hybrid, that is, use the hardware renderer for the opaque elements and a software one (CUDA/OpenCL) for the transparent elements, and then combine the two results.
Or maybe a simple ray-tracing algorithm in CUDA/OpenCL (no complex visual effects required, just position, color, simple lighting, and proper transparency) might be much simpler from this point of view and also give us a lot of freedom/flexibility in the future?
I did not find anything on the net regarding this... is there maybe some particular obstacle?
I would like to hear every thought/tip/idea/suggestion that you have regarding this.
PS: I also found "Single Pass Depth Peeling via CUDA Rasterizer" by Liu, but the solution from the first paper seems far faster.
http://webstaff.itn.liu.se/~jonun/web/teaching/2009-TNCG13/Siggraph09/content/talks/062-liu.pdf
I might suggest that you look at OpenRL, which will give you hardware-accelerated ray tracing.
I am working on an image processing app for the iOS, and one of the various stages of my application is a vector based image posterization/color detection.
Now, I've written code that can determine the posterized color per pixel, but going through each and every pixel of an image on the CPU would, I imagine, be quite taxing for an iOS device's processor. As such, I was wondering if it is possible to use the graphics processor instead.
I'd like to create a sort of "pixel shader" which uses OpenGL ES, or some other rendering technology, to process and posterize the image quickly. I have no idea where to start (I've written simple shaders for Unity3D, but never done the underlying programming for them).
Can anyone point me in the correct direction?
I'm going to come at this sideways and suggest you try out Brad Larson's GPUImage framework, which describes itself as "a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies". I haven't used it, and I assume you'll need to do some GL reading to add your own filtering, but it handles so much of the boilerplate and provides so many prepackaged filters that it's definitely worth looking into. It doesn't sound like you're otherwise particularly interested in OpenGL itself, so there's no real reason to learn it from scratch.
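For the posterization part specifically, I believe GPUImage ships a posterize filter, so a first pass might be as small as the sketch below (untested on my side; Swift over the Objective-C API, with a placeholder asset name, and property names worth double-checking against the version you install):

```swift
import UIKit
import GPUImage

// Posterize a UIImage on the GPU with GPUImage's built-in posterize filter,
// instead of walking every pixel on the CPU.
let input = UIImage(named: "photo")!          // placeholder asset name

let posterize = GPUImagePosterizeFilter()
posterize.colorLevels = 4                     // reduce each channel to 4 levels

let source = GPUImagePicture(image: input)
source.addTarget(posterize)

posterize.useNextFrameForImageCapture()
source.processImage()

let posterized = posterize.imageFromCurrentFramebuffer()
```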
I will add the sole consideration that under iOS 4 I found it often faster to do work on the CPU (using GCD to distribute it amongst cores) than on the GPU where I needed to be able to read the results back at the end for any sort of serial access. That's because OpenGL is generally designed so that you upload an image and then it converts it into whatever format it wants and if you want to read it back then it converts it back to the one format you expect to receive it in and copies it to where you want it. So what you save on the GPU you pay for because the GL driver has to shunt and rearrange memory. As of iOS 5 Apple have introduced a special mechanism that effectively gives you direct CPU access to OpenGL's texture store so that's probably not a concern any more.
Can anyone comment on the decision of whether or not to use sprites for images? I see the following benefits/trade-offs (some of which can be mitigated):
Sprites over individual images
Pros:
Fewer images to manage
Easier to implement themed images
Image swaps (JS/CSS) happen faster (because they do not require additional image loads)
Faster image loads due to fewer HTTP requests
Fewer images to cache (although virtually no difference in overall KB)
Cons:
More background positions to manage
Image payload may be over-inflated (the sprite may contain unused images), which can make page load slower
Slower image loads because they cannot be downloaded synchronously
I don't think there's one definitive answer to this. Opinions will differ according to need and individual preference.
My guideline is to always evaluate the benefit for the end user vs. the benefit for the developers, i.e., what is the real value of the work you're doing as a developer?
Reducing the number of HTTP requests is always one of the first things to fix when optimizing a web page. Proper usage of caching can achieve much of the same thing as using sprites does. After all, very often graphics can be cached for a really long time.
There might be more benefit in minifying scripts and stylesheets than in combining graphics into a sprite.
Your code for managing sprites might increase complexity and developer overhead, especially as the number of developers increases.
Learning the proper use of cache headers and configuring your web server or code correctly is often a more robust way of improving performance, in my opinion.
If you've got a decent number of menu entries for which you want roll-over images, I'd recommend going with a sprite system as opposed to multiple images, all of which need to be downloaded separately. My reasons are pretty much in line with what you have mentioned in your post, with a couple of modifications:
The image swaps wouldn't be done with JavaScript; most of the sprite setups I've seen just use :hover on the link itself within an unordered list.
Depending on the file type/compression, the download of the combined image file itself will be negligible, and downloading one image as opposed to multiple is generally faster in overall download and load time.