I am having strange issues with tracing bitmaps that have been imported into Flash Pro CS6.
I import large HD bitmaps into Flash, trace them to vectors, and then shrink the vectors down to about 1/15th the size of the original. This allows me to use bitmap images without the grainy, pixelated look.
I have done this for quite some time, but on my current project the traced vectors are causing Flash to lag really badly, and the published iOS version is lagging horribly as well.
Not sure if I'm missing something; please help.
This is likely because your vectors contain too many points.
You can use the Smooth tool to go from a dense, point-heavy trace to a much simpler one, or use the Optimize Curves command for a similar effect.
I've written an app with an object-detection model that processes images when an object is detected. The problem I'm running into is that an object is sometimes detected with 99% confidence even though the frame I'm processing is very blurry.
I've considered analyzing the frame and attempting to detect blurriness, or detecting device movement and not analyzing frames when the device is moving a lot.
Do you have any other suggestions for only processing non-blurry photos, or solutions other than the ones I've proposed? Thanks.
You might have issues detecting "movement" when, for instance, driving in a car. In that case, looking at something inside the car is not considered movement, while looking at something outside is (unless it's far away). There can be many other cases like this.
I would start by checking whether the camera is in focus. It is not the same as checking whether the frame is blurry, but it might be very close.
The other option I can think of is simply comparing two or more sequential frames to see whether they are roughly the same. To do something like that, it is best to define a grid, for instance 16x16, over which you evaluate the values. You would need to mipmap your photos, which done manually means halving the image until you get down to a 16x16 image (2000x1500 would become 1024x1024 -> 512x512 -> 256x256 ...). Then grab those 16x16 pixels and store them. Once you have enough frames (at least 2), you can start comparing these values. The GPU is perfect for the resizing, but those 16x16 values are probably best evaluated on the CPU. What you need to do is basically find the average pixel difference between two sequential 16x16 buffers, then use that to decide whether detection should be enabled.
This procedure may still not be perfect, but it should be relatively cheap from a performance perspective. There may be some shortcuts, since some tools already do the resizing so you don't need to "halve" the images manually. From a theoretical perspective, you are creating sectors and computing their average color. If all the sectors have almost the same color across two or more frames, there is a high chance the camera did not move much in that time and the image should not be blurred by movement. Still, if the camera is not in focus, you can have multiple sequential frames that are exactly the same but all blurry; the same limitation applies if you only detect phone movement.
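A rough Swift sketch of that comparison step (the 16x16 grid, the Core Graphics downscale, and the difference threshold are all illustrative assumptions, not a prescribed implementation):

```swift
import CoreGraphics

// Downscale a frame to a tiny grid and return its grayscale cell values.
func gridValues(for frame: CGImage, side: Int = 16) -> [UInt8]? {
    var cells = [UInt8](repeating: 0, count: side * side)
    let ok: Bool = cells.withUnsafeMutableBytes { buffer in
        guard let ctx = CGContext(data: buffer.baseAddress, width: side, height: side,
                                  bitsPerComponent: 8, bytesPerRow: side,
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.none.rawValue) else { return false }
        ctx.interpolationQuality = .medium  // averages pixels while shrinking
        ctx.draw(frame, in: CGRect(x: 0, y: 0, width: side, height: side))
        return true
    }
    return ok ? cells : nil
}

// Average per-cell difference between two sequential frames; a small value
// suggests the camera was steady, so object detection can be enabled.
func isSteady(previous: [UInt8], current: [UInt8], threshold: Double = 8) -> Bool {
    guard previous.count == current.count, !previous.isEmpty else { return false }
    let total = zip(previous, current).reduce(0) { $0 + abs(Int($1.0) - Int($1.1)) }
    return Double(total) / Double(previous.count) < threshold
}
```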
I hear a lot that power-of-two textures are better for performance reasons, but I couldn't find much solid information about whether this is a problem when using XNA. Most of my textures have arbitrary dimensions and I don't see much of a problem, but maybe the VS profiler doesn't show that.
In general, power-of-two textures are better, but most graphics cards should handle non-power-of-two textures with a minimal loss of performance. However, if you use the XNA Reach profile, only power-of-two textures are allowed, and some low-end graphics cards only support the Reach profile.
XNA is really a layer built on top of DirectX, so any performance guidelines that apply there also apply to anything using XNA.
The VS profiler also won't really capture the graphics-specific things you are doing. That needs to be profiled separately with a tool that can check how the graphics card itself is doing. If the graphics card is struggling, it won't show up as high resource usage on your CPU, but rather as slow rendering.
Our product contains a kind of software image decoder that essentially produces full-frame pixel data that needs to be rapidly copied to the screen (we're running on iOS).
Currently we're using CGBitmapContextCreate and we access the memory buffer directly, then for each frame we call CGBitmapContextCreateImage and draw that bitmap to the screen. This is WAY too slow for full-screen refreshes on the iPad's Retina display at a decent framerate (but it was okay for non-Retina devices).
We've tried all kinds of OpenGL ES-based approaches, including the use of glTexImage2D and glTexSubImage2D (essentially rendering to a texture), but CPU usage is still high and we can't get more than ~30 FPS for full-screen refreshes on the iPad 3. The problem is that at 30 FPS, CPU usage is nearly 100% just for copying the pixels to the screen, which means we don't have much headroom left for our own rendering on the CPU.
We are open to using OpenGL or any iOS API that would give us maximum performance. The pixel data is 32-bit-per-pixel RGBA, but we have some flexibility there...
Any suggestions?
So, the bad news is that you have run into a really hard problem. I have been doing quite a lot of research in this specific area, and currently the only way that you can actually blit a framebuffer that is the size of the full screen at 2x is to use the H.264 decoder. There are quite a few nice tricks that can be done with OpenGL once you have image data already decoded into actual memory (take a look at GPUImage).
But the big problem is not how to move the pixels from live memory onto the screen. The real issue is how to move the pixels from the encoded form on disk into live memory. One can use file-mapped memory to hold the pixels on disk, but the IO subsystem is not fast enough to page in enough data to stream 2x full-screen-size images from mapped memory. This used to work great with 1x full-screen sizes, but the 2x screens require 4x the amount of memory and the hardware just cannot keep up.
You could also try to store frames on disk in a more compressed format, like PNG. But then decoding the compressed format turns the problem from IO-bound into CPU-bound, and you are still stuck. Please have a look at my blog post opengl_write_texture_cache for the full source code and timing results I found with that approach.
If you have a very specific format that you can limit the input image data to (like an 8-bit table), then you could use the GPU to blit 8-bit data as 32BPP pixels via a shader, as shown in the example Xcode project opengl_color_cycle. But my advice would be to look at how you could make use of the H.264 decoder, since it is actually able to decode that much data in hardware, and no other approach is likely to give you the kind of results you are looking for.
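For what it's worth, here is a minimal sketch of driving the hardware H.264 decoder through AVFoundation; the file URL and the BGRA output format are assumptions for illustration, and this is not the code from the blog posts mentioned above:

```swift
import AVFoundation

// Let AVAssetReader drive the hardware decoder and hand back decoded frames
// as IOSurface-backed CVPixelBuffers.
func readFrames(from url: URL, handler: (CVPixelBuffer) -> Void) throws {
    let asset = AVAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    reader.add(output)
    guard reader.startReading() else { return }

    // Each decoded pixel buffer can be bound as a GL texture without a copy
    // (e.g. via CVOpenGLESTextureCache).
    while let sample = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
            handler(pixelBuffer)
        }
    }
}
```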
After several years, and several different situations where I ran into this need, I've decided to implement a basic "pixel viewer" view for iOS. It supports highly optimized display of a pixel buffer in a wide variety of formats, including 32-bpp RGBA, 24-bpp RGB, and several YpCbCr formats.
It also supports all of the UIViewContentMode* for smart scaling, scale to fit/fill, etc.
The code is highly optimized (using OpenGL), and achieves excellent performance even on older iOS devices such as the iPhone 5 or the original iPad Air. On those devices it achieves 60 FPS on all pixel formats except for 24-bpp formats, where it achieves around 30-50 FPS (I usually benchmark by showing a pixel buffer at the device's native resolution, so obviously an iPad has to push far more pixels than the iPhone 5).
Please check out EEPixelViewer.
CoreVideo is most likely the framework you should be looking at. With the OpenGL and CoreGraphics approaches, you're being hit hard by the cost of moving bitmap data from main memory onto GPU memory. This cost exists on desktops as well, but is especially painful on iPhones.
In this case, OpenGL won't net you much of a speed boost over CoreGraphics because the bottleneck is the texture data copy. OpenGL will get you a more efficient rendering pipeline, but the damage will have already been done by the texture copy.
So CoreVideo is the way to go. As I understand the framework, it exists to solve the very problem you're encountering.
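To illustrate the zero-copy path CoreVideo enables, here is a rough sketch of binding an IOSurface-backed CVPixelBuffer as an OpenGL ES texture through a texture cache; the RGBA format and the pre-existing EAGLContext are assumptions:

```swift
import OpenGLES
import CoreVideo

// One texture cache per EAGLContext; 'context' is assumed to already exist.
func makeTextureCache(context: EAGLContext) -> CVOpenGLESTextureCache? {
    var cache: CVOpenGLESTextureCache?
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &cache)
    return cache
}

// Wraps the pixel buffer's memory as a GL texture without copying the pixels.
func texture(for pixelBuffer: CVPixelBuffer,
             cache: CVOpenGLESTextureCache) -> CVOpenGLESTexture? {
    var texture: CVOpenGLESTexture?
    CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache, pixelBuffer, nil,
        GLenum(GL_TEXTURE_2D), GL_RGBA,
        GLsizei(CVPixelBufferGetWidth(pixelBuffer)),
        GLsizei(CVPixelBufferGetHeight(pixelBuffer)),
        GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), 0, &texture)
    return texture
}
// Bind with glBindTexture(GLenum(GL_TEXTURE_2D), CVOpenGLESTextureGetName(tex))
// and draw a full-screen quad as usual.
```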
A pbuffer or FBO can then be used as a texture map for further rendering by OpenGL ES. This is called render-to-texture (RTT), and it's much quicker. Search for "pbuffer" or "FBO" in the EGL documentation.
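A bare-bones FBO render-to-texture setup in OpenGL ES, as a sketch of the idea above (the size and filtering choices are illustrative):

```swift
import OpenGLES

// Create a texture, attach it to a framebuffer object, render into the FBO,
// then sample the texture in later passes like any other texture.
func makeRenderTarget(width: GLsizei, height: GLsizei) -> (fbo: GLuint, texture: GLuint) {
    var texture: GLuint = 0
    glGenTextures(1, &texture)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
    // Clamp so the target also works with non-power-of-two sizes in ES 2.0.
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)

    var fbo: GLuint = 0
    glGenFramebuffers(1, &fbo)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                           GLenum(GL_TEXTURE_2D), texture, 0)
    assert(glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER)) == GLenum(GL_FRAMEBUFFER_COMPLETE))
    return (fbo, texture)
}
```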
Due to performance bottlenecks in Core Graphics, I'm trying to use OpenGL ES on iOS to draw a 2D scene, but OpenGL is rendering my images at an incredibly low resolution.
Here's part of the image I'm using (in Xcode) alongside OpenGL's rendering of the texture (in the simulator).
I'm using Apple's Texture2D class to create a texture from a PNG and then draw it to the screen. I'm using an orthographic projection to look straight down on the scene, as recommended in Apress's Beginning iPhone Development book. The image happens to be exactly the size of the screen and is drawn as such (I didn't capture the full image in the screenshots above). I'm not using any transforms on the model, so drawing the image should cause no sub-pixel rendering.
I'm happy to post code examples, but thought I'd start without them in case there's a simple explanation.
Why would the image lose so much quality using this method? Am I missing a step in my environment setup? I briefly read about textures performing better at power-of-two sizes -- does this matter on the iPhone? I also read about multisampling, but I wasn't sure whether that was related.
Edit: Updated the screenshots to alleviate confusion.
I still don't fully understand why I was having quality issues with my code, but I managed to work around the problem. I was originally using the OpenGLES2DView class provided by the makers of the Apress book "Beginning iPhone 4 Development", and I suspect the problem lay somewhere within that code. I then came across a tutorial that suggested I use GLKit instead. GLKit allowed me to do exactly what I wanted!
A few lessons learned:
Using GLKit and the sprites class in the tutorial linked above, my textures don't need to be powers of two. GLKit seems to handle this fine.
Multisampling was not the solution to the problem. With GLKit, using multisampling seemingly has no effect (to my eye) when rendering 2D graphics.
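For reference, a minimal GLKit texture-loading sketch of the route described above (the file name "scene.png" and the view-size handling are illustrative assumptions); GLKTextureLoader accepts non-power-of-two PNGs directly:

```swift
import GLKit

// Load a (possibly non-power-of-two) PNG from the bundle into a GL texture.
func loadTexture() -> GLKTextureInfo? {
    guard let url = Bundle.main.url(forResource: "scene", withExtension: "png") else { return nil }
    // OriginBottomLeft flips the image so it matches OpenGL's texture coordinates.
    let options = [GLKTextureLoaderOriginBottomLeft: NSNumber(value: true)]
    return try? GLKTextureLoader.texture(withContentsOf: url, options: options)
}

// Draw the texture with GLKBaseEffect using an orthographic projection,
// so one unit maps to one point of the view and no extra scaling occurs.
func makeEffect(texture: GLKTextureInfo, viewSize: CGSize) -> GLKBaseEffect {
    let effect = GLKBaseEffect()
    effect.transform.projectionMatrix =
        GLKMatrix4MakeOrtho(0, Float(viewSize.width), 0, Float(viewSize.height), -1, 1)
    effect.texture2d0.name = texture.name
    effect.texture2d0.enabled = GLboolean(GL_TRUE)
    return effect
}
```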
Thanks to all who tried to help.
I'm doing real-time frame-by-frame analysis of a video stream in iOS.
I need to assign a score to each frame for how in focus it is. The method must be very fast to calculate on a mobile device and should be fairly reliable.
I've tried simple things like summing after using an edge detector, but haven't been impressed by the results. I've also tried using the focus scores provided in the frame's metadata dictionary, but they're significantly affected by the brightness of the image and are much more device-specific.
What are good ways to calculate a fast, reliable focus score?
Poor focus means that edges are not very sharp, and small details are lost. High JPEG compression gives very similar distortions.
Compress a copy of your image heavily, unpack it, and calculate the difference with the original. A large difference, even at just a few spots, should mean that the source image had sharp details that were lost in compression. If the difference is relatively small everywhere, the source was already fuzzy.
The method can easily be tried in an image editor. (No, I have not tried it yet.) Hopefully the iPhone already has an optimized JPEG compressor.
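A rough sketch of this idea on iOS, assuming UIKit is available; the 0.05 JPEG quality and the 256-pixel comparison size are arbitrary illustrative choices:

```swift
import UIKit

// Score sharpness by re-encoding the frame as a very low-quality JPEG and
// measuring how much detail the round trip destroys.
func compressionSharpnessScore(for image: UIImage) -> Double? {
    guard let jpeg = image.jpegData(compressionQuality: 0.05),
          let degraded = UIImage(data: jpeg),
          let a = grayscalePixels(of: image),
          let b = grayscalePixels(of: degraded),
          a.count == b.count else { return nil }
    // Mean absolute difference: larger means more fine detail was lost, i.e. a sharper source.
    let diff = zip(a, b).reduce(0) { $0 + abs(Int($1.0) - Int($1.1)) }
    return Double(diff) / Double(a.count)
}

// Renders an image into a small 8-bit grayscale buffer for a cheap comparison.
private func grayscalePixels(of image: UIImage, side: Int = 256) -> [UInt8]? {
    guard let cgImage = image.cgImage else { return nil }
    var pixels = [UInt8](repeating: 0, count: side * side)
    let ok: Bool = pixels.withUnsafeMutableBytes { buffer in
        guard let ctx = CGContext(data: buffer.baseAddress, width: side, height: side,
                                  bitsPerComponent: 8, bytesPerRow: side,
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.none.rawValue) else { return false }
        ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: side, height: side))
        return true
    }
    return ok ? pixels : nil
}
```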
A simple answer, which the human visual system probably uses, is to implement focusing on top of edge tracking. If a set of edges can be tracked across a visual sequence, one can work with the intensity profile of those edges alone to determine when it is steepest.
From a theoretical point of view, blur manifests as a loss of high-frequency content. Thus, you can just do an FFT and check the relative frequency distribution. The iPhone uses ARM Cortex chips, which have NEON instructions that can be used for an efficient FFT implementation.
#9000's suggestion of heavily compressed JPEG has the effect of keeping only a very small number of the largest wavelet coefficients, which usually results in what is, in essence, a low-pass filter.
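To make the frequency-domain idea concrete, here is a deliberately naive sketch that scores a small grayscale patch by the share of its spectral energy in the upper half of the spectrum; a real implementation would use the NEON-backed FFT in Accelerate/vDSP rather than this slow row-wise DFT, and the patch size and cutoff are illustrative choices:

```swift
import Foundation

// grayPatch: rows of grayscale samples (e.g. a 64x64 crop). Returns the fraction
// of non-DC spectral energy that sits in the high-frequency half of each row's
// spectrum; blurrier frames should score lower.
func highFrequencyEnergyRatio(grayPatch: [[Float]]) -> Float {
    var highEnergy: Float = 0
    var totalEnergy: Float = 0
    for row in grayPatch {
        let n = row.count
        guard n > 1 else { continue }
        for k in 1..<n {                        // skip k == 0, the DC term
            var re: Float = 0, im: Float = 0
            for (j, sample) in row.enumerated() {
                let angle = -2 * Float.pi * Float(k * j) / Float(n)
                re += sample * cos(angle)
                im += sample * sin(angle)
            }
            let energy = re * re + im * im
            totalEnergy += energy
            if k > n / 4 && k < 3 * n / 4 {     // upper half of the (symmetric) spectrum
                highEnergy += energy
            }
        }
    }
    return totalEnergy > 0 ? highEnergy / totalEnergy : 0
}
```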
Consider different kinds of edges, e.g. peaks versus step edges. The latter will still be present regardless of focus. To isolate the former, use non-maximum suppression in the direction of the gradient. As a focus score, use the ratio of suppressed edges at two different resolutions.