I just generated a gradient with transparency programmatically by adding a solid color and a gradient to an image mask. I then applied the resulting image to my UIView.layer.contents. The visual is fine, but when I scroll content under the transparency, the app gets choppy. Is there a way to speed this up?
My initial thought was caching the resulting gradient. Another thought was to create a gradient that is only one pixel wide and stretch it to cover the desired area. Will either of these approaches help performance?
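For reference, here is roughly what I mean by the one-pixel-wide approach (just a sketch, not my actual code): render the gradient once into a 1 pt wide strip, cache the UIImage, and let Core Animation stretch it across the layer.

- (UIImage *)gradientStripImage
{
    // Render once into a 1 x 256 pt strip; the layer stretches it on the GPU.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(1.0, 256.0), NO, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGFloat components[8] = { 0.0, 0.0, 0.0, 1.0,   // opaque black at the top
                              0.0, 0.0, 0.0, 0.0 }; // fully transparent at the bottom
    CGGradientRef gradient = CGGradientCreateWithColorComponents(space, components, NULL, 2);
    CGContextDrawLinearGradient(ctx, gradient, CGPointZero, CGPointMake(0.0, 256.0), 0);

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    CGGradientRelease(gradient);
    CGColorSpaceRelease(space);
    UIGraphicsEndImageContext();
    return image;
}

// usage:
// myView.layer.contents = (id)[self gradientStripImage].CGImage;
// myView.layer.contentsGravity = kCAGravityResize;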
Joe
I recall reading (though I don't remember where) that Core Graphics gradients can have a noticeable effect on performance. If you can, using a PNG for your gradient instead should resolve the issue you are seeing.
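For example (the asset name here is just a placeholder), you can hand the PNG straight to the layer:

myView.layer.contents = (id)[UIImage imageNamed:@"gradient.png"].CGImage;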
I have been stuck for a week or so trying to remove the background from a borehole log image (I'm new to image processing). I eventually want to develop code that can automatically detect the horizontal sinusoidal features in the image (attached); I think a Hough transform could work for that. However, all of the algorithms I have tried (Hough transform, edge detection, thresholding) fail because of the background of the image, which has this 'wood grain' appearance. I also tried creating a mask from the image gradient, but because the color values of the features I want (the horizontal sinusoidal shapes) are so similar to the background I want to remove, I am having a difficult time. The ultimate goal is to take two images captured at different times (before and after a scientific experiment) and subtract them to see where the sinusoidal patterns differ. If I can get rid of this background, that should be easier.
So far I have improved the image by taking the FFT and applying a high-pass filter. This at least homogenizes the image and leaves me with the attached result. However, I am not having much luck removing this vertical 'wood grain'. Does anyone have a thought on how it could be done? This is driving me a little crazy.
Thank you so much!
I am using OpenCV to blend a set of pre-warped images. As input I have some 4-channel images (*.png or *.tif) from which I can extract a BGR image and an alpha mask, with the region belonging to the image in white and the background in black. Both the image and the mask are the inputs of the Blender module cv::detail::Blender::blend.
When I use feather (alpha) blending the result is OK; however, I would like to avoid ghosting effects. When I use multi-band blending, some artifacts appear on the edges of the images:
The problem is similar to the one raised here, and solved here. The thing is, if the solution is creating a binary mask (which I already extract from the alpha channel), it does not work for me. If I add padding to the overlap between both images, it takes pixels from the background and messes up the result even more.
My guess is that it has to do with the functions pyrUp and pyrDown, because the blurring used to create the Gaussian and Laplacian pyramids is perhaps applied to the whole image rather than only to the positive alpha region. In any case, I don't know how to fix the problem using these functions, and I cannot find another efficient solution.
When I use another implementation of multiresolution blending, it works perfectly; however, I am very interested in integrating the multi-band implementation of OpenCV. Any idea how to fix this issue?
The issue has already been reported and solved here:
http://answers.opencv.org/question/89028/blending-artifacts-in-opencv-image-stitching/
I am researching how to merge many images into one image on the iPhone, but I am having a problem with it. I want to detect the transparent areas, which have a white background. I think it's possible to get a CGRect around each such area; afterwards I will drag my image into a transparent area, but I do not know how to identify them. So if I can detect all the transparent areas in this image, I will end up with an array of CGRects.
You can see my image:
Please help me, thank you very much!!
In terms of detecting transparent pixels, you can access the pixel buffer as described in Technical Q&A QA1509 and then iterate through the pixel buffer looking for pixels with an alpha channel value of less than 1.0.
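Here is a rough sketch of that scan (the helper name and bitmap format are my own choices, not taken from QA1509 verbatim):

- (void)scanTransparentPixelsInImage:(UIImage *)image
{
    CGImageRef imageRef = image.CGImage;
    size_t width  = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);

    // Draw the image into a bitmap whose layout we control (8-bit RGBA).
    uint8_t *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4, space,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), imageRef);

    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            uint8_t alpha = pixels[(y * width + x) * 4 + 3];   // alpha is the 4th byte
            if (alpha < 255) {
                // (x, y) is at least partially transparent
            }
        }
    }

    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    free(pixels);
}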
But extrapolating from that to programmatically building an array of CGRect values corresponding to contiguous transparent pixels is non-trivial. If you make simplifying assumptions about the nature of the transparent regions (e.g. that they're circular), it's quite a tractable little problem, though your thin rounded rectangle that intersects many of the circles complicates things.
If your image with transparent areas is predefined, though, I'd probably just define them manually rather than determining them programmatically.
I am working with images that are partially blurred in some sections. This is noise that should be taken care of, but here is the problem:

Are there methods to detect whether an image is blurred, or partially blurred in some sections? For instance, take a look at the sample image below:

You can see in the image that there are 3 sections that are visually blurred: bottom-left, near the center, and top-right. Now, is it possible to detect programmatically or mathematically whether any portion of an image is blurred?
As lain_b pointed out, with an image like this you can use an edge detector and look for an absence of edges. I tried it on your image and it seems to work pretty well. First I used the kernel
[ 0,  1, 0,
  1, -4, 1,
  0,  1, 0 ]
which is a simple edge detector (a discrete Laplacian). Its result was:
Then I used a threshold to get
Then I closed the image and opened it to get
This is obviously not a finished version; the top-right portion was not picked up well at all. Perhaps you could improve it by blurring before thresholding, or by choosing better values for the threshold and for the radii of the opening and closing operations. A lot of the decisions you will need to make depend on the constraints you can put on your problem. I think this technique will work for you, though.
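If it helps, here is a plain C sketch of just the first two steps (the Laplacian kernel, then the threshold) on an 8-bit grayscale buffer. It is illustrative only, not the exact code I ran, and the opening/closing would still be applied to the resulting mask afterwards.

#include <stdlib.h>

void edgeEnergyMask(const unsigned char *gray, unsigned char *mask,
                    int width, int height, int threshold)
{
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            // 3x3 Laplacian: sum of the 4 neighbours minus 4 * centre
            int response = gray[(y - 1) * width + x] + gray[(y + 1) * width + x]
                         + gray[y * width + (x - 1)] + gray[y * width + (x + 1)]
                         - 4 * gray[y * width + x];
            // Strong response means edges/detail; weak response means flat or blurred
            mask[y * width + x] = (abs(response) > threshold) ? 255 : 0;
        }
    }
}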
Edit
If you are looking for blur detection in arbitrary images, you are going to have to investigate a wide variety of techniques. Things are much easier if you can make assumptions about your set of input images. Without any assumptions, I don't know what will work best for you. Here is some reading on the topic:
Image Blur Metrics
Research paper on using the Haar wavelet transform
Similar SO question (also look at the question that question links to)
Blur detection is a very active research field; there is no single answer. You will just need to try all the methods you can find (these were found by googling "detect blur in image").
This paper may be of some help. It does blur estimation (mostly for out-of-focus blur, though I think it covers other kinds of blur as well) in order to recreate a similarly blurred object in the image.
I think you should be able to use it to detect the blurred areas and how blurred they are. It should be especially relevant to your problem, as it is designed to work with real-world images.
I have an application which requires that a solid black outline be drawn around a partly-transparent UIImage. Not around the frame of the image, but rather around all the opaque parts of the image itself. I.e., think of a transparent PNG with an opaque white "X" on it -- I need to outline the "X" in black.
To make matters trickier, AFTER the outline is drawn, the opacity of the original image will be adjusted, but the outline must remain opaque -- so the outline I generate has to include only the outline, and not the original image.
My current technique is this:
Create a new UIView which has the dimensions of the original image.
Duplicate the UIImage 4 times and add the duplicates as subviews of the UIView, with each UIImage offset diagonally from the original location by a couple pixels.
Turn that UIView into an image (via the typical UIGraphicsGetImageFromCurrentImageContext method).
Using CGImageMaskCreate and CGImageCreateWithMask, subtract the original image from this new image, so only the outline remains.
It works. Even with only the 4 offset images, the result looks quite good. However, it's horribly inefficient, and causes a good solid 4-second delay on an iPhone 4.
So what I need is a nice, speedy, efficient way to achieve the same thing, which is fully supported by iOS 4.0.
Any great ideas? :)
I would like to point out that whilst a few people have suggested edge detection, this is not an appropriate solution. Edge detection is for finding edges within image data where there is no obvious exact edge representation in the data.
For you, edges are better defined; you are looking for a well-defined outline. An edge in your case is simply any fully transparent pixel that sits next to a pixel which is not fully transparent. Iterate through every pixel in the image and set it to black if it fulfils that condition.
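Something along these lines (plain C over an RGBA buffer; illustrative and untested, and the names are mine):

BOOL isOutlinePixel(const unsigned char *rgba, int width, int height, int x, int y)
{
    // Only fully transparent pixels can be part of the outline.
    if (rgba[(y * width + x) * 4 + 3] != 0) return NO;
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            // A neighbour with any alpha means we are on the boundary.
            if (rgba[(ny * width + nx) * 4 + 3] != 0) return YES;
        }
    }
    return NO;
}

Paint every pixel for which this returns YES as opaque black into a separate, initially clear buffer, and that buffer is your outline-only image.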
Alternatively, for an anti-aliased result, get a boolean representation of the image and pass a small anti-aliased circular kernel over it. I know you said custom filters are not supported, but if you have direct access to the image data this wouldn't be too difficult to implement by hand...
Cheers, hope this helps.
For the sake of contributing new ideas:
A variant on your current implementation would use CALayer's support for shadows, which it calculates from the actual pixel contents of the layer rather than merely its bounding rectangle, and for which it uses the GPU. You can try amping up the shadowOpacity to some massive value to try to eliminate the feathering; failing that, you could render to a suitable CGContext, extract only the alpha channel, and manually process it to apply a threshold test on the alpha values, pushing each one either to fully opaque or fully transparent.
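Roughly something like this (untested, and the values are guesses you would need to tune):

CALayer *layer = imageView.layer;                  // the layer holding the opaque "X"
layer.shadowColor   = [UIColor blackColor].CGColor;
layer.shadowOffset  = CGSizeZero;
layer.shadowRadius  = 1.0;    // keep the spread tight; some feathering will remain
layer.shadowOpacity = 1.0;    // push this up / stack passes to try to harden the edge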
You can achieve that final processing step on the GPU even under ES 1 in a variety of ways. You'd use the alpha test to apply the actual threshold. You could then, say, prime the depth buffer to 1.0, disable colour output and the depth test, draw the version with the shadow at a depth of 0.5, draw the version without the shadow at a depth of 1.0, then enable colour output and depth testing and draw a solid black full-screen quad at a depth of 0.75. So it's like using the depth buffer to emulate a stencil buffer (since the GPUs Apple used before the ES 2 capable devices didn't support one).
That, of course, assumes that CALayer shadows appear outside of the compositor, which I haven't checked.
Alternatively, if you're willing to limit your support to ES 2 devices (everything from the 3GS onwards), you could upload your image as a texture and do the entire process on the GPU. But that would technically leave some iOS 4 capable devices unsupported, so I assume it isn't an option.
You just need to implement an edge detection algorithm, but instead of using brightness or color to determine where the edges are, use opacity. There are a number of different ways to go about that. For example, you can look at each pixel and its neighbors to identify areas where the opacity crosses whatever threshold you've set. Whenever you need to look at every pixel of an image on Mac OS X or iOS, think Core Image. There's a helpful series of blog posts, starting with this one, that looks at implementing a custom Core Image filter -- I'd start there to build an edge detection filter.
Instead of using a UIView, I suggest just pushing an image context, like the following:
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0.0);
// draw your image 4 times and mask it however you like; you can just copy & paste
// your current drawing code here.
....
UIImage *outlinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This will be much faster than your UIView approach.