I have an image with a yellow vase in the foreground and transparent background:
I'm drawing it on a CGContext:
CGContextDrawImage(context, CGRectMake(0, 0, 100, 100), myImage.CGImage);
I can draw a shadow around it by using the following statement before CGContextDrawImage:
CGContextSetShadowWithColor(context, CGSizeMake(0,0), 5, [UIColor blueColor].CGColor);
But I want to put a stroke around the image, so that it looks like the following:
If I did this:
CGContextSetRGBStrokeColor(shadowContext, 0.0f, 0.0f, 1.0f, 1.0f);
CGContextSetLineWidth(shadowContext, 5);
CGContextStrokeRect(shadowContext, CGRectMake(0, 0, 100, 100));
It (obviously) draws a rectangular border around the whole image, like this:
Which is not what I need.
But what's the best way to draw the border as in the third image?
Please note that it's not possible to use UIImageView in this case, so using the properties of CALayer of UIImageView is not applicable.
One way to do this is to use the mathematical morphology operator of dilation to "grow" the alpha channel of the image outward, then use the resulting grayscale image as a mask to simulate a stroke. By filling the dilated mask, then drawing the main image on top, you get the effect of a stroke. I've created a demo showing this effect, available on Github here: https://github.com/warrenm/Morphology (all source is MIT licensed, should it prove useful to you).
And here's a screenshot of it in action:
Note that this is staggeringly slow (dilation requires iteration of a kernel over every pixel), so you should pick a stroke width and precompute the mask image for each of your source images in advance.
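For illustration, here is a rough sketch of just the compositing step, assuming the dilated mask has already been computed (dilatedMask below is a hypothetical CGImageRef, not something taken from the demo):
CGContextSaveGState(context);
// Clip to the dilated mask. Quartz interprets plain grayscale images and true image
// masks with opposite polarity here, so you may need to invert your mask.
CGContextClipToMask(context, CGRectMake(0, 0, 100, 100), dilatedMask);
CGContextSetFillColorWithColor(context, [UIColor blueColor].CGColor);
CGContextFillRect(context, CGRectMake(0, 0, 100, 100));
CGContextRestoreGState(context);
// Drawing the original image on top leaves only the dilated fringe visible, which reads as a stroke.
CGContextDrawImage(context, CGRectMake(0, 0, 100, 100), myImage.CGImage);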
I would try either setting the stroke color and line width before your call to CGContextDrawImage, or tweaking the shadow (opacity, blur, etc) so that it looks like a stroke around the image. Let me know if this works!
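If you go the shadow route, one rough sketch of that idea (an approximation, not a true stroke) is to stack several zero-offset colored shadows by drawing the image repeatedly, so the halo builds up toward full opacity:
CGContextSetShadowWithColor(context, CGSizeMake(0, 0), 3, [UIColor blueColor].CGColor);
// Each draw adds another copy of the soft blue halo; after several passes it approaches a solid outline.
for (int i = 0; i < 10; i++) {
    CGContextDrawImage(context, CGRectMake(0, 0, 100, 100), myImage.CGImage);
}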
Related
I want to make the viewport background color clear/transparent.
But
glClearColor(0,0,0,0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
does not work; everything is just black.
I am using OpenGL ES 2.0 on the iOS platform.
glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
glClear(GL_COLOR_BUFFER_BIT);
So how can I make the viewport background color transparent?
I think the GL view cannot be transparent.
Here is an example: you have a UIImageView that contains a background image.
You want to add a transparent GL view on top of the UIImageView, like a watermark.
Unfortunately, the GL view cannot be transparent, so the alternative is:
Make two GL textures: use one to draw your background image, and the other to draw your watermark.
A GL texture can be transparent.
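A minimal sketch of the blending state for that approach (drawBackgroundQuad and drawWatermarkQuad are placeholders for your own textured-quad drawing code, not real API):
// Draw the opaque background texture first
drawBackgroundQuad();
// Then enable alpha blending and draw the watermark texture on top of it
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // use GL_ONE as the source factor for premultiplied alpha
drawWatermarkQuad();  // the watermark texture's alpha channel controls its transparency
glDisable(GL_BLEND);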
I am trying to add a blurred brush stroke with OpenCV.
If I use cv.add or cv.addWeighted, I only get half of the red color (it looks pink),
but I want the red to cover the underlying picture, not blend with it.
If I use copyTo or clone, I can't get the blurred edge, so how should I do it?
The background of your brush image is black. If you were in Photoshop and set that brush layer's blend mode to Screen, the red gradient would show through and the black background would become transparent.
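For reference, the Screen blend of two normalized channel values a and b is result = 1 - (1 - a) * (1 - b), so a black background (0) contributes nothing while brighter brush pixels lighten the underlying image.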
Here are a couple of relevant posts:
How does photoshop blend two images together?
Trying to translate formula for blending mode
Alternatively, if you're using OpenCV 3.0, you can try using seamlessClone.
I found a trick to do it: simply copy the area to a new Mat, draw a circle on it and blur it, then use addWeighted to blend it back into the image with whatever alpha I need.
Mat roi = myImage.submat(y - 20, y + 20, x - 20, x + 20);  // 40x40 region around the brush point
Mat mix = roi.clone();                                     // work on a copy of that region
Imgproc.circle(mix, new Point(20, 20), 12, color, -1);     // draw the filled brush circle on the copy
Core.addWeighted(mix, alpha, roi, beta, 0.0, mix);         // blend the copy back with the original region
Imgproc.GaussianBlur(mix, mix, new Size(21, 21), 0);       // soften the edge of the stroke
mix.copyTo(roi);                                           // write the result back into the image
I have a green button with a white icon and title. I am trying to use the GPUImage library that I just learned about to change the green to blue, but keep the white as white. Here is my code:
UIImage *inputImage = [UIImage imageNamed:@"pause-button"];
GPUImageFalseColorFilter *colorSwapFilter = [[GPUImageFalseColorFilter alloc] init];
colorSwapFilter.firstColor = (GPUVector4){0.0f, 0.0f, 1.0f, 1.0f};
colorSwapFilter.secondColor = (GPUVector4){1.0f, 1.0f, 1.0f, 1.0f};
UIImage *filteredImage = [colorSwapFilter imageByFilteringImage:inputImage];
There are 2 problems:
The resulting image is a pale purple instead of blue, almost as though the blue is being overlaid at 50% opacity and the original green was mapped to white.
The button isn't a rectangle (more of an oval), and the transparent areas of the PNG (the corners) are now filled in with a semi-transparent blue (well, pale purple actually). Basically the button is now a rectangle with a darker oval in the middle.
Am I using this filter incorrectly? Do I have to do some pre-processing before using this filter?
The GPUImageFalseColorFilter is probably not what you want to use to alter the hue of something. It's a reimplementation of the filter of the same name in Core Image, which first reduces an image to its luminance and then replaces white with one color and black with another. Instead of grayscale, you get a variable mix between those two colors. I also don't think it respects alpha channels at present.
You might want something more like a GPUImageHueFilter (again, not sure if it respects alpha) or a GPUImageLookupFilter. You might need to build a custom filter to locate a color within a certain threshold (look at the chroma keying ones for that) and to replace that with your given color. Hue changes might do the job, though.
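For example, here's a rough, untested sketch of the hue-rotation route (the 120-degree value is a guess chosen to move green toward blue, and I'm not certain how the filter treats fully transparent pixels):
UIImage *inputImage = [UIImage imageNamed:@"pause-button"];
GPUImageHueFilter *hueFilter = [[GPUImageHueFilter alloc] init];
// Rotate hues so green (~120 degrees) lands near blue (~240 degrees); white and grey
// pixels have no saturation, so a hue rotation should leave them alone.
hueFilter.hue = 120.0;
UIImage *filteredImage = [hueFilter imageByFilteringImage:inputImage];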
I have a very simple UIView containing a few black and white UIImageViews. If I take a screenshot via the physical buttons on the device, the resulting image looks exactly like what I see (as expected) - if I examine the image at the pixel level it is only black and white.
However, if I use the following snippet of code to perform the same action programmatically, the resulting image has what appears to be anti-aliasing applied: all the black pixels are surrounded by faint grey halos. There is no grey in my original scene; it's pure black and white, and the dimensions of the "screenshot" image are the same as those of the image I am generating programmatically, but I cannot figure out where the grey haloing is coming from.
UIView *printView = fullView;
UIGraphicsBeginImageContextWithOptions(printView.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[printView.layer renderInContext:ctx];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
UIGraphicsEndImageContext();
I've tried adding the following before the call to renderInContext in an attempt to prevent the antialiasing, but it has no noticeable effect:
CGContextSetShouldAntialias(ctx, NO);
CGContextSetAllowsAntialiasing(ctx, NO);
CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
Here is a sample of the two different outputs - the left side is what my code produces and the right side is a normal iOS screenshot:
Since I am trying to send the output of my renderInContext to a monochrome printer, having grey pixels causes some ugly artifacting due to the printer's dithering algorithm.
So, how can I get renderInContext to produce the same pixel-level output as a real device screenshot, i.e. just black and white, exactly as in my original scene?
It turns out the problem was related to the resolution of the underlying UIImage being used by the UIImageView. The UIImage was a CGImage created using a data provider. The CGImage dimensions were specified in the same units as the parent UIImageView; however, I am using an iOS device with a Retina display.
Because the CGImage dimensions were specified in non-retina size, renderInContext was upscaling the CGImage and apparently this upscaling behaves differently than what is done by the actual screen rendering. (For some reason the real screen rendering upscaled without adding any grey pixels.)
To fix this, I created my CGImage with double the dimensions of the UIImageView, and now my call to renderInContext produces a much better black and white image. There are still a few grey pixels in some of the white areas, but it is a vast improvement over the original problem.
I finally figured this out by changing the call to UIGraphicsBeginImageContextWithOptions() to force a scale of 1.0 and noticing that the UIImageView's black pixels no longer had a grey halo. When I forced UIGraphicsBeginImageContextWithOptions() to a scale factor of 2.0 (which is what it was defaulting to because of the Retina display), the grey haloing appeared.
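For illustration, here is a sketch of the fix (colorSpace, provider, and imageView stand in for whatever your existing setup uses; this is not the exact code):
CGFloat scale = [UIScreen mainScreen].scale;
size_t pixelWidth = (size_t)(imageView.bounds.size.width * scale);
size_t pixelHeight = (size_t)(imageView.bounds.size.height * scale);
// Create the CGImage at pixel dimensions rather than point dimensions
CGImageRef cgImage = CGImageCreate(pixelWidth, pixelHeight, 8, 32, pixelWidth * 4,
                                   colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast,
                                   provider, NULL, false, kCGRenderingIntentDefault);
// Passing the scale keeps the image's point size matched to the view
imageView.image = [UIImage imageWithCGImage:cgImage scale:scale orientation:UIImageOrientationUp];
CGImageRelease(cgImage);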
I would try setting printView.layer.magnificationFilter and printView.layer.minificationFilter to kCAFilterNearest.
Are the images displayed in UIImageView instances? Is printView their superview?
I'm drawing a graph on a CALayer in its delegate method drawLayer:inContext:.
Now I want to support Retina Display, as the graph looks blurry on the latest devices.
For the parts that I draw directly on the graphics context passed by the CALayer, I could draw nicely in high resolution by setting the CALayer's contentsScale property as follows.
if ([myLayer respondsToSelector:@selector(setContentsScale:)]) {
myLayer.contentsScale = [[UIScreen mainScreen] scale];
}
But the parts where I use a CGLayer are still drawn blurry.
How do I draw on a CGLayer in high resolution to support Retina Display?
I want to use CGLayer to draw the same plot shapes of the graph repeatedly, as well as to cut off the graph lines exceeding the edge of the layer.
I get the CGLayer with CGLayerCreateWithContext, using the graphics context passed from the CALayer, and draw on its context using CG functions such as CGContextFillPath or CGContextAddLineToPoint.
I need to support both iOS 4.x and iOS 3.1.3, both Retina and legacy display.
Thanks,
Kura
This is how to draw a CGLayer correctly for all resolutions.
When first creating the layer, you need to calculate the correct bounds by multiplying the dimensions by the scale:
int width = 25;
int height = 25;
float scale = [self contentScaleFactor];
CGRect bounds = CGRectMake(0, 0, width * scale, height * scale);
CGLayerRef layer = CGLayerCreateWithContext(context, bounds.size, NULL);
CGContextRef layerContext = CGLayerGetContext(layer);
You then need to set the correct scale for your layer context:
CGContextScaleCTM(layerContext, scale, scale);
If the current device has a Retina display, all drawing made to the layer will now be rendered at twice the pixel resolution.
When you finally draw the contents of your layer, make sure you use CGContextDrawLayerInRect and supply the unscaled CGRect:
CGRect bounds = CGRectMake(0, 0, width, height);
CGContextDrawLayerInRect(context, bounds, layer);
That's it!
I decided not to use CGLayer and to draw directly on the graphics context of the CALayer, and now it's drawn nicely in high resolution on the Retina display.
I found a similar question here, and concluded that there is no point in using CGLayer in my case.
I used CGLayer because of Apple's sample program "Using Multiple CGLayer Objects to Draw a Flag" in the Quartz 2D Programming Guide. In that example, one CGLayer is created for a star and reused to draw 50 stars. I thought this was done for performance reasons, but I didn't see any performance difference.
For the purpose of cutting off the graph lines exceeding the edge of the layer, I decided to use multiple CALayers.