Under iOS I am trying to create an image by drawing into a context created via CGBitmapContextCreate. The following is the code I am using (similar to some examples I found), but when I call it, nothing is painted. Note that 'context' in this example is obtained via a call to UIGraphicsGetCurrentContext, and I can successfully paint other things into that context. Is there something I am missing?
Thanks
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bmContext = CGBitmapContextCreate(NULL, 200, 200, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGFloat color[4] = { 1, 0, 0, 1 };  // red, fully opaque
CGRect rect = CGRectMake(0, 0, 200, 200);
CGContextSetFillColor(bmContext, color);
CGContextFillRect(bmContext, rect);
CGImageRef image = CGBitmapContextCreateImage(bmContext);
CGContextDrawImage(context, CGRectMake(100, 100, 200, 200), image);
CGImageRelease(image);
CGContextRelease(bmContext);
CGColorSpaceRelease(colorSpace);
After much experimentation I have found the problem. It appears that I need to call:
CGContextSetFillColorSpace(bmContext, colorSpace);
Otherwise the fill color space defaults to grayscale, which expects only two values when setting the fill color: an intensity and an alpha. This means that in my example above the alpha was always zero, so nothing was drawn. Setting the fill color space to RGB fixes the problem.
Thanks to all who took time to look at my problem.
Related
I'd like to draw a bitmap picture on the iOS platform from a set of RGB data, but I ran into some problems when following this official guide: https://developer.apple.com/library/archive/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html#//apple_ref/doc/uid/TP30001066-CH212-TPXREF101
The biggest problem for me is that the tutorial draws the bitmap picture on macOS, not on iOS.
So I added a subview to the view and intended to draw the bitmap picture on the subview.
UIImageView *imgview = [[UIImageView alloc] initWithFrame:CGRectMake(15, 355, 384, 288)];
[self drawBitmap:384 andPixelshigh:288];
[self.view addSubview:imgview];
The drawBitmap function is a self-written function used to draw the bitmap following the tutorial.
- (void)drawBitmap:(NSInteger)pixelsWide andPixelshigh:(NSInteger)pixelsHigh {
    CGRect myBoundingBox = CGRectMake(33, 366, 384, 288);
    CGContextRef myBitmapContext = [self MyCreateBitmapContext:pixelsWide andPixelsHigh:pixelsHigh];
    CGContextSetRGBFillColor(myBitmapContext, 1, 0, 0, 1);
    CGContextFillRect(myBitmapContext, CGRectMake(100, 100, 200, 100));
    CGContextSetRGBFillColor(myBitmapContext, 0, 0, 1, .5);
    CGContextFillRect(myBitmapContext, CGRectMake(100, 100, 100, 200));
    CGImageRef myImage = CGBitmapContextCreateImage(myBitmapContext);
    // Note: this draws the image back into the bitmap context itself,
    // not into anything that is displayed on screen.
    CGContextDrawImage(myBitmapContext, myBoundingBox, myImage);
    char *bitmapData = CGBitmapContextGetData(myBitmapContext);
    CGContextRelease(myBitmapContext);
    if (bitmapData)
        free(bitmapData);
    CGImageRelease(myImage);
}
and myBitmapContext is the context of the bitmap picture generated by the following function.
- (CGContextRef)MyCreateBitmapContext:(NSInteger)pixelsWide andPixelsHigh:(NSInteger)pixelsHigh {
    CGContextRef context = NULL;
    NSInteger bitmapBytesPerRow = pixelsWide * 4;            // RGBA8888, rows packed
    NSInteger bitmapByteCount = bitmapBytesPerRow * pixelsHigh;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    // The backing store is a byte buffer, so uint8_t* is the natural type.
    uint8_t *bitmapData = (uint8_t *)calloc(bitmapByteCount, sizeof(uint8_t));
    if (bitmapData == NULL) {
        CGColorSpaceRelease(colorSpace);   // don't leak the color space on failure
        return NULL;
    }
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        free(bitmapData);
        return NULL;
    }
    return context;
}
I don't think there's anything wrong with my code, but I just can't draw the bitmap figure I want.
How about UIImage(data: imageData) in Swift, or [UIImage imageWithData:imageData] in Objective-C?
I suddenly realized that I made a very stupid mistake. The function CGBitmapContextCreateImage() has a return value, a CGImageRef, which I was discarding. The solution is simply to wrap that CGImageRef in a UIImage (via [UIImage imageWithCGImage:]) and assign it to the image view's image property to show the picture.
I'm trying to fill a CGBitmapContext with solid red and get a CGImageRef from it, but the image comes out fully transparent.
Here's the code
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,
1,
1,
8,
0,
colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGFloat components[] = {1.0,0.0,0.0,1.0};
CGContextSetFillColor(context, components);
CGContextFillRect(context, CGRectMake(0, 0, 1, 1));
CGImageRef imgRef = CGBitmapContextCreateImage(context);
Please don't recommend UIBezierPath or other UIKit approaches; I need to use Core Graphics.
The problem is that you forgot to set a fill color space. You must do that in order for CGContextSetFillColor to work correctly (because that tells it how many components to expect and how to interpret them). This works:
CGFloat components[] = {1,0,0,1};
CGContextSetFillColorSpace(context, colorSpace); // <-- this was missing
CGContextSetFillColor(context, components);
// and now fill ...
The problem was solved using CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);. By the way, it would be interesting to know what I was doing wrong. Also, the Apple documentation for CGContextSetFillColor says: "Note that the preferred API to use is now CGContextSetFillColorWithColor."
I'm having an issue rendering my own sprites in Cocos2D. If I create a context, even with a supposedly premultiplied alpha, I get black artefacts. The following code should produce a completely white circle:
- (void)renderTest
{
CGRect rect = CGRectMake(0, 0.0, 100.0, 100.0);
unsigned char *data = calloc((int)rect.size.width * (int)rect.size.height, 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(data, (int)rect.size.width, (int)rect.size.height, 8, (int)rect.size.width*4, colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0);
CGContextAddArc(context, rect.size.width / 2, rect.size.height / 2, rect.size.width * 0.4, 0.0, 2 * M_PI, 0);
CGContextFillPath(context);
CCTexture2D *tex = [[CCTexture2D alloc] initWithData:data pixelFormat:kCCTexture2DPixelFormat_RGBA8888 pixelsWide:rect.size.width pixelsHigh:rect.size.height contentSize:rect.size];
self.texture = tex;
self.textureRect = rect;
free(data);
CGContextRelease(context);
}
When placed on a white background, this should be invisible. However, in reality I get this:
If I do exactly the same thing in UIKit using UIGraphicsGetCurrentContext() then it works flawlessly without the black hairline.
Can anyone please tell me how to get it to render the alpha channel without dodgy artefacts?
It looks like rendering is done with a glBlendFunc that is not suitable for premultiplied colors.
There are two (and a half) options to fix this:
Set the blend function to glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA). I have no idea about this Cocos2D thing, or whether you can influence the blend function or otherwise tell it to use premultiplied colors.
Don't use premultiplied colors in your bitmap context. I'm not sure if the resulting pixel format is supported on iOS, though.
If everything is white you could just set the RGB parts of the data to 0xFFFFFF and just leave the alpha in place before creating the texture.
Edit: Here's code that paints everything in your texture white:
uint32_t *ptr = (uint32_t *)data;
uint32_t *end = (uint32_t *)(data + CGBitmapContextGetBytesPerRow(context) * CGBitmapContextGetHeight(context));
while (ptr != end)
    *ptr++ |= 0xffffff; // or 0xffffff00, depending on endianness and the context's pixel format
Is it possible to apply an alpha value to only a region of the same image?
For example
The easiest solution would be to break the image apart and save the alpha as part of a PNG, then arrange the image views flush against each other.
Otherwise, I wrote this quick code in a regular view that does the same with an image (I'm relatively new to Core Graphics, so I'm sure there are better ways of doing this; also, in my example the images are side by side):
- (void)drawRect:(CGRect)rect {
// GET THE CONTEXT, THEN FLIP THE COORDS (my view is 189 points tall)
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform flip = CGAffineTransformMake(1, 0, 0, -1, 0, 189);
CGContextConcatCTM(context, flip);
// GET THE IMAGE REF
UIImage *targetImage = [UIImage imageNamed:@"test.jpg"];
CGImageRef imageRef = targetImage.CGImage;
// SET THE COORDS
CGRect imageCoords = CGRectMake(0, 0, 116, 189);
CGRect imageCoordsTwo = CGRectMake(116, 0, 117, 189);
// CUT UP THE IMAGE INTO TWO IMAGES
CGImageRef firstImage = CGImageCreateWithImageInRect(imageRef, imageCoords);
CGImageRef secondImage = CGImageCreateWithImageInRect(imageRef, imageCoordsTwo);
// DRAW FIRST IMAGE, SAVE THE STATE, THEN SET THE TRANSPARENCY AMOUNT
CGContextDrawImage(context, imageCoords, firstImage);
CGContextSaveGState(context);
CGContextSetAlpha(context, .4f);
// DRAW SECOND IMAGE, RESTORE THE STATE
CGContextDrawImage(context, imageCoordsTwo, secondImage);
CGContextRestoreGState(context);
// TIDY UP
CGImageRelease(firstImage);
CGImageRelease(secondImage);
}
I have taken a screenshot programmatically of a scroll view in Xcode. The problem I am facing is that I need the current date to appear in the screenshot. Please help me.
Before taking the screenshot, add a label with the current date, and remove it after taking the screenshot.
Or you could use the EXIF metadata to record the date (it's probably already in there). An embedded date in the image is harder to fake, though.
This is how you can add text to an image. Try this with the date as a string: once you have taken the screenshot, pass it to the method below and use the output image. Modify the method to suit your needs.
//Add text to UIImage
- (UIImage *)addText:(UIImage *)img text:(NSString *)text1 {
    int w = img.size.width;
    int h = img.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
    char *text = (char *)[text1 cStringUsingEncoding:NSASCIIStringEncoding]; // e.g. "05/05/09"
    // Note: CGContextSelectFont/CGContextShowTextAtPoint are deprecated in
    // later SDKs; Core Text or NSString drawing is the modern replacement.
    CGContextSelectFont(context, "Arial", 18, kCGEncodingMacRoman);
    CGContextSetTextDrawingMode(context, kCGTextFill);
    CGContextSetRGBFillColor(context, 1, 1, 1, 1);   // components are 0..1, not 0..255
    // Rotate the text by -45 degrees.
    CGContextSetTextMatrix(context, CGAffineTransformMakeRotation(-M_PI / 4));
    CGContextShowTextAtPoint(context, 4, 52, text, strlen(text));
    CGImageRef imageMasked = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *result = [UIImage imageWithCGImage:imageMasked];
    CGImageRelease(imageMasked);   // UIImage retains the CGImage; release our reference
    return result;
}