I am trying to fill cells in a calendar with green boxes, all with different heights. I create an UIImage object by doing
- (UIImage *)imageWithColor:(UIColor *)color width:(float)w height:(float)h {
CGRect rect = CGRectMake(0, 0, w, h);
// Create a w-by-h point context (scale 0 uses the device's scale)
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);
[color setFill];
UIRectFill(rect); // Fill it with your color
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
and set cells' backgrounds with
for (int i = 0; i < numDatesSelected; i++) {
UIButton *myButton = (UIButton *)[self.smallCalendarView viewWithTag:[selectedDates[i] timeIntervalSince1970]];
float btnWidth = myButton.frame.size.width;
float btnHeight = (myButton.frame.size.height * 0.1 * i);
UIImage *img = [self imageWithColor:[UIColor greenColor] width:btnWidth height:btnHeight];
[myButton setBackgroundImage:img forState:(UIControlStateNormal)];
}
However, as you can see in the screenshot, the heights barely differ from one another, and I don't understand why. Even if I set both width and height to 1, the rectangle still fills the entire cell, whereas I would expect it to fill only a tiny (1x1) portion of the cell, which is 53x40. Does anyone have an idea?
Make sure the contentVerticalAlignment and contentHorizontalAlignment properties of your UIButton are NOT set to UIControlContentVerticalAlignmentFill and UIControlContentHorizontalAlignmentFill, respectively.
Also, check out this answer: https://stackoverflow.com/a/13093769/4171538. It might explain your problem.
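Note that a background image set with setBackgroundImage:forState: is always stretched to fill the button's bounds, which would explain why even a 1x1 image fills the whole cell. A possible workaround (a sketch only, reusing the myButton and img variables from the loop above) is to set the foreground image instead, which keeps its intrinsic size, and pin it to the bottom of the cell:

```objc
// Sketch: the foreground image is NOT stretched to the button's bounds,
// so a partial-height green bar keeps its rendered size.
[myButton setImage:img forState:UIControlStateNormal];
myButton.contentVerticalAlignment = UIControlContentVerticalAlignmentBottom;
```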
Related
I am trying to place a UIColor at a particular (x, y) position of a UIImage, but I am not able to get it working.
My code looks like this. The method below returns the UIColor at a particular (x, y) position of a UIImage:
- (UIColor *)isWallPixel:(UIImage *)image xCoordinate:(int)x yCoordinate:(int)y {
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const UInt8* data = CFDataGetBytePtr(pixelData);
int pixelInfo = ((image.size.width * y) + x ) * 4; // The image is png
UInt8 red = data[pixelInfo]; // If you need this info, enable it
UInt8 green = data[(pixelInfo + 1)]; // If you need this info, enable it
UInt8 blue = data[pixelInfo + 2]; // If you need this info, enable it
UInt8 alpha = data[pixelInfo + 3]; // I need only this info for my maze game
CFRelease(pixelData);
UIColor* color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f]; // The pixel color info
return color;
}
//Original Image, I get image pixels from this image.
UIImage* image = [UIImage imageNamed:@"plus.png"];
// The converted image will be built here
UIImage *imagedata = [[UIImage alloc]init];
//size of the original image
int Width = image.size.width;
int height = image.size.height;
//need to get this size of another blank image
CGRect rect = CGRectMake(0.0f, 0.0f, Width, height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
for (int i = 0; i <Width; i++) {
for (int j = 0; j < height; j++) {
//Here I got the Image pixel color from below highlighted method
UIColor *colordatra = [self isWallPixel:image xCoordinate:i yCoordinate:j];
CGContextSetFillColorWithColor(context, [colordatra CGColor]);
rect.origin.x = i;
rect.origin.y = j;
CGContextDrawImage(context, rect, imagedata.CGImage);
imagedata = UIGraphicsGetImageFromCurrentImageContext();
}
}
UIGraphicsEndImageContext();
Please note that I want functionality that gets the UIColor at a particular position and places that UIColor on another blank image at the same position.
With the above code I am not able to get this; please let me know where I have made a mistake.
Thanks in advance.
You can do this with a UIView, since you already have x, y, width, and height:
UIView *view = [[UIView alloc] initWithFrame:CGRectMake(x, y, width, height)];
view.backgroundColor = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
[self.yourImageView addSubview:view];
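For the original goal of copying pixels onto a blank image, a minimal sketch (reusing the image variable and the isWallPixel:xCoordinate:yCoordinate: method from the question) fills one 1x1 rect per pixel in a single context and grabs the result only once, after both loops finish:

```objc
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
CGContextRef context = UIGraphicsGetCurrentContext();
for (int x = 0; x < (int)image.size.width; x++) {
    for (int y = 0; y < (int)image.size.height; y++) {
        // Read the source pixel color and fill the same position in the copy.
        UIColor *color = [self isWallPixel:image xCoordinate:x yCoordinate:y];
        CGContextSetFillColorWithColor(context, color.CGColor);
        CGContextFillRect(context, CGRectMake(x, y, 1, 1));
    }
}
UIImage *copiedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

Calling UIGraphicsGetImageFromCurrentImageContext inside the loop, as the original code does, re-snapshots the entire context once per pixel, which is one reason the original approach breaks down.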
I wasn't sure how to name this, sorry for the title.
I have an image that I want to fill more or less depending on a variable. For this, I created an image made of black and white (below). The goal is to use it as a mask and fill it the way I want, based on this documentation.
The issue: the image is drawn properly BUT its dimensions are way too high. I am testing on an iPhone 6+ with an @3x image; the image asset size is correct, but the image returned by my function is far too big. When I put constraints on my UIImageView, I only see part of the returned filled image; when I don't, the image is way too big. See below for a screenshot (iPhone 6+).
I subclassed a UIView with the following code:
- (void)drawRect:(CGRect)rect
{
CGFloat width = self.frame.size.width;
NSLog(@"%0.f", width);
CGFloat height = self.frame.size.height;
NSLog(@"%0.f", height);
CGFloat fillHeight = 0.9 * height;
NSLog(#"%0.f", fillHeight);
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect fillRect = CGRectMake(0, height - fillHeight, width, fillHeight);
CGContextAddRect(context, fillRect);
CGContextSetFillColorWithColor(context, [UIColor yellowColor].CGColor);
CGContextFillRect(context, fillRect);
CGRect emptyRect = CGRectMake(0, 0, width, height - fillHeight);
CGContextAddRect(context, emptyRect);
CGContextSetFillColorWithColor(context, [UIColor blueColor].CGColor);
CGContextFillRect(context, emptyRect);
}
- (UIImage *)renderAsImage
{
// setup context
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0); // use same scale factor as device
CGContextRef c = UIGraphicsGetCurrentContext();
// render view
[self.layer renderInContext:c];
// get reslting image
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
NSLog(@"result size = %f width, %f height", result.size.width, result.size.height); // %f, not %d, for CGFloat
UIGraphicsEndImageContext();
return result;
}
And in my ViewController:
- (void) setUpMaskedImage
{
// Image View set in Storyboard that will contain the final image. Bounds are set in Storyboard (constraints)
UIImageView* imageView = self.containingImageView;
// Custom View (see methods above) that will draw a rectangle filled with color
UIView* view = [[CustomView alloc] initWithFrame: imageView.frame];
// Mask Image used along with the custom view to create final view (see image above)
UIImage* maskImage = [UIImage imageNamed: @"maskImage"];
[view setNeedsDisplay];
UIImage* fillImage = [view renderAsImage];
CGImageRef maskRef = maskImage.CGImage;
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
CGImageGetHeight(maskRef),
CGImageGetBitsPerComponent(maskRef),
CGImageGetBitsPerPixel(maskRef),
CGImageGetBytesPerRow(maskRef),
CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask([fillImage CGImage], mask);
UIImage* imageToDisplay = [UIImage imageWithCGImage:masked];
[imageView setImage:imageToDisplay];
}
I just don't get it. I used similar code elsewhere in my app and it works just fine. I will do a sample project soon if necessary.
Thank you!
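One thing worth checking, as an assumption since the full project is not shown: +[UIImage imageWithCGImage:] treats the CGImage as scale 1.0, so a bitmap rendered at @3x comes back three times too large in points. A sketch replacing the last two lines of setUpMaskedImage above, passing the screen scale through and releasing the Core Graphics objects:

```objc
// Preserve the render scale so the @3x bitmap isn't treated as @1x points.
UIImage *imageToDisplay = [UIImage imageWithCGImage:masked
                                              scale:[UIScreen mainScreen].scale
                                        orientation:UIImageOrientationUp];
[imageView setImage:imageToDisplay];
// CGImageMaskCreate and CGImageCreateWithMask return owned references,
// which ARC does not manage.
CGImageRelease(mask);
CGImageRelease(masked);
```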
I'm trying to apply GPUImageMotionBlurFilter to a snapshot of UIView. I want to blur the top and bottom edges of the view too, so I'm leaving a transparent space (insetY) above and below the rect passed to drawViewHierarchyInRect:afterScreenUpdates:. GPUImage seems to ignore the transparency and acts like it was filled with a color a little lighter than -lightGrayColor.
Here's my code (it's in a method in UIView subclass):
CGFloat insetX = 0;
CGFloat insetY = 10;
CGRect snapshotFrame = CGRectInset(self.bounds, -insetX, -insetY);
UIGraphicsBeginImageContextWithOptions(snapshotFrame.size, NO, 0.0f);
[self drawViewHierarchyInRect:CGRectMake(insetX, insetY, CGRectGetWidth(self.frame), CGRectGetHeight(self.frame)) afterScreenUpdates:YES];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
GPUImageMotionBlurFilter *motionBlurFilter = [[GPUImageMotionBlurFilter alloc] init];
motionBlurFilter.blurAngle = 90;
motionBlurFilter.blurSize = 20.0f;
UIImage *blurredImage = [motionBlurFilter imageByFilteringImage:snapshotImage];
And here's what I'm getting:
snapshotImage:
blurredImage:
Is what I'm trying to do possible?
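No answer is reproduced here, but one possible workaround to sketch (an assumption, not verified against GPUImage): flatten the snapshot onto an explicit, opaque background color before filtering, so the filter never sees the transparent margin at all. This reuses snapshotFrame, snapshotImage, and motionBlurFilter from the code above:

```objc
// Flatten the transparent snapshot onto an opaque background first.
UIGraphicsBeginImageContextWithOptions(snapshotFrame.size, YES, 0.0f);
[[UIColor whiteColor] setFill]; // pick whatever color the blur should fade into
UIRectFill(CGRectMake(0, 0, snapshotFrame.size.width, snapshotFrame.size.height));
[snapshotImage drawAtPoint:CGPointZero];
UIImage *flattenedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *blurredImage = [motionBlurFilter imageByFilteringImage:flattenedImage];
```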
I want to change the UISlider thumb image during the slide, basically to set the thumb image according to the value.
The idea was to rotate the image according to the value and set it as the thumb.
So I tried to set the thumb image by overriding
-(BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
[self setThumbImage:[self imageRotatedByDegrees:50] forState:UIControlStateHighlighted];
and I also tried the same thing when I added an action to my slider.
Unfortunately, in both implementations the image simply disappears.
Do you think it is possible to achieve this the way I am doing it?
If no, please explain and suggest an alternative (hopefully not one that customizes and replaces the whole slider).
If yes, I would really appreciate a code sample.
The closest answer I found is here.
Thanks a lot.
I found a mistake in my code: apparently I was releasing one of the images before calling CGContextDrawImage.
(I still need to improve my GUI's appearance; for instance, to get the track image I planned, I made the original one transparent and added the one I need as a subview.)
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
self.thumbImage = [self imageWithImage:[UIImage imageNamed:@"IconForSlider.png"] scaledToSize:CGSizeMake(frame.size.height, frame.size.height)];
size = self.thumbImage.size.height;
[self setThumbImage:[self imageRotatedByDegrees:0] forState:UIControlStateNormal];
[self addTarget:self action:@selector(changeValue) forControlEvents:UIControlEventValueChanged];
}
return self;
}
-(void)changeValue{
NSLog(@"ChangeValue");
// Note: 10/9 is integer division (== 1); 10.0f/9.0f gives the intended scale.
[self setThumbImage:[self imageRotatedByDegrees:(self.value * 100 * (10.0f/9.0f))] forState:UIControlStateHighlighted];
[self setThumbImage:[self imageRotatedByDegrees:(self.value * 100 * (10.0f/9.0f))] forState:UIControlStateNormal];
}
#pragma mark ImageResize
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
#pragma mark Rotate
- (UIImage *)imageRotatedByDegrees:(float)degrees
{
NSLog(@"ImageRotatedByDegrees");
static float Prc = 0.6;
degrees = (degrees > 90)? 90: degrees;
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0,0,size,size)];
CGSize rotatedSize = rotatedViewBox.frame.size;
// Create the bitmap context
UIGraphicsBeginImageContextWithOptions(rotatedSize , NO, 0.0);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
// Note: the second setFill below overrides the first, so only one fill color takes effect.
UIColor *color = [self.colorsArray objectAtIndex:lround(self.value*100)];
[color setFill];
// 116.0f/100.0f avoids integer division (116/100 == 1)
[[UIColor colorWithRed:((139 + (116.0f/100.0f)*self.value*100)/255.0) green:141/255.0f blue:149/255.0f alpha:1.0f] setFill];
// Move the origin to the middle of the image so we will rotate and scale around the center.
CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);
// Rotate the image context
CGContextRotateCTM(bitmap,DEGREES_TO_RADIANS(degrees));
CGContextClipToMask(bitmap, CGRectMake(-1*size/ 2, -1*size / 2, size, size), [self.thumbImage CGImage]);
CGContextAddRect(bitmap, CGRectMake(-1*size/ 2, -1*size / 2, size, size));
CGContextDrawPath(bitmap,kCGPathFill);
//Prc and 1.07 are for better view
CGContextDrawImage(bitmap, CGRectMake(-1.07*size/2*Prc, -1*size/2*Prc, size*Prc, size*Prc), [[UIImage imageNamed:@"ActiveIcon.png"] CGImage]);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
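The method above uses a DEGREES_TO_RADIANS macro that is not shown in the snippet; a common definition (an assumption about the author's project) is:

```objc
// Hypothetical helper assumed by imageRotatedByDegrees: above.
#define DEGREES_TO_RADIANS(angle) ((angle) * M_PI / 180.0)
```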
With some help from this link, I created an array of colours for my slider: Get Pixel Color of UIImage
- (UIColor *)GetColorAtValue:(float)value{
long pixelInfo = (long)floorf(_sliderImage.size.width * _sliderImage.size.height/2 + _sliderImage.size.width * value) * 4 ;
UInt8 red = data[pixelInfo]; // If you need this info, enable it
UInt8 green = data[(pixelInfo + 1)]; // If you need this info, enable it
UInt8 blue = data[pixelInfo + 2]; // If you need this info, enable it
//UInt8 alpha = data[pixelInfo + 3]; // I need only this info for my maze game
//CFRelease(pixelData);
UIColor* color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:1]; // The pixel color info
return color;
}
I needed to flip and image and then add it to a button. I used the following code
UIButton *playButton = [UIButton buttonWithType:UIButtonTypeRoundedRect];
playButton.frame = CGRectMake(xing, ying, dx,dy);
[playButton setTitle:text forState:UIControlStateNormal];
[playButton addTarget:self action:function forControlEvents:UIControlEventTouchUpInside];
UIImage *buttonImageNormal = [UIImage imageNamed:@"Begin-Set-Button.png"];
UIImage * other = [UIImage imageWithCGImage:buttonImageNormal.CGImage
scale:1.0 orientation: UIImageOrientationUpMirrored];
UIImage * awesome = [other stretchableImageWithLeftCapWidth:12 topCapHeight:0];
[playButton setBackgroundImage:awesome forState:UIControlStateNormal];
It displays the image correctly, yet when I click on it, the button displays the unflipped image.
In my attempt to fix this I added the following line of code
[playButton setBackgroundImage:awesome forState:UIControlStateHighlighted];
Yet when I click on the button, it does not darken the color like it does for buttons that I create with the unflipped image.
How do I code it so that when I use a flipped image, it shows the flipped image darkened when it is pressed? I know how to manually darken the image, but I was wondering if there was a way to automatically do it with a simple function call?
Instead of flipping the image, have you tried flipping the image view?
UIImage *img = [UIImage imageNamed:@"path/to-image.png"];
UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
button.imageView.transform = CGAffineTransformMakeScale(-1.0, 1.0); // mirror the image view horizontally
[button setImage:img forState:UIControlStateNormal];
This is what I did to keep the selection state highlighting and not have to deal with the image flipping.
I had the same problem, and a search led me to this post. I suspected that perhaps the image was not "solidified", so the mirror operation was not permanent.
Checking out the docs for "imageWithCGImage:" shows the following:
Discussion
This method does not cache the image object. You can use the methods of the Core Graphics framework to create a Quartz image reference.
And so, the mirroring of the image does not hold across control states unless a new image is created in a new context. I have also been working on a method that draws an image on an arbitrarily sized canvas to create buttons with the same background. You can use this method to create an image that preserves Core Graphics operations such as mirroring:
+ (UIImage *)imageWithCanvasSize:(CGSize)canvasSize withImage:(UIImage *)anImage
{
if (anImage.size.width > canvasSize.width ||
anImage.size.height > canvasSize.height)
{
// scale image first
anImage = [anImage scaleWithAspectToSize:canvasSize];
}
CGRect targetRect = CGRectZero;
CGFloat xOrigin = (canvasSize.width - anImage.size.width) / 2;
CGFloat yOrigin = (canvasSize.height - anImage.size.height) / 2;
targetRect.origin = CGPointMake(xOrigin, yOrigin);
targetRect.size = anImage.size;
UIGraphicsBeginImageContextWithOptions(canvasSize, NO, anImage.scale);
[anImage drawInRect:targetRect];
UIImage *canvasedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return canvasedImage;
}
- (UIImage*)scaleWithAspectToSize:(CGSize)newSize
{
CGSize scaledSize = newSize;
float scaleFactor = 1.0;
if (self.size.width > self.size.height)
{
scaleFactor = self.size.width / self.size.height;
scaledSize.width = newSize.width;
scaledSize.height = newSize.height / scaleFactor;
}
else
{
scaleFactor = self.size.height / self.size.width;
scaledSize.height = newSize.height;
scaledSize.width = newSize.width / scaleFactor;
}
UIGraphicsBeginImageContextWithOptions(scaledSize, NO, 0.0);
CGRect scaledImageRect = CGRectMake(0.0, 0.0, scaledSize.width, scaledSize.height);
[self drawInRect:scaledImageRect];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return scaledImage;
}
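As a usage sketch for the original flipped-button problem (playButton and buttonImageNormal are the names from the question; the two methods above are assumed to live in a UIImage category, the first as a class method):

```objc
UIImage *mirrored = [UIImage imageWithCGImage:buttonImageNormal.CGImage
                                        scale:buttonImageNormal.scale
                                  orientation:UIImageOrientationUpMirrored];
// Re-rendering bakes the mirrored orientation into a new bitmap,
// so the highlighted state darkens the flipped image as expected.
UIImage *solid = [UIImage imageWithCanvasSize:mirrored.size withImage:mirrored];
[playButton setBackgroundImage:solid forState:UIControlStateNormal];
```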