Creating an image view by pixels - iOS

I want to create an iOS app that contains a UIImageView and a button, so that when the user taps the button the image view's contents are generated by two nested while loops that set the pixels. I can do this in C with a bitmap quite easily, but I'm not sure how to approach it on iOS. Could I save a bitmap to NSUserDefaults and load it from there?
Not sure, thanks for the help.

UIImageView works with UIImage, which is UIKit's wrapper around CGImage, so either way you need a CGImage or a UIImage. What can you do? You can draw the image dynamically using Core Graphics and/or UIKit's drawing methods (take a look at the Quartz 2D Programming Guide). Or, if you already have the bytes of your image in a format UIImage understands (PNG, JPEG, etc.), you can create a UIImage instance directly:
NSData *imgData = [[NSData alloc] initWithBytes:(const void*)myByteArray length:sizeof(myByteArray)];
UIImage *img = [[UIImage alloc] initWithData:imgData];
then just set your UIImageView's image property:
self.myImageView.image = img;
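If what you actually have is raw pixel data, as in the nested-loop approach from the question, one option is to fill a buffer and wrap it in a CGBitmapContext. This is a minimal sketch, not from the answer above, assuming 32-bit RGBA pixels and an arbitrary 256x256 size:
size_t width = 256, height = 256;
size_t bytesPerRow = width * 4;
uint8_t *pixels = calloc(height * bytesPerRow, 1);
// Two nested loops setting each pixel's RGBA bytes, as described in the question.
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        uint8_t *p = pixels + y * bytesPerRow + x * 4;
        p[0] = (uint8_t)x;   // R
        p[1] = (uint8_t)y;   // G
        p[2] = 128;          // B
        p[3] = 255;          // A
    }
}
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                         cs, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
self.myImageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(cs);
free(pixels);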

Related

Image loses quality when scaled

I know that when scaling down an image you have to expect some loss of quality, but when I assign an image to a UIButton of size (75,75) it has great quality.
When I scale the image to size (75,75) for copy/paste using UIPasteboard it has really bad quality.
Background: My app is a keyboard extension, so I have buttons with assigned images and when they are clicked, I get the image from the button, scale it to be the right size, copy it to UIPasteboard, then paste.
Code:
Here is my code for detecting a button click and copying an image:
- (IBAction)clickedImage:(id)sender {
    UIButton *btn = sender;
    UIImage *scaledImage = btn.imageView.image;
    UIImage *newImage = [scaledImage imageWithImage:scaledImage andSize:CGSizeMake(75, 75)];
    NSData *imgData = UIImagePNGRepresentation(newImage);
    UIPasteboard *pasteboard = [UIPasteboard generalPasteboard];
    [pasteboard setData:imgData forPasteboardType:[UIPasteboardTypeListImage objectAtIndex:0]];
}
And I have a UIImage category with the imageWithImage:andSize: method for scaling the image. This is the scaling method:
- (UIImage *)imageWithImage:(UIImage *)image andSize:(CGSize)newSize {
    // Create a bitmap context and redraw the image at the new size.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
What doesn't make sense is that when I put the image in the UIButton, it is scaled down to exactly the same size as when I scale it in code, yet the quality is far better in the UIButton than in the image my scaling method returns. Is there something wrong with my scaling code? Does anyone know why there is such a drop in quality between the two images?
A better way to do this is to use ImageIO to resize your images. It takes a little bit longer, but it is far better for scaling images than redrawing into a graphics context.
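For reference, a rough sketch of the ImageIO route (the originalImage variable and the 75-pixel target are assumptions; ImageIO.framework must be imported):
#import <ImageIO/ImageIO.h>

// originalImage is assumed to be the full-quality UIImage taken from the button.
NSData *sourceData = UIImagePNGRepresentation(originalImage);
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)sourceData, NULL);
NSDictionary *options = @{(id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
                          (id)kCGImageSourceCreateThumbnailWithTransform : @YES,
                          (id)kCGImageSourceThumbnailMaxPixelSize : @75};
// ImageIO does the downsampling with its own (high-quality) resampling.
CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
UIImage *resized = [UIImage imageWithCGImage:thumb];
CGImageRelease(thumb);
CFRelease(source);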
Did you try this https://github.com/mbcharbonneau/UIImage-Categories ?
There is an interesting method in the Resize category
- (UIImage *)resizedImage:(CGSize)newSize
interpolationQuality:(CGInterpolationQuality)quality;
Setting quality to kCGInterpolationHigh seems to give a good result (it is a little slower, though).
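With that category imported, usage would look roughly like this (the originalImage variable is an assumption):
UIImage *resized = [originalImage resizedImage:CGSizeMake(75, 75)
                          interpolationQuality:kCGInterpolationHigh];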

How to pixelate an image in iOS?

I'm trying to create a simple app with the following features:
The first page of the app will display a list of images from a server (when we display these images, they should be pixelated).
Once the user taps any pixelated image, it opens in a detail view (the pixelated image opens in a new ViewController).
When the user taps the image in the detail view controller, its pixelation level is reduced, and after a few taps the user can see the real image.
My problem is that I cannot find a way to do all this pixelation dynamically. Please help me.
The GPUImage framework has a pixellate filter; since it uses GPU acceleration, applying the filter to an image is very fast, and you can vary the pixellation level at runtime.
UIImage *inputImage = [UIImage imageNamed:<#yourimagename#>];
GPUImagePixellateFilter *filter = [[GPUImagePixellateFilter alloc] init];
UIImage *filteredImage = [filter imageByFilteringImage:inputImage];
An easy way to pixellate an image would be to use the CIPixellate filter from Core Image.
Instructions and sample code for processing images with Core Image filters can be found in the Core Image Programming Guide.
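A minimal sketch of that approach (the inputImage variable and the scale value of 20 are assumptions; CoreImage.framework is required):
CIImage *ciInput = [CIImage imageWithCGImage:inputImage.CGImage];
CIFilter *pixellate = [CIFilter filterWithName:@"CIPixellate"];
[pixellate setValue:ciInput forKey:kCIInputImageKey];
[pixellate setValue:@20 forKey:kCIInputScaleKey]; // lower this value on each tap to reduce pixelation
CIImage *ciOutput = pixellate.outputImage;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgOutput = [context createCGImage:ciOutput fromRect:ciInput.extent];
UIImage *pixelated = [UIImage imageWithCGImage:cgOutput];
CGImageRelease(cgOutput);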
UIImage *yourImage = [UIImage imageNamed:@"yourimage"];
NSData *imageData1 = UIImageJPEGRepresentation(yourImage, 0.2);
NSData *imageData2 = UIImageJPEGRepresentation(yourImage, 0.3);
and so on, up to
NSData *imageDataN = UIImageJPEGRepresentation(yourImage, 1.0);
Then display the image data like this:
UIImage *compressedImage = [UIImage imageWithData:imageData1];
Try this. Happy coding.

Creating a single image by combining multiple images in iOS

I am using a UIImageView and I have to set more than one image as its background.
All the images have transparent backgrounds and each contains a single symbol at one of its corners. Which images are saved depends on certain conditions, and there may be more than one image at a time.
Currently, when I set the images, I can only see the last one. I want all the images to be displayed together.
Please let me know if there is a way to combine multiple images into a single image.
Any help will be appreciated.
Thanks in advance.
You can draw the images with blend modes. For example, if you have a UIImage, you can call drawAtPoint:blendMode:alpha:. You'd probably want to use kCGBlendModeNormal as the blend mode in most cases.
I created a function that takes an array of images and returns a single image. My code is below:
-(UIImage *)blendImages:(NSMutableArray *)array{
    // Size the context to match the first image.
    UIImage *firstImage = [array objectAtIndex:0];
    UIGraphicsBeginImageContext(firstImage.size);
    for (UIImage *image in array) {
        [image drawAtPoint:CGPointZero blendMode:kCGBlendModeNormal alpha:1.0];
    }
    UIImage *blendedImage = UIGraphicsGetImageFromCurrentImageContext();
    // End the context before returning, otherwise it leaks.
    UIGraphicsEndImageContext();
    return blendedImage;
}
Hope this will help others too.
You should composite your images into one -- especially because they have alpha channels.
To do this, you could:
1. Use UIGraphicsBeginImageContextWithOptions to create a context at the destination size (scale now, rather than when drawing to the screen, and choose the appropriate opacity).
2. Render your images into the context using CGContextDrawImage.
3. Call UIGraphicsGetImageFromCurrentImageContext to get the result as a UIImage, which you set as the image of the image view.
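A sketch of those steps (the images array name is an assumption, and all images are drawn at the same size here):
UIImage *first = images.firstObject;
UIGraphicsBeginImageContextWithOptions(first.size, NO, 0.0); // 0.0 = current screen scale
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Core Graphics uses a flipped y-axis, so flip the context before CGContextDrawImage.
CGContextTranslateCTM(ctx, 0, first.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
for (UIImage *image in images) {
    CGContextDrawImage(ctx, CGRectMake(0, 0, first.size.width, first.size.height), image.CGImage);
}
UIImage *composited = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();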
You can use:
typedef enum _imageType {
    image1,
    image2,
    ...
    imageN
} imageType;
and declare
imageType imgType;
in the #interface of your .h file.
And in the .m file:
- (void)setImageType:(imageType)type {
    imgType = type;
}
Then you can call setImageType: to set whichever image you want.

iOS: How to get a piece of a stretched image?

The generic problem I'm facing is this:
I have a stretchable 50x50 PNG. I'm stretching it to 300x100. I want to get three 100x100 UIImages cut from the stretched image, side by side from left to right: A, B and C.
I'm trying to do it like this:
// stretchedImage is the 50x50 UIImage, abcImageView is the 300x100 UIImageView
UIImage *stretchedImage = [abcImageView.image stretchableImageWithLeftCapWidth:25 topCapHeight:25];
CGImageRef image = CGImageCreateWithImageInRect(stretchedImage.CGImage, bButton.frame);
UIImage *result = [UIImage imageWithCGImage:image];
[bButton setBackgroundImage:result forState:UIControlStateSelected];
CGImageRelease(image);
I'm trying to crop the middle 100 ("B") using CGImageCreateWithImageInRect, but this is not right, since stretchedImage is 50x50, not 300x100. How do I get the 300x100 image to crop from? If the original image was 300x100 there would be no problem, but then I would lose the advantage of stretchable image.
I guess to generalize the problem even more, the question would be as simple as: if you scale or stretch an image to a bigger image view, how do you get the scaled/stretched image?
Background for the specific task I'd like to apply the solution for (if you can come up with an alternative solution):
I'm trying to implement a UI that's similar to the one you see during a call in native iPhone call application: a plate containing buttons for mute, speaker, hold, etc. Some of them are toggle type buttons with a different background color for selected state.
I have two graphics for the whole plate, for non-selected and selected states. I'm stretching both images to the desired size. For the buttons in selected state I want to get a piece of the stretched selected graphic.
You should be able to do this by rendering abcImageView to a UIImage:
UIGraphicsBeginImageContextWithOptions(abcImageView.bounds.size, NO, 0.f);
[abcImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Then, you can crop the image like this (given cropRect):
CGImageRef cgImage = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:cgImage];
// Do something with the image.
CGImageRelease(cgImage);
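One detail to watch: CGImageCreateWithImageInRect works in pixel coordinates, while the image rendered above uses the screen scale (the 0.f argument), so on a Retina screen a crop rect given in points should be scaled up first. A small sketch, assuming cropRect is in points:
CGFloat scale = image.scale;
CGRect pixelRect = CGRectMake(cropRect.origin.x * scale,
                              cropRect.origin.y * scale,
                              cropRect.size.width * scale,
                              cropRect.size.height * scale);
CGImageRef cgImage = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
UIImage *croppedImage = [UIImage imageWithCGImage:cgImage
                                            scale:scale
                                      orientation:UIImageOrientationUp];
CGImageRelease(cgImage);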

Instantiate an image to appear at several different points on the screen?

I have an image called Empty.png, it is a small-ish square tile, how could I instantiate the image to appear at several different points on the screen?
Thank you for any help in advance :)
You can place UIImageViews wherever you want the image to appear, and then set the image property of each image view to this image (a UIImage object).
If you are using Interface Builder, you just have to type the name of the file into the attributes inspector of the image view.
Or you could do this:
UIImage *img = [UIImage imageNamed:@"Empty.png"];
imageView.image = img; // Assuming this is your outlet to the image view.
It depends on how you want to use it.
If you just draw it with Core Graphics, say in drawInRect: or similar, then you simply draw it several times.
If you want to display it within one or more image views, then instantiate your UIImageViews and assign the same UIImage object to all of them. Or let Interface Builder do the instantiation for you. But you cannot add a single UIView object several times to one or more view hierarchies: if you add a UIView as a subview of another view, it disappears from wherever it was before.
Ten UIImageViews may share the same UIImage, but you need ten UIImageViews to display all of them.
The same applies to UIButtons and every UI element that has an image or background image.
This will get you one image into some view:
CGPoint position = CGPointMake(x, y);
UIImageView *img = [[UIImageView alloc] init];
img.image = [UIImage imageNamed:@"Empty.png"];
img.frame = CGRectMake(position.x, position.y, img.image.size.width, img.image.size.height);
[someUIView addSubview:img];
If you make an array of the (x, y) positions of all the images, you can run this in a for loop and it will place the images into the view at the positions you want.
Note: CGPoint is not an Objective-C object, so it can't be stored directly in an NSArray; either use a C/C++ array or wrap each point in an NSValue (valueWithCGPoint:).
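For example, a sketch along those lines using NSValue (the point values and someUIView are assumptions):
NSArray *points = @[ [NSValue valueWithCGPoint:CGPointMake(20, 40)],
                     [NSValue valueWithCGPoint:CGPointMake(120, 40)],
                     [NSValue valueWithCGPoint:CGPointMake(220, 40)] ];
UIImage *tile = [UIImage imageNamed:@"Empty.png"];
for (NSValue *value in points) {
    CGPoint p = [value CGPointValue];
    // One UIImageView per position, all sharing the same UIImage.
    UIImageView *tileView = [[UIImageView alloc] initWithImage:tile];
    tileView.frame = CGRectMake(p.x, p.y, tile.size.width, tile.size.height);
    [someUIView addSubview:tileView];
}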
