How to pixelate an image in iOS?

I want to create a simple app with the following features:
The first page of the app will display a list of images from a server (when we display these images, they should be pixelated).
Once the user taps any pixelated image, it opens in a detail view (that pixelated image is shown in a new ViewController).
Each single tap on the detail view controller's image reduces its pixelation level, so after a few taps the user can see the real image.
My problem is that I can't find a way to do all this pixelation dynamically. Please help me.

The GPUImage framework has a pixellate filter. Since it uses GPU acceleration, applying the filter to an image is very fast, and you can vary the pixellation level at runtime.
UIImage *inputImage = [UIImage imageNamed:<#yourImageName#>];
GPUImagePixellateFilter *filter = [[GPUImagePixellateFilter alloc] init];
UIImage *filteredImage = [filter imageByFilteringImage:inputImage];
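Since the question wants pixelation to decrease with each tap, a rough sketch (tapCount, originalImage, and imageView are hypothetical names, not part of GPUImage) could lower the filter's fractionalWidthOfAPixel each time:
- (void)handleTap:(UITapGestureRecognizer *)tap
{
    self.tapCount++;
    GPUImagePixellateFilter *filter = [[GPUImagePixellateFilter alloc] init];
    // fractionalWidthOfAPixel sets the pixel block size as a fraction of the
    // image width; smaller values mean less visible pixelation.
    filter.fractionalWidthOfAPixel = MAX(0.05 - 0.01 * self.tapCount, 0.001);
    self.imageView.image = [filter imageByFilteringImage:self.originalImage];
}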

An easy way to pixellate an image would be to use the CIPixellate filter from Core Image.
Instructions and sample code for processing images with Core Image filters can be found in the Core Image Programming Guide.
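As a minimal sketch, assuming an existing UIImage named yourImage:
CIImage *input = [CIImage imageWithCGImage:yourImage.CGImage];
CIFilter *pixellate = [CIFilter filterWithName:@"CIPixellate"];
[pixellate setValue:input forKey:kCIInputImageKey];
[pixellate setValue:@20.0 forKey:kCIInputScaleKey]; // block size; lower it per tap
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgOut = [context createCGImage:pixellate.outputImage fromRect:input.extent];
UIImage *output = [UIImage imageWithCGImage:cgOut];
CGImageRelease(cgOut);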

UIImage *yourImage = [UIImage imageNamed:@"yourimage"];
NSData *imageData1 = UIImageJPEGRepresentation(yourImage, 0.2);
NSData *imageData2 = UIImageJPEGRepresentation(yourImage, 0.3);
and so on, up to
NSData *imageDataN = UIImageJPEGRepresentation(yourImage, 1.0);
Then display the image data like this:
UIImage *compressedImage = [UIImage imageWithData:imageData1];
Try this. Happy coding!

Related

How can I parse all the images from a website in the iOS SDK

I'd like to implement image upload functionality in my app. For that, I want to give the user a few options for choosing an image.
One of the options is choosing an image from a website.
The user enters a URL in a text box and clicks the "get images" button; I'd like to then display all the images found at that particular link.
But I don't have any idea how to do this.
I have explained my requirement below:
1) Enter the URL in a text box
2) Click the get images button
3) Parse all the images from that particular URL
4) Display all the images in the image view
To illustrate the requirement, I have attached a screenshot below.
If anybody has worked on this kind of functionality, please suggest some ideas.
Thanks
Like this?
- (IBAction)buttonClicked:(id)sender
{
    // Note: dataWithContentsOfURL: is synchronous and will block the main
    // thread; a real app should download the image asynchronously.
    NSURL *url = [NSURL URLWithString:@"http://www.yourwebsite.com/image.png"];
    NSData *data = [NSData dataWithContentsOfURL:url];
    UIImage *image = [UIImage imageWithData:data];
    UIImageView *displayImageView = [[UIImageView alloc] initWithImage:image];
    [displayImageView setFrame:CGRectMake(0, 0, 300, 300)];
    [displayImageView setContentMode:UIViewContentModeScaleAspectFit];
    [self.view addSubview:displayImageView];
}
Or, if you want all the images from a web directory, see here: save all the image URLs in an array, then display the images in a UIImageView like in my code above.
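As a rough sketch of the parsing step (regex scanning of HTML is fragile, and a real HTML parser would be more robust; url is assumed to come from the text box):
NSError *error = nil;
NSString *html = [NSString stringWithContentsOfURL:url
                                          encoding:NSUTF8StringEncoding
                                             error:&error];
NSRegularExpression *regex =
    [NSRegularExpression regularExpressionWithPattern:@"<img[^>]+src=\"([^\"]+)\""
                                              options:NSRegularExpressionCaseInsensitive
                                                error:&error];
// Collect the src attribute of every <img> tag on the page.
NSMutableArray *imageURLs = [NSMutableArray array];
for (NSTextCheckingResult *match in [regex matchesInString:html
                                                   options:0
                                                     range:NSMakeRange(0, html.length)]) {
    [imageURLs addObject:[html substringWithRange:[match rangeAtIndex:1]]];
}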

GPUImage output image is missing in screen capture

I am trying to capture a portion of the screen to post the image on social media.
I am using the following code to capture the screen.
- (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
The above code works fine for capturing the screen.
Problem:
My UIView contains a GPUImageView with the filtered image. When I try to capture the screen using the above code, the portion covered by the GPUImageView does not contain the filtered image.
I am using GPUImageSwirlFilter with a static image (no camera). I have also tried
UIImage *outImage = [swirlFilter imageFromCurrentFramebuffer];
but it's not returning an image.
Note: the following code works and gives perfect output of the swirl effect on screen, but I want the same image in a UIImage object.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    GPUImageSwirlFilter *swirlFilter = [[GPUImageSwirlFilter alloc] init];
    swirlLevel = 4;
    [swirlFilter setAngle:(float)swirlLevel / 10];
    UIImage *inputImage = [UIImage imageNamed:gi.wordImage];
    GPUImagePicture *swirlSourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage];
    inputImage = nil;
    [swirlSourcePicture addTarget:swirlFilter];
    dispatch_async(dispatch_get_main_queue(), ^{
        [swirlFilter addTarget:imgSwirl];
        [swirlSourcePicture processImage];
        // This works and I have the filtered image in imgSwirl. But I want the
        // filtered image in a UIImage to use elsewhere, e.g. for posting on
        // social media.
        sharingImage = [swirlFilter imageFromCurrentFramebuffer]; // This also returns nothing.
    });
});
1) Am I doing something wrong with GPUImage's imageFromCurrentFramebuffer?
2) Why doesn't the screen capture code include the GPUImageView portion in the output image?
3) How do I get the filtered image into a UIImage?
First, -renderInContext: won't work with a GPUImageView, because a GPUImageView renders using OpenGL ES. -renderInContext: does not capture from CAEAGLLayers, which are used to back views presenting OpenGL ES content.
Second, you're probably getting a nil image in the latter code because you've forgotten to set -useNextFrameForImageCapture on your filter before triggering -processImage. Without that, your filter won't hang on to its backing framebuffer long enough to capture an image from it. This is due to a recent change in the way that framebuffers are handled in memory (although this change did not seem to get communicated very well).
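Applied to the question's code, the capture part becomes roughly (a sketch; swirlFilter and swirlSourcePicture are the question's own variables):
[swirlSourcePicture addTarget:swirlFilter];
// Tell the filter to hang on to its framebuffer for manual capture.
[swirlFilter useNextFrameForImageCapture];
[swirlSourcePicture processImage];
UIImage *sharingImage = [swirlFilter imageFromCurrentFramebuffer];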

GPUImage blend filters

I'm trying to apply a blend filter to 2 images.
I've recently updated GPUImage to the latest version.
To keep things simple, I've modified the SimpleImageFilter example.
Here is the code:
UIImage *image1 = [UIImage imageNamed:@"PGSImage_0000.jpg"];
UIImage *image2 = [UIImage imageNamed:@"PGSImage_0001.jpg"];
twoinputFilter = [[GPUImageColorBurnBlendFilter alloc] init];
sourcePicture1 = [[GPUImagePicture alloc] initWithImage:image1];
sourcePicture2 = [[GPUImagePicture alloc] initWithImage:image2];
[sourcePicture1 addTarget:twoinputFilter];
[sourcePicture1 processImage];
[sourcePicture2 addTarget:twoinputFilter];
[sourcePicture2 processImage];
UIImage *image = [twoinputFilter imageFromCurrentFramebuffer];
The image returned is nil. Setting some breakpoints, I can see that the filter fails inside the method -(CGImageRef)newCGImageFromCurrentlyProcessedOutput; the problem is that the framebufferForOutput is nil. I'm using the simulator.
I don't get why it isn't working.
It seems that I was missing this call, as noted in the documentation for still image processing:
Note that for a manual capture of an image from a filter, you need to
set -useNextFrameForImageCapture in order to tell the filter that
you'll be needing to capture from it later. By default, GPUImage
reuses framebuffers within filters to conserve memory, so if you need
to hold on to a filter's framebuffer for manual image capture, you
need to let it know ahead of time.
[twoinputFilter useNextFrameForImageCapture];
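With that call added before the final -processImage, the question's sequence becomes roughly (a sketch using the question's variable names):
[sourcePicture1 addTarget:twoinputFilter];
[sourcePicture2 addTarget:twoinputFilter];
[sourcePicture1 processImage];
// Tell the blend filter to keep its framebuffer for manual capture.
[twoinputFilter useNextFrameForImageCapture];
[sourcePicture2 processImage];
UIImage *image = [twoinputFilter imageFromCurrentFramebuffer];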

Replace MKMapView with static image

Is it possible to display a static image instead of the default map view, where the current location is always the center of the image?
I want to display an image whose center is the current position and add pins on it depending on coordinates (distance and direction). I want to calculate distances between them too, and maybe rotate the image/pins depending on which direction the phone is pointing.
I thought it might be easiest to do with an MKMapView and replace it with a static image, as I can use all the built-in functionality, but right now it seems impossible to change the map to a static image?
I could also just paint directly on an image, but how would that work, and should I do that? I guess it would involve polar coordinates.
With iOS 7, you have MKMapSnapshotter, which can render a snapshot of a map region on a background thread. You can then get an image from that snapshot.
MKMapSnapshotOptions *options = [MKMapSnapshotOptions new];
// Set up options here:
options.camera = ...
options.region = ...
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithQueue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
          completionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    // Get the image associated with the snapshot.
    UIImage *image = snapshot.image;
    dispatch_async(dispatch_get_main_queue(), ^{
        // Make sure to access UIKit UI elements on the main thread!
        [self.imageView setImage:image];
    });
}];
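If you need pins on the resulting image (as the question asks), one approach is to composite them onto the snapshot image yourself; MKMapSnapshot's pointForCoordinate: converts a map coordinate into a point within the image. A rough sketch (pinCoordinate and the "pin" asset are hypothetical):
UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale);
[image drawAtPoint:CGPointZero];
UIImage *pinImage = [UIImage imageNamed:@"pin"]; // hypothetical pin asset
CGPoint point = [snapshot pointForCoordinate:pinCoordinate];
[pinImage drawAtPoint:point];
UIImage *annotated = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();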
You could use Google's Static Maps API if you want; it's pretty straightforward. Here is a static image from somewhere in Copenhagen, DK:
NSURL *mapURL = [NSURL URLWithString:@"http://maps.googleapis.com/maps/api/staticmap?center=55.675861+12.584574&zoom=15&size=400x400&sensor=false"];
NSData *data = [NSData dataWithContentsOfURL:mapURL];
UIImage *img = [UIImage imageWithData:data];
You can then add markers as you want - take a look here on how to add them. Here is a test URL for adding a red marker with the text "M" in the middle:
http://maps.googleapis.com/maps/api/staticmap?center=55.675861+12.584574&zoom=15&size=400x400&sensor=false&markers=color:red%7Clabel:M%7C55.675861+12.584574
Decoding the marker part of the URL:
markers=color:red%7Clabel:M%7C55.675861+12.584574
You get this:
markers=color:red|label:M|55.675861 12.584574
Edit:
Here is an approach that scrapes an image of the map control. If we extract the important part of that answer, this is basically how you could do it:
UIGraphicsBeginImageContextWithOptions(map.bounds.size, map.opaque, 0.0);
[map.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Note that map is required to derive from UIView, which means you can use this trick on a variety of controls.
Edit 2:
You should also take a look at this article. It's really well written and covers a lot of related topics: overlays, pins and so on.
I have written a handy category on UIImageView that allows for easy map preview creation based on MKMapView - it's supposed to work exactly the same way as AFNetworking does for async image downloads. Hope you will find it useful.
BGMapPreview

Creating an image view by pixels

I want to create an iOS app that contains a UIImageView and a button, so that when the user hits the button, the image view's image is generated by a pair of nested while loops that set its pixels. I can do this in C with a bitmap quite easily, but I'm not sure how to approach it on iOS. Could I save a bitmap to NSUserDefaults and load it from there?
Not sure, thanks for the help.
UIImageView works with UIImage, which is UIKit's wrapper around CGImage. Either way you need a CGImage or a UIImage. What can you do? Draw an image dynamically using Core Graphics and/or UIKit's drawing methods (take a look at the Quartz 2D Programming Guide). Or, if you have your image's byte data in an encoded image format (PNG, JPEG, etc.; -initWithData: does not accept raw pixel bytes), you can create a UIImage instance directly:
NSData *imgData = [[NSData alloc] initWithBytes:(const void *)myByteArray length:sizeof(myByteArray)];
UIImage *img = [[UIImage alloc] initWithData:imgData];
then just set your UIImageView's image property:
self.myImageView.image = img;
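For the nested-loops case in the question, where you compute raw RGBA pixels yourself, a minimal sketch (the 100x100 size and the color formula are arbitrary assumptions) could draw them into a CGBitmapContext and wrap the result in a UIImage:
size_t width = 100, height = 100;
size_t bytesPerRow = width * 4; // 4 bytes per RGBA pixel
uint8_t *pixels = calloc(height, bytesPerRow);
// Two nested loops setting each pixel, as described in the question.
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        uint8_t *p = pixels + y * bytesPerRow + x * 4;
        p[0] = (uint8_t)x;  // red
        p[1] = (uint8_t)y;  // green
        p[2] = 128;         // blue
        p[3] = 255;         // alpha (opaque)
    }
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *img = [UIImage imageWithCGImage:cgImage];
self.myImageView.image = img;
CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(pixels);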
