Replace MKMapView with static image - iOS

Is it possible to display a static image instead of the default map view, where the current location is always the center of the image?
I want to display an image whose center is the current position, and add pins to it depending on coordinates (distance and direction). I also want to calculate the distance between points, and maybe rotate the image/pins depending on which direction the phone is pointing.
I thought it might be easiest to do this with an MKMapView and replace it with a static image, as I could then use all the built-in functionality, but right now it seems impossible to change the map to a static image?
I could also just paint directly on an image, but how would that work, and should I do that? I guess it would involve polar coordinates.

With iOS 7, you have MKMapSnapshotter, which can render a snapshot of a map region on a background thread. You can then take an image of that snapshot.
MKMapSnapshotOptions *options = [MKMapSnapshotOptions new];
// Set up the options here:
options.camera = ...
options.region = ...
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithQueue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
          completionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    // Get the image associated with the snapshot.
    UIImage *image = snapshot.image;
    dispatch_async(dispatch_get_main_queue(), ^{
        // Make sure to access UIKit UI elements on the main thread!
        [self.imageView setImage:image];
    });
}];
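If you also need pins at specific coordinates, as the question asks, MKMapSnapshot can convert a coordinate to a point inside the snapshot image via -pointForCoordinate:. A minimal sketch of drawing a pin on top of the snapshot (someCoordinate and the "pin" asset are placeholders):
// Inside the completion handler above:
UIImage *image = snapshot.image;
UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale);
[image drawAtPoint:CGPointZero];
UIImage *pin = [UIImage imageNamed:@"pin"]; // placeholder asset
CGPoint point = [snapshot pointForCoordinate:someCoordinate];
[pin drawAtPoint:point];
UIImage *annotated = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();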

You could use Google's Static Maps API if you want. That's pretty straightforward. Here is a static image from somewhere in Copenhagen, DK:
NSURL *url = [NSURL URLWithString:@"http://maps.googleapis.com/maps/api/staticmap?center=55.675861+12.584574&zoom=15&size=400x400&sensor=false"];
NSData *data = [NSData dataWithContentsOfURL:url]; // synchronous; avoid on the main thread
UIImage *img = [UIImage imageWithData:data];
You can then add markers as you want - take a look here on how to add them. Here is a test URL for adding a red marker with the text "M" in the middle:
http://maps.googleapis.com/maps/api/staticmap?center=55.675861+12.584574&zoom=15&size=400x400&sensor=false&markers=color:red%7Clabel:M%7C55.675861+12.584574
Decoding the marker part of the URL:
markers=color:red%7Clabel:M%7C55.675861+12.584574
You get this:
markers=color:red|label:M|55.675861 12.584574
Edit:
Here is an approach that scrapes an image of the map control itself. Extracting the important part of that answer, this is basically how you could do it:
UIGraphicsBeginImageContextWithOptions(map.bounds.size, map.opaque, 0.0);
[map.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Note that map is only required to be a UIView, which means you can use this trick on a variety of controls.
Edit 2:
You should also take a look at this article. It is really well written and covers a lot of related topics, such as overlays, pins and so on.

I have written a handy category on UIImageView that allows for easy map preview creation based on MKMapView - it's supposed to work exactly the same way AFNetworking does for async image downloads. Hope you will find it useful.
BGMapPreview

Related

How to pixelate an image in iOS?

I'm trying to create a simple app with the following features:
The first page of the app will display a list of images from a server (when we display these images, we should pixelate them).
Once the user clicks on any pixelated image, it opens in a detail view (that pixelated image opens in a new ViewController).
When the user does a single touch on the detail view controller's image, its pixelation level is reduced, and after a few taps the user can see the real image.
My problem is that I cannot find a way to pixelate all these things dynamically. Please help me.
The GPUImage framework has a pixellate filter; since it uses GPU acceleration, applying the filter to an image is very fast and you can vary the pixellation level at runtime.
UIImage *inputImage = [UIImage imageNamed:<#yourImageName#>];
GPUImagePixellateFilter *filter = [[GPUImagePixellateFilter alloc] init];
UIImage *filteredImage = [filter imageByFilteringImage:inputImage];
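The answer mentions varying the pixellation level at runtime; in GPUImage this is controlled by the filter's fractionalWidthOfAPixel property. A sketch of how the tap-to-reduce behaviour could look (the specific values are illustrative):
// Larger fractionalWidthOfAPixel = blockier image (the default is 0.05).
filter.fractionalWidthOfAPixel = 0.05;
UIImage *veryPixellated = [filter imageByFilteringImage:inputImage];
// On each tap, shrink the pixel size until the real image shows through.
filter.fractionalWidthOfAPixel = 0.01;
UIImage *almostClear = [filter imageByFilteringImage:inputImage];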
An easy way to pixellate an image would be to use the CIPixellate filter from Core Image.
Instructions and sample code for processing images with Core Image filters can be found in the Core Image Programming Guide.
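A minimal sketch of the Core Image route, assuming an existing yourImage to filter (CIPixellate's inputScale controls the block size):
#import <CoreImage/CoreImage.h>

CIImage *input = [CIImage imageWithCGImage:yourImage.CGImage];
CIFilter *pixellate = [CIFilter filterWithName:@"CIPixellate"];
[pixellate setValue:input forKey:kCIInputImageKey];
[pixellate setValue:@30 forKey:kCIInputScaleKey]; // bigger = blockier

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:pixellate.outputImage fromRect:input.extent];
UIImage *result = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);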
UIImage *yourImage = [UIImage imageNamed:@"yourimage"];
// Lower compression quality = heavier JPEG artifacts.
NSData *imageData1 = UIImageJPEGRepresentation(yourImage, 0.2);
NSData *imageData2 = UIImageJPEGRepresentation(yourImage, 0.3);
and so on, up to
NSData *imageDataN = UIImageJPEGRepresentation(yourImage, 1.0);
Then show the image data like this:
UIImage *compressedImage = [UIImage imageWithData:imageData1];
Try this. Happy coding!

GPUImage output image is missing in screen capture

I am trying to capture a portion of the screen to post the image on social media.
I am using the following code to capture the screen:
- (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
The above code works fine for capturing the screen in general.
Problem :
My UIView contains a GPUImageView with the filtered image. When I try to capture the screen using the above code, the portion occupied by the GPUImageView does not contain the filtered image.
I am using GPUImageSwirlFilter with a static image (no camera). I have also tried
UIImage *outImage = [swirlFilter imageFromCurrentFramebuffer];
but it's not giving an image.
Note: The following code works and gives perfect output of the swirl effect on screen, but I want the same image in a UIImage object.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    GPUImageSwirlFilter *swirlFilter = [[GPUImageSwirlFilter alloc] init];
    swirlLevel = 4;
    [swirlFilter setAngle:(float)swirlLevel/10];
    UIImage *inputImage = [UIImage imageNamed:gi.wordImage];
    GPUImagePicture *swirlSourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage];
    inputImage = nil;
    [swirlSourcePicture addTarget:swirlFilter];
    dispatch_async(dispatch_get_main_queue(), ^{
        [swirlFilter addTarget:imgSwirl];
        [swirlSourcePicture processImage];
        // This works perfectly and I have the filtered image in my imgSwirl.
        // But I want the filtered image in a UIImage to use elsewhere, like
        // posting on social media.
        sharingImage = [swirlFilter imageFromCurrentFramebuffer]; // This also returns nothing.
    });
});
1) Am I doing something wrong with GPUImage's imageFromCurrentFramebuffer?
2) Why does the screen capture code not include the GPUImageView portion in the output image?
3) How do I get the filtered image into a UIImage?
First, -renderInContext: won't work with a GPUImageView, because a GPUImageView renders using OpenGL ES. -renderInContext: does not capture from CAEAGLLayers, which are used to back views presenting OpenGL ES content.
Second, you're probably getting a nil image in the latter code because you've forgotten to set -useNextFrameForImageCapture on your filter before triggering -processImage. Without that, your filter won't hang on to its backing framebuffer long enough to capture an image from it. This is due to a recent change in the way framebuffers are handled in memory (although this change did not seem to get communicated very well).
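Applied to the code above, the capture path would look something like this (a sketch; the key addition is calling -useNextFrameForImageCapture before -processImage):
[swirlSourcePicture addTarget:swirlFilter];
[swirlFilter useNextFrameForImageCapture]; // keep the framebuffer around for capture
[swirlSourcePicture processImage];
UIImage *sharingImage = [swirlFilter imageFromCurrentFramebuffer];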

Creating single image by combining more than one images in iOS

I am using a UIImageView and I have to set more than one image as its background.
All the images have a transparent background, and each contains a symbol at one of its corners. Images are saved based on conditions, and there may be more than one image at a time.
Currently I am setting the images, but I can see only the last one. I want all the images to be displayed together.
Please let me know if there is a way to combine multiple images into a single image.
Any help will be appreciated. Thanks in advance.
You can draw the images with blend modes. For example, if you have a UIImage, you can call drawAtPoint:blendMode:alpha:. You'd probably want to use kCGBlendModeNormal as the blend mode in most cases.
I created a function that takes an array of images and returns a single blended image. My code is below:
- (UIImage *)blendImages:(NSMutableArray *)array {
    UIImage *img = [array objectAtIndex:0];
    CGSize size = img.size;
    UIGraphicsBeginImageContext(size);
    for (int i = 0; i < array.count; i++) {
        UIImage *uiimage = [array objectAtIndex:i];
        [uiimage drawAtPoint:CGPointZero blendMode:kCGBlendModeNormal alpha:1.0];
    }
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext(); // balance the Begin call
    return result;
}
Hope this will help others too.
You should composite your images into one - especially because they have alpha channels.
To do this, you could:
use UIGraphicsBeginImageContextWithOptions to create the image context at the destination size (scale now, rather than when drawing to the screen, and choose the appropriate opacity);
render your images into the context using CGContextDrawImage;
then call UIGraphicsGetImageFromCurrentImageContext to get the result as a UIImage, which you set as the image of the image view. A sketch of these steps follows below.
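A minimal sketch of those three steps, assuming two placeholder images imageA and imageB of the same size:
CGSize size = imageA.size;
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0); // NO keeps the alpha channel
CGContextRef ctx = UIGraphicsGetCurrentContext();
// CGContextDrawImage uses a flipped coordinate system, so flip the
// context first to keep the images upright.
CGContextTranslateCTM(ctx, 0, size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextDrawImage(ctx, CGRectMake(0, 0, size.width, size.height), imageA.CGImage);
CGContextDrawImage(ctx, CGRectMake(0, 0, size.width, size.height), imageB.CGImage);
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();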
You can use:
typedef enum _imageType {
    image1,
    image2,
    ...
    imageN
} imageType;
and declare
imageType imgType;
in the #interface of your .h file.
And in the .m file:
- (void)setImageType:(imageType)type {
    imgType = type;
}
and then you can use setImageType: to set whichever image you want.

Instantiate an image to appear at several different points on the screen?

I have an image called Empty.png, it is a small-ish square tile, how could I instantiate the image to appear at several different points on the screen?
Thank you for any help in advance :)
You can place UIImageViews wherever you want the image to appear, and then set the image property of each image view to this image (a UIImage object).
If you are using Interface Builder, you just have to type the name of the file into the attributes inspector of the image view.
Or you could do this:
UIImage *img = [UIImage imageNamed:@"Empty.png"];
imageView.image = img; // Assuming this is your outlet to the image view.
It depends on how you want to use it.
If you just draw it with Core Graphics, let's say with drawInRect: or so, then you simply draw it several times.
If you want to display it within one or more image views, then instantiate your UIImageViews and assign the same UIImage object to all of them. Or let Interface Builder do the instantiation for you. But you cannot add a single UIView object several times to one or more subview hierarchies: if you add a UIView as a subview of another view, it disappears from the position where it was before.
Ten UIImageViews may share the same UIImage, but you still need ten UIImageViews to display all of them.
The same applies to UIButtons and every UI element that has an image or background image.
This will get you one image into some view:
CGPoint position = CGPointMake(x, y);
UIImageView *img = [[UIImageView alloc] init];
img.image = [UIImage imageNamed:@"Empty.png"];
img.frame = CGRectMake(position.x, position.y, img.image.size.width, img.image.size.height);
[someUIView addSubview:img];
If you make an array of the (x, y) positions of all the images, you can run this in a for loop and it will place the images into the view at the positions you want (see the sketch below).
Note: CGPoint can't be stored directly in an NSArray since it's not an NSObject type; either use a C/C++ array or wrap each point in an NSValue (e.g. [NSValue valueWithCGPoint:]).
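A minimal sketch of that loop, assuming a someUIView container and NSValue-wrapped points:
NSArray *positions = @[[NSValue valueWithCGPoint:CGPointMake(10, 10)],
                       [NSValue valueWithCGPoint:CGPointMake(60, 10)],
                       [NSValue valueWithCGPoint:CGPointMake(10, 60)]];
UIImage *tile = [UIImage imageNamed:@"Empty.png"];
for (NSValue *value in positions) {
    CGPoint p = [value CGPointValue];
    UIImageView *iv = [[UIImageView alloc] initWithImage:tile];
    iv.frame = CGRectMake(p.x, p.y, tile.size.width, tile.size.height);
    [someUIView addSubview:iv];
}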

Creating an image view by pixels

I want to create an iOS app that contains a UIImageView and a button, so that when the user hits the button, the image view's content is generated by a pair of nested while loops that set its pixels. I can do this in C with a bitmap quite easily, but I'm not sure how to approach it on iOS. Could I save a bitmap to NSUserDefaults and load it from there?
Not sure, thanks for the help.
UIImageView works with UIImage, which is UIKit's wrapper around CGImage. In any case you need either a CGImage or a UIImage. What can you do? Draw an image dynamically using Core Graphics and/or UIKit's drawing methods (take a look at the Quartz 2D Programming Guide). Or, if you have your image's data as encoded bytes (e.g. PNG or JPEG), you can create a UIImage instance directly:
// Note: -initWithData: expects encoded image data (PNG, JPEG, ...), not raw pixel bytes.
NSData *imgData = [[NSData alloc] initWithBytes:(const void *)myByteArray length:sizeof(myByteArray)];
UIImage *img = [[UIImage alloc] initWithData:imgData];
then just set your UIImageView's image property:
self.myImageView.image = img;
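For the per-pixel nested-loop approach from the question, a bitmap context is the closest match to the C workflow: set raw pixels in a buffer, then turn the buffer into a CGImage. A minimal sketch, assuming a fixed 256x256 RGBA image:
size_t width = 256, height = 256;
size_t bytesPerRow = width * 4; // RGBA, 8 bits per component
uint8_t *pixels = calloc(height * bytesPerRow, 1);

// The nested loops from the question: set each pixel directly.
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        uint8_t *p = pixels + y * bytesPerRow + x * 4;
        p[0] = (uint8_t)x; // red
        p[1] = (uint8_t)y; // green
        p[2] = 128;        // blue
        p[3] = 255;        // alpha
    }
}

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *generated = [UIImage imageWithCGImage:cgImage];
self.myImageView.image = generated;

CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(pixels);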
