Bitmap causing memory leak on device - iOS

I am using a simple bitmap technique to convert text into an image; after that I divide the image into a raster, and then I calculate the percentage of black pixels in each raster rectangle. Everything works fine on the Simulator, but it crashes on the device. Here is some related code:
- (double)blackValue:(UIImage *)image rect:(CGRect)rect {
    int pixelInRect = (int)rect.size.width * (int)rect.size.height;
    int blackCount = 0;
    ImageBitmap *imageBitmap = [[ImageBitmap alloc] initWithImage:image bitmapInfo:(CGBitmapInfo)kCGImageAlphaNoneSkipLast];
    for (int x = 0; x < (int)rect.size.width; x++) {
        for (int y = 0; y < (int)rect.size.height; y++) {
            // sample the pixel at the current position inside the rect, not the fixed origin
            Byte *pixel = [imageBitmap pixelAtX:(int)rect.origin.x + x Y:(int)rect.origin.y + y];
            Byte red = pixel[0];
            if (red == 0) { // the red component is an integer byte; 0 means black
                blackCount++;
            }
        }
    }
    // cast before dividing: integer division would always truncate to 0
    return (double)blackCount / pixelInRect;
}
- (NSDictionary *)rasterizeBitmap:(UIImage *)image size:(CGFloat)size {
    int width = (int)(image.size.width / size);
    int height = (int)(image.size.height / size);
    NSMutableArray *fields = [NSMutableArray array];
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            CGRect rect = CGRectMake(x * size, y * size, size, size);
            CGPoint center = CGPointMake(x * size + size / 2.0, image.size.height - (y * size + size / 2.0));
            double black = [self blackValue:image rect:rect];
            Field *field = [[Field alloc] init];
            field.center = center;
            field.black = black;
            [fields addObject:field];
        }
    }
    return @{@"width": @(width), @"fields": fields};
}
When I run the app under the Profile tool in Instruments, I get the result below (screenshot omitted).
Can someone suggest how I can overcome the memory issue?

The problem is that you're manually allocating memory in your ImageBitmap object, but you are never releasing it.
The two suspects are the bitmap context (context) and the bitmap data (contextData). Neither of these is managed by ARC, so you'll want to free both of them yourself once you are done with them.
Under ARC, you can still implement the dealloc method in your ImageBitmap class and put your cleanup code there (just don't call [super dealloc] yourself).
For example:
-(void) dealloc {
    CGContextRelease(context); // releases the bitmap context, if it was created (CGContextRelease checks for NULL)
    free(contextData);         // releases the bitmap data (it was explicitly created, so no need to check)
}
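For reference, here is a minimal sketch of the kind of allocation this dealloc pairs with, assuming ivars named context and contextData as described above; the actual initialiser body in your ImageBitmap may differ:
-(id)initWithSize:(CGSize)size bitmapInfo:(CGBitmapInfo)bmInfo {
    if (self = [super init]) {
        size_t width = (size_t)size.width;
        size_t height = (size_t)size.height;
        size_t bytesPerRow = width * 4; // assuming 4 bytes per pixel
        // malloc'd memory is not managed by ARC, so it must be free'd in dealloc
        contextData = malloc(bytesPerRow * height);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        // a Core Graphics "Create" call, so it must be balanced with CGContextRelease
        context = CGBitmapContextCreate(contextData, width, height, 8, bytesPerRow, colorSpace, bmInfo);
        CGColorSpaceRelease(colorSpace);
    }
    return self;
}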
It's also worth noting you should make init unavailable, and mark your designated initialiser.
This is because you cannot use your imageFromContext and pixelAtX:Y: instance methods without having created your instance through your custom initWithSize:bitmapInfo: initialiser, as it creates the bitmap context and allocates the memory for the bitmap data.
Therefore if you were to create your instance by calling init, and call one of your instance methods, you would most likely get a crash.
To remedy this, you can mark the init method as unavailable in your ImageBitmap.h file, and also mark your initWithSize:bitmapInfo: method as the designated initialiser.
-(instancetype) init NS_UNAVAILABLE;
-(id)initWithSize:(CGSize)size bitmapInfo:(CGBitmapInfo)bmInfo NS_DESIGNATED_INITIALIZER;
All NS_UNAVAILABLE does is prevent you from creating your instance by just calling init, forcing you to use your custom initialisers.
If you try to do [[ImageBitmap alloc] init], the compiler will show you an error telling you that init is unavailable.
All NS_DESIGNATED_INITIALIZER does is make sure that any extra initialisers in ImageBitmap create new instances through your designated initialiser, and the compiler will show you a warning if they don't.
See here for more info on NS_DESIGNATED_INITIALIZER.
Now, in practice these are really just formalities, as you're the only one who's going to be using this, and you know you have to use the custom initialisers. However, it's good to get these formalities right if you ever want to share your code with other people.

Related

How to change colour of individual pixel of UIImage/UIImageView

I have a UIImageView to which I have applied the filter:
testImageView.layer.magnificationFilter = kCAFilterNearest;
So that the individual pixels are visible. This UIImageView is within a UIScrollView, and the image itself is 1000x1000. I have used the following code to detect which pixel has been tapped:
I first set up a tap gesture recognizer:
UITapGestureRecognizer *scrollTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(singleTapGestureCaptured:)];
scrollTap.numberOfTapsRequired = 1;
[mainScrollView addGestureRecognizer:scrollTap];
Then used the location of the tap to produce the coordinates of the tap by which pixel of the UIImageView is tapped:
- (void)singleTapGestureCaptured:(UITapGestureRecognizer *)gesture
{
    CGPoint touchPoint = [gesture locationInView:testImageView];
    NSLog(@"%f is X pixel num, %f is Y pixel num ; %f is width of imageview", (touchPoint.x/testImageView.bounds.size.width)*1000, (touchPoint.y/testImageView.bounds.size.width)*1000, testImageView.bounds.size.width);
}
I would like to be able to tap a pixel and have its colour change. However, none of the Stack Overflow posts I have found have answers that work or that are not outdated. For skilled coders, however, you may be able to help me decipher the older posts to make something that works, or to produce a simple fix on your own using my above code for detecting which pixel of the UIImageView has been tapped.
All help is appreciated.
Edit for originaluser2:
After following originaluser2's post, running the code works perfectly when I run it through his example GitHub project on my physical device. However, when I run the same code in my own app, I am met with the image being replaced with a white space, and the following errors:
<Error>: Unsupported pixel description - 3 components, 16 bits-per-component, 64 bits-per-pixel
<Error>: CGBitmapContextCreateWithData: failed to create delegate.
<Error>: CGContextDrawImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
<Error>: CGBitmapContextCreateImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
The code clearly works, as demonstrated by me testing it on my phone. However, the same code has produced a few issues in my own project. Though I have the suspicion that they are all caused by one or two simple central issues. How can I solve these errors?
You'll want to break this problem up into multiple steps:
1. Get the coordinates of the touched point in the image coordinate system.
2. Get the x and y position of the pixel to change.
3. Create a bitmap context and replace the given pixel's components with your new color's components.
First of all, to get the coordinates of the touched point in the image coordinate system – you can use a category method that I wrote on UIImageView. This will return a CGAffineTransform that will map a point from view coordinates to image coordinates – depending on the content mode of the view.
@interface UIImageView (PointConversionCatagory)

@property (nonatomic, readonly) CGAffineTransform viewToImageTransform;
@property (nonatomic, readonly) CGAffineTransform imageToViewTransform;

@end

@implementation UIImageView (PointConversionCatagory)

-(CGAffineTransform) viewToImageTransform {

    UIViewContentMode contentMode = self.contentMode;

    // failure conditions. If any of these are met – return the identity transform
    if (!self.image || self.frame.size.width == 0 || self.frame.size.height == 0 ||
        (contentMode != UIViewContentModeScaleToFill && contentMode != UIViewContentModeScaleAspectFill && contentMode != UIViewContentModeScaleAspectFit)) {
        return CGAffineTransformIdentity;
    }

    // the width and height ratios
    CGFloat rWidth = self.image.size.width/self.frame.size.width;
    CGFloat rHeight = self.image.size.height/self.frame.size.height;

    // whether the image will be scaled according to width
    BOOL imageWiderThanView = rWidth > rHeight;

    if (contentMode == UIViewContentModeScaleAspectFit || contentMode == UIViewContentModeScaleAspectFill) {

        // The ratio to scale both the x and y axis by
        CGFloat ratio = ((imageWiderThanView && contentMode == UIViewContentModeScaleAspectFit) || (!imageWiderThanView && contentMode == UIViewContentModeScaleAspectFill)) ? rWidth : rHeight;

        // The x-offset of the inner rect as it gets centered
        CGFloat xOffset = (self.image.size.width-(self.frame.size.width*ratio))*0.5;

        // The y-offset of the inner rect as it gets centered
        CGFloat yOffset = (self.image.size.height-(self.frame.size.height*ratio))*0.5;

        return CGAffineTransformConcat(CGAffineTransformMakeScale(ratio, ratio), CGAffineTransformMakeTranslation(xOffset, yOffset));
    } else {
        return CGAffineTransformMakeScale(rWidth, rHeight);
    }
}

-(CGAffineTransform) imageToViewTransform {
    return CGAffineTransformInvert(self.viewToImageTransform);
}

@end
There's nothing too complicated here, just some extra logic for scale aspect fit/fill, to ensure the centering of the image is taken into account. You could skip this step entirely if you were displaying your image 1:1 on screen.
Next, you'll want to get the x and y position of the pixel to change. This is fairly simple – you just want to use the above category property viewToImageTransform to get the pixel in the image coordinate system, and then use floor to make the values integral.
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(imageViewWasTapped:)];
tapGesture.numberOfTapsRequired = 1;
[imageView addGestureRecognizer:tapGesture];
...
-(void) imageViewWasTapped:(UIGestureRecognizer*)tapGesture {

    if (!imageView.image) {
        return;
    }

    // get the pixel position
    CGPoint pt = CGPointApplyAffineTransform([tapGesture locationInView:imageView], imageView.viewToImageTransform);
    PixelPosition pixelPos = {(NSInteger)floor(pt.x), (NSInteger)floor(pt.y)};

    // replace image with new image, with the pixel replaced
    imageView.image = [imageView.image imageWithPixel:pixelPos replacedByColor:[UIColor colorWithRed:0 green:1 blue:1 alpha:1.0]];
}
Finally, you'll want to use another of my category methods – imageWithPixel:replacedByColor: to get out your new image with a replaced pixel with a given color.
/// A simple struct to represent the position of a pixel
struct PixelPosition {
    NSInteger x;
    NSInteger y;
};

typedef struct PixelPosition PixelPosition;

@interface UIImage (UIImagePixelManipulationCatagory)
@end

@implementation UIImage (UIImagePixelManipulationCatagory)

-(UIImage*) imageWithPixel:(PixelPosition)pixelPosition replacedByColor:(UIColor*)color {

    // components of replacement color – in a 0-255 UInt8 format (fairly standard bitmap format)
    const CGFloat* colorComponents = CGColorGetComponents(color.CGColor);
    UInt8* color255Components = calloc(sizeof(UInt8), 4);
    for (int i = 0; i < 4; i++) color255Components[i] = (UInt8)round(colorComponents[i]*255.0);

    // raw image reference
    CGImageRef rawImage = self.CGImage;

    // image attributes
    size_t width = CGImageGetWidth(rawImage);
    size_t height = CGImageGetHeight(rawImage);
    CGRect rect = {CGPointZero, {width, height}};

    // image format
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = width*4;

    // the bitmap info
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;

    // data pointer – stores an array of the pixel components. For example (r0, g0, b0, a0, r1, g1, b1, a1, ..., rn, gn, bn, an)
    UInt8* data = calloc(bytesPerRow, height);

    // get new RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create bitmap context
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);

    // draw image into context (populating the data array while doing so)
    CGContextDrawImage(ctx, rect, rawImage);

    // get the index of the pixel (4 components times the x position plus the y position times the row width)
    NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));

    // set the pixel components to the color components
    data[pixelIndex] = color255Components[0];   // r
    data[pixelIndex+1] = color255Components[1]; // g
    data[pixelIndex+2] = color255Components[2]; // b
    data[pixelIndex+3] = color255Components[3]; // a

    // get image from context
    CGImageRef img = CGBitmapContextCreateImage(ctx);

    // clean up
    free(color255Components);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(data);

    UIImage* returnImage = [UIImage imageWithCGImage:img];
    CGImageRelease(img);

    return returnImage;
}

@end
What this does is first get out the components of the color you want to write to one of the pixels, in a 255 UInt8 format. Next, it creates a new bitmap context, with the given attributes of your input image.
The important bit of this method is:
// get the index of the pixel (4 components times the x position plus the y position times the row width)
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a
What this does is get out the index of a given pixel (based on the x and y coordinate of the pixel) – then uses that index to replace the component data of that pixel with the color components of your replacement color.
Finally, we get out an image from the bitmap context and perform some cleanup.
Full Project: https://github.com/hamishknight/Pixel-Color-Changing
You could try something like the following:
UIImage *originalImage = [UIImage imageNamed:@"something"];
CGSize size = originalImage.size;
UIGraphicsBeginImageContext(size);
[originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
// myColor is an instance of UIColor
[myColor setFill];
UIRectFill(CGRectMake(someX, someY, 1, 1));
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
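One caveat worth noting: UIGraphicsBeginImageContext always creates a context with a scale factor of 1.0, so on Retina devices the returned image will be lower resolution than the original. To preserve the original scale you can use the options variant instead:
UIGraphicsBeginImageContextWithOptions(size, NO, originalImage.scale);
Bear in mind that at scales above 1.0 a 1-point UIRectFill covers more than one device pixel, so for true single-pixel edits the bitmap-context approach above is more precise.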

How can I achieve a 30 button grid using autolayout storyboards?

So I had this working for a 12 button (4x3) grid of buttons.
I'd like all of the buttons to be equal size, common distances above and to the side of each other, and the entire grid to be centered on the device, like so:
The problem is, it looks like a jumbled mess when I build the project.
I have no problem getting the segmented control, score, or reset buttons positioned correctly, but the grid just messes everything up.
I've been using the middle tool to set up the constraints on the grid, which worked fine for the 12 button grid:
However, using this only creates infinite conflicting constraints that cannot be resolved by Xcode.
I am very new to iOS and could be missing something simple, but I've been trying my best to match up to the blue auto suggested lines as much as possible here.
Thanks for any advice.
It would be a lot simpler just to use a UICollectionView with a UICollectionViewFlowLayout and let the flow layout create the grid for you (see the sketch below).
But even if you're not going to do that, then still, my advice is: don't set this up in Xcode / Interface Builder; make the entire grid (and constraints if you want them) in code. It's much simpler (and more fun and less boring and, of course, less error-prone).
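For illustration, here is a minimal sketch of the flow-layout approach; the item size, spacing, and the @"CardCell" identifier are placeholders, and the view controller is assumed to adopt UICollectionViewDataSource and UICollectionViewDelegate:
- (void)viewDidLoad {
    [super viewDidLoad];

    UICollectionViewFlowLayout *layout = [[UICollectionViewFlowLayout alloc] init];
    layout.itemSize = CGSizeMake(44, 44);               // equal-sized buttons
    layout.minimumInteritemSpacing = 8;                 // common horizontal distance
    layout.minimumLineSpacing = 8;                      // common vertical distance
    layout.sectionInset = UIEdgeInsetsMake(8, 8, 8, 8); // padding around the grid

    UICollectionView *grid = [[UICollectionView alloc] initWithFrame:self.view.bounds
                                                collectionViewLayout:layout];
    grid.dataSource = self;
    grid.delegate = self;
    [grid registerClass:[UICollectionViewCell class] forCellWithReuseIdentifier:@"CardCell"];
    [self.view addSubview:grid];
}

- (NSInteger)collectionView:(UICollectionView *)collectionView numberOfItemsInSection:(NSInteger)section {
    return 30; // one item per button
}

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath {
    UICollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"CardCell" forIndexPath:indexPath];
    cell.backgroundColor = [UIColor lightGrayColor]; // style the "button" here
    return cell;
}
The flow layout then handles wrapping, spacing, and rotation for free, which is exactly the part that explodes into conflicting constraints when done by hand in Interface Builder.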
1.) Instead of setting each button up in the interface builder just create the container (a UIView) that the whole grid should fit inside. Add constraints to that container for how you would want that to expand and contract with screen size.
2.) Link that container UIView to your .h view controller class and name it gridContainer or whatever.
3.) Create a property in your .h class:
@property (strong, nonatomic) NSMutableArray *twoDimensionalArrayContainingRowsOfCardButtons;
4.) Then:
- (void)viewDidLoad {
    [super viewDidLoad];
    // other stuff you're doing to set up your app

    self.twoDimensionalArrayContainingRowsOfCardButtons = [NSMutableArray new];

    // Do this inside the main thread to make sure all your other views are laid out before this starts.
    // Sometimes when you do layout stuff before the rest of the view is set up from Interface Builder you will get weird results.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self createTwoDimensionalArrayHoldingCardButtons];
        [self displayCardGrid];
    });
}

- (void)createTwoDimensionalArrayHoldingCardButtons {
    NSMutableArray *arrayWithRowsOfButtons = [NSMutableArray new];
    for (int x = 0; x < 6; x++) {
        NSMutableArray *arrayOfButtonsAtRowX = [NSMutableArray new];
        for (int i = 0; i < 6; i++) {
            CGRect rect = self.gridContainer.bounds;
            CGSize cellSize = CGSizeMake(rect.size.width / 6, rect.size.height / 6);
            UIButton *buttonInColumnI = [[UIButton alloc] initWithFrame:CGRectMake(cellSize.width * i, cellSize.height * x, cellSize.width, cellSize.height)];
            [buttonInColumnI setImage:[UIImage imageNamed:@"yourCardImage"] forState:UIControlStateNormal];
            [buttonInColumnI addTarget:self action:@selector(yourButtonAction:) forControlEvents:UIControlEventTouchUpInside];
            [arrayOfButtonsAtRowX addObject:buttonInColumnI];
        }
        [arrayWithRowsOfButtons addObject:arrayOfButtonsAtRowX];
    }
    self.twoDimensionalArrayContainingRowsOfCardButtons = arrayWithRowsOfButtons;
}
- (void)displayCardGrid {
    for (int x = 0; x < self.twoDimensionalArrayContainingRowsOfCardButtons.count; x++) {
        NSMutableArray *arrayOfButtonsAtColumnsAtRowX = self.twoDimensionalArrayContainingRowsOfCardButtons[x];
        for (int i = 0; i < arrayOfButtonsAtColumnsAtRowX.count; i++) {
            UIButton *buttonAtColumnI = arrayOfButtonsAtColumnsAtRowX[i];
            [self.gridContainer addSubview:buttonAtColumnI];
        }
    }
}
- (void)yourButtonAction:(UIButton *)tappedCard {
    // To swap the card image on your tapped button
    for (int x = 0; x < self.twoDimensionalArrayContainingRowsOfCardButtons.count; x++) {
        NSMutableArray *arrayOfButtonsAtColumnsAtRowX = self.twoDimensionalArrayContainingRowsOfCardButtons[x];
        for (int i = 0; i < arrayOfButtonsAtColumnsAtRowX.count; i++) {
            UIButton *buttonAtColumnI = arrayOfButtonsAtColumnsAtRowX[i];
            if (tappedCard == buttonAtColumnI) {
                int row = x;
                int column = i;
                // Now you can save that the user has tapped something at this row and column.
                // While you're here, you can update the card image.
                [tappedCard setImage:[UIImage imageNamed:@"CardExposedImage"] forState:UIControlStateNormal];
            }
        }
    }
}
I'm writing this all in the box here without running it, so hopefully that works for you. Ended up being a few more lines than expected.
Edit: forgot to add that I separated the building of the card buttons and the displaying of them so that you could call the display method separately. With the property, you also have a retained source of all the cards so you can just fetch them out of the array and change what you need, as needed.

Crash running OpenGL on iOS after memory warning

I am having trouble with an app with an OpenGL component that crashes on iPad. The app throws a memory warning and crashes, but it doesn't appear to be using that much memory. Am I missing something?
The app is based on the Vuforia augmented reality system (it borrows heavily from the ImageTargets sample). I have about 40 different models I need to include in my app, so in the interests of memory conservation I am loading the objects (and rendering textures etc.) dynamically as I need them. I tried to copy the UIScrollView lazy loading idea. The three 4 MB allocations are the textures I have loaded into memory, ready for when the user selects a different model to display.
Anything odd in here?
I don't know much at all about OpenGL (part of the reason why I chose the Vuforia engine). Is there anything in the Instruments screenshot below (omitted) that should concern me? Note that Vuforia's ImageTargets sample app also has Uninitialized Texture Data (about one per frame), so I don't think this is the problem.
Any help would be appreciated!!
Here is the code that generates the 3D objects (in EAGLView):
// Load the textures for use by OpenGL
-(void)loadATexture:(int)texNumber {
    if (texNumber >= 0 && texNumber < [tempTextureList count]) {
        currentlyChangingTextures = YES;
        [textureList removeAllObjects];
        [textureList addObject:[tempTextureList objectAtIndex:texNumber]];

        Texture *tex = [[Texture alloc] init];
        NSString *file = [textureList objectAtIndex:0];
        [tex loadImage:file];
        [textures replaceObjectAtIndex:texNumber withObject:tex];
        [tex release];

        // Remove all old textures outside of the one we're interested in and the two on either side of the picker.
        for (int i = 0; i < [textures count]; ++i) {
            if (i < targetIndex - 1 || i > targetIndex + 1) {
                [textures replaceObjectAtIndex:i withObject:@""];
            }
        }

        // Render - Generate the OpenGL texture objects
        GLuint nID;
        Texture *texture = [textures objectAtIndex:texNumber];
        glGenTextures(1, &nID);
        [texture setTextureID:nID];
        glBindTexture(GL_TEXTURE_2D, nID);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, [texture width], [texture height], 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)[texture pngData]);

        // Set up objects using the above textures.
        Object3D *obj3D = [[Object3D alloc] init];
        obj3D.numVertices = rugNumVerts;
        obj3D.vertices = rugVerts;
        obj3D.normals = rugNormals;
        obj3D.texCoords = rugTexCoords;
        obj3D.texture = [textures objectAtIndex:texNumber];
        [objects3D replaceObjectAtIndex:texNumber withObject:obj3D];
        [obj3D release];

        // Remove all objects except the one currently visible and the ones on either side of the picker.
        for (int i = 0; i < [tempTextureList count]; ++i) {
            if (i < targetIndex - 1 || i > targetIndex + 1) {
                Object3D *obj3D = [[Object3D alloc] init];
                [objects3D replaceObjectAtIndex:i withObject:obj3D];
                [obj3D release];
            }
        }

        if (QCAR::GL_20 & qUtils.QCARFlags) {
            [self initShaders];
        }

        currentlyChangingTextures = NO;
    }
}
Here is the code in the Texture object.
- (id)init
{
    self = [super init];
    if (self) {
        pngData = NULL;
    }
    return self;
}

- (BOOL)loadImage:(NSString *)filename
{
    BOOL ret = NO;

    // Build the full path of the image file
    NSString *resourcePath = [[NSBundle mainBundle] resourcePath];
    NSString *fullPath = [resourcePath stringByAppendingPathComponent:filename];

    // Create a UIImage with the contents of the file
    UIImage *uiImage = [UIImage imageWithContentsOfFile:fullPath];
    if (uiImage) {
        // Get the inner CGImage from the UIImage wrapper
        CGImageRef cgImage = uiImage.CGImage;

        // Get the image size
        width = CGImageGetWidth(cgImage);
        height = CGImageGetHeight(cgImage);

        // Record the number of channels
        channels = CGImageGetBitsPerPixel(cgImage) / CGImageGetBitsPerComponent(cgImage);

        // Generate a CFData object from the CGImage object (a CFData object represents an area of memory)
        CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));

        // Copy the image data for use by OpenGL
        ret = [self copyImageDataForOpenGL:imageData];
        CFRelease(imageData);
    }

    return ret;
}

- (void)dealloc
{
    if (pngData) {
        delete[] pngData;
    }
    [super dealloc];
}

@end

@implementation Texture (TexturePrivateMethods)

- (BOOL)copyImageDataForOpenGL:(CFDataRef)imageData
{
    if (pngData) {
        delete[] pngData;
    }
    pngData = new unsigned char[width * height * channels];

    const int rowSize = width * channels;
    const unsigned char *pixels = (unsigned char *)CFDataGetBytePtr(imageData);

    // Copy the row data from bottom to top
    for (int i = 0; i < height; ++i) {
        memcpy(pngData + rowSize * i, pixels + rowSize * (height - 1 - i), width * channels);
    }

    return YES;
}

@end
Odds are, you're not seeing the true memory usage of your application. As I explain in this answer, the Allocations instrument hides memory usage from OpenGL ES, so you can't use it to measure the size of your application. Instead, use the Memory Monitor instrument, which I'm betting will show that your application is using far more RAM than you think. This is a common problem people run into when trying to optimize OpenGL ES on iOS using Instruments.
If you're concerned about which objects or resources could be accumulating in memory, you can use the heap shots functionality of the Allocations instrument to identify specific resources that are allocated but never removed when performing repeated tasks within your application. That's how I've tracked down textures and other items that were not being properly deleted.
Seeing some code would help, but I can make some guesses:
I have about 40 different models I need to include in my app, so in the interests of memory conservation I am loading the objects (and rendering textures etc.) dynamically as I need them. I tried to copy the UIScrollView lazy loading idea. The three 4 MB allocations are the textures I have loaded into memory, ready for when the user selects a different model to display.
(...)
This kind of approach is not ideal, and it's most likely the reason for your problems if the memory is not properly deallocated. Eventually you'll run out of memory, and then your process dies if you don't take proper precautions. It's very likely that the engine used has some memory leak, exposed by your access scheme.
Today's operating systems don't differentiate between RAM and storage. To them it's all just memory, and all address space is backed by the block storage system anyway (whether there's actually a storage device attached doesn't matter).
So here's what you should do: instead of read()-ing your models into memory, you should memory-map them (mmap). This tells the OS "this part of storage should be visible in address space", and the OS kernel will do all the necessary transfers when they're due.
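A minimal sketch of the idea, using plain POSIX calls (the modelPath variable is illustrative, not from the question):
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

// Map the model file into the address space instead of copying it into a buffer;
// the kernel pages the bytes in and out on demand, so "loading" 40 models no
// longer means holding 40 buffers in RAM.
int fd = open([modelPath fileSystemRepresentation], O_RDONLY);
if (fd >= 0) {
    struct stat sb;
    fstat(fd, &sb);
    const void *modelBytes = mmap(NULL, (size_t)sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd); // the mapping stays valid after the descriptor is closed
    if (modelBytes != MAP_FAILED) {
        // ... use modelBytes as a read-only view of the file ...
        munmap((void *)modelBytes, (size_t)sb.st_size); // drop the mapping when done
    }
}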
Note that Vuforia's ImageTargets sample app also has Uninitialized Texture Data (about one per frame), so I don't think this is the problem.
This is a strong indicator that OpenGL texture objects don't get properly deleted.
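If that's what's happening, every pass through loadATexture: generates a new texture name with glGenTextures without ever deleting the old one, so the GPU-side copies pile up. A minimal sketch of the fix, assuming Texture exposes its GL name via a textureID getter to match the setTextureID: call in the question's code:
// Before generating a replacement texture, delete the old GL texture object
// so its memory is actually reclaimed by the driver.
GLuint oldID = [texture textureID];
if (oldID != 0) {
    glDeleteTextures(1, &oldID);
}

GLuint nID;
glGenTextures(1, &nID);
[texture setTextureID:nID];
glBindTexture(GL_TEXTURE_2D, nID);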
Any help would be appreciated!!
My advice: stop programming like it's the 1970s. Today's computers and operating systems work differently. See also http://www.varnish-cache.org/trac/wiki/ArchitectNotes

MapTypeStyle in MapKit

I'd like to know if there is any way to configure MapKit maps the way we can with the MapTypeStyle object in the Google Maps API.
If I refer to Apple's docs, MKMapView has a mapType option that takes an MKMapType constant, but there are no style parameters like MapOptions with MapTypeStyle and the MapTypeStyler, which are very powerful for quickly customizing maps.
So my question is: is there any way to achieve something similar with the MapKit framework, and if not, what is the best framework/library to do this? I'm thinking of MapBox and similar products.
There are a few options for you my friend. You could use one of these frameworks
http://cloudmade.com/products/iphone-sdk
https://github.com/route-me/route-me
Or you could just use MapBox. Their API looks pretty good.
Alternatively, you can supply your own map tiles and overlay them on MapKit. Something like this in an MKOverlayView:
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context {
    NSURL *fileURL = [(HeatMap *)self.overlay localUrlForStyle:@"alien" withMapRect:mapRect andZoomScale:zoomScale];
    NSData *imageData = [NSData dataWithContentsOfURL:fileURL];
    if (imageData != nil) {
        UIImage *img = [UIImage imageWithData:imageData]; // render the tile we just loaded
        // Perform the image render on the current UI context
        UIGraphicsPushContext(context);
        [img drawInRect:[self rectForMapRect:mapRect] blendMode:kCGBlendModeNormal alpha:1.0];
        UIGraphicsPopContext();
    }
}
Also check this out if you want unsupported "terrain" mode
http://openradar.appspot.com/9621632
I'm actually in the middle of a program that requires overlaying tiles over a map. This example has been very helpful. You'll want to look into MKOverlay and MKOverlayView. The project that I am doing involves using gheat. I am accessing the tiles through an NSURLConnection and storing them locally. A gist of my implementation.
There is no way to customize the map styles natively with mapkit. Your only option for this is to opt for a hybrid app approach, and then customize the styles using html/javascript in the page itself.
As drawing the tiles takes place in a private class called MKMapTileView, you cannot simply write a category. You have to implement another class for the custom drawing; its methods will be used to overload the implementation of MKMapTileView at runtime:
Header file:
@interface MyColorMap : NSObject
+ (void)overLoadMethods:(Class)destinationClass;
@end
Implementation:
#import "MyColorMap.h"
#import <objc/runtime.h>
#implementation MyColorMap
+ (void)overLoadMethods:(Class)destinationClass {
// get the original method for drawing a tile
Method originalDrawLayer = class_getInstanceMethod(destinationClass, #selector(drawLayer:inContext:));
// get the method we will replace with the original implementation of 'drawLayer:inContext:' later
Method backupDrawLayer = class_getInstanceMethod([self class], #selector(backupDrawLayer:inContext:));
// get the method we will use to draw our own colors
Method myDrawLayer = class_getInstanceMethod([self class], #selector(myDrawLayer:inContext:));
// dito with the implementations
IMP impOld = method_getImplementation(originalDrawLayer);
IMP impNew = method_getImplementation(myDrawLayer);
// replace the original 'drawLayer:inContext:' with our own implementation
method_setImplementation(originalDrawLayer, impNew);
// set the original 'drawLayer:inContext:' implementation to our stub-method, so wie can call it later on
SEL selector = method_getName(backupDrawLayer);
const char *types = method_getTypeEncoding(backupDrawLayer);
class_addMethod(destinationClass, selector, impOld, types);
}
- (void)backupDrawLayer:(CALayer*)l inContext:(CGContextRef)c {
// stub method, implementation will never be called. The only reason we implement this is so we can call the original method durring runtime
}
- (void)myDrawLayer:(CALayer*)l inContext:(CGContextRef)c {
// set background to white so wie can use it for blendmode
CGContextSetFillColorWithColor(c, [[UIColor whiteColor] CGColor]);
CGContextFillRect(c, CGContextGetClipBoundingBox(c));
// set blendmode so the map will show as grayscale
CGContextSetBlendMode(c, kCGBlendModeLuminosity);
// kCGBlendModeExclusion for inverted colors etc.
// calling the stub-method which will become the original method durring runtime
[self backupDrawLayer:l inContext:c];
// if you want more advanced manipulations you can alter the context after drawing:
// int w = CGBitmapContextGetWidth(c);
// int h = CGBitmapContextGetHeight(c);
//
// unsigned char* data = CGBitmapContextGetData(c);
// if (data != NULL) {
// int maxY = h;
// for(int y = 0; y<maxY; y++) {
// for(int x = 0; x<w; x++) {
//
// int offset = 4*((w*y)+x);
// char r = data[offset];
// char g = data[offset+1];
// char b = data[offset+2];
// char a = data[offset+3];
//
// // do what ever you want with the pixels
//
// data[offset] = r;
// data[offset+1] = g;
// data[offset+2] = b;
// data[offset+3] = a;
// }
// }
// }
}
Now you have to call [MyColorMap overLoadMethods:NSClassFromString(@"MKMapTileView")] at some point before using an MKMapView.
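A minimal sketch of one place to make that call (using the app delegate is just an assumption; any point before the first map view is created will do):
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // swap in the custom tile drawing before any MKMapView is instantiated
    [MyColorMap overLoadMethods:NSClassFromString(@"MKMapTileView")];
    return YES;
}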

Autorelease issue with ARC (iOS)

I have a little problem with ARC: I understand why it does what it does, but not how to prevent it. This code is part of a sample "Tic-tac-toe" game.
The problem is allocating a new UIView (subclassed with the name Tile) inside a loop. Once a tile is set up, I add the Tile (the UIView) to the current view and add it to the game controller's array of tiles, which is used later on for reference.
Now the problem is that with every iteration of the loop, the tile objects get autoreleased, and I want them to be retained so I can store them in the game controller's tile container. How do I make it remember the tiles?
This is the code on the gameDelegate:
- (void)addTile:(Tile *)tile {
    NSLog(@"Add tile %@", self.tiles);
    [tiles addObject:tile];
}
The output of the last add is here:
Posted on pastebin.com for better formatting
At this point the whole local tiles array inside the game controller is output and, as expected, it contains a list of Tile objects.
This is the code in board.m (subclass of UIView).
-(void) drawBoard
{
    NSLog(@"Drawboard called");
    for (int j = 0; j < 3; j++) {
        for (int i = 0; i < 3; i++) {
            CGRect frame = CGRectMake(tilex, tiley, hlineDistance - 1, vlineDistance - 1);
            Tile *tile = [[Tile alloc] initWithFrame:frame];
            [self addSubview:tile];
            // ...
            [self.gameDelegate addTile:tile];
        }
        // ...
    }
    // ...
}
ARC is not the problem here. For instance, the superview (self) retains the tiles when you add them using:
[self addSubview:tile];
Verify that gameDelegate is not nil at the line:
[self.gameDelegate addTile:tile];
and that it actually adds the tiles to the array.
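Remember that messaging nil in Objective-C is a silent no-op: if either gameDelegate or the tiles array is nil, addTile: will appear to "forget" every tile without any error. A minimal defensive version, assuming tiles is a strong NSMutableArray property on the game controller:
- (void)addTile:(Tile *)tile {
    NSAssert(tile != nil, @"tried to add a nil tile");
    if (!self.tiles) {
        // lazily create the backing store; addObject: sent to a nil array does nothing
        self.tiles = [NSMutableArray array];
    }
    [self.tiles addObject:tile];
    NSLog(@"Tiles so far: %@", self.tiles);
}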
