Algorithm to detect GPS environment you're in - iOS

I need to detect which environment the app user is in, for example whether they are in a forest, in a city, or near the sea.
So far I have taken an image of the map, calculated the average pixel color, and compared that color to reference green, blue, brown, and gray colors.
But this is very inaccurate: there could be a sea nearby even though the average color isn't bluish at all, and the color comparison does not always match expectations.
Is there a better way to detect the environment the user is in? Since it has to work worldwide, I doubt there is any service that can give me reliable information about forests, seas, or maybe mountains.
Maybe someone has an idea how to solve this or a hint for me.
Here is how I have tried it so far (it's Objective-C code, but I'm glad for any answer, including answers that have nothing to do with iOS app development).
- (UIColor *)mergedColor
{
    // Draw the whole image into a 1x1 context so Core Graphics averages the pixels.
    CGSize size = {1, 1};
    UIGraphicsBeginImageContext(size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(ctx, kCGInterpolationMedium);
    [self drawInRect:(CGRect){.size = size} blendMode:kCGBlendModeCopy alpha:1];

    // Read the single averaged pixel back out of the context's buffer.
    uint8_t *data = CGBitmapContextGetData(ctx);
    UIColor *color = [UIColor colorWithRed:data[2] / 255.0f
                                     green:data[1] / 255.0f
                                      blue:data[0] / 255.0f
                                     alpha:1];
    UIGraphicsEndImageContext();
    return color;
}
UIColor *forest = [UIColor greenColor];
// mergedColor is a UIImage category method, so mapImage stands in for a UIImage snapshot of the map.
const CGFloat *components1 = CGColorGetComponents([[mapImage mergedColor] CGColor]);
const CGFloat *components2 = CGColorGetComponents([forest CGColor]);
double fDistance = sqrt(pow(components1[0] - components2[0], 2) + pow(components1[1] - components2[1], 2) + pow(components1[2] - components2[2], 2));
// CGColor components are in the range 0..1, so the maximum possible distance is sqrt(3).
double fPercentage = fDistance / sqrt(3.0);

This idea for a solution does not involve image processing.
But if you know your latitude and longitude (and this is easy to get from CoreLocation), you can pass it to a Geocoding service.
For example, when I look at Google's Geocoding API, I see a section for "Address Types and Address Component Types", and types include:
natural_feature - indicates a prominent natural feature.
airport - indicates an airport.
park - indicates a named park.
point_of_interest - indicates a named point of interest. Typically, these "POIs" are prominent local entities that don't easily fit in another category, such as "Empire State Building" or "Statue of Liberty".
So there may be enough in that API for you to work with.
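If you would rather stay with Apple's frameworks, here is a minimal sketch of the same idea using Core Location's CLGeocoder. The CLPlacemark fields ocean, inlandWater, and areasOfInterest are the closest equivalents to the types above; the geocoder property, the logging, and the fallback behaviour are just illustrative, not a complete solution:

#import <CoreLocation/CoreLocation.h>

// Sketch: reverse-geocode a coordinate and inspect the placemark for natural features.
// Assumes a strong `geocoder` property so the request stays alive until the handler runs.
- (void)classifyEnvironmentForLocation:(CLLocation *)location
{
    self.geocoder = [[CLGeocoder alloc] init];
    [self.geocoder reverseGeocodeLocation:location
                        completionHandler:^(NSArray<CLPlacemark *> *placemarks, NSError *error) {
        CLPlacemark *placemark = placemarks.firstObject;
        if (placemark == nil) {
            NSLog(@"Reverse geocoding failed: %@", error);
            return;
        }
        if (placemark.ocean != nil) {
            NSLog(@"Near the sea: %@", placemark.ocean);
        } else if (placemark.inlandWater != nil) {
            NSLog(@"Near inland water: %@", placemark.inlandWater);
        } else if (placemark.areasOfInterest.count > 0) {
            NSLog(@"Named areas of interest: %@", placemark.areasOfInterest);
        } else {
            NSLog(@"No prominent natural feature reported for this location");
        }
    }];
}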

Related

Find average color of an area inside UIImageView

I am writing this method to calculate the average R,G,B values of an image. The following method takes a UIImage as an input and returns an array containing the R,G,B values of the input image. I have one question though: How/Where do I properly release the CGImageRef?
- (NSArray *)getAverageRGBValuesFromImage:(UIImage *)image
{
    CGImageRef rawImageRef = [image CGImage];

    // This function returns the raw pixel values
    const UInt8 *rawPixelData = CFDataGetBytePtr(CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef)));

    NSUInteger imageHeight = CGImageGetHeight(rawImageRef);
    NSUInteger imageWidth = CGImageGetWidth(rawImageRef);

    // Here I sum the R,G,B values and get the average over the whole image
    int i = 0;
    unsigned int red = 0;
    unsigned int green = 0;
    unsigned int blue = 0;

    for (int column = 0; column < imageWidth; column++)
    {
        int r_temp = 0;
        int g_temp = 0;
        int b_temp = 0;

        for (int row = 0; row < imageHeight; row++) {
            i = (row * imageWidth + column) * 4;
            r_temp += (unsigned int)rawPixelData[i];
            g_temp += (unsigned int)rawPixelData[i + 1];
            b_temp += (unsigned int)rawPixelData[i + 2];
        }

        red += r_temp;
        green += g_temp;
        blue += b_temp;
    }

    NSNumber *averageRed = [NSNumber numberWithFloat:(1.0 * red) / (imageHeight * imageWidth)];
    NSNumber *averageGreen = [NSNumber numberWithFloat:(1.0 * green) / (imageHeight * imageWidth)];
    NSNumber *averageBlue = [NSNumber numberWithFloat:(1.0 * blue) / (imageHeight * imageWidth)];

    // Then I store the result in an array
    NSArray *result = [NSArray arrayWithObjects:averageRed, averageGreen, averageBlue, nil];
    return result;
}
I tried two things:
Option 1:
I leave it as it is, but then after a few cycles (5+) the program crashes and I get the "low memory warning" error.
Option 2:
I add one line
CGImageRelease(rawImageRef)
before the method returns. Now it crashes after the second cycle with an EXC_BAD_ACCESS error on the UIImage that I pass to the method. When I run Analyze (instead of Run) in Xcode, I get the following warning at that line:
"Incorrect decrement of the reference count of an object that is not owned at this point by the caller"
Where and how should I release the CGImageRef?
Thanks!
Your memory issue results from the copied data, as others have stated. But here's another idea: Use Core Graphics's optimized pixel interpolation to calculate the average.
1. Create a 1x1 bitmap context.
2. Set the interpolation quality to medium (see later).
3. Draw your image scaled down to exactly this one pixel.
4. Read the RGB value from the context's buffer.
5. (Release the context, of course.)
This might result in better performance because Core Graphics is highly optimized and might even use the GPU for the downscaling.
Testing showed that medium quality seems to interpolate pixels by taking the average of color values. That's what we want here.
Worth a try, at least.
Edit: OK, this idea seemed too interesting not to try. So here's an example project showing the difference. Below measurements were taken with the contained 512x512 test image, but you can change the image if you want.
It takes about 12.2 ms to calculate the average by iterating over all pixels in the image data. The draw-to-one-pixel approach takes 3 ms, so it's roughly 4 times faster. It seems to produce the same results when using kCGInterpolationMedium.
I assume the huge performance gain comes from Quartz noticing that it does not have to decompress the JPEG fully and that it can use only the lower-frequency parts of the DCT. That's an interesting optimization strategy when compositing JPEG-compressed pixels at a scale below 0.5. But I'm only guessing here.
Interestingly, when using your method, 70% of the time is spent in CGDataProviderCopyData and only 30% in the pixel data traversal. This hints at a lot of time being spent in JPEG decompression.
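For reference, a rough sketch of how such a comparison can be timed on-device; the iteration count, the test image name, and which method is called are placeholders, not part of the original example project:

#import <QuartzCore/QuartzCore.h> // CACurrentMediaTime()

// Average the cost of one call over repeated runs.
UIImage *testImage = [UIImage imageNamed:@"test512"]; // placeholder 512x512 test image
const int runs = 100;
CFTimeInterval start = CACurrentMediaTime();
for (int i = 0; i < runs; i++) {
    [testImage mergedColor]; // or: [self getAverageRGBValuesFromImage:testImage]
}
CFTimeInterval msPerCall = (CACurrentMediaTime() - start) / runs * 1000.0;
NSLog(@"average per call: %.2f ms", msPerCall);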
Note: here's a late follow-up on the example image above.
You don't own the CGImageRef rawImageRef because you obtain it using [image CGImage]. So you don't need to release it.
However, you own rawPixelData because you obtained it using CGDataProviderCopyData and must release it.
CGDataProviderCopyData
Return Value:
A new data object containing a copy of the provider’s data. You are responsible for releasing this object.
I believe your issue is in this statement:
const UInt8 *rawPixelData = CFDataGetBytePtr(CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef)));
You should be releasing the return value of CGDataProviderCopyData.
Your mergedColor works great on an image loaded from a file, but not for an image captured by the camera, because CGBitmapContextGetData() on the context created from a captured sample buffer doesn't return its bitmap. I changed your code as follows. It works on any image and is as fast as your code.
- (UIColor *)mergedColor
{
    CGImageRef rawImageRef = [self CGImage];

    // Scale the image down to a one-pixel bitmap.
    uint8_t bitmapData[4];
    int bitmapByteCount;
    int bitmapBytesPerRow;
    int width = 1;
    int height = 1;

    bitmapBytesPerRow = (width * 4);
    bitmapByteCount = (bitmapBytesPerRow * height);
    memset(bitmapData, 0, bitmapByteCount);

    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(bitmapData, width, height, 8, bitmapBytesPerRow,
                                                 colorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorspace);

    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextSetInterpolationQuality(context, kCGInterpolationMedium);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), rawImageRef);
    CGContextRelease(context);

    // Little-endian with alpha first means the bytes are laid out B, G, R, A.
    return [UIColor colorWithRed:bitmapData[2] / 255.0f
                           green:bitmapData[1] / 255.0f
                            blue:bitmapData[0] / 255.0f
                           alpha:1];
}
Applying the earlier advice about CGDataProviderCopyData to the original method means holding on to the returned CFDataRef and releasing it when you are done:
CFDataRef abgrData = CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef));
const UInt8 *rawPixelData = CFDataGetBytePtr(abgrData);
...
CFRelease(abgrData);

Get hash color string from image in Objective-C

Hi, can we get a hash (hex) color string from a UIImage?
In the method below, if I pass [UIColor redColor] it works, but if I pass
#define THEME_COLOR [UIColor colorWithPatternImage:[UIImage imageNamed:@"commonImg.png"]]
then it does not work.
+ (NSString *)hexValuesFromUIColor:(UIColor *)color {
    if (CGColorGetNumberOfComponents(color.CGColor) < 4) {
        // Grayscale colors only have white and alpha components; expand them to RGB.
        const CGFloat *components = CGColorGetComponents(color.CGColor);
        color = [UIColor colorWithRed:components[0] green:components[0] blue:components[0] alpha:components[1]];
    }
    if (CGColorSpaceGetModel(CGColorGetColorSpace(color.CGColor)) != kCGColorSpaceModelRGB) {
        // Pattern colors (and other non-RGB colors) can't be converted; return white.
        return [NSString stringWithFormat:@"#FFFFFF"];
    }
    const CGFloat *components = CGColorGetComponents(color.CGColor);
    return [NSString stringWithFormat:@"#%02X%02X%02X",
            (int)(components[0] * 255.0),
            (int)(components[1] * 255.0),
            (int)(components[2] * 255.0)];
}
Are there any other methods that can directly get the hex color from a UIImage?
You can't access the raw data directly, but by getting the CGImage of this image you can access it.
You can't do it directly from the UIImage, but you can render the image into a bitmap context, with a memory buffer you supply, then test the memory directly. That sounds more complex than it really is, but may still be more complex than you wanted to hear.
If you have Erica Sadun's iPhone Developer's Cookbook there's good coverage of it from page 54. I'd recommend the book overall, so worth getting that if you don't have it.
I arrived at almost exactly the same code independently, but hit one bug that looks like it may be in Sadun's code too. In the pointInside method, the point and size values are floats and are multiplied together as floats before being cast to an int. This is fine if your coordinates are discrete values, but in my case I was supplying sub-pixel values, so the formula broke down. The fix is easy once you've identified the problem, of course - just cast each coordinate to an int before multiplying - so, in Sadun's case it would be:
long startByte = (((int)point.y * (int)size.width) + (int)point.x) * 4;
Also, Sadun's code, as well as my own, is only interested in alpha values, so we use 8-bit pixels that hold the alpha value only. Changing the CGBitmapContextCreate call should allow you to get actual colour values too (obviously, if you have more than 8 bits per pixel you will have to factor that into your pointInside formula too).
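To make that concrete, here is a minimal sketch that renders the image into a one-pixel bitmap context and formats the averaged color as a hex string. The category name is hypothetical, and reducing the image to a single averaged pixel is my own choice rather than something from the answers above:

// UIImage+HexColor.m (hypothetical category)
#import <UIKit/UIKit.h>

@implementation UIImage (HexColor)

- (NSString *)averageHexColorString
{
    uint8_t rgba[4] = {0, 0, 0, 0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // 1x1 RGBA context; drawing the whole image into it averages its pixels.
    CGContextRef context = CGBitmapContextCreate(rgba, 1, 1, 8, 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        return nil;
    }
    CGContextSetInterpolationQuality(context, kCGInterpolationMedium);
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), self.CGImage);
    CGContextRelease(context);
    // With this configuration the buffer is laid out R, G, B, A (premultiplied alpha).
    return [NSString stringWithFormat:@"#%02X%02X%02X", rgba[0], rgba[1], rgba[2]];
}

@end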

Does MKOverlayPathView need drawMapRect?

I'm having some inconsistencies modifying the Breadcrumb example, to have the CrumbPathView subclassed from MKOverlayPathView (like it's supposed to) rather than subclassed from MKOverlayView.
Trouble is, the docs are limited in explaining the difference in how these two should be implemented. For a subclass of MKOverlayPathView it's advised to use:
- createPath
- applyStrokePropertiesToContext:atZoomScale:
- strokePath:inContext:
But is this in place of drawMapRect, or in addition to it? There doesn't seem to be much point if it's in addition, because both would be used for similar implementations. But using it instead of drawMapRect leaves the line choppy and broken.
I'm also struggling to find any real-world examples of subclassing MKOverlayPathView... is there any point?
UPDATE - modified code from drawMapRect to what should work:
- (void)createPath
{
    CrumbPath *crumbs = (CrumbPath *)(self.overlay);
    CGMutablePathRef newPath = [self createPathForPoints:crumbs.points
                                               pointCount:crumbs.pointCount];
    if (newPath != nil) {
        CGPathAddPath(newPath, NULL, self.path);
        [self setPath:newPath];
    }
    CGPathRelease(newPath);
}

- (void)applyStrokePropertiesToContext:(CGContextRef)context atZoomScale:(MKZoomScale)zoomScale
{
    CGContextSetStrokeColorWithColor(context, [[UIColor greenColor] CGColor]);
    CGFloat lineWidth = MKRoadWidthAtZoomScale(zoomScale);
    CGContextSetLineWidth(context, lineWidth);
    CGContextSetLineJoin(context, kCGLineJoinRound);
    CGContextSetLineCap(context, kCGLineCapRound);
}

- (void)strokePath:(CGPathRef)path inContext:(CGContextRef)context
{
    CGContextAddPath(context, path);
    CGContextStrokePath(context);
    [self setPath:path];
}
This draws an initial line, but fails to continue it; it doesn't add the path. I've confirmed that applyStrokePropertiesToContext and strokePath are getting called on every new location.
Here's a screenshot of the broken line that results (it draws for createPath, but not after that).
Here's a screenshot of the "choppy" path that happens when drawMapRect is included with createPath.
Without having seen more of your code I'm guessing, but here goes.
I suspect the path is being broken into segments, A->B, C->D, E->F rather than a path with points A,B,C,D, E and F. To be sure of that we'd need to see what is happening to self.overlay and whether it is being reset at any point.
In strokePath you set self.path to be the one that is being stroked. I doubt that is a good idea since the stroking could happen at any time just like viewForAnnotations.
As for the choppiness, it may be a side effect of a poor bounds calculation on Apple's part. If your line ends near the boundary of a tile that Apple uses to cover the map, it would probably only prompt the map to draw the tile the line is within, but your stroke width extends into a neighbouring tile that hasn't been drawn. I'm guessing again, but you could test this out by moving the point that is just north of the W in "Queen St W" a fraction south, or by increasing the stroke width, and see if the cut-off line stays in the same place geographically.
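Following that second point, a version of strokePath:inContext: without the setPath: call might look like this (a sketch only; the rest of the subclass stays as in the question):

- (void)strokePath:(CGPathRef)path inContext:(CGContextRef)context
{
    CGContextAddPath(context, path);
    CGContextStrokePath(context);
    // No [self setPath:path] here; the overlay's path is only mutated from createPath.
}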

CGContextSetRGBFillColor too few arguments

I am trying to set a color with CGContextSetRGBFillColor in this way:
- (void)drawArrowWithContext:(CGContextRef)context atPoint:(CGPoint)startPoint withSize:(CGSize)size lineWidth:(float)width arrowHeight:(float)aheight andColor:(UIColor *)color
{
    CGContextSetRGBFillColor(context, color, color, color, 1);
    CGContextSetRGBStrokeColor(context, color.CGColor);
    ....
}
...but in both cases I am getting the error "Too few arguments, should be 5, are 2". How can I fix this issue?
Seeing your other question, I would suggest that you stop for an hour and do some reading of the docs rather than simply trying to hammer your way through without understanding or learning anything.
You have a problem in your code: you are passing in a UIColor and trying to use it in a function which takes floats as arguments. Either change the params of your method or use a different Core Graphics function which can accept a UIColor (or rather the CGColor representation of it).
CGContextSetFillColorWithColor(context, [color CGColor]);
CGContextSetStrokeColorWithColor(context,[color CGColor]);
From the documentation:
void CGContextSetRGBFillColor (
    CGContextRef c,
    CGFloat red,
    CGFloat green,
    CGFloat blue,
    CGFloat alpha
);
All you need to do is break apart your UIColor using
- (BOOL)getRed:(CGFloat *)red green:(CGFloat *)green blue:(CGFloat *)blue alpha:(CGFloat *)alpha
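Roughly like this, inside a method like drawArrowWithContext:... above; the opaque-black fallback for colors that can't be converted (such as pattern colors) is my own assumption:

CGFloat r = 0.0, g = 0.0, b = 0.0, a = 1.0;
if (![color getRed:&r green:&g blue:&b alpha:&a]) {
    // Pattern or otherwise incompatible colors can't be split into RGBA; fall back to opaque black.
    r = g = b = 0.0;
    a = 1.0;
}
CGContextSetRGBFillColor(context, r, g, b, a);
CGContextSetRGBStrokeColor(context, r, g, b, a);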

Core Plot: Random Colors for Scatter Plot(s)

I'm designing a graph that will have multiple scatter plots on it. The number of scatter plots changes for each set for data. I am trying to distinguish the scatter plots by color, however I am running into some trouble.
Currently, I have a for loop that creates a scatter plot for each object in an array. Inside the for loop, I set a color based off a random number:
lineStyle.lineColor = [CPTColor colorWithComponentRed:((arc4random()%255)/255.0) green:((arc4random()%255)/255.0) blue:((arc4random()%255)/255.0) alpha:1.0];
This works sometimes; however, the color might be too hard to distinguish from the other colors, or might be nearly white. Is there a better way to generate random colors (maybe something similar to the way the pie charts generate their colors)?
I don't think there is really anything Core Plot-specific in this question; it's really just a matter of programmatically generating color schemes.
As an idea for how to do that better than just pure random numbers, here's some almost-pseudocode for how I would do it:
float red = 0;
float green = 0;
float blue = 0;
while (needMoreColors) {   // placeholder condition: loop once per plot that still needs a color
    // pick one channel at random and bump it by a value between 0 and 0.2
    float colorToInc = (arc4random() % 100) / 100.0f;
    float incValue = (arc4random() % 100) / 500.0f;
    if (colorToInc < 0.3f) {
        red += incValue;
        if (red > 1)
            red -= 1;
    } else if (colorToInc < 0.7f) {
        green += incValue;
        if (green > 1)
            green -= 1;
    } else {
        blue += incValue;
        if (blue > 1)
            blue -= 1;
    }
    CPTColor *newColor = [CPTColor colorWithComponentRed:red green:green blue:blue alpha:1.0];
    // ... use newColor as the next plot's lineColor ...
}
The following works for me:
CGFloat red = (arc4random() % 100) / 100.0;
CGFloat green = (arc4random() % 100) / 100.0;
CGFloat blue = (arc4random() % 100) / 100.0;
UIColor *color = [UIColor colorWithRed:red green:green blue:blue alpha:1];
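Another option, a sketch of my own rather than something from either answer above: space the hues evenly around the color wheel with UIColor's HSB initializer, which tends to give colors that are easier to tell apart than purely random RGB values, and then convert each one to a CPTColor:

// Hypothetical helper: returns `count` visually distinct plot colors by spacing hues evenly.
NSArray<CPTColor *> *DistinctPlotColors(NSUInteger count)
{
    NSMutableArray<CPTColor *> *colors = [NSMutableArray arrayWithCapacity:count];
    for (NSUInteger i = 0; i < count; i++) {
        CGFloat hue = (CGFloat)i / (CGFloat)MAX(count, (NSUInteger)1);
        UIColor *uiColor = [UIColor colorWithHue:hue saturation:0.8 brightness:0.9 alpha:1.0];
        CGFloat r = 0, g = 0, b = 0, a = 1;
        [uiColor getRed:&r green:&g blue:&b alpha:&a];
        [colors addObject:[CPTColor colorWithComponentRed:r green:g blue:b alpha:a]];
    }
    return colors;
}

Each plot in the for loop would then pick its lineColor from this array by index instead of rolling new random components.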
