I'm trying to input the arguments for CGContextSetRGBFillColor using a data type. For example:
NSString *colorcode = ctx, 0, 1, 0, 0;
CGContextSetRGBFillColor(colorcode);
But I get an error saying that I have too few arguments.
I want to change the arguments (ctx, 0, 1, 0, 1) sent to CGContextSetRGBFillColor depending on the user's actions.
I want to input the argument for CGContextSetRGBFillColor using a data type because its values are set in a separate view controller. Or can I directly input the arguments to CGContextSetRGBFillColor and then bring it over to the other view controller to use it?
Try using a UIColor object to store the user's selected color. You can create one like this:
UIColor *color = [UIColor colorWithRed:0 green:1 blue:0 alpha:1]; // opaque green; alpha 0 would be fully transparent
Then when it's time to use it as the fill color, you can do this:
CGContextSetFillColorWithColor(ctx, color.CGColor);
I should mention that if you are not using ARC, you need to retain and release color appropriately.
Sounds like what you really need to be doing is:
CGContextSetRGBFillColor(ctx, 0.0f, 1.0f, 0.0f, 1.0f);
Where each color component is some fraction between 0.0 and 1.0.
Why are you using an NSString?
Here is the documentation on Apple's website.
I want to input the argument for CGContextSetRGBFillColor using a data type because its values are set in a separate view controller.
You may be interested in the CGColor type, or, on iOS specifically, UIColor.
Or can I directly input the arguments to CGContextSetRGBFillColor …
That's the only way to input the arguments to CGContextSetRGBFillColor.
… and then bring it over to the other view controller to use it?
That doesn't make sense. Bring what over?
If you want to bring the color from one view controller to another, that's best done by creating a color object—either a CGColor or a UIColor—and passing that.
Hi, can we get a hex color string from a UIImage?
In the method below, if I pass [UIColor redColor] it works, but if I pass
#define THEME_COLOR [UIColor colorWithPatternImage:[UIImage imageNamed:@"commonImg.png"]]
then it does not work.
+(NSString *)hexValuesFromUIColor:(UIColor *)color {
    // Grayscale colors have only two components (white, alpha), so expand them to RGB first.
    if (CGColorGetNumberOfComponents(color.CGColor) < 4) {
        const CGFloat *components = CGColorGetComponents(color.CGColor);
        color = [UIColor colorWithRed:components[0] green:components[0] blue:components[0] alpha:components[1]];
    }
    if (CGColorSpaceGetModel(CGColorGetColorSpace(color.CGColor)) != kCGColorSpaceModelRGB) {
        return [NSString stringWithFormat:@"#FFFFFF"];
    }
    const CGFloat *components = CGColorGetComponents(color.CGColor);
    return [NSString stringWithFormat:@"#%02X%02X%02X",
            (int)(components[0] * 255.0),
            (int)(components[1] * 255.0),
            (int)(components[2] * 255.0)];
}
Are there any other methods which can directly get a hex color from a UIImage?
You can't access the raw data directly, but by getting the CGImage of this image you can access it.
You can't do it directly from the UIImage, but you can render the image into a bitmap context, with a memory buffer you supply, then test the memory directly. That sounds more complex than it really is, but may still be more complex than you wanted to hear.
If you have Erica Sadun's iPhone Developer's Cookbook there's good coverage of it from page 54. I'd recommend the book overall, so worth getting that if you don't have it.
I arrived at almost exactly the same code independently, but hit one bug that looks like it may be in Sadun's code too. In the pointInside method, the point and size values are floats and are multiplied together as floats before being cast to an int. This is fine if your coordinates are discrete values, but in my case I was supplying sub-pixel values, so the formula broke down. The fix is easy once you've identified the problem, of course: just cast each coordinate to an int before multiplying. In Sadun's case it would be:
long startByte = (((int)point.y * (int)size.width) + (int)point.x) * 4;
Also, Sadun's code, as well as my own, is only interested in alpha values, so we use 8-bit pixels that hold the alpha value only. Changing the CGBitmapContextCreate call should allow you to get actual colour values too (obviously, if you have more than 8 bits per pixel, you will have to factor that into your pointInside formula too).
I'm trying to render text on a map using an MKOverlayRenderer. I have an existing, functional MKOverlayRenderer rendering a set of points, so my only problem is rendering a piece of text for each point within the '-(void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context' function.
All solutions I have found through SO and Google use annotations or UILabels. But I want to have the text drawing code in the same location as the code rendering the points. Also there are about 10,000 points, though I'm ensuring it's not rendering them all at the same time through zoom and bounds checking. I am reasonably sure I don't want to create 10,000 objects with the other solutions.
This is the current test code I have to try to render one of the 'Test Text' items. It is a combination of some of the methods I have found on the net to try to render something.
CGPoint* point = self.pointList.pointArray + pointIndex;
CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 1.0);
CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);
CGContextSelectFont(context, "Helvetica", 20.f, kCGEncodingFontSpecific);
CGContextSetTextDrawingMode(context, kCGTextFill);
CGAffineTransform xform = CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, 0.0);
CGContextSetTextMatrix(context, xform);
CGContextShowTextAtPoint(context, point->x, point->y, "Test Text 1", 11);
CGContextShowTextAtPoint(context, 10, 10, "Test Text 2", 11);
CGContextShowText(context, "Test Text 4", 11);
UIFont* font = [UIFont fontWithName:@"Helvetica" size:12.0];
[@"Test Text 3" drawAtPoint:*point withFont:font];
This is my first SO question, so sorry if it isn't quite right.
Edit: I just saw the text when zoomed in as far as I can go, so I realise I haven't been accounting for the zoom scale. I assume I need to do a scale transform before rendering to account for it. I haven't solved it yet, but I think I am on my way.
I have solved it. Sorry for posting this, but I was at my wit's end and thought I needed help.
The line that rendered was:
CGContextShowTextAtPoint(context, point->x, point->y, "Test Text 1", 11);
That is a deprecated function, but I don't know any other way to render to a specific context.
To fix it, the affine transform became:
CGAffineTransform xform = CGAffineTransformMake(1.0 / zoomScale, 0.0, 0.0, -1.0 / zoomScale, 0.0, 0.0);
The other error was that the 'select font' call needed to become:
CGContextSelectFont(context, "Helvetica", 12.f, kCGEncodingMacRoman);
I had copied the other encoding from some example code I had seen on the net, but it caused the text to come out with the wrong characters.
If there is still a way I can do it without using the deprecated CGContextShowTextAtPoint function I would still love to know.
I need to detect which environment the app user is in: for example, is he in a forest, in a city, or near a sea?
So far I have just made an image from the map, calculated the average pixel, and compared this color to green, blue, brown, and gray reference colors.
But this is very inaccurate, since there could be a sea nearby even though the average color isn't blueish at all. Also, the comparison of colors does not always match the expectations you have.
Is there any better way to detect the environment the user is in? Since it has to work worldwide, I do not think there is any service which can give me reliable information about forests, seas, or mountains.
Maybe someone has an idea how to solve this, or a hint for me.
Here you can see how I have tried this so far (it's Objective-C code, but I am glad for any answer, including answers that have nothing to do with iOS app development).
- (UIColor *)mergedColor
{
    CGSize size = {1, 1};
    UIGraphicsBeginImageContext(size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(ctx, kCGInterpolationMedium);
    // Draw the whole image into a 1x1 context to average every pixel.
    [self drawInRect:(CGRect){.size = size} blendMode:kCGBlendModeCopy alpha:1];
    uint8_t *data = CGBitmapContextGetData(ctx);
    // The reversed indices assume the context's BGRA byte layout.
    UIColor *color = [UIColor colorWithRed:data[2] / 255.0f
                                     green:data[1] / 255.0f
                                      blue:data[0] / 255.0f
                                     alpha:1];
    UIGraphicsEndImageContext();
    return color;
}
UIColor *forrest = [UIColor greenColor];
const CGFloat *components1 = CGColorGetComponents([[self.view mergedColor] CGColor]);
const CGFloat *components2 = CGColorGetComponents([forrest CGColor]);
double fDistance = sqrt(pow(components1[0] - components2[0], 2) + pow(components1[1] - components2[1], 2) + pow(components1[2] - components2[2], 2));
// The components are in the 0.0-1.0 range, so normalize by the maximum
// possible distance, which is sqrt(3), not sqrt(3 * 255^2).
double fPercentage = fDistance / sqrt(3.0);
This idea for a solution does not involve image processing. If you know your latitude and longitude (and this is easy to get from Core Location), you can pass it to a geocoding service.
For example, when I look at Google's Geocoding API, I see a section for "Address Types and Address Component Types", and types include:
natural_feature indicates a prominent natural feature.
airport indicates an airport.
park indicates a named park.
point_of_interest indicates a named point of interest. Typically, these "POI"s are prominent local entities that don't easily fit in another category, such as "Empire State Building" or "Statue of Liberty."
So there may be enough in that API for you to work with.
I'm trying to customize a grouped UITableViewCell's backgroundView with a gradient, based on code I found on this blog. It's a subclass of UIView used as cell.backgroundView.
The colors of the background's gradient are defined like this on the original code :
#define TABLE_CELL_BACKGROUND { 1, 1, 1, 1, 0.866, 0.866, 0.866, 1} // #FFFFFF and #DDDDDD
And then, used like this on the drawRect of the subclassed backgroundView:
CGFloat components[8] = TABLE_CELL_BACKGROUND;
myGradient = CGGradientCreateWithColorComponents(myColorspace, components , locations, 2);
I'm trying to implement a method that sets the start and end colors for the gradient. It takes two UIColors and then fills in a global float array, float startAndEndColors[8] (declared in the .h / @interface), for later use:
-(void)setColorsFrom:(UIColor *)start to:(UIColor *)end {
    float red = 0.0, green = 0.0, blue = 0.0, alpha = 0.0, red1 = 0.0, green1 = 0.0, blue1 = 0.0, alpha1 = 0.0;
    [start getRed:&red green:&green blue:&blue alpha:&alpha];
    [end getRed:&red1 green:&green1 blue:&blue1 alpha:&alpha1];
    // This line works fine, my array is successfully filled, just for test
    float colorsTest[8] = {red, green, blue, alpha, red1, green1, blue1, alpha1};
    // But for this one, I just have an error:
    // "Expected expression"
    //  \
    //   v
    startAndEndColors = {red, green, blue, alpha, red1, green1, blue1, alpha1};
}
But it throws the error "Expected expression" at the assignment. I tried with CGFloat, and desperately added random consts, but I quickly ran out of ideas. I simply don't get it: why can't I fill my float array this way? What am I doing wrong?
Comment added as answer:
A brace initializer like that only works when the array is being defined. If you are assigning to an ivar (instance variable), you need to go through it element by element, because the memory has already been allocated at initialization. So use startAndEndColors[0] = ..., etc.
As for your follow up question: No, there is no way to assign values in that way to memory that has already been initialized in the allocation phase. If you used std::vector or other objects then it would be possible.
A way around that would be something like this in your header
CGFloat *startAndEndColors;
And then something like this in your implementation
float colorsTest[8] = {red, green, blue, alpha, red1, green1, blue1, alpha1};
startAndEndColors = colorsTest;
That way you can initialize it the way you want to, but beware: colorsTest is a stack-local array, so the pointer dangles as soon as the method returns; copy the data (e.g. with memcpy) or allocate it on the heap instead. You also have no guarantee of the number of elements behind startAndEndColors, and you could later point it at something of the wrong size and crash by accessing outside its bounds.
I am trying to set a color to my CGContextSetRGBFillColor in this way:
- (void) drawArrowWithContext:(CGContextRef)context atPoint:(CGPoint)startPoint withSize: (CGSize)size lineWidth:(float)width arrowHeight:(float)aheight andColor:(UIColor *)color
{
CGContextSetRGBFillColor (context,color,color,color,1);
CGContextSetRGBStrokeColor (context, color.CGColor);
....
}
...but I am getting in both cases the error "Too few arguments, should be 5, are 2". How can I fix this issue?
Seeing your other question, I would suggest that you stop for an hour and do some reading of the docs rather than simply trying to hammer your way through without understanding or learning anything.
You have a problem in your code: you are passing in a UIColor and trying to use it in a function which takes floats as arguments. Either change the params for your method, or use a different Core Graphics function which can accept a UIColor (or rather the CGColor representation of it):
CGContextSetFillColorWithColor(context, [color CGColor]);
CGContextSetStrokeColorWithColor(context,[color CGColor]);
From the documentation:
void CGContextSetRGBFillColor (
CGContextRef c,
CGFloat red,
CGFloat green,
CGFloat blue,
CGFloat alpha
);
All you need to do is break apart your UIColor using
- (BOOL)getRed:(CGFloat *)red green:(CGFloat *)green blue:(CGFloat *)blue alpha:(CGFloat *)alpha