I saw the syntax below in some example code and am not sure I understand it.
CGRect imageRect = (CGRect){.size = baseImage.size};
Is this simply a shorthand way of initializing a CGRect equivalent to:
CGRect imageRect = CGRectMake(0, 0, baseImage.size.width, baseImage.size.height);
Is there any benefit to this syntax aside from slightly less typing?
That's C99 initializer syntax. You can use it with any structure.
The main advantage to an Objective-C developer is that it gives you some very Objective-C-like syntax, where the fields sit next to their values rather than being implied by position. (That's not to say this is intentionally similar, or that it's the only advantage, but it is nice.)
It's sometimes slightly more typing, but I use it everywhere now.
Consider:
CGRect a = CGRectMake(x + w/2, y + h/2, w, h);
In order to understand this, you need to understand the order of the parameters. You also need to be able to catch the commas easily with your eyes. In this case, that's pretty easy, but if the expressions were more complicated you'd probably be storing them in a temporary variable first.
The C99 way:
CGRect a = (CGRect){
    .origin.x = x + w/2,
    .origin.y = y + h/2,
    .size.width = w,
    .size.height = h
};
It's longer, but it's more explicit. It's also very easy to follow what is assigned to what, no matter how long the expressions are. It's also more like an Objective-C method. After all, if CGRect were a class, it would probably look like this:
CGRect *a = [[CGRect alloc] initWithOriginX:x originY:y width:w height:h];
You can also do things like this:
CGRect a = (CGRect){
    .origin = myOrigin,
    .size = computedSize
};
Here, you're building a rectangle using a CGPoint and CGSize. The compiler understands that .origin expects a CGPoint, and .size expects a CGSize. You've provided that. All's gravy.
The equivalent code would be CGRectMake(myOrigin.x, myOrigin.y, computedSize.width, computedSize.height). By using CGRectMake you're no longer expressing the same meaning to the compiler. It can't stop you from assigning part of the size to the origin, and it won't stop you from assigning the width to the height. It doesn't even give you a good clue about which is X and which is Y; if you've used APIs that provide vertical coordinates first, you'll get it wrong.
You can assign part from a structure and part from floats as well:
CGRect a = (CGRect){
    .origin = myOrigin,
    .size.width = w,
    .size.height = h
};
The CGRectMake function predates C99. I have no evidence to this effect, but I think if C99 had come first CGRectMake probably wouldn't exist at all; it's the sort of crusty function you write when your language has no direct way to perform the initialization. But now it does.
Basically, if you use it for a while, you'll probably come to prefer C99 syntax. It's more explicit, more flexible, more Objective-C-like and harder to screw up.
Unfortunately, as of Xcode 4.6, Xcode will not autocomplete structure field names inside a C99 field initializer list.
This is not just shorthand syntax; it is also useful when you want to change only the size and not the origin of a CGRect, or vice versa.
E.g.: I want to change only the size, and the position comes from a complicated expression that I don't want to touch. Normally, I would do
CGRect imageRect = CGRectMake(sprite.origin.x, sprite.origin.y, 40, 60);
With the other syntax I would do
CGRect imageRect = (CGRect){.size = sprite.size};
We can also directly use add, subtract and multiply functions,
e.g.
CGRect imageRect = (CGRect){.size = ccpAdd(sprite.size,addsize)};
Hope this helps
It looks like C99 designated initializer / GCC-style initialization: http://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html
Note that I'm not trying to set the value in a CGRect. I'm mystified as to why the compiler is issuing this claim:
let widthFactor = 0.8
let oldWidth = wholeFrameView.frame.width
let newWidth = wholeFrameView.frame.width * widthFactor // Value of type '(CGRect) -> CGRect' has no member 'width'
let newWidth2 = wholeFrameView.frame.width * 0.8 // This is fine.
width is a CGFloat, whereas your multiplier is a Double. Explicitly declare the type of your multiplier:
let widthFactor: CGFloat = 0.8
All the dimensions of a CGRect are of type CGFloat, not Double, and because Swift is especially strict about types, you can't multiply a CGFloat by a Double.
The interesting thing though, is that both CGFloat and Double implement ExpressibleByFloatLiteral. This means that 0.8, a "float literal", can be interpreted as either a Double or a CGFloat. Without context, it's always a Double, because of how the compiler is designed. Note that this only applies to float literals like 3.14, 3e8 etc, and not to identifiers of variables.
So the expression wholeFrameView.frame.width * 0.8 is valid because the compiler sees that width is a CGFloat, so it treats 0.8 as a CGFloat as well. No problems.
On the other hand, when you declare the variable widthFactor, it is automatically given the type Double, because there isn't any more context on that line to suggest to the compiler that you want it to be any other type.
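For example (a minimal sketch; the names are made up, and it assumes a Swift version from before CGFloat and Double became implicitly interchangeable in Swift 5.5):

import CoreGraphics

let someWidth: CGFloat = 320.0

let factor = 0.8                          // no context: the literal is inferred as Double
let scaled = someWidth * 0.8              // literal next to a CGFloat: inferred as CGFloat, compiles
// let broken = someWidth * factor        // error: cannot multiply CGFloat by Double
let fixed = someWidth * CGFloat(factor)   // explicit conversion compiles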
This can be fixed by directly telling the compiler that you want widthFactor to be a CGFloat:
let widthFactor: CGFloat = 0.8
Because, as others have noted, you can't multiply a Double and a CGFloat, the compiler doesn't know what you're intending.
So, instead of giving you an error about the frame property, which is what you currently think it's doing, it's actually making its best guess* and giving you an error related to the frame method. No method has a width property, so what it tells you is true.
*Of course, its best guess is not good enough, hence a human being coming here to ask a question about it. So please file a bug!
Stepping onto my soapbox: this confusion would be avoided if Apple hadn't named the method after the thing it returns. The convention of prefixing all such methods with get solves the problem. Some such convention is important in any language with first-class functions, to disambiguate between properties and methods.
The error claims wholeFrameView.frame has no member width, but the real issue is that widthFactor needs to be of type CGFloat. Try:
let newWidth = wholeFrameView.frame.size.width * CGFloat(widthFactor)
I've noticed that in iOS Xcode (using Swift 4.0), I can ask for the height of a view, V, in at least these two ways:
V.bounds.size.height
and...
V.bounds.height
Is there any actual difference between these two?
I did the option-click thing (which gives different definitions, but doesn't explain any practical difference or reason for one over the other)... and searched Stack Overflow... but here on Stack Overflow, all the results discuss the difference between bounds and frame... which is NOT what I'm asking.
V.bounds.height is a get-only property. You can't set a value for it.
Example:
self.view.bounds.height = 5
This error message results...
Cannot assign to property: 'height' is a get-only property
If you want to assign a value to this property, then you can write...
self.view.bounds.size.height = 5
So you can set a value through bounds.size.
There is a small difference. view.bounds.height is a shortcut. You cannot assign to it:
view.bounds.height = 150 won't work, but view.bounds.size.height = 150 will.
Actually, with V.bounds.size.height, height has both a getter and a setter, whereas in V.bounds.height, height is a getter-only property that always returns the height of the rectangle.
From the getter's perspective, both are the same.
In addition to the fact that view.bounds.height is read-only, there is another difference: if you have a negative width/height, view.bounds.height will return the normalized value (the positive one), while view.bounds.size.height will return the real stored value. These getters are the equivalent of CGRectGetWidth() and CGRectGetHeight() from Objective-C. All the getters on the CGRect struct (width, height, minX, minY...) return the normalized values of the CGRect's dimensions, and they are the recommended ones to use in frame computations.
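A minimal sketch of that difference, using a plain CGRect with made-up values (the same applies to a view's bounds):

import CoreGraphics

let rect = CGRect(x: 0, y: 0, width: 100, height: -50)

print(rect.size.height)   // -50.0, the raw stored value
print(rect.height)        // 50.0, normalized, like CGRectGetHeight() in Objective-C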
I'm fairly new to Swift, having only used Python and Pascal before. I was wondering if anyone could help with generating a floating point number in a range. I know that cannot be done straight up, so this is what I've created. However, it doesn't seem to work.
func location() {
    // let DivisionConstant = UInt32(1000)
    let randomIntHeight = arc4random_uniform(1000000) + 12340000
    let randomIntWidth = arc4random_uniform(1000000) + 7500000
    XRandomFloat = Float(randomIntHeight / UInt32(10000))
    YRandomFloat = Float(randomIntWidth / UInt32(10000))
    randomXFloat = CGFloat(XRandomFloat)
    randomYFloat = CGFloat(YRandomFloat)
    self.Item.center = CGPointMake(randomXFloat, randomYFloat)
}
By the looks of it, when I run it, it is not dividing by the value of DivisionConstant, so I commented it out and replaced it with a raw value. However, self.Item still appears off screen. Any advice would be greatly appreciated.
This division probably isn't what you intended:
XRandomFloat = Float(randomIntHeight / UInt32(10000))
This performs integer division (discarding any remainder) and then converts the result to Float. What you probably meant was:
XRandomFloat = Float(randomIntHeight) / Float(10000)
This is a floating point number with a granularity of approximately 1/10000.
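For example (an illustrative sketch, not the original code):

let randomIntHeight: UInt32 = 12_345_678

let truncated  = Float(randomIntHeight / UInt32(10000))   // 1234.0: integer division happens first
let fractional = Float(randomIntHeight) / Float(10000)    // ~1234.5678: both operands are Float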
Your initial code:
let randomIntHeight = arc4random_uniform(1000000) + 12340000
generates a random number between 12340000 and (12340000 + 1000000 - 1). Given your final scaling, that means a range from 1234 to 1333. This seems odd for your final goals. I assume you really meant just arc4random_uniform(12340000), but I may misunderstand your goal.
Given your comments, I think you've over-complicated this. The following should give you a random point on the screen, assuming you want an integral (i.e. non-fractional) point, which is almost always what you'd want:
let bounds = UIScreen.mainScreen().bounds
let x = arc4random_uniform(UInt32(bounds.width))
let y = arc4random_uniform(UInt32(bounds.height))
let randomPoint = CGPoint(x: CGFloat(x), y: CGFloat(y))
Your problem is that you're adding the maximum value to your random value, so of course it's always going to be off screen.
I'm not sure what numbers you're hoping to generate, but what you're getting are results like:
1317.0, 764.0
1237.0, 795.0
1320.0, 814.0
1275.0, 794.0
1314.0, 758.0
1300.0, 758.0
1260.0, 809.0
1279.0, 768.0
1315.0, 838.0
1284.0, 763.0
1273.0, 828.0
1263.0, 770.0
1252.0, 776.0
1255.0, 848.0
1277.0, 847.0
1236.0, 847.0
1320.0, 772.0
1268.0, 759.0
You're then using this as the center of a UI element. Unless it's very large, it's likely to be off-screen.
Hi, can we get a hex color string from a UIImage?
In the below method, if I pass [UIColor redColor] it works, but if I pass
#define THEME_COLOR [UIColor colorWithPatternImage:[UIImage imageNamed:@"commonImg.png"]]
then it is not working.
+ (NSString *)hexValuesFromUIColor:(UIColor *)color {
    if (CGColorGetNumberOfComponents(color.CGColor) < 4) {
        const CGFloat *components = CGColorGetComponents(color.CGColor);
        color = [UIColor colorWithRed:components[0] green:components[0] blue:components[0] alpha:components[1]];
    }
    if (CGColorSpaceGetModel(CGColorGetColorSpace(color.CGColor)) != kCGColorSpaceModelRGB) {
        return [NSString stringWithFormat:@"#FFFFFF"];
    }
    return [NSString stringWithFormat:@"#%02X%02X%02X",
            (int)((CGColorGetComponents(color.CGColor))[0] * 255.0),
            (int)((CGColorGetComponents(color.CGColor))[1] * 255.0),
            (int)((CGColorGetComponents(color.CGColor))[2] * 255.0)];
}
Are there any other methods which can directly get a hex color from a UIImage?
You can't access the raw data directly, but by getting the CGImage of this image you can access the pixel data.
You can't do it directly from the UIImage, but you can render the image into a bitmap context, with a memory buffer you supply, then test the memory directly. That sounds more complex than it really is, but may still be more complex than you wanted to hear.
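As a rough sketch of that technique in Swift (the function name, the 1x1-context shortcut and the RGBA assumptions are mine, not from the book or this answer):

import UIKit

// Draws the image into a 1x1 RGBA bitmap positioned so that the requested pixel
// lands at the origin, then reads the four bytes back.
// Assumes 8 bits per component and premultiplied-last (RGBA) byte order, so the
// returned RGB components are alpha-premultiplied.
// Note: CGContext uses a bottom-left origin, so flip point.y first if your
// coordinates are measured from the top-left as in UIKit.
func pixelColor(in image: UIImage, at point: CGPoint) -> UIColor? {
    guard let cgImage = image.cgImage else { return nil }

    var pixel = [UInt8](repeating: 0, count: 4)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue

    let drawn = pixel.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: 1,
                                      height: 1,
                                      bitsPerComponent: 8,
                                      bytesPerRow: 4,
                                      space: colorSpace,
                                      bitmapInfo: bitmapInfo) else { return false }
        // Shift the drawing so the pixel we want ends up at (0, 0) of the 1x1 context.
        context.translateBy(x: -point.x, y: -point.y)
        context.draw(cgImage, in: CGRect(origin: .zero, size: image.size))
        return true
    }
    guard drawn else { return nil }

    return UIColor(red: CGFloat(pixel[0]) / 255.0,
                   green: CGFloat(pixel[1]) / 255.0,
                   blue: CGFloat(pixel[2]) / 255.0,
                   alpha: CGFloat(pixel[3]) / 255.0)
}

The resulting UIColor could then be fed into a hex-formatting routine like the one in the question.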
If you have Erica Sadun's iPhone Developer's Cookbook there's good coverage of it from page 54. I'd recommend the book overall, so worth getting that if you don't have it.
I arrived at almost exactly the same code independently, but hit one bug that it looks like may be in Sadun's code too. In the pointInside method, the point and size values are floats and are multiplied together as floats before being cast to an int. This is fine if your coordinates are discrete values, but in my case I was supplying sub-pixel values, so the formula broke down. The fix is easy once you've identified the problem, of course: just cast each coordinate to an int before multiplying. So, in Sadun's case it would be:
long startByte = (((int)point.y * (int)size.width) + (int)point.x) * 4;
Also, Sadun's code, as well as my own, is only interested in alpha values, so we use 8-bit pixels that hold the alpha value only. Changing the CGBitmapContextCreate call should allow you to get actual colour values too (obviously if you have more than 8 bits per pixel you will have to factor that into your pointInside formula too).
I was writing a program in Swift and just noticed that I can access a CGRect frame's width and height properties directly, without going through its CGSize width and height. That is, I am now able to write code like this:
@IBOutlet var myView: UIView!

override func viewDidLoad()
{
    super.viewDidLoad()
    var height = myView.frame.height
    var height1 = myView.frame.size.height
}
In Objective-C, when I tried to write the same code, the line height = view.frame.height throws an error. Can anyone please tell me the difference (if any) between these two lines of code.
I just looked into the CGRect structure reference. In Swift there is an extension defined which has the members height and width. Please have a look at the code below:
struct CGRect {
    var origin: CGPoint
    var size: CGSize
}

extension CGRect {
    ...
    var width: CGFloat { get }
    var height: CGFloat { get }
    ...
}
So you can directly fetch the height and width values from a CGRect. As you can see, these are only getters, so you will get an error if you try to set these values using view.frame.height = someValue.
frame is a CGRect structure; apart from the fact that its width and height have only getters, they can also only be positive. From the documentation:
Regardless of whether the height is stored in the CGRect data structure as a positive or negative number, this function returns the height as if the rectangle were standardized. That is, the result is never a negative number.
However, size is a CGSize structure; from the documentation:
A CGSize structure is sometimes used to represent a distance vector, rather than a physical size. As a vector, its values can be negative. To normalize a CGRect structure so that its size is represented by positive values, call the standardized function.
So the difference is obvious.
In Objective-C, when I tried to write the same code, the line height = view.frame.height throws an error. Can anyone please tell me the difference (if any) between these two lines of code.
CGGeometry.h defines a couple of types, among them the C struct CGRect. This struct has two members: origin and size.
That's all you can access in C (and Objective-C) using dot notation. Neither C nor Objective-C offers extensions for structs.
Swift imports the type as a Swift struct. The difference is that Swift does allow for extensions on structs. So it exposes several free C functions as extensions:
CGRectGetMinX() — CGRect.minX
CGRectGetMidX() — CGRect.midX
CGRectGetMaxX() — CGRect.maxX
CGRectGetWidth() — CGRect.width
[... same for y]
These C functions have been around for ages; they just live in a dusty corner of CoreGraphics.
They are quite useful but you have to know their semantics (which differ a bit from the standard accessors): They normalise the dimensions.
This means that they convert a rect with negative width or height to a rect that covers the same area with positive size and offset origin.
let rect = CGRect(x: 0, y: 0, width: 10, height: -10)
assert(rect.width == rect.size.width) // OK
assert(rect.height == rect.size.height) // boom