Not sure how to word the title correctly... but what I am wondering is if there is some clever format specifier that will take the number 4.5 and give me @"4.5" but also take the number 2 and give me @"2".
Using the %.1f specifier gives me @"4.5" but also @"2.0". I am trying to get rid of the ".0" bit.
Does such a beast exist, or am I going to have to do some math on this? FWIW, I am trying to iterate over an array of values ranging from 0 to 5 increasing in half-steps, so 0, 0.5, 1, 1.5, ..., 4.5, 5
Cheers!
NSNumberFormatter is a good choice here. You can configure it to not show the fractional digits if the number is an integer. For example:
NSArray *numbers = @[@0, @0.5, @1.0, @1.5, @2.0, @2.5];
NSNumberFormatter *numberFormatter = [[NSNumberFormatter alloc] init];
numberFormatter.alwaysShowsDecimalSeparator = NO;
numberFormatter.minimumFractionDigits = 0;
numberFormatter.maximumFractionDigits = 1;
numberFormatter.minimumIntegerDigits = 1;
for (NSNumber *number in numbers) {
NSLog(@"%@", [numberFormatter stringFromNumber:number]);
}
Output:
>> 0
>> 0.5
>> 1
>> 1.5
>> 2
>> 2.5
This is even easier (Swift):
let num1: Double = 5
let num2: Double = 5.52
let numberFormatter = NSNumberFormatter()
numberFormatter.numberStyle = .DecimalStyle
print(numberFormatter.stringFromNumber(NSNumber(double: num1)))
print(numberFormatter.stringFromNumber(NSNumber(double: num2)))
This will print 5 and then 5.52.
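The snippet above uses the old Swift 2 API names (NSNumberFormatter, .DecimalStyle). As a minimal sketch only, here is the same idea with current Swift naming, with maximumFractionDigits set to 1 to match the half-step sequence from the question:
import Foundation

let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.maximumFractionDigits = 1   // enough for 0, 0.5, 1, ..., 4.5, 5

for value in [0.0, 0.5, 1.0, 4.5, 5.0] {
    print(formatter.string(from: NSNumber(value: value)) ?? "")
}
// prints: 0, 0.5, 1, 4.5, 5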
Related
I am trying to format numbers so that there are always 4 digits after the decimal place. For example:
1 // 1.0000
0 // 0.0000
1.23 // 1.2300
1.234 // 1.2340
1.2345 // 1.2345
1.23456 // 1.2346 **[edited]**
I have tried all kinds of combinations of the following:
let formatter = NumberFormatter()
formatter.usesSignificantDigits = true // I believe this is the default, so not required
formatter.numberStyle = .decimal
formatter.maximumSignificantDigits = 4
formatter.minimumSignificantDigits = 4
formatter.maximumFractionDigits = 4
formatter.minimumFractionDigits = 4
let p = formatter.string(from: NSNumber(value: percentage))
debugPrint("p = \(p)")
But in two of the cases, this is what I get:
0 // 0.000
0.0123456 // 0.01234
Here is an example, and the debug output:
"p = 0.9375"
"p = 0.000"
"p = 0.03125"
"p = 0.000"
"p = 0.03125"
What am I missing?
[I thought I had seen a really good explanation on here some time ago, but can no longer find it - if anyone could drop a link to it, that would be great too!]
If you are trying to dictate the number of decimal places, then simply remove this significant digits stuff:
let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.maximumFractionDigits = 4
formatter.minimumFractionDigits = 4
let values: [Double] = [
1, // 1.0000
0, // 0.0000
1.23, // 1.2300
1.234, // 1.2340
1.2345, // 1.2345
1.23456 // 1.2346 ... if you really want 1.2345, then change formatter’s `roundingMode` to `.down`.
]
let strings = values.map { formatter.string(for: $0) }
That yields the four digits after the decimal point, as desired.
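For what it's worth, the behaviour in the question comes from the significant-digit settings taking over once usesSignificantDigits is true, so fraction-digit settings are effectively ignored. A small illustrative sketch (values chosen for illustration only):
import Foundation

let fractionOnly = NumberFormatter()
fractionOnly.numberStyle = .decimal
fractionOnly.minimumFractionDigits = 4
fractionOnly.maximumFractionDigits = 4

let significant = NumberFormatter()
significant.numberStyle = .decimal
significant.usesSignificantDigits = true
significant.minimumSignificantDigits = 4
significant.maximumSignificantDigits = 4

// Fraction digits count places after the decimal point;
// significant digits start counting at the first non-zero digit.
print(fractionOnly.string(for: 0) ?? "")     // "0.0000"
print(significant.string(for: 0) ?? "")      // "0.000" - matches the output seen in the question
print(fractionOnly.string(for: 1.23) ?? "")  // "1.2300"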
This is a follow-up to a previous question of mine.
In a nutshell, I am trying to follow this tutorial step-by-step: https://jtauber.github.io/mars-clock/ to get to Coordinated Mars Time, but I got stuck right before the end. My code works fine up until the end (some values are more accurate than in the tutorial because I went back to the source from NASA: https://www.giss.nasa.gov/tools/mars24/help/algorithm.html ):
double millis = ( [[NSDate date] timeIntervalSince1970] * 1000 );
NSLog(@"millis: %f", millis);
double JDUT = ( 2440587.5 + (millis / 86400000) );
NSLog(@"JDUT: %f", JDUT);
double JDTT = ( JDUT + (37 + 32.184) / 86400);
NSLog(@"JDTT: %f", JDTT);
double J2000Epoch = ( JDTT - 2451545.0 );
NSLog(@"J2000Epoch: %f", J2000Epoch);
double MSD = ( (( J2000Epoch - 4.5 ) / 1.0274912517) + 44796.0 - 0.0009626 );
NSLog(@"MSD: %f", MSD);
The only step remaining is actually calculating Coordinated Mars Time, using this equation:
MTC = mod24 { 24 h × MSD }
The problem is that I have no idea how. I tried to use modf( (double), (double *) ) but I have no idea how it actually works. I tried it the way below, but it gave me an incorrect answer (obviously, as I really have no idea what I am doing). :(
double MSD24 = (24 * MSD);
double MCT = modf(24, &MSD24);
NSLog(@"MCT: %f", MCT); // Result: 0.000000
Any help would be much appreciated. Thank you very much!
p.s.: Notice that I use Objective-C; I do not understand Swift, unfortunately! :(
Carrying on from the code you gave, I tried:
CGFloat MTC = fmod(24 * MSD, 24);
and got
// 19.798515
which was right according to the web page you cited at the moment I tried it.
The sort of thing his page actually shows, e.g. "19:49:38" or whatever (at the time I tried it), is merely a string representation of that number, treating it as a number of hours and just dividing it up into minutes and seconds in the usual way. Which, I suppose, brings us to the second part of your question, i.e. how to convert a number of hours into an hours-minutes-seconds representation? But that is a simple matter, dealt with many times here. See NSNumber of seconds to Hours, minutes, seconds for example.
So, carrying on once again, I tried this:
CGFloat secs = MTC*3600;
NSDate* d = [NSDate dateWithTimeIntervalSince1970:secs];
NSDateFormatter* df = [NSDateFormatter new];
df.dateFormat = @"HH:mm:ss";
df.timeZone = [NSTimeZone timeZoneWithAbbreviation:@"GMT"];
NSString* result = [df stringFromDate:d];
NSLog(@"%@", result); // 20:10:20
...which is exactly the same as his web page was showing at that moment.
And here's a Swift version for those who would like to know what the "mean time" is on Mars right now:
let millis = Date().timeIntervalSince1970 * 1000
let JDUT = 2440587.5 + (millis / 86400000)
let JDTT = JDUT + (37 + 32.184) / 86400
let J2000Epoch = ( JDTT - 2451545 )
let MSD = (( J2000Epoch - 4.5 ) / 1.0274912517) + 44796.0 - 0.0009626
let MTC = (24 * MSD).truncatingRemainder(dividingBy: 24)
let d = Date(timeIntervalSince1970: MTC*3600)
let df = DateFormatter()
df.dateFormat = "HH:mm:ss"
df.timeZone = TimeZone(abbreviation: "GMT")!
df.string(from: d)
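If you'd rather not route the value through Date/DateFormatter at all, the fractional hours can be split with plain arithmetic. A minimal sketch using the MTC value from the Swift version above (variable names are just for illustration):
// Split a fractional number of hours into h:mm:ss with integer arithmetic.
let totalSeconds = Int(MTC * 3600)
let hours = totalSeconds / 3600
let minutes = (totalSeconds % 3600) / 60
let seconds = totalSeconds % 60
let mtcString = String(format: "%02d:%02d:%02d", hours, minutes, seconds)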
I am working on converting code written in Objective-C to Swift 3.
I want to convert the Objective-C code below to Swift 3.
Objective-C NSDate to NSData code:
NSCalendar *calendar = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
NSDateComponents *components = [calendar components:NSDayCalendarUnit |NSMonthCalendarUnit | NSYearCalendarUnit | NSHourCalendarUnit | NSMinuteCalendarUnit | NSSecondCalendarUnit fromDate:[NSDate date]];
NSInteger year = components.year;
NSMutableData *yearData = [[NSMutableData alloc] initWithBytes:&year length:sizeof(year)];
int year1 = *(int *)[[yearData subdataWithRange:NSMakeRange(0, 1)] bytes];
int year2 = *(int *)[[yearData subdataWithRange:NSMakeRange(1, 1)] bytes];
int month = components.month;
int day = components.day;
int hour = components.hour;
int min = components.minute;
int second = components.second;
char bytes[7];
bytes[0] = year1;
bytes[1] = year2;
bytes[2] = month;
bytes[3] = day;
bytes[4] = hour;
bytes[5] = min;
bytes[6] = second;
NSData *data = [[NSData alloc] initWithBytes:&bytes length:sizeof(bytes)];
Objective-C NSData to NSDate code:
NSData *date = [[NSData alloc] initWithData:characteristic.value];
int year = *(int *)[[date subdataWithRange:NSMakeRange(0, 2)] bytes];
int month = *(int *)[[date subdataWithRange:NSMakeRange(2, 1)] bytes];
int day = *(int *)[[date subdataWithRange:NSMakeRange(3, 1)] bytes];
int hour = *(int *)[[date subdataWithRange:NSMakeRange(4, 1)] bytes];
int minutes = *(int *)[[date subdataWithRange:NSMakeRange(5, 1)] bytes];
int seconds = *(int *)[[date subdataWithRange:NSMakeRange(6, 1)] bytes];
NSLog(@"year %d month %d day %d hour %d minutes %d second %d", year, month, day, hour, minutes, seconds); // year 2017 month 7 day 13 hour 16 minutes 8 second 2
NSDateComponents *components = [[NSDateComponents alloc] init];
[components setYear:year];
[components setMonth:month];
[components setDay:day];
[components setHour:hour];
[components setMinute:minutes];
[components setSecond:seconds];
NSCalendar *calendar = [NSCalendar currentCalendar];
self.time = [calendar dateFromComponents:components];
Swift Date to Data code:
let cal = Calendar(identifier: .gregorian)
var comp = cal.dateComponents([.day,.month,.year,.hour,.minute,.second], from: Date())
var year = comp.year
let yearData:Data = Data(bytes: &year, count: MemoryLayout.size(ofValue: year))
let year1:Data = yearData.subdata(in: 0..<1)
let year2:Data = yearData.subdata(in: 1..<2)
let settingArray = [UInt8]([
UInt8(year1[0])
, UInt8(year2[0])
, UInt8(comp.month!)
, UInt8(comp.day!)
, UInt8(comp.hour!)
, UInt8(comp.minute!)
, UInt8(comp.second!)
])
let settingData:Data = Data(bytes: settingArray, count: MemoryLayout.size(ofValue: settingArray))
Swift Data to Date code:
var yearVal:UInt8 = 0
let year = characteristic.value?.subdata(in: 0..<2)
year?.copyBytes(to: &yearVal, count: MemoryLayout.size(ofValue: year))
var month = characteristic.value?.subdata(in: 2..<3)
var day = characteristic.value?.subdata(in: 3..<4)
var hour = characteristic.value?.subdata(in: 4..<5)
var minutes = characteristic.value?.subdata(in: 5..<6)
var seconds = characteristic.value?.subdata(in: 6..<7)
print("year = \(yearVal), month = \(Int((month?[0])!)), day = \(Int((day?[0])!)), hour = \(Int((hour?[0])!)), minutes = \(Int((minutes?[0])!)), seconds = \(Int((seconds?[0])!))") // year = 225, month = 7, day = 13, hour = 15, minutes = 56, seconds = 56
When I read the year via let year = characteristic.value?.subdata(in: 0..<2), the converted value should be 2017, but I only get 225. I do not know how to solve this part.
Please help me.
You are very lucky your Objective-C code works, as you are reading past the end of your data and ignoring endian issues.
Consider the line:
int month = *(int *)[[date subdataWithRange:NSMakeRange(2, 1)] bytes];
Here you are taking a pointer to a single byte, casting it to a pointer to 4 bytes (the size of an int), and then reading 4 bytes and storing them in month. By luck the extra three bytes you read happen to be zero.
Then there is the endian issue, different cpu architectures store multi-byte values in different orders in memory. A little-endian architecture stores the least significant byte first, a big-endian one the most significant.
E.g. the 4-byte integer 0xDEADBEEF is stored as the byte sequence EF, BE, AD, DE on a little-endian machine and as DE, AD, BE, EF on a big-endian one. In terms of your month value above, this means that on a big-endian machine, if the byte is 06, you could read back the integer 0x06000000 from those 4 bytes (and that is only if the extra bytes happen to be zeroes).
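If you want to see which order your own machine uses, here is a quick Swift sketch (iOS devices are little-endian):
// Print the in-memory byte order of a 32-bit value on this machine.
var probe: UInt32 = 0xDEADBEEF
withUnsafeBytes(of: &probe) { rawBuffer in
    print(rawBuffer.map { String(format: "%02X", $0) }.joined(separator: " "))
    // "EF BE AD DE" on a little-endian machine, "DE AD BE EF" on a big-endian one
}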
For the month case you could load the byte and then convert to an integer:
int month = (int)(*(Byte *)[[date subdataWithRange:NSMakeRange(2, 1)] bytes]);
When converting the year to two bytes you go through the long winded process:
NSMutableData *yearData = [[NSMutableData alloc] initWithBytes:&year length:sizeof(year)];
int year1 = *(int *)[[yearData subdataWithRange:NSMakeRange(0, 1)] bytes];
int year2 = *(int *)[[yearData subdataWithRange:NSMakeRange(1, 1)] bytes];
This converts an integer to an NSData, makes two more NSData values containing 1 byte each, and then loads 4 bytes for each - the same issue as above, but in this case, as you will only be storing 1 byte in your bytes array, it doesn't matter if the extra bytes are garbage.
The process is convoluted; you would be better off sticking with integer operations to obtain the two values. You can obtain the individual bytes using division and remainder operations, or bit-wise shift and mask operations.
E.g. using decimal first to demonstrate:
int year = 2017;
int firstDigit = year % 10; // the remainder of year / 10 => 7
int secondDigit = (year / 10) % 10; // 1
int thirdDigit = (year / 100) % 10; // 0
int fourthDigit = (year / 1000) % 10; // 2
To extract the bytes just change the divisor:
int year = 2017; // = 0x7E1
int loByte = year % 256; // = 0xE1
int hiByte = (year / 256) % 256; // = 0x7
Finally you can use bit-wise shift and masking:
int year = 2017; // = 0x7E1
int loByte = year & 0xFF; // = 0xE1
int hiByte = (year >> 8) & 0xFF; // = 0x7
Using bit-wise operations makes the byte splitting more obvious, but divide and remainder achieve the same result.
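For reference, a rough Swift version of the same split (and of the reassembly used in the dataToDate method below), shown as a sketch only:
let year = 2017                          // 0x7E1
let yearLo = UInt8(year & 0xFF)          // 0xE1
let yearHi = UInt8((year >> 8) & 0xFF)   // 0x07

// Putting the two bytes back together gives the original value.
let rebuilt = Int(yearLo) | (Int(yearHi) << 8)   // 2017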
What does all this mean in terms of your Objective-C code? Well the second of your two methods can be written:
+ (NSDate *) dataToDate:(NSData *)data
{
NSDateComponents *components = [[NSDateComponents alloc] init];
const Byte *bytes = data.bytes;
components.year = (NSInteger)bytes[0] | ((NSInteger)bytes[1] << 8); // reassemble 2-byte value
components.month = (NSInteger)bytes[2];
components.day = (NSInteger)bytes[3];
components.hour = (NSInteger)bytes[4];
components.minute = (NSInteger)bytes[5];
components.second = (NSInteger)bytes[6];
NSCalendar *calendar = [NSCalendar currentCalendar];
return [calendar dateFromComponents:components];
}
This is a lot less complex, doesn't read random memory, and is easier to convert to Swift.
Following the same approach here is your first method in Swift:
func toData(_ date : Date) -> Data
{
let cal = Calendar(identifier: .gregorian)
let comp = cal.dateComponents([.day,.month,.year,.hour,.minute,.second], from: date)
let year = comp.year!
let yearLo = UInt8(year & 0xFF) // mask to avoid overflow error on conversion to UInt8
let yearHi = UInt8(year >> 8)
let settingArray = [UInt8]([
yearLo
, yearHi
, UInt8(comp.month!)
, UInt8(comp.day!)
, UInt8(comp.hour!)
, UInt8(comp.minute!)
, UInt8(comp.second!)
])
return Data(bytes: settingArray)
}
Finally, you can index the Data type in Swift just like an array, so the above Objective-C line:
components.month = (NSInteger)bytes[2];
where bytes came from calling NSData's bytes can be written directly in Swift as:
components.month = Int(data[2])
where data is the Data value.
The above approach doesn't run into the issue you actually had, because it avoids splitting data values into pieces and trying to extract values from them - just index the byte and convert with a cast.
The rest of the code you need is left as an exercise!
HTH
You are fetching the year value as a UInt8, which only has a range of 0-255, so use UInt32 instead:
var yearVal: UInt32 = 0
(year! as NSData).getBytes(&yearVal, length: MemoryLayout.size(ofValue: yearVal)) // copies at most the data's 2 bytes
I have the following variable:
NSNumber *consumption = [dict objectForKey:@"con"];
This returns 42. How can I pad this number to 10 digits on the left, leading with zeros? The output should look like this:
0000000042
or if it were 420,
0000000420
NSString *paddedStr = [NSString stringWithFormat:@"%010d", 42];
EDIT: This is C-style formatting. %nd means the width is at least n, so if the integer is 2 digits long you will get a string of length 3 (when %3d is used). By default the empty positions on the left are filled with spaces. %0nd (a 0 between % and n) means 0 is used for padding instead of a space. Here n is the minimum total width; if the integer has fewer than n digits, it is padded on the left.
The Objective-C way,
NSNumberFormatter * numberFormatter = [[[NSNumberFormatter alloc] init] autorelease];
[numberFormatter setPaddingPosition:NSNumberFormatterPadBeforePrefix];
[numberFormatter setPaddingCharacter:@"0"];
[numberFormatter setMinimumIntegerDigits:10];
NSNumber * number = [NSNumber numberWithInt:42];
NSString * theString = [numberFormatter stringFromNumber:number];
NSLog(@"%@", theString);
The C way is faster though.
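If you happen to need this from Swift, the same C-style width specifier is available through String(format:); a quick sketch:
let padded = String(format: "%010d", 42)    // "0000000042" (zero padded to width 10)
let spaced = String(format: "%10d", 420)    // "       420" (space padded to width 10)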
You can't in the NSNumber itself. If you're creating a string from the number or using NSLog(), simply use the appropriate format, e.g.
NSLog(@"%010d", [consumption intValue]);
You can do pretty much any number formatting you would ever want with NSNumberFormatter. In this case I think you would want to use the setFormatWidth: and setPaddingCharacter: methods.
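For example, a rough sketch using the Swift names for those settings (formatWidth, paddingCharacter, paddingPosition):
let formatter = NumberFormatter()
formatter.formatWidth = 10                 // pad the output up to a total width of 10
formatter.paddingCharacter = "0"
formatter.paddingPosition = .beforePrefix

formatter.string(from: 42)                 // "0000000042"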
With a variable num_digits:
NSString *format = [NSString stringWithFormat:@"%%0%zdzd", (NSInteger)num_digits]; // builds e.g. "%05zd" when num_digits is 5
NSString *theString = [NSString stringWithFormat:format, (NSInteger)42];
E.g. Fill the rest with zeros, 5 digits:
NSInteger someValue = 10;
[NSString stringWithFormat:@"%05ld", someValue];
This is the equivalent of %.2f for a float when you only need 2 digits after the decimal point.
So there you have it: 0 = fill with zeros, 5 = the minimum number of digits, and ld = the type.
Solution for Swift 3
let x = 1078.1243
let numFormatter = NumberFormatter()
numFormatter.minimumFractionDigits = 1 // for float
numFormatter.maximumFractionDigits = 1 // for float
numFormatter.minimumIntegerDigits = 10 // how many digits do want before decimal
numFormatter.paddingPosition = .beforePrefix
numFormatter.paddingCharacter = "0"
let s = numFormatter.string(from: NSNumber(value: x))!
OUTPUT
"0000001078.1"
I have an issue when calculating with 3 CGFloats.
I have: -34.522 + 39.049 + 0.2889, but iOS gives me 73,
when it should give me something closer to what a normal calculator gives, like 4.81.
CGFloat x = (46.2076 * -34.522) + (60.3827 * 39.049) + (2.028 * 0.2889);
NSLog(@"d %f", x); // >> 763.291199
CGFloat t = -34.522 + 39.049 + 0.2889;
NSLog(@"%f", t);
I'm not 100% sure if this is what you're asking, but if you only want 2 digits of precision, you have to specify this. It's easy to do via format specifier by using %.2f where 2 is the number of digits after the decimal place to be shown.
CGFloat x = (46.2076 * -34.522) + (60.3827 * 39.049) + (2.028 * 0.2889);
NSLog(@"d %.2f", x);
Alternatively, this can also be done with NSNumberFormatter.
NSNumberFormatter *formatter = [NSNumberFormatter new];
[formatter setPositiveFormat:@"#.##"];
NSString *output = [formatter stringFromNumber:@(x)];
NSLog(@"Out: %@", output);