Removing the first element of a char * array in Objective-C (iOS)

I have a variable buf declared as char *buf. This variable ends up holding an ID that looks something like /3B494538-9120-46E0-95D4-51A4CF5712A1. I want to remove the first character of the char *buf so that /3B494538-9120-46E0-95D4-51A4CF5712A1 becomes 3B494538-9120-46E0-95D4-51A4CF5712A1. How do I do this?

You can create a pointer that reuses buf's memory:
char *nonCopyBuf = buf + 1;
or allocate new memory storage:
char *copyBuf = malloc(strlen(buf));   // strlen(buf) bytes: (len - 1) chars + '\0'
memcpy(copyBuf, buf + 1, strlen(buf)); // copies the trailing '\0' as well
//...
free(copyBuf);
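For context, a minimal usage sketch on iOS (assuming buf is a NUL-terminated C string and an NSString is ultimately wanted):
char *nonCopyBuf = buf + 1; // shares buf's storage; do not free it separately
NSString *identifier = [NSString stringWithUTF8String:nonCopyBuf];
// identifier is now @"3B494538-9120-46E0-95D4-51A4CF5712A1"
Note that nonCopyBuf stays valid only as long as buf does.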

Related

NSString to char* with Greek characters

I am using the following code to store the data of a string in a char*.
NSString *hotelName = [components[2] stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
hotelInfo->hotelName = malloc(sizeof(char) * hotelName.length + 1);
strncpy(hotelInfo->hotelName, [hotelName UTF8String], hotelName.length + 1);
NSLog(#"HOTEL NAME: %s",hotelInfo->hotelName);
The problem is that the Greek characters are printed strangely. I have also tried using another encoding (e.g. NSWindowsCP1253StringEncoding, which crashes).
I even tried this:
hotelInfo->hotelName = (const char *)[hotelName cStringUsingEncoding:NSUnicodeStringEncoding];
but it also produces strange characters.
What am I missing?
EDIT:
After some suggestions I tried the following:
if ([hotelName canBeConvertedToEncoding:NSWindowsCP1253StringEncoding]){
    const char *cHotelName = (const char *)[hotelName cStringUsingEncoding:NSWindowsCP1253StringEncoding];
    int bufSize = strlen(cHotelName) + 1;
    if (bufSize > 0){
        hotelInfo->hotelName = malloc(sizeof(char) * bufSize);
        strncpy(hotelInfo->hotelName, [hotelName UTF8String], bufSize);
        NSLog(@"HOTEL NAME: %s", hotelInfo->hotelName);
    }
}else{
    NSLog(@"String cannot be encoded! Sorry! %@", hotelName);
    for (NSInteger charIdx = 0; charIdx < hotelName.length; charIdx++){
        // Do something with character at index charIdx, for example:
        char x[hotelName.length];
        NSLog(@"%C", [hotelName characterAtIndex:charIdx]);
        x[charIdx] = [hotelName characterAtIndex:charIdx];
        NSLog(@"%s", x);
        if (charIdx == hotelName.length - 1)
            hotelInfo->hotelName = x;
    }
    NSLog(@"HOTEL NAME: %s", hotelInfo->hotelName);
}
But still nothing!
First of all, it is not guaranteed that every NSString can be represented as a C character array (a so-called C string). The reason is that a given encoding has only a limited set of characters available. You should check whether the string can be converted (by calling canBeConvertedToEncoding:).
Secondly, the malloc and strncpy functions rely on the length of the C string, not on the length of the NSString. So you should first get the C string from the NSString, then get its length (strlen), and use this value in the function calls:
const char *cHotelName = (const char *)[hotelName cStringUsingEncoding:NSWindowsCP1253StringEncoding];
int bufSize = strlen(cHotelName) + 1;
hotelInfo->hotelName = malloc(sizeof(char) * bufSize);
strncpy(hotelInfo->hotelName, cHotelName, bufSize);
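Note that cStringUsingEncoding: returns NULL when the receiver cannot be converted losslessly, so a defensive sketch might look like this:
const char *cHotelName = [hotelName cStringUsingEncoding:NSWindowsCP1253StringEncoding];
if (cHotelName != NULL) {
    size_t bufSize = strlen(cHotelName) + 1;            // +1 for the trailing '\0'
    hotelInfo->hotelName = malloc(bufSize);             // sizeof(char) is always 1
    strncpy(hotelInfo->hotelName, cHotelName, bufSize); // copies the '\0' as well
}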

malloc using 4 bytes for char

I am writing code to examine how memory is managed between the stack and the heap, for coursework.
#include <stdio.h>
#include <stdlib.h>

#define NUM_OF_CHARS 100

// function prototype
void f(void);

int main()
{
    f();
    return 0;
}

void f(void)
{
    char *ptr1;
    ptr1 = (char *) malloc(NUM_OF_CHARS * sizeof(int));
    printf("Address array 1: %016lx\n", (long)ptr1);
    char *ptr2;
    ptr2 = (char *) malloc(NUM_OF_CHARS * sizeof(int));
    printf("Address array 2: %016lx\n", (long)ptr2);
}
When I run this code I get the following:
Address array 1: 000000000209e010
Address array 2: 000000000209e1b0
My expectation was to see a difference in the addresses of 100 bytes, but the difference is 416 bytes. When I changed NUM_OF_CHARS to any other value (200, 300, ...), the result was always (NUM_OF_CHARS*4 + 16), so it seems like malloc is allocating 4 bytes for each char rather than one byte, plus 16 bytes of some overhead.
Can anyone explain what is happening here?
Memory allocation is platform/compiler dependent. The only thing malloc ensures is that it allocates enough memory for what you are asking, and nothing more.
There is no guarantee that your addresses will be contiguous, due to memory alignment.
Also, you are allocating by the size of int, not char, in your code. This is most likely the reason why you see a NUM_OF_CHARS*4 difference, while the remaining difference can be attributed to allocator overhead and padding.
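For comparison, a corrected sketch of the allocation (using %p for pointers; the exact gap between the two addresses remains allocator dependent):
#include <stdio.h>
#include <stdlib.h>
#define NUM_OF_CHARS 100

int main(void)
{
    // one byte per char: the gap between the blocks should now be close to
    // NUM_OF_CHARS plus allocator-dependent overhead, not NUM_OF_CHARS*4
    char *ptr1 = malloc(NUM_OF_CHARS * sizeof(char));
    char *ptr2 = malloc(NUM_OF_CHARS * sizeof(char));
    printf("Address array 1: %p\n", (void *)ptr1);
    printf("Address array 2: %p\n", (void *)ptr2);
    free(ptr1);
    free(ptr2);
    return 0;
}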

How to convert a char (or byte) with hexadecimal data in it to an integer (decimal value)

I have
char tem;
Its value is shown below:
Printing description of tem:
(char) tem = '\xd1'
which should equal 209 in decimal.
My question is: how can I implement this conversion programmatically? That is, I want to get an NSInteger that equals 209 in this case.
Maybe there’s something I’m overlooking here, but given that both char and NSInteger are integral types, can’t you just do
char tem = '\xd1';
NSInteger i = tem;
? Or perhaps, to avoid surprises from sign extension,
NSInteger i = tem & 0xff;
A char variable actually is an 8-bit integer. You don't have to convert it to an NSInteger to get its decimal value. Just explicitly tell the compiler to interpret it as an unsigned 8-bit integer, uint8_t:
char theChar = '\xd1';
NSLog(#"decimal: %d", (uint8_t)theChar); //prints 'decimal: 209'
To convert it to an NSInteger:
NSInteger decimal = (uint8_t)theChar;
If your char is an ASCII digit, do this:
char a = '3'; // example char
int i = (int)(a - '0');
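For completeness, a small sketch combining both cases (raw byte value versus ASCII digit):
char tem = '\xd1';
NSInteger value = (uint8_t)tem;  // 209: reinterpret the raw byte as unsigned
char digit = '3';
int numeric = digit - '0';       // 3: an ASCII digit minus '0' yields its value
NSLog(@"%ld %d", (long)value, numeric);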

Checking the length of an array of characters in Objective-C

I'm translating a small Java library for use in an Objective-C application I'm writing.
char[] chars = sentence.toCharArray();
int i = 0;
while (i < chars.length) { ... }
Where sentence is an NSString.
I'd like to translate the above Java code to Objective-C. Here's what I have so far:
// trims sentence off white space
sentence = [sentence stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
const char *chars = [sentence UTF8String];
How do I write the above while condition? I'm not sure how I'm supposed to check the length of the string after it has been converted to a character array.
Your Objective-C string already holds a measure of its length; it's just a matter of retrieving it:
// trims sentence off white space
sentence = [sentence stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
NSUInteger length = sentence.length;
const char *chars = [sentence UTF8String];
But it is worth remembering that even if you didn't know the length, you could use the C strlen function:
// trims sentence off white space
sentence = [sentence stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
const char *chars = [sentence UTF8String];
size_t length = strlen(chars);
Even though there is already an accepted answer, I want to warn against using strlen(), even if in this case it might work without any problem. There are differences between NSString and C strings.
A. -length (NSString) and strlen() have different semantics:
NSString is not(!) \0-terminated, but length based. It can store \0 characters. It is very easy to get different lengths if there is a \0 character in the string instance:
NSString *sentence = #"Amin\0Negm";
NSLog( #"length %ld", [sentence length]); // 9
const char *chars = [sentence cStringUsingEncoding:NSUTF8StringEncoding];
size_t length= strlen(chars);
NSLog(#"strlen %ld", (long)length); // 4
length 9
strlen 4
But -UTF8String and even the used -cStringUsingEncoding: (both NSString) copy out the whole string stored in the string instance. (I think in the case of -cStringUsingEncoding: this is misleading, because standard string functions like strlen() always treat the first \0 as the termination of the string.)
B. In UTF-8 a character can occupy multiple bytes, while a char in C is one byte. (Byte here meaning not 8 bits, but the smallest addressable unit.)
NSString *sentence = @"Αmin Negm";
NSLog(@"length %ld", [sentence length]);
const char *chars = [sentence UTF8String];
size_t length = strlen(chars);
NSLog(@"strlen %ld", (long)length);
length 9
strlen 10
WTF happened here? The "A" of Amin is not a Latin capital letter A but a Greek capital letter Alpha. In UTF-8 this takes two bytes, and for pure C's strlen those are two characters!
NSLog(#"%x-%x %x-%x", 'A', 'm', (unsigned char)*chars, (unsigned char)*(chars+1) );
41-6d ce-91
The first two numbers are the codes for 'A' and 'm'; the second two numbers are the UTF-8 code for the Greek capital letter Alpha (CE 91).
I do not think that it is a good idea to simply change from NSString to char * without good reason and a complete understanding of the problems. If you do not expect such characters, use NSASCIIStringEncoding. If you expect such characters, check your code again and again … or read up on C.
C. C supports wide characters. This is similar to Mac OS's unichar, but typed wchar_t. There are string functions for wchar_t in wchar.h.
NSString *sentence = @"Αmin Negm";
NSLog(@"length %ld", [sentence length]);
wchar_t wchars[128]; // take care of the size
wchar_t *wchar = wchars;
for (NSUInteger index = 0; index < [sentence length]; index++)
{
    *wchar++ = [sentence characterAtIndex:index];
}
*wchar = '\0';
NSLog(@"widestrlen %ld", wcslen(wchars));
length 9
widestrlen 9
D. Obviously you want to iterate through the string. The common pattern in pure C is not to use an index and compare it to the length, and definitely not to call strlen() in every loop, because that is expensive. (C strings are not length based, so the whole string has to be scanned over and over.) You simply increment the pointer to the next char:
char letter;
while ( (letter = *chars++) ) {…}
or
do
{
    // *chars points to the actual char
} while (*chars++);
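Applied to the question's sentence, a minimal sketch of this pattern (assuming ASCII content, so one byte per character):
const char *p = [sentence UTF8String];
char c;
while ((c = *p++)) {
    // c holds the current byte; with UTF-8, a multibyte character
    // shows up here as several consecutive bytes
}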
int length = sizeof(chars) / sizeof(char)
might look like it works, but chars here is a pointer, so sizeof(chars) yields the size of the pointer itself (e.g. 8 on a 64-bit target), not the length of the string. In the best case the result coincides with sentence.length by accident; in the worst case it is meaningless. It is not a substitute for sentence.length or strlen().
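A quick sketch of why the sizeof approach fails here (assuming a 64-bit target where pointers are 8 bytes):
const char *chars = "hello";
size_t wrong = sizeof(chars) / sizeof(char); // 8: size of the pointer itself
size_t right = strlen(chars);                // 5: bytes up to the terminating '\0'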

Converting an integer from 0 to 255 (1 byte) into NSData

I need exactly 1 byte for some kind of socket-based application and I can't find a way to create it.
unsigned char mydata = 3;
[NSMutableData dataWithBytes:&mydata length:sizeof(mydata)];
See the NSData class reference; an unsigned char is used to hold exactly 1 byte.
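A short round-trip sketch (getBytes:length: is standard NSData API):
unsigned char mydata = 3;
NSData *packet = [NSMutableData dataWithBytes:&mydata length:sizeof(mydata)];
unsigned char received = 0;
[packet getBytes:&received length:sizeof(received)]; // received == 3
NSLog(@"%u, %lu byte(s)", received, (unsigned long)packet.length); // 3, 1 byte(s)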
