How to convert long long to 8 byte array in Objective-C - iOS

In my application I have to convert a long long number into an 8-byte array, and then convert the 8-byte array into a hexadecimal string. Can you please help me with this? I'm stuck.

One way to do integer/byte array conversion is to use a union:
union {
    long long l;
    uint8_t b[sizeof(long long)];
} u;
u.l = mylonglong;
Then u.b[] contains the bytes, which can be accessed individually.
EDIT: Please note, as pointed out by @NikolaiRuhe, that this use of a union can lead to undefined behaviour, so it might be best to use memcpy() instead:
uint8_t b[sizeof(long long)];
memcpy(b, &mylonglong, sizeof(b));
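Note that both the union and the memcpy() approach give you the bytes in the CPU's native byte order. If you need a fixed byte order regardless of the host (for example, for a network protocol), a shift-based sketch like this sidesteps endianness entirely (the function name here is just for illustration):
#include <stdint.h>

// Illustrative helper: serialize a 64-bit value most-significant byte first,
// independent of the host CPU's byte order.
static void longLongToBigEndianBytes(long long value, uint8_t out[8])
{
    uint64_t v = (uint64_t)value;
    for (int i = 0; i < 8; i++) {
        out[i] = (uint8_t)(v >> (56 - 8 * i));
    }
}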
If you want the hex string of the long long in native-endian order, then:
void hexChar(uint8_t b, char *out)
{
    static const char *chars = "0123456789abcdef";
    out[0] = chars[(b >> 4) & 0xf];
    out[1] = chars[b & 0xf];
}
// Make sure outbuf is big enough: 2*len + 1 bytes
void hexChars(const uint8_t *buffer, size_t len, char *outbuf)
{
    for (size_t i = 0; i < len; i++)
    {
        hexChar(buffer[i], outbuf);
        outbuf += 2;
    }
    *outbuf = '\0';
}
and call it with:
char hex[32];
hexChars(u.b, sizeof(u.b), hex);
However if instead you want the hex value of the long long:
char hex[32];
sprintf(hex, "%llx", mylonglong);

Would that do the trick?
#include <stdio.h>
int main() {
    long long int val = 0x424242;
    char str_val[32];
    snprintf(str_val, sizeof(str_val), "%#llx", val);
    printf("Value : %s\n", str_val);
}

Related

How to call C function from Objective-C

I'm currently trying to convert videos on iOS using FFmpeg.
I've modified their example in transcoding.c according to my needs.
How do I actually call the following function from Objective-C:
int mainFunction(int argc, char **argv);
I'm not sure on how to provide the arguments from Objective-C.
Any help would greatly be appreciated.
Objective-C is a superset of C. All you need to do is call it from your Objective-C code.
char* args[] = { "foo", "bar", NULL }; // Might need the NULL to emulate command line arguments.
int result = mainFunction(2, args);
You could pass argc and argv directly from your own main
int main(int argc, char** argv)
{
    return mainFunction(argc, argv);
}
If your arguments are in an array of NSStrings, you'll need to convert them to a normal C array of C strings:
char** cStringArray = malloc(([stringArray count] + 1) * sizeof(char*));
for (NSUInteger i = 0; i < [stringArray count]; ++i)
{
    NSString* aString = [stringArray objectAtIndex:i];
    cStringArray[i] = strdup([aString UTF8String]); // copy: UTF8String's buffer may not outlive the string
}
cStringArray[[stringArray count]] = NULL;
int result = mainFunction((int)[stringArray count], cStringArray);
for (NSUInteger i = 0; i < [stringArray count]; ++i)
    free(cStringArray[i]);
free(cStringArray);
It's a long time since I last did any Objective-C, so apologies for any syntax errors.

iOS a Very Amazing(malloc_error_break)

First, this is my code:
#pragma pack (4)
typedef struct _Login{
    char user[32];
    char pwd[32];
    int userID;
}Login,*PLogin;

const unsigned long MSG_TAG_HEADER_YXHY = 0x59485859;

#pragma pack (2)
typedef struct tagTcpPacketHeader
{
    int ulHtag;
    char ucVersion;
    char ucCmd;
    int ulUserId;
    short usPacketNum;
    int ulDataLen;
}TcpPacketHeader,*LPTcpPacketHeader;
#pragma pack ()

const unsigned int TCP_HEADER_PACKET_LEN = sizeof(TcpPacketHeader);
- (NSData*)sendDataFileWithUserId:(const int)nUserId nCmd:(const int)nCmd pData:(NSData*)data{
    NSData* sendData;
    void* sendObj = malloc(data.length);
    [data getBytes:sendObj length:data.length];
    static int nPacketNum = 0;
    int nLen = (int)data.length + TCP_HEADER_PACKET_LEN;
    char *pTmpBuf = malloc(nLen);
    LPTcpPacketHeader tcpHeader = (LPTcpPacketHeader)pTmpBuf;
    tcpHeader->ulHtag = MSG_TAG_HEADER_YXHY;
    tcpHeader->ucVersion = 1;
    tcpHeader->ucCmd = nCmd;
    tcpHeader->ulUserId = nUserId;
    tcpHeader->usPacketNum = nPacketNum;
    tcpHeader->ulDataLen = nLen;
    memcpy(tcpHeader + TCP_HEADER_PACKET_LEN, sendObj, data.length);
    sendData = [NSData dataWithBytes:pTmpBuf length:nLen];
    nPacketNum++;
    free(pTmpBuf);
    free(sendObj);
    return sendData;
}
- (NSData*)get_File_Login:(NSString*)userID{
    int length = sizeof(Login);
    Login log_in = {"123","456",userID.intValue};
    NSData* login_data = [NSData dataWithBytes:&log_in length:length];
    NSData* ret = [self sendDataFileWithUserId:log_in.userID nCmd:5 pData:login_data];
    return ret;
}
Use it like this:
NSData* ms = [self get_File_Login:@"123"];
NSLog(@"%@", ms);
After calling it a few times, the problem appears.
Question:
This gives me a real headache: why does "set a breakpoint in malloc_error_break to debug" appear?
I have set a breakpoint on malloc_error_break, but it doesn't help.
Who can tell me the answer?
When you use the pointer in memcpy this way
memcpy(tcpHeader + TCP_HEADER_PACKET_LEN, sendObj, data.length);
it means you are copying into the memory location pointed to by tcpHeader plus TCP_HEADER_PACKET_LEN times the size of the type tcpHeader points to. It is the same as writing &tcpHeader[TCP_HEADER_PACKET_LEN].
Assuming you want to write to a location right after the header there are two ways to fix it:
1) use a pointer whose element size is 1, meaning a char*. Your code already has such a pointer, pTmpBuf, so just change the call to:
memcpy(pTmpBuf + TCP_HEADER_PACKET_LEN, sendObj, data.length);
2) add 1 instead. Since the size of the type tcpHeader points to equals TCP_HEADER_PACKET_LEN, advancing the pointer by one element gives the correct location:
memcpy(tcpHeader + 1, sendObj, data.length);
I would recommend the first, since it makes clear what you are calculating. In the second it is unclear why you would add one, and it uses a pointer to one type while copying data that isn't that type.
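A minimal, self-contained sketch of that scaling rule (the struct here is just a toy standing in for TcpPacketHeader):
#include <stdio.h>

typedef struct { int a; short b; } Header; // toy struct for illustration

int main(void)
{
    Header buf[2];
    char *raw = (char *)buf;
    // Pointer arithmetic scales by the pointee's size, so buf + 1 and
    // raw + sizeof(Header) name the same address:
    printf("%d\n", (char *)(buf + 1) == raw + sizeof(Header)); // prints 1
    return 0;
}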

Any good idea for OpenCL atom_inc separation?

I want to count the total number of non-zero points in an image using OpenCL.
Since it is a summing job, I used atom_inc.
The kernel code is shown here:
__kernel void points_count(__global unsigned char* image_data, __global int* total_number, const int image_width)
{
    size_t gidx = get_global_id(0);
    size_t gidy = get_global_id(1);
    if (0 != *(image_data + gidy*image_width + gidx))
    {
        atom_inc(total_number);
    }
}
My question is: using atom_inc like this causes a lot of contention, right?
Whenever we meet a non-zero point, we have to wait on the atom_inc.
I have an idea: we could separate the rows into hundreds of groups, count within each group separately, and add the group counts at the end.
It would look something like this:
__kernel void points_count(__global unsigned char* image_data, __global int* total_number_array, const int image_width)
{
    size_t gidx = get_global_id(0);
    size_t gidy = get_global_id(1);
    if (0 != *(image_data + gidy*image_width + gidx))
    {
        int stepy = gidy % 10;
        atom_inc(total_number_array + stepy);
    }
}
This splits the whole problem across more counters.
In that case, we can add up the values in total_number_array one by one at the end.
Theoretically speaking, this should give a good performance improvement, right?
So, does anyone have advice about the summing issue here?
Thanks!
Like mentioned in the comments this is a reduction problem.
The idea is to keep separate counts and then put them back together at the end.
Consider using local memory to store the values.
Declare a local buffer to be used by each work group.
Keep track of the number of occurrences in this buffer by using the local_id as the index.
Sum these values at the end of execution.
A very good introduction to the reduction problem using OpenCL is given here:
http://developer.amd.com/resources/documentation-articles/articles-whitepapers/opencl-optimization-case-study-simple-reductions/
The reduction kernel could look like this (taken from the link above):
__kernel
void reduce(
            __global float* buffer,
            __local float* scratch,
            __const int length,
            __global float* result) {
    int global_index = get_global_id(0);
    int local_index = get_local_id(0);
    // Load data into local memory
    if (global_index < length) {
        scratch[local_index] = buffer[global_index];
    } else {
        // Infinity is the identity element for the min operation
        scratch[local_index] = INFINITY;
    }
    barrier(CLK_LOCAL_MEM_FENCE);
    for (int offset = get_local_size(0) / 2;
         offset > 0;
         offset >>= 1) {
        if (local_index < offset) {
            float other = scratch[local_index + offset];
            float mine = scratch[local_index];
            scratch[local_index] = (mine < other) ? mine : other;
        }
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    if (local_index == 0) {
        result[get_group_id(0)] = scratch[0];
    }
}
For further explanation see the proposed link.
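Note that the kernel above computes a minimum (hence the INFINITY identity). For the original non-zero-pixel count, the same pattern applies with 0 as the identity and addition as the operator. Here is a sketch adapted under those assumptions (kernel and argument names are illustrative; the per-work-group partial counts still need to be summed on the host or in a second pass):
__kernel void count_nonzero(__global const unsigned char* image_data,
                            __local int* scratch,
                            const int length,
                            __global int* partial_counts)
{
    int global_index = get_global_id(0);
    int local_index  = get_local_id(0);

    // Each work-item contributes 1 if its pixel is non-zero, 0 otherwise
    // (0 is the identity element for addition).
    scratch[local_index] = (global_index < length && image_data[global_index] != 0) ? 1 : 0;
    barrier(CLK_LOCAL_MEM_FENCE);

    // Tree reduction in local memory: halve the active range each step.
    for (int offset = get_local_size(0) / 2; offset > 0; offset >>= 1) {
        if (local_index < offset) {
            scratch[local_index] += scratch[local_index + offset];
        }
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    // One partial sum per work-group.
    if (local_index == 0) {
        partial_counts[get_group_id(0)] = scratch[0];
    }
}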

I get a Suspicious pointer conversion in function main. How to get rid of this?

I'm new here at Stack Overflow. The title is my question. Can someone please help me with this? Thanks. I've been working on this for about 3 days.
This part of the code encodes the file to Huffman code:
void encode(const char *s, char *out)
{
    while (*s) {
        strcpy(out, code[*s]);
        out += strlen(code[*s++]);
    }
}
This part decodes the file from Huffman code back to human-readable text:
void decode(const char *s, node t)
{
    node n = t;
    while (*s) {
        if (*s++ == '0') n = n->left;
        else n = n->right;
        if (n->c) putchar(n->c), n = t;
    }
    putchar('\n');
    if (t != n) printf("garbage input\n");
}
This part is where I get my error.
int main(void)
{
    int i;
    const char *str = "this is an example for huffman encoding", buf[1024];
    init(str);
    for (i = 0; i < 128; i++)
        if (code[i]) printf("'%c': %s\n", i, code[i]);
    encode(str, buf); /* I get the error here */
    printf("encoded: %s\n", buf);
    printf("decoded: ");
    decode(buf, q[1]);
    return 0;
}
Declare 'buf' on a different line, and not as 'const':
char buf[1024];
The const applies to every declarator on the line, so you're declaring buf as a const char[1024]. That means calling encode(str, buf) casts away the const-ness, which is what triggers the warning.
Avoid declaring multiple variables on the same line unless they are all exactly the same type.
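A side-by-side sketch of how the declarator line parses:
const char *str = "text", buf[16]; // both declarators share the const char base type:
                                   // str is const char*, buf is const char[16]
char out[16];                      // declared on its own line: a plain, writable char[16]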

MD5 with ASCII Char

I have a string
wDevCopyright = [NSString stringWithFormat:@"Copyright: %c 1995 by WIRELESS.dev, Corp Communications Inc., All rights reserved.", 0xa9];
and to munge it I call
-(NSString *)getMD5:(NSString *)source
{
    const char *src = [source UTF8String];
    unsigned char result[CC_MD5_DIGEST_LENGTH];
    CC_MD5(src, (CC_LONG)strlen(src), result);
    return [NSString stringWithFormat:
        @"%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x",
        result[0], result[1], result[2], result[3],
        result[4], result[5], result[6], result[7],
        result[8], result[9], result[10], result[11],
        result[12], result[13], result[14], result[15]
    ];
}
Because of the 0xa9, [source UTF8String] does not produce the byte sequence I expect, so the resulting digest is not comparable with other platforms.
I tried encoding the string with NSASCIIStringEncoding, but it broke the code.
How do I call CC_MD5 with a string containing such characters and get the same hash as in Java?
Update to code request:
Java
private static char[] kTestASCII = {
    169
};
System.out.println("\n\n>>>>> msg## " + (char)0xa9 + " " + (char)169 +
    "\n md5 " + md5(new String(kTestASCII), false)); // unicode = false
Result >>>>> msg## \251 \251
md5 a252c2c85a9e7756d5ba5da9949d57ed
ObjC
char kTestASCII [] = {
    169
};
NSString *testString = [NSString stringWithCString:kTestASCII encoding:NSUTF8StringEncoding];
NSLog(@">>>> objC msg## int %d char %c md5: %@", 0xa9, 169, [self getMD5:testString]);
Result >>>> objC msg## int 169 char © md5: 9b759040321a408a5c7768b4511287a6
** As stated earlier, without the 0xa9 the hashes in Java and Objective-C are the same. I am trying to get the hash for 0xa9 to be the same in Java and Objective-C.
Java MD5 code
private static char[] kTestASCII = {
    169
};
md5(new String(kTestASCII), false);

/**
 * Compute the MD5 hash for the given String.
 * @param s the string to add to the digest
 * @param unicode true if the string is unicode, false for ascii strings
 */
public synchronized final String md5(String value, boolean unicode)
{
    MD5();
    MD5.update(value, unicode);
    return WUtilities.toHex(MD5.finish());
}
public synchronized void update(String s, boolean unicode)
{
    if (unicode)
    {
        char[] c = new char[s.length()];
        s.getChars(0, c.length, c, 0);
        update(c);
    }
    else
    {
        byte[] b = new byte[s.length()];
        s.getBytes(0, b.length, b, 0);
        update(b);
    }
}

public synchronized void update(byte[] b)
{
    update(b, 0, b.length);
}
//--------------------------------------------------------------------------------
/**
 * Add a byte sub-array to the digest.
 */
public synchronized void update(byte[] b, int offset, int length)
{
    for (int n = offset; n < offset + length; n++)
        update(b[n]);
}

/**
 * Add a byte to the digest.
 */
public synchronized void update(byte b)
{
    int index = (int)((count >>> 3) & 0x03f);
    count += 8;
    buffer[index] = b;
    if (index >= 63)
        transform();
}
I believe that my issue is with using NSData withEncoding as opposed to a C char[] or the Java byte[]. So what is the best way to roll my own bytes into a byte[] in objC?
The character you are having problems with, ©, is the Unicode COPYRIGHT SIGN (U+00A9). The correct UTF-8 encoding of this character is the byte sequence 0xc2 0xa9.
You are attempting, however, to convert from the single-byte sequence 0xa9, which is not a valid UTF-8 encoding of any character. See table 3-7 of http://www.unicode.org/versions/Unicode5.2.0/ch03.pdf#G7404 . Since this is not a valid UTF-8 byte sequence, stringWithCString is converting your input to the Unicode REPLACEMENT CHARACTER (U+FFFD). When this character is then encoded back into UTF-8, it yields the byte sequence 0xef 0xbf 0xbd. The MD5 of this sequence is 9b759040321a408a5c7768b4511287a6, as reported by your Objective-C example.
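A quick way to verify that encoding claim in Objective-C (just a sketch):
NSString *copyright = @"\u00A9";                                  // COPYRIGHT SIGN
NSData *utf8 = [copyright dataUsingEncoding:NSUTF8StringEncoding];
NSLog(@"%@", utf8);                                               // logs <c2a9>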
Your Java example yields an MD5 of a252c2c85a9e7756d5ba5da9949d57ed, which simple experimentation shows is the MD5 of the byte sequence 0xa9, which I have already noted is not a valid UTF-8 representation of the desired character.
I think we need to see the implementation of the Java md5() method you are using. I suspect it is simply dropping the high bytes of every Unicode character to convert to a byte sequence for passing to the MessageDigest class. This does not match your Objective-C implementation where you are using a UTF-8 encoding.
Note: even if you fix your Objective-C implementation to match the encoding of your Java md5() method, your test will need some adjustment because you cannot use stringWithCString with the NSUTF8StringEncoding encoding to convert the byte sequence 0xa9 to an NSString.
UPDATE
Having now seen the Java implementation using the deprecated getBytes method, my recommendation is to change the Java implementation, if at all possible, to use a proper UTF-8 encoding.
I suspect, however, that your requirements are to match the current Java implementation, even if it is wrong. Therefore, I suggest you duplicate the bad behavior of Java's deprecated getBytes() method by using NSString getCharacters:range: to retrieve an array of unichars, then manually create an array of bytes by taking the low byte of each unichar.
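A sketch of that low-byte conversion (the method name here is hypothetical):
// Hypothetical helper: mimic Java's deprecated getBytes() by keeping only
// the low byte of each UTF-16 code unit.
- (NSData *)lowByteDataFromString:(NSString *)s
{
    NSUInteger len = [s length];
    unichar *chars = malloc(len * sizeof(unichar));
    [s getCharacters:chars range:NSMakeRange(0, len)];

    NSMutableData *bytes = [NSMutableData dataWithCapacity:len];
    for (NSUInteger i = 0; i < len; i++) {
        uint8_t low = (uint8_t)(chars[i] & 0xFF); // drop the high byte
        [bytes appendBytes:&low length:1];
    }
    free(chars);
    return bytes;
}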
stringWithCString requires a null terminated C-String. I don't think that kTestASCII[] is necessarily null terminated in your Objective-C code. Perhaps that is the cause of the difference.
Try:
char kTestASCII [] = {
    169,
    0
};
Thanks to GBegan's explanation - here is my solution:
NSMutableData *asciiBytes = [NSMutableData data];
for (NSUInteger i = 0; i < [s length]; i++) {
    unichar ch = [s characterAtIndex:i];
    unsigned char lowByte = (unsigned char)ch; // keep only the low byte, like Java's deprecated getBytes()
    [asciiBytes appendBytes:&lowByte length:1];
}
