Updating my app from 32-bit to 64-bit.
According to the Apple documentation, float is only 4 bytes and I need to use CGFloat (8 bytes).
I am using memcpy to read in bytes, and I have updated all my sizeof(float)s to sizeof(CGFloat).
But when I do, I get this semantic issue:
__builtin___memcpy_chk will always overflow destination buffer. Expanded from macro memcpy
I updated my NSData readDataOfLength to take sizeof(CGFloat) and it seems to work, but sometimes not all of the data that is read in is correct.
I am afraid I am over my head in this and could use some help.
-(void) readByteData:(NSFileHandle *)fHandle Size:(NSInteger)byteSize
{
[super readByteData:fHandle Size:byteSize];
NSData *data = [fHandle readDataOfLength:sizeof(CGFloat)];
float r;
memcpy(&r, [data bytes], sizeof(CGFloat));
self.radius = r;
int nCGPointSize = sizeof(CGFloat) * 2;
data = [fHandle readDataOfLength:nCGPointSize];
float xy[2];
memcpy(xy, [data bytes], nCGPointSize);
self.centerPos = ccp(xy[0], xy[1]);
data = [fHandle readDataOfLength:sizeof(CGFloat)];
float start_angle;
memcpy(&start_angle, [data bytes], sizeof(CGFloat));
self.startAngle = start_angle;
data = [fHandle readDataOfLength:sizeof(CGFloat)];
float end_angle;
memcpy(&end_angle, [data bytes], sizeof(CGFloat));
self.endAngle = end_angle;
data = [fHandle readDataOfLength:sizeof(int)];
int d;
memcpy(&d, [data bytes], sizeof(int));
self.dir = d;
flagClosed = YES;
}
This instruction:
float r;
memcpy(&r, [data bytes], sizeof(CGFloat));
Tells your compiler:
Read sizeof(CGFloat) (== 8 bytes!) from the location [data bytes]
and write them to r
But r is only 4 bytes in size! So the first 4 bytes are written to r and the next 4 bytes are written to whatever follows r in memory, and that is not allowed. memcpy is a simple byte-copy instruction: it moves any number of bytes from memory location A to memory location B, and it cannot convert data types for you. If you need to convert CGFloat values to float values, then you need to do that conversion yourself:
CGFloat bigR;
memcpy(&bigR, [data bytes], sizeof(bigR));
self.radius = (float)bigR;
Same when reading multiple values:
CGFloat bigXY[2];
data = [fHandle readDataOfLength:sizeof(bigXY)];
memcpy(bigXY, [data bytes], sizeof(bigXY));
self.centerPos = ccp((float)bigXY[0], (float)bigXY[1]);
The casts only make it clearer where the conversion takes place; most compilers will also compile the code without the (float) casts and without complaining.
As a general rule:
memcpy(dst, src, size)
size must never be bigger than the memory src points to or the memory dst points to. In your case, size was always bigger than the memory dst pointed to.
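One way to make that rule mechanical (a minimal sketch, reusing the radius read from the question; NSCAssert is just one way to surface a violation):
CGFloat value = 0;
NSData *chunk = [fHandle readDataOfLength:sizeof(value)];
// The source must provide at least `size` bytes...
NSCAssert([chunk length] >= sizeof(value), @"short read");
// ...and the destination must hold at least `size` bytes,
// which is true by construction because we pass sizeof(value).
memcpy(&value, [chunk bytes], sizeof(value));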
So far, that explains why your code didn't work. However, you don't actually need memcpy at all: if you have a memory block made up of multiple values of a known data type, you can access that memory directly without copying it anywhere:
NSData * data = [fHandle readDataOfLength:sizeof(CGFloat)];
if (!data) {
// ... handle error ...
}
const CGFloat * cgfloatsInData = (const CGFloat *)[data bytes];
self.radius = (float)cgfloatsInData[0];
data = [fHandle readDataOfLength:sizeof(CGFloat) * 2];
if (!data) {
// ... handle error ...
}
cgfloatsInData = (const CGFloat *)[data bytes];
self.centerPos = ccp((float)cgfloatsInData[0], (float)cgfloatsInData[1]);
And so on. But this is highly inefficient: you seem to always expect a fixed-size structure with no optional values, so why not read it as a structure? That way only one I/O access is needed to read all of it, and the system has to create only one NSData object.
const struct {
CGFloat radius;
CGFloat xCoord;
CGFloat yCoord;
CGFloat startAngle;
CGFloat endAngle;
int dir;
} __attribute__((packed)) * entry;
// `const` as the memory `entry` will point to will be read-only.
// `* entry` means entry is a pointer to memory of a struct
// that looks as described above. __attribute__((packed)) means
// the memory must be laid out exactly as shown above and have no
// padding for better alignment of fields.
NSData * data = [fHandle readDataOfLength:sizeof(*entry)];
// `sizeof(*entry)` means the size of the memory entry points to,
// contrary to `sizeof(entry)` which would be the size of entry itself
// and that would simply be the size of a pointer on your system, 8 bytes,
// whereas `sizeof(*entry)` will be 44 bytes.
entry = (const void *)[data bytes];
// Any pointer type can be cast to `void *`, and assigning
// a `void *` pointer to some other pointer is always allowed by the compiler.
self.radius = (float)entry->radius;
self.centerPos = ccp((float)entry->xCoord, (float)entry->yCoord);
self.startAngle = (float)entry->startAngle;
self.endAngle = (float)entry->endAngle;
self.dir = entry->dir;
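One caveat to add here: readDataOfLength: may return fewer bytes than requested at end of file, so a cautious version checks the length before dereferencing entry (a minimal sketch):
NSData * data = [fHandle readDataOfLength:sizeof(*entry)];
if ([data length] < sizeof(*entry)) {
    // Short read: end of file or I/O error.
    // Bail out before touching `entry`, or you read past the buffer.
    return;
}
entry = (const void *)[data bytes];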
First, this is my code:
#pragma pack (4)
typedef struct _Login{
char user[32];
char pwd[32];
int userID;
}Login,*PLogin;
const unsigned long MSG_TAG_HEADER_YXHY = 0x59485859;
#pragma pack (2)
typedef struct tagTcpPacketHeader
{
int ulHtag;
char ucVersion;
char ucCmd;
int ulUserId;
short usPacketNum;
int ulDataLen;
}TcpPacketHeader,*LPTcpPacketHeader;
#pragma pack ()
const unsigned int TCP_HEADER_PACKET_LEN = sizeof(TcpPacketHeader);
- (NSData*)sendDataFileWithUserId:(const int)nUserId nCmd:(const int)nCmd pData:(NSData*)data{
NSData* sendData;
void* sendObj = malloc(data.length);
[data getBytes:sendObj length:data.length];
static int nPacketNum = 0;
int nLen = (int)data.length + TCP_HEADER_PACKET_LEN;
char *pTmpBuf = malloc(nLen);
LPTcpPacketHeader tcpHeader = (LPTcpPacketHeader)pTmpBuf;
tcpHeader->ulHtag = MSG_TAG_HEADER_YXHY;
tcpHeader->ucVersion = 1;
tcpHeader->ucCmd = nCmd;
tcpHeader->ulUserId = nUserId;
tcpHeader->usPacketNum = nPacketNum;
tcpHeader->ulDataLen = nLen;
memcpy(tcpHeader + TCP_HEADER_PACKET_LEN,sendObj, data.length);
sendData = [NSData dataWithBytes:pTmpBuf length:nLen];
nPacketNum++;
free(pTmpBuf);
free(sendObj);
return sendData;
}
- (NSData*)get_File_Login:(NSString*)userID{
int length = sizeof(Login);
Login log_in = {"123","456",userID.intValue};
NSData* login_data = [NSData dataWithBytes:&log_in length:length];
NSData* ret = [self sendDataFileWithUserId:log_in.userID nCmd:5 pData:login_data];
return ret;
}
Use
NSData* ms = [self get_File_Login:@"123"];
NSLog(@"%@", ms);
After frequent use, a problem appears.
Question
This really gives me a headache: why does "set a breakpoint in malloc_error_break to debug" appear? I have added the malloc_error_break breakpoint, but it doesn't help. Who can tell me the answer?
When you use the pointer in memcpy this way
memcpy(tcpHeader + TCP_HEADER_PACKET_LEN,sendObj, data.length);
this means that you want to copy into the memory location pointed to by tcpHeader plus TCP_HEADER_PACKET_LEN times the size of the data the pointer points to. It is the same as writing &tcpHeader[TCP_HEADER_PACKET_LEN].
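To make the scaling concrete (an illustrative sketch, not from the question):
TcpPacketHeader *p = (TcpPacketHeader *)pTmpBuf;
// Pointer arithmetic is scaled by the size of the pointed-to type:
// p + 1 points sizeof(TcpPacketHeader) bytes past pTmpBuf;
// p + TCP_HEADER_PACKET_LEN points TCP_HEADER_PACKET_LEN * sizeof(TcpPacketHeader)
//   bytes past pTmpBuf -- far beyond the end of the allocated buffer;
// pTmpBuf + TCP_HEADER_PACKET_LEN (a char *) advances by exactly that many bytes.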
Assuming you want to write to a location right after the header there are two ways to fix it:
1) use a pointer with a size of 1, meaning a char*. In your code you have a pointer pTmpBuf that is such so just change the code to:
memcpy(pTmpBuf + TCP_HEADER_PACKET_LEN, sendObj, data.length);
2) use the offset 1 in this calculation. Since the size of the data the pointer points to equals TCP_HEADER_PACKET_LEN, multiplying it by one gives the correct location:
memcpy(tcpHeader + 1, sendObj, data.length);
I would recommend the first, since it's clear what you are calculating. In the second it is unclear why you would add one, and it uses a pointer to one type when copying data that isn't of that type.
I'm converting a very large JSON result on my server to a compressed format that I can decompress in my Objective-C app. I would prefer to use the iOS 9 compression library if possible (libcompression.tbd), described in Apple's CompressionSample/BlockCompression.c sample code.
I'm passing the compressed NSData result to the following method:
#include "compression.h"
...
- (NSData *) getDecompressedData:(NSData *) compressed {
size_t dst_buffer_size = 20000000; //20MB
uint8_t *dst_buffer = malloc(dst_buffer_size);
uint8_t *src_buffer = malloc(compressed.length);
[compressed getBytes:src_buffer length:compressed.length];
size_t decompressedSize = compression_decode_buffer(dst_buffer, dst_buffer_size, src_buffer, compressed.length, nil, COMPRESSION_ZLIB);
NSData *decompressed = [[NSData alloc] initWithBytes:dst_buffer length:decompressedSize];
return decompressed;
}
The compressed parameter has a length that matches my server logs, but the result from compression_decode_buffer is always zero and dst_buffer is not modified. I'm not receiving any errors, and the log has no relevant info.
I've tried ZLIB and LZ4 compression / decompression methods and several libraries on the server side, all with the same result.
What am I doing wrong here?
After much testing and research, I found that the compression library I was using on my server adds a compression header (the first two bytes), per RFC 1950. I skipped those two bytes and compression_decode_buffer works like a champ!
- (NSData *) getDecompressedData:(NSData *) compressed {
size_t dst_buffer_size = 20000000; //20MB
uint8_t *dst_buffer = malloc(dst_buffer_size);
uint8_t *src_buffer = malloc(compressed.length);
[compressed getBytes:src_buffer range:NSMakeRange(2, compressed.length - 2)];
size_t decompressedSize = compression_decode_buffer(dst_buffer, dst_buffer_size, src_buffer, compressed.length - 2, nil, COMPRESSION_ZLIB);
NSData *decompressed = [[NSData alloc] initWithBytes:dst_buffer length:decompressedSize];
return decompressed;
}
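If you can't be sure every response carries the header, you can sniff for it instead of unconditionally skipping two bytes. A sketch based on RFC 1950: the low four bits of the first byte are 8 (deflate), and the first two bytes, read as a big-endian value, are divisible by 31.
const uint8_t *hdr = compressed.bytes;
BOOL hasZlibHeader = compressed.length >= 2
    && (hdr[0] & 0x0F) == 8                   // CM == 8 means deflate
    && ((hdr[0] << 8) | hdr[1]) % 31 == 0;    // FCHECK makes the pair a multiple of 31
size_t skip = hasZlibHeader ? 2 : 0;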
Thank you so much, azcoastal - saved me heaps of time!
Here's some working Swift code:
import Compression

let bytes = [UInt8](data) // Data -> [UInt8]
// Need to remove the first 2 bytes (a header) from the array!!
let slice = bytes[2...bytes.count-1]
let noheader = Array(slice)
let dst_count = bytes.count * MULTIPLY // MULTIPLY: your own estimated expansion factor
var dst = [UInt8](repeating: 0, count: dst_count) // destination
let size = compression_decode_buffer(&dst, dst_count,
noheader, noheader.count, nil, COMPRESSION_ZLIB)
I am just updating one of my applications and I have an error with my decompression method.
This is the warning I am experiencing:
Implicit conversion loses integer precision: 'unsigned long' to 'uInt' (aka 'unsigned int')
This is the line of code it's happening on:
stream.avail_in = len - stream.total_in;
And this is what the whole method looks like:
#pragma mark - ZLib Compression Methods
// Returns the decompressed version if the zlib compressed input data or nil if there was an error
- (NSData*) dataByDecompressingData:(NSData*)data {
NSLog(#"%lu", (unsigned long)data.length);
Byte* bytes = (Byte*)[data bytes];
NSInteger len = [data length];
NSMutableData *decompressedData = [[NSMutableData alloc] initWithCapacity:COMPRESSION_BLOCK];
Byte* decompressedBytes = (Byte*) malloc(COMPRESSION_BLOCK);
z_stream stream;
int err;
stream.zalloc = (alloc_func)0;
stream.zfree = (free_func)0;
stream.opaque = (voidpf)0;
stream.next_in = bytes;
err = inflateInit(&stream);
CHECK_ERR(err, #"inflateInit");
while (true) {
stream.avail_in = len - stream.total_in;
stream.next_out = decompressedBytes;
stream.avail_out = COMPRESSION_BLOCK;
err = inflate(&stream, Z_NO_FLUSH);
[decompressedData appendBytes:decompressedBytes length:(stream.total_out-[decompressedData length])];
if(err == Z_STREAM_END)
break;
CHECK_ERR(err, #"inflate");
}
err = inflateEnd(&stream);
CHECK_ERR(err, #"inflateEnd");
free(decompressedBytes);
return decompressedData;
}
First off, you should not use stream.total_in. It may or may not be a large enough type for your application. It is always unsigned long. Use your own total input counter sized for your application and ignore stream.total_in.
Second, I'm guessing that your CHECK_ERR() aborts somehow. You should not abort in the event of a Z_BUF_ERROR. In that case, you can continue by simply providing more input and/or more output space.
Third, the problem here is that you need to pick a stream.avail_in that is assured to fit in unsigned. You should be comparing the amount of remaining input to the largest value of unsigned, e.g. UINT_MAX or (unsigned)0 - 1. If the remaining data is larger, use the max value and deduct that from the remaining input. If smaller or equal, use all of it and set the remaining input to zero.
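A sketch of that third point, adapted to the method above (the remaining counter is mine): top up avail_in whenever zlib has drained it, capping each refill at what fits in zlib's uInt.
size_t remaining = (size_t)len;   // your own input counter, per the first point
stream.next_in = bytes;
stream.avail_in = 0;
do {
    if (stream.avail_in == 0) {
        // Refill with at most UINT_MAX bytes; inflate() advances next_in itself.
        stream.avail_in = remaining > UINT_MAX ? UINT_MAX : (uInt)remaining;
        remaining -= stream.avail_in;
    }
    stream.next_out = decompressedBytes;
    stream.avail_out = COMPRESSION_BLOCK;
    err = inflate(&stream, Z_NO_FLUSH);
    // Append the produced output as before, and treat Z_BUF_ERROR as
    // "needs more input or output space" rather than as a fatal error.
} while (err != Z_STREAM_END);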
Here is an example of turning 4 bytes into a 32-bit integer in Objective-C. The readInt function grabs 4 bytes via the read function and then converts them into a single 32-bit int. Does anyone know how I would convert 4 bytes to a float? I believe it is big-endian. Basically I need a readFloat function. I can never grasp these bitwise operations.
EDIT:
I forgot to mention that the original data comes from Java's DataOutputStream class. The writeFloat function, according to the Java docs:
Converts the float argument to an int using the floatToIntBits method
in class Float, and then writes that int value to the underlying
output stream as a 4-byte quantity, high byte first.
This is Objective-C trying to extract the data written by Java.
- (int32_t)read{
int8_t v;
[data getBytes:&v range:NSMakeRange(length,1)];
length++;
return ((int32_t)v & 0x0ff);
}
- (int32_t)readInt {
int32_t ch1 = [self read];
int32_t ch2 = [self read];
int32_t ch3 = [self read];
int32_t ch4 = [self read];
if ((ch1 | ch2 | ch3 | ch4) < 0){
@throw [NSException exceptionWithName:@"Exception" reason:@"EOFException" userInfo:nil];
}
return ((ch1 << 24) + (ch2 << 16) + (ch3 << 8) + (ch4 << 0));
}
OSByteOrder.h contains functions for reading, writing, and converting integer data.
You can use OSSwapBigToHostInt32() to convert a big-endian integer to the native representation, then copy the bits into a float:
NSData* data = [NSData dataWithContentsOfFile:@"/tmp/java/test.dat"];
int32_t bytes;
[data getBytes:&bytes length:sizeof(bytes)];
bytes = OSSwapBigToHostInt32(bytes);
float number;
memcpy(&number, &bytes, sizeof(bytes));
NSLog(#"Float %f", number);
[data getBytes:&myFloat range:NSMakeRange(locationWhereFloatStarts, sizeof(float))] ought to do the trick.
Given that the data comes from DataOutputStream's writeFloat() method, then that is documented to use Float.floatToIntBits() to create the integer representation. intBitsToFloat() further documents how to interpret that representation.
I'm not sure if it's the same thing, but the xdr API seems like it might handle that representation. The credits on the man page refer to Sun Microsystems standards/specifications, so it seems likely it's related to Java.
So, it may work to do something like:
// At top of file:
#include <rpc/types.h>
#include <rpc/xdr.h>
// In some function or method:
XDR xdr;
xdrmem_create(&xdr, (char*)data.bytes + offset, data.length - offset, XDR_DECODE);
float f;
if (!xdr_float(&xdr, &f))
/* handle error */;
xdr_destroy(&xdr);
If the data consists of a whole stream in eXternal Data Representation, then you would create one XDR stream for the whole task of extracting items from it, and use many xdr_...() calls between creating and destroying it to extract all of the items.
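For example, pulling several items out of a single stream might look like this (a sketch; xdr_int() and xdr_float() are the standard XDR primitives):
XDR xdr;
xdrmem_create(&xdr, (char*)data.bytes, (u_int)data.length, XDR_DECODE);
int count;
float f;
if (!xdr_int(&xdr, &count) || !xdr_float(&xdr, &f))
    /* handle error */;
xdr_destroy(&xdr);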
I have limited experience working with the Core Foundation types & collections, so apologies if this is obvious.
I'm using the CFBitVector type to store some bit sequences, and I need to store it in a binary data format (so that it can be added to a Core Data store). The most sensible thing seems to be to store this in a CFData type, which can be toll-free bridged with an NSData and added to the store, but I am unsure of how to do this.
Can anybody help me out with a simple example of storing CFTypes in CF/NSData?
Edit:
Is this even the right approach? Should I try converting the CFBitVector into a series of ints which can then be stored in the data model? Or perhaps a transformable attribute?
The way I ended up doing this was to roll my own attribute transformer in order to convert a CFBitVectorRef into an NSData instance. The benefit of this is that I can really cram the bit array tightly into a block of binary data, as in my case I really need to keep the storage size to a minimum.
Below is the implementation of my CFBitVectorTransformer class. It essentially reads each bit and packs them into unsigned chars ("segments" in the code below), which are then appended to a mutable NSData buffer. The code would work with types larger than unsigned char; however, I wanted the smallest chunks possible in order to really minimise the size of the resulting data.
#define kBitsPerByte 8
@implementation CFBitVectorTransformer
+ (Class)transformedValueClass
{
return [NSData class];
}
+ (BOOL)allowsReverseTransformation
{
return YES;
}
/* CFBitVectorRef -> NSData */
- (id)transformedValue:(id)value
{
if (!value) return nil;
if ([value isKindOfClass:[NSData class]]) return value;
/* Prepare the bit vector. */
CFBitVectorRef bitVector = (__bridge CFBitVectorRef)value;
CFIndex bitVectorCount = CFBitVectorGetCount(bitVector);
/* Prepare the data buffer. */
NSMutableData *bitData = [NSMutableData data];
unsigned char bitVectorSegment = 0;
NSUInteger bytesPerSegment = sizeof(char);
NSUInteger bitsPerSegment = bytesPerSegment * kBitsPerByte;
for (CFIndex bitIndex = 0; bitIndex < bitVectorCount; bitIndex++) {
/* Shift the bit into the segment the appropriate number of places. */
CFBit bit = CFBitVectorGetBitAtIndex(bitVector, bitIndex);
int segmentShift = bitIndex % bitsPerSegment;
bitVectorSegment |= bit << segmentShift;
/* If this is the last bit we can squeeze into the segment, or it's the final bit, append the segment to the data buffer. */
if (segmentShift == bitsPerSegment - 1 || bitIndex == bitVectorCount - 1) {
[bitData appendBytes:&bitVectorSegment length:bytesPerSegment];
bitVectorSegment = 0;
}
}
return [NSData dataWithData:bitData];
}
/* NSData -> CFBitVectorRef */
- (id)reverseTransformedValue:(id)value
{
if (!value) return NULL;
if (![value isKindOfClass:[NSData class]]) return NULL;
/* Prepare the data buffer. */
NSData *bitData = (NSData *)value;
char *bitVectorSegments = (char *)[bitData bytes];
NSUInteger bitDataLength = [bitData length];
/* Prepare the bit vector. */
CFIndex bitVectorCapacity = bitDataLength * kBitsPerByte;
CFMutableBitVectorRef bitVector = CFBitVectorCreateMutable(kCFAllocatorDefault, bitVectorCapacity);
CFBitVectorSetCount(bitVector, bitVectorCapacity);
for (NSUInteger byteIndex = 0; byteIndex < bitDataLength; byteIndex++) {
unsigned char bitVectorSegment = bitVectorSegments[byteIndex];
/* Store each bit of this byte in the bit vector. */
for (NSUInteger bitIndex = 0; bitIndex < kBitsPerByte; bitIndex++) {
CFBit bit = bitVectorSegment & 1 << bitIndex;
CFIndex bitVectorBitIndex = (byteIndex * kBitsPerByte) + bitIndex;
CFBitVectorSetBitAtIndex(bitVector, bitVectorBitIndex, bit);
}
}
return (__bridge_transfer id)bitVector;
}
@end
This nicely abstracts the conversion of data, allowing you to just set the CFBitVectorRef as an attribute in the data model, and should be plenty fast enough for most purposes.
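For anyone wiring this up: register the transformer once (for example at launch) under the same name you enter for the transformable attribute in the Core Data model editor. The name string here is illustrative:
[NSValueTransformer setValueTransformer:[[CFBitVectorTransformer alloc] init]
                                forName:@"CFBitVectorTransformer"];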
I hope this helps somebody else in a similar situation.