sizeof works in a very strange way in C
printf("%lld\n", sizeof(char));
printf("%lld\n", sizeof('A'));
Result: 1; 4 ('A' here is just a plain English letter, nothing exotic).
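For context, this is standard C behavior rather than a quirk: a character constant such as 'A' has type int, not char, so sizeof('A') is sizeof(int). A minimal check, assuming a typical platform where int is 4 bytes:

#include <stdio.h>

int main(void) {
    /* In C (unlike C++), a character constant has type int. */
    printf("%zu\n", sizeof(char)); /* always 1 */
    printf("%zu\n", sizeof('A'));  /* same as sizeof(int), typically 4 */
    printf("%zu\n", sizeof(int));  /* typically 4 */
    return 0;
}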
I am trying to bit-shift a Uint32 in Dart. How can I handle this if there is no support for it in dart:ffi?
import 'dart:ffi';
Uint32 len = 0 as Uint32;
len >>= 1; // does not compile
I could try it with a "normal" int, but I am afraid: will the result always be the same as with Uint32?
The Uint32 type represents native values. There is no Dart value with that type, so your as Uint32 cast will fail.
If you have a Pointer<Uint32>, then you can use that to refer to the native value.
Example:
Pointer<Uint32> p = ...;
p.value >>= 1;
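For reference, a minimal C sketch (the helper name halve_u32 is made up) of what p.value >>= 1 amounts to on the native side: an unsigned, logical right shift on a 32-bit slot. For a non-negative value that fits in 32 bits, shifting a regular Dart int right gives the same result.

#include <stdint.h>

/* Hypothetical helper: what `p.value >>= 1` on a Pointer<Uint32> does natively. */
void halve_u32(uint32_t *p) {
    *p >>= 1; /* unsigned, so the shift is logical; no sign bit is shifted in */
}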
I tried sending F13 with kb.stroke("F13");
Well, it doesn't work, though it works fine with anything F12 and below.
I'm trying to use this in a custom remote in the Unified Remote app, so my only workaround for now is using os.start to run an AHK script that does the key sending, but it's a very slow approach.
Any help will be appreciated.
local ffi = require"ffi"

ffi.cdef[[
typedef struct {            /* flattened stand-in for the Win32 INPUT struct */
    uintptr_t type;         /* DWORD type field (+ alignment padding on x64) */
    uint16_t wVk;
    uint16_t wScan;
    uint32_t dwFlags;
    uint32_t time;
    uintptr_t dwExtraInfo;
    uint32_t x[2];          /* pads to the size of the full INPUT union */
} INP;
int SendInput(int, void*, int);
]]

local inp_t = ffi.typeof"INP[2]"

local function PressAndReleaseKey(vkey)
    local inp = inp_t()
    for j = 0, 1 do
        inp[j].type = 1          -- INPUT_KEYBOARD
        inp[j].wVk = vkey
        inp[j].dwFlags = j * 2   -- 0 = key down, 2 = KEYEVENTF_KEYUP
    end
    ffi.C.SendInput(2, inp, ffi.sizeof"INP")
end

PressAndReleaseKey(0x57) -- W
PressAndReleaseKey(0x7C) -- F13
VKeys:
https://learn.microsoft.com/en-us/windows/win32/inputdev/virtual-key-codes
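For comparison, here is a sketch of the same call written directly in C against the Win32 API (assuming a Windows toolchain, linked with user32); the ffi.cdef above approximates this INPUT/KEYBDINPUT layout.

#include <windows.h>
#include <string.h>

static void press_and_release(WORD vk)
{
    INPUT in[2];
    memset(in, 0, sizeof in);
    in[0].type = INPUT_KEYBOARD;        /* key down */
    in[0].ki.wVk = vk;
    in[1].type = INPUT_KEYBOARD;        /* key up */
    in[1].ki.wVk = vk;
    in[1].ki.dwFlags = KEYEVENTF_KEYUP;
    SendInput(2, in, sizeof(INPUT));
}

int main(void)
{
    press_and_release(VK_F13);          /* 0x7C */
    return 0;
}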
I am using the following code to store the data of a string in a char*.
NSString *hotelName = [components[2] stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
hotelInfo->hotelName = malloc(sizeof(char) * hotelName.length + 1);
strncpy(hotelInfo->hotelName, [hotelName UTF8String], hotelName.length + 1);
NSLog(#"HOTEL NAME: %s",hotelInfo->hotelName);
The problem is that the Greek characters are printed strangely. I have also tried another encoding (e.g. NSWindowsCP1253StringEncoding, but it crashes).
I tried even that:
hotelInfo->hotelName = (const char *)[hotelName cStringUsingEncoding:NSUnicodeStringEncoding];
but it also produces strange characters.
What do I miss?
EDIT:
After some suggestions I tried the following:
if ([hotelName canBeConvertedToEncoding:NSWindowsCP1253StringEncoding]) {
    const char *cHotelName = (const char *)[hotelName cStringUsingEncoding:NSWindowsCP1253StringEncoding];
    int bufSize = strlen(cHotelName) + 1;
    if (bufSize > 0) {
        hotelInfo->hotelName = malloc(sizeof(char) * bufSize);
        strncpy(hotelInfo->hotelName, [hotelName UTF8String], bufSize);
        NSLog(@"HOTEL NAME: %s", hotelInfo->hotelName);
    }
} else {
    NSLog(@"String cannot be encoded! Sorry! %@", hotelName);
    for (NSInteger charIdx = 0; charIdx < hotelName.length; charIdx++) {
        // Do something with character at index charIdx, for example:
        char x[hotelName.length];
        NSLog(@"%C", [hotelName characterAtIndex:charIdx]);
        x[charIdx] = [hotelName characterAtIndex:charIdx];
        NSLog(@"%s", x);
        if (charIdx == hotelName.length - 1)
            hotelInfo->hotelName = x;
    }
    NSLog(@"HOTEL NAME: %s", hotelInfo->hotelName);
}
But still nothing!
First of all, it is not guaranteed that any NSString can be represented as a C character array (so-called C-String). The reason is that there is just a limited set of characters available. You should check if the string can be converted (by calling canBeConvertedToEncoding:).
Secondly, the malloc and strncpy functions rely on the length of the C-String, not on the length of the NSString. So you should first get the C-String from the NSString, then get its length (strlen), and use this value in the function calls:
const char *cHotelName = (const char *)[hotelName cStringUsingEncoding:NSWindowsCP1253StringEncoding];
int bufSize = strlen(cHotelName) + 1;
hotelInfo->hotelName = malloc(sizeof(char) * bufSize);
strncpy(hotelInfo->hotelName, cHotelName, bufSize);
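To make the length point concrete, here is a small standalone C sketch (with a made-up Greek string, assuming the source file is saved as UTF-8) of why the byte length of the encoded C string, not the NSString character count, has to size the buffer when the text is UTF-8:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    const char *utf8 = "Ξενοδοχείο";   /* 10 Greek letters, 20 bytes in UTF-8 */
    size_t bytes = strlen(utf8);       /* counts bytes, not characters */
    char *copy = malloc(bytes + 1);
    if (copy != NULL) {
        memcpy(copy, utf8, bytes + 1); /* copy the terminating NUL as well */
        printf("bytes=%zu text=%s\n", bytes, copy);
        free(copy);
    }
    return 0;
}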
I am streaming video to a UIImage in my app using NSURLConnection. I am having trouble converting to Swift a part of my code that works in Objective-C:
func connection(connection: NSURLConnection, didReceiveData data: NSData) {
//...other code
var sizeOfJPEG_1: UInt32 // same type as it was in Objective C
var payloadsize = [UInt8](count: 4, repeatedValue: 0x00) //was uint8_t in Objective C
data.getBytes(&payloadsize[1], range: NSMakeRange(12, 3))
payloadsize[0] = 0
sizeOfJPEG_1 = (payloadsize[1] << 16) + (payloadsize[2] << 8) + payloadsize[3]//here is the problem
//UInt32(sizeOfJPEG_1 = (payloadsize[1] << 16) + (payloadsize[2] << 8) + payloadsize[3]) //the way I am currently dealing with converting my shifting and additions to the correct sizeOfJPEG_1 UInt32 argument type
//..more code
}
I am having two issues here:
I am not sure of the best way to convert the payloadsize UInt8 bit-shifting and additions into the UInt32 value I need.
More importantly, and possibly the thing to figure out first, is the runtime error I get when shifting a UInt8 by 16. In Objective-C the type was uint8_t; I do not know whether that was a legal operation in Objective-C with uint8_t and simply is not in Swift with UInt8.
I am getting an overflow error because I am left-shifting a UInt8 by 16 as well as by 8:
fatal error: shift amount is larger than type size in bits
I think I understand that Objective-C will quietly shift a uint8_t by 16 without crashing, but I do not know how to reason about the result: what is uint8_t << 16, wouldn't it be 0? (uint8_t is defined as unsigned char.)
Unlike (Objective-)C, Swift does not promote smaller integer types
to int or unsigned int, so you have to do that explicitly (before
shifting the data):
let sizeOfJPEG_1 = UInt32(payloadsize[1]) << 16 + UInt32(payloadsize[2]) << 8 + UInt32(payloadsize[3])
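The promotion the answer mentions can be seen in plain C: the uint8_t operands are promoted to int before the shift, so shifting by 16 is well defined there, whereas Swift shifts in the operand's own width and traps. A small C sketch with made-up payload bytes:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t payloadsize[4] = { 0x00, 0x01, 0x23, 0x45 };
    /* Each uint8_t is promoted to int (32 bits on these platforms)
       before the shift, so no bits are lost. */
    uint32_t sizeOfJPEG = (payloadsize[1] << 16) + (payloadsize[2] << 8) + payloadsize[3];
    printf("0x%06X\n", (unsigned)sizeOfJPEG); /* prints 0x012345 */
    return 0;
}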
I have a code snippet from an OpenCV example as follows:
CvScalar sum_line_pixels( IplImage* image, CvPoint pt1, CvPoint pt2 )
{
    CvLineIterator iterator;
    int blue_sum = 0, green_sum = 0, red_sum = 0;
    int count = cvInitLineIterator( image, pt1, pt2, &iterator, 8, 0 );

    for( int i = 0; i < count; i++ ){
        blue_sum += iterator.ptr[0];
        green_sum += iterator.ptr[1];
        red_sum += iterator.ptr[2];
        CV_NEXT_LINE_POINT(iterator);

        /* print the pixel coordinates: demonstrates how to calculate the
           coordinates */
        {
            int offset, x, y;
            /* assume that ROI is not set, otherwise need to take it
               into account. */
            offset = iterator.ptr - (uchar*)(image->imageData);
            y = offset/image->widthStep;
            x = (offset - y*image->widthStep)/(3*sizeof(uchar) /* size of pixel */);
            printf("(%d,%d)\n", x, y );
        }
    }
    return cvScalar( blue_sum, green_sum, red_sum );
}
I got stuck on the line:
offset = iterator.ptr - (uchar*)(image->imageData);
Iterator structure is:
PCvLineIterator = ^TCvLineIterator;
TCvLineIterator = packed record
  ptr: ^UCHAR;
  err: Integer;
  plus_delta: Integer;
  minus_delta: Integer;
  plus_step: Integer;
  minus_step: Integer;
end;
image->imageData is
imageData: PByte;
Could someone help me convert the offset line to Delphi?
Thanks!
The line that calculates offset is simply calculating the number of bytes between the pointers iterator.ptr and image->imageData. Assuming you are using the same variable names, a Delphi version of that code would look like this:
offset := PByte(iterator.ptr) - image.ImageData;
However, since you are using an older version of Delphi, the above code will not compile. Older Delphi versions (pre Delphi 2009) don't permit pointer arithmetic on types other than PAnsiChar. So you will need to write it like this:
offset := PAnsiChar(iterator.ptr) - PAnsiChar(image.ImageData);
I suspect that what is confusing you in the C code is (uchar*). That is the C syntax for a type cast.
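If it helps, here is the same idea as a tiny standalone C program: once both pointers are cast to unsigned char*, the subtraction counts bytes, whatever the original pointee type was.

#include <stdio.h>

int main(void) {
    int buf[4];
    unsigned char *base = (unsigned char *)&buf[0];
    unsigned char *p    = (unsigned char *)&buf[3];
    printf("%td\n", p - base); /* 12 on a platform with 4-byte int */
    return 0;
}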
As an aside, it is a mistake to use packed records for OpenCV structs. If you take a look at the C header files you will see that these structs are not packed. This is benign in the case of CvLineIterator since it has no padding, but you will get caught out somewhere down the line if you get into the bad habit of packing structs that should not be packed.