I'm in the middle of writing a small application that needs to read some complex binary messages in LuaJIT.
I've been using the bit module and string.rep a lot. However, it's all very cumbersome. I'm new to LuaJIT and think there might be a much easier way using the FFI.
In C I can declare a structure like this:
struct mystruct
{
    uint32_t field1;
    char     field2[6];
    uint64_t field3;
    short    field4;
} __attribute__((packed));
Reading LuaJIT's FFI documentation, it seems you can declare:
ffi.cdef[[
#pragma pack(1)
struct mystruct
{
    uint32_t field1;
    char     field2[6];
    uint64_t field3;
    short    field4;
};
]]
I can then create a mystruct and access the fields like this:
local ms = ffi.new("struct mystruct")
ms.field1 = 32
-- ... etc
But how do I convert this back into a Lua string?
I tried this, but it didn't seem to do what I wanted.
local s = tostring(ms)
and this:
local s = ffi.string(ms)
produces the following error "bad argument #1 to 'string' (cannot convert 'struct mystruct' to 'const char *')"
So I tried a cast:
local s = ffi.string(ffi.cast("char*", ms))
No error, but it looks wrong on the wire.
You have to explicitly specify the length when using ffi.string with a non-string-like parameter:
str = ffi.string(ptr [,len])
Creates an interned Lua string from the data pointed to by ptr.
If the optional argument len is missing, ptr is converted to a "char *" and the data is assumed to be zero-terminated. The length of the string is computed with strlen().
When running the following code, I get the expected (little-endian) result:
ffi = require 'ffi'
ffi.cdef[[
#pragma pack(1)
struct mystruct
{
    uint32_t field1;
    char     field2[6];
    uint64_t field3;
    short    field4;
};
]]
function string.tohex(str)
    return (str:gsub('.', function (c)
        return string.format('%02X', string.byte(c))
    end))
end
ms = ffi.new('struct mystruct', 1, {2, 3, 4, 5, 6, 7}, 8, 9)
s = ffi.string(ms, ffi.sizeof(ms)) -- specify how long the byte sequence is
print(s:tohex()) --> 0100000002030405060708000000000000000900
Update: I know this is not part of the original question, but I just learned this trick, so for completeness, here is a way to convert a Lua string back into FFI cdata:
data = ffi.new('struct mystruct') -- create a new cdata
ffi.copy(data, s, ffi.sizeof(data)) -- fill it with data from Lua string 's'
print(data.field1, data.field4) --> 1 9
The previous answer has an error: according to the current luaffi implementation, you should cast to void* instead. Use ffi.string(ffi.cast("void*", ms), ffi.sizeof(ms)) to do so.
Related
I'm in an Objective-C method with various NSStrings that I want to pass to a C function. The C function requires a struct to be malloc'd so that it can be passed in; this struct contains char * fields. So the struct is defined like this:
struct libannotate_baseManual {
char *la_bm_code; // The base code for this manual (pointer to malloc'd memory)
char *la_bm_effectiveRevisionId; // The currently effective revision ID (pointer to malloc'd memory or null if none effective)
char **la_bm_revisionId; // The null-terminated list of revision IDs in the library for this manual (pointer to malloc'd array of pointers to malloc'd memory)
};
This struct is then used in the following C function definition:
void libannotate_setManualLibrary(struct libannotate_baseManual **library) { ..
So that's the function I need to call from Objective-C.
So I have various NSStrings that I basically want to pass in there to populate those fields: la_bm_code, la_bm_effectiveRevisionId, la_bm_revisionId. I could convert them to const char * by using [NSString UTF8String], but I need char *, not const char *.
Also I need to do suitable malloc's for these fields, though apparently I don't need to worry about freeing the memory afterwards. C is not my strong point, though I know Objective-C well.
strdup() is your friend here, as it both malloc()s and strcpy()s for you in one simple step. Its memory is also released with free(), and it does the const char * to char * conversion for you!
NSString *code = ..., *effectiveRevId = ..., *revId = ...;
struct libannotate_baseManual *abm = malloc(sizeof(struct libannotate_baseManual));
abm->la_bm_code = strdup([code UTF8String]);
abm->la_bm_effectiveRevisionId = strdup([effectiveRevId UTF8String]);
const unsigned numRevIds = 1;
abm->la_bm_revisionId = malloc(sizeof(char *) * (numRevIds + 1));
abm->la_bm_revisionId[0] = strdup([revId UTF8String]);
abm->la_bm_revisionId[1] = NULL;
const unsigned numAbms = 1;
struct libannotate_baseManual **abms = malloc(sizeof(struct libannotate_baseManual *) * (numAbms + 1));
abms[0] = abm;
abms[1] = NULL;
libannotate_setManualLibrary(abms);
Good luck, you'll need it. It's one of the worst interfaces I've ever seen.
I need to find a way to convert an arbitrary character typed by a user into an ASCII representation to be sent to a network service. My current approach is to create a lookup dictionary and send the corresponding code. After creating this dictionary, I see that it is hard to maintain and determine if it is complete:
__asciiKeycodes[#"F1"] = #(112);
__asciiKeycodes[#"F2"] = #(113);
__asciiKeycodes[#"F3"] = #(114);
//...
__asciiKeycodes[#"a"] = #(97);
__asciiKeycodes[#"b"] = #(98);
__asciiKeycodes[#"c"] = #(99);
Is there a better way to get the ASCII character code from an arbitrary key typed by a user (using a standard 104-key keyboard)?
Objective-C has the base C primitive data types, so there is a little trick you can do: store the keystroke in a char, then cast it to an int. The default conversion in C from a char to an int gives that char's ASCII value. Here's a quick example.
char character = 'a';
NSLog(@"a = %d", (int)character);
console output = a = 97
To go the other way around, cast an int to a char:
int asciiValue = 97;
NSLog(@"97 = %c", (char)asciiValue);
console output = 97 = a
Alternatively, you can do a direct conversion within initialization of your int or char and store it in a variable.
char asciiToCharOf97 = (char)97; //Stores 'a' in asciiToCharOf97
int charToAsciiOfA = (int)'a'; //Stores 97 in charToAsciiOfA
This seems to work for most keyboard keys, not sure about function keys and return key.
NSString *input = @"abcdefghijklkmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890!@#$%^&*()_+[]{}|;':\"\\,./<>?~ ";
for (int i = 0; i < input.length; i++)
{
    NSLog(@"Found (at %i): %i", i, [input characterAtIndex:i]);
}
Use an NSString stringWithFormat: call and pass the int values.
I want to extract the string that appears after the first occurrence of "/".
For example, I have a char *card in one of these forms:
hw:0/Line
default/Master
and I need a char *channel (Line, Master, ...).
In standard C I can use strcspn and strlen:
char *card, *channel;
int i;
i = strcspn(card, "/");
if (i == strlen(card)) {
    channel = "Master";
} else {
    card[i] = 0;
    channel = card + i + 1;
}
How can I do this using glib string utility functions?
What's wrong with strcspn and strlen?
GLib reimplements a lot of stuff from C99 and POSIX because it targets C89 compilers. It doesn't reimplement things like strlen or strcspn because there is no need to: they are already in C89.
GLib also tends to provide UTF-8 versions of a lot of functions, and while there is a g_utf8_strlen, I'm not aware of a g_utf8_strcspn; I don't think one is really necessary in this case. If it is, you'll probably have to implement it yourself by iterating through the string with g_utf8_next_char and checking each character against the result of g_utf8_get_char or g_utf8_get_char_validated, as sketched below.
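If you really did want a UTF-8-aware strcspn, here is a minimal sketch of my own (it is not part of GLib; it assumes valid UTF-8 input and leans on the real g_utf8_strchr, g_utf8_get_char and g_utf8_next_char helpers):
#include <glib.h>

/* Sketch: byte offset of the first character of str that also appears in
 * reject, or the length of str if there is none (like strcspn, but
 * comparing whole UTF-8 characters instead of single bytes). */
static gsize utf8_strcspn(const gchar *str, const gchar *reject)
{
    const gchar *p = str;

    while (*p != '\0') {
        gunichar c = g_utf8_get_char(p);
        if (g_utf8_strchr(reject, -1, c) != NULL)
            return p - str;
        p = g_utf8_next_char(p);
    }
    return p - str;
}

int main(void)
{
    gchar card[] = "hw:0/Line";
    gsize i = utf8_strcspn(card, "/");
    const gchar *channel = (card[i] == '/') ? card + i + 1 : "Master";

    g_print("channel = %s\n", channel); /* prints "Line" */
    return 0;
}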
I got a suspicious pointer conversion error here. What might be the reason for this error?
I also initialized the code[] array globally as int *code[128] = {0};
void encode(const char *s, int *out)
{
    while (*s)
    {
        out = code[*s];
        out += strlen(code[*s++]); // this is where I got the error
    }
}
When you assign a pointer variable of one type the address of a different type, that automatic conversion is what the compiler flags as a suspicious pointer conversion.
strlen requires a const char *, while int *code[128] means code is an array of int *, so code[*s++] is an int *.
When an int * is supplied where a const char * is expected, you get this error.
In general, passing an int * to strlen isn't a good idea anyway, because strlen stops at the first '\0' byte, and a 4-byte int is very likely to contain a zero byte. For example, the integer 3 is stored as three bytes of 0 and one byte of 3, as the snippet below demonstrates.
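Here is a small standalone snippet (my own illustration, assuming a little-endian machine) showing strlen giving up after a single byte when pointed at an int:
#include <stdio.h>
#include <string.h>

int main(void)
{
    int value = 3; /* stored as the bytes 03 00 00 00 on a little-endian machine */

    /* strlen() stops at the first zero byte inside the int. */
    size_t n = strlen((const char *)&value);
    printf("%zu\n", n); /* prints 1 */
    return 0;
}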
I'm trying to write a simple encryption routine in C using OpenSSL and I've found something strange. I'm not a C guru or an OpenSSL professional, so I might have made a mistake.
The function is as follows:
char *rsa_encrypt(char *data)
{
    const char xponent_in_hex[] = "010001";
    const char modulus_in_hex[] =
        "D0BA16F11907E7B0819705A15264AC29BEE9F1EC5F22642992"
        "D3E27100B7F212864A624A12FFB6D531712B0B0225AAD0C2E313D077A7DB2A5A33483EEFF41A9D";
    BIGNUM *xponent = NULL;
    BIGNUM *modulus = NULL;
    BN_hex2bn(&xponent, xponent_in_hex);
    BN_hex2bn(&modulus, modulus_in_hex);
    RSA *rsa = RSA_new();
    rsa->e = xponent;
    rsa->n = modulus;
    rsa->iqmp = NULL;
    rsa->d = NULL;
    rsa->p = NULL;
    rsa->q = NULL;
    char encoded[512] = { 0 };
    RSA_public_encrypt(
        strlen(data),
        (const unsigned char *)data,
        (unsigned char *)encoded,
        rsa,
        RSA_PKCS1_OAEP_PADDING
    );
    RSA_free(rsa);
    return (encoded);
}
int _tmain(int argc, _TCHAR* argv[])
{
    printf("%s\n", base64_encode(rsa_encrypt("ABC")));
    printf("%s\n", base64_encode(rsa_encrypt("ABC")));
    printf("%s\n", base64_encode(rsa_encrypt("ABC")));
}
I call that function on the same data several times and it generates a different value each time. That seems wrong, because the exponent and modulus of the created RSA structure are constant and the input data is the same in each call.
So why does RSA_public_encrypt behave that way?
How should I generate a public key for RSA encryption based on an exponent and modulus?
And where have I made a mistake?
This is actually correct, and you're not making a mistake. Your confusion stems from the RSA_PKCS1_OAEP_PADDING parameter to RSA_public_encrypt.
The RSA encryption process is actually:
Take the plaintext (plain) and encode it, producing encoded_plain.
Encrypt encoded_plain.
(As you would expect, the decryption process requires you to both decrypt the value, and then decode the message).
The RSA_PKCS1_OAEP_PADDING parameter specifies how the plaintext should be encoded (that OAEP encoding should be used).
A simplified explanation is that OAEP padding uses some random values for the padding, so xxxxxxxABC, yyyyyyyABC and zzzzzzzABC are all valid encoded_plain values for your plaintext, and each of them encrypts to a different ciphertext. If you perform the corresponding decrypt (and decode, by passing the same RSA_PKCS1_OAEP_PADDING parameter to RSA_private_decrypt) operation, you should still get "ABC" as the output for each of the ciphertexts, as the padding is stripped off all three.
(If you want to be precise, the OAEP encoding scheme is more complicated than that; see RFC 3447 section 7.1.1. But those are probably details you don't care about.)
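To see this behaviour in isolation, here is a rough, self-contained sketch using the legacy OpenSSL 1.x RSA_* API (deprecated in OpenSSL 3.0). It generates a throw-away key pair rather than using the fixed public key from the question, since decryption needs the private half: two OAEP encryptions of "ABC" give different ciphertexts, yet both decrypt to the same plaintext.
#include <stdio.h>
#include <string.h>
#include <openssl/bn.h>
#include <openssl/rsa.h>

int main(void)
{
    /* Throw-away 2048-bit key pair with public exponent 65537. */
    RSA *rsa = RSA_new();
    BIGNUM *e = BN_new();
    BN_set_word(e, RSA_F4);
    RSA_generate_key_ex(rsa, 2048, e, NULL);

    const unsigned char msg[] = "ABC";
    unsigned char c1[256], c2[256], out[256];

    int len1 = RSA_public_encrypt(3, msg, c1, rsa, RSA_PKCS1_OAEP_PADDING);
    int len2 = RSA_public_encrypt(3, msg, c2, rsa, RSA_PKCS1_OAEP_PADDING);
    printf("ciphertexts identical: %s\n",
           (len1 == len2 && memcmp(c1, c2, len1) == 0) ? "yes" : "no"); /* almost surely "no" */

    int n = RSA_private_decrypt(len1, c1, out, rsa, RSA_PKCS1_OAEP_PADDING);
    printf("decrypted: %.*s\n", n, (char *)out); /* prints "ABC" */

    BN_free(e);
    RSA_free(rsa);
    return 0;
}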
The scope of encoded ends at the end of the rsa_encrypt function. Your return pointer will point to an invalid area of memory that might no longer contain what you expect, because something else (a later function call or another thread, for example) has written over it. The answer explaining the padding is correct; this is a separate bug, and one possible fix is sketched below.
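Here is one possible fix, sketched with a deliberately changed signature for illustration (the RSA key and an output-length parameter are passed in, and the caller is assumed to free() the returned buffer): allocate the result on the heap instead of on the stack.
#include <stdlib.h>
#include <string.h>
#include <openssl/rsa.h>

/* Sketch only: the caller owns and frees the returned buffer. The length is
 * returned separately because RSA output is binary data, not a
 * zero-terminated C string. */
unsigned char *rsa_encrypt_heap(const char *data, RSA *rsa, int *out_len)
{
    unsigned char *encoded = malloc(RSA_size(rsa));
    if (encoded == NULL)
        return NULL;

    *out_len = RSA_public_encrypt(strlen(data),
                                  (const unsigned char *)data,
                                  encoded,
                                  rsa,
                                  RSA_PKCS1_OAEP_PADDING);
    return encoded; /* caller must free() this */
}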