How to get the size of a user-defined struct? (sizeof)

I've got a structure with C representation:
struct Scard_IO_Request {
    proto: u32,
    pciLength: u32
}
When I want to get its size (like sizeof() in C) using:
mem::sizeof<Scard_IO_Request>();
I get a compilation error:
"error: `sizeof` is a reserved keyword"
Why can't I use this sizeof function like in C? Is there an alternative?

For two reasons:
There is no such function as "sizeof", so the compiler is going to have a rather difficult time calling it.
That's not how you invoke generic functions.
If you check the documentation for mem::size_of (which you can find even if you search for "sizeof"), you will see that it includes a runnable example which shows you how to call it. For posterity, the example in question is:
fn main() {
    use std::mem;
    assert_eq!(4, mem::size_of::<i32>());
}
In your specific case, you'd get the size of that structure using
mem::size_of::<Scard_IO_Request>()
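For completeness, a runnable sketch for the struct in the question, assuming #[repr(C)] is what's meant by "C representation" (the original names are kept, with allow attributes to silence the naming-convention lints):

use std::mem;

#[repr(C)]
#[allow(non_camel_case_types, non_snake_case)]
struct Scard_IO_Request {
    proto: u32,
    pciLength: u32,
}

fn main() {
    // Two u32 fields with C layout: 4 + 4 = 8 bytes.
    assert_eq!(8, mem::size_of::<Scard_IO_Request>());
    println!("{}", mem::size_of::<Scard_IO_Request>()); // prints 8
}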


Does the using declaration allow for incomplete types in all cases?

I'm a bit confused about the implications of the using declaration. The keyword implies that a new type is merely declared. This would allow for incomplete types. However, in some cases it is also a definition, no? Compare the following code:
#include <variant>
#include <iostream>

struct box;

using val = std::variant<std::monostate, box, int, char>;

struct box
{
    int a;
    long b;
    double c;

    box(std::initializer_list<val>) {
    }
};

int main()
{
    std::cout << sizeof(val) << std::endl;
}
In this case I'm defining val to be some instantiation of variant. Is this undefined behaviour? If the using-declaration is in fact a declaration and not a definition, incomplete types such as box would be allowed to instantiate the variant type. However, if it is also a definition, it would be UB no?
For the record, both gcc and clang produce "32" as output.
Since you've not included language-lawyer, I'm attempting a non-lawyer answer.
Why should that be UB?
With a using declaration, you're just providing a synonym for std::variant<whatever>. That doesn't require an instantiation of the object, nor of the class std::variant, much like a function declaration with a parameter of that class doesn't require it:
void f(val); // just fine
The problem would occur as soon as you give to that function a definition (if val is still incomplete because box is still incomplete):
void f(val) {}
But it's enough to change val to val& to allow a definition,
void f(val&) {}
because the compiler doesn't need to know anything else of val than its name.
Furthermore, and here I'm really speculating, "incomplete type" means that some definition is lacking at the point it's needed, so I expect you would discover such an issue at compile/link time rather than by being hit by UB. After all, how could the compiler and linker even finish their job successfully if a definition needed to do something wasn't found?
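A minimal sketch of that intuition, using a plain incomplete type rather than variant: asking the compiler for something that needs a complete type is rejected at compile time, not silently turned into UB:

struct box;                  // declared but not defined: an incomplete type

int main()
{
    box* p = nullptr;        // fine: pointers to incomplete types are allowed
    // sizeof(box);          // error: invalid application of 'sizeof' to an
                             // incomplete type. Caught at compile time, not UB.
    (void)p;
    return 0;
}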

Can we convert a function object to String on iOS?

In JavaScript we have something like .toString which can convert the entire function object to string.
Do we have something similar on iOS?
For example, if we have a function like this in JavaScript, converting it with .toString and printing the value to the console shows the entire function source.
function sum(a, b)
{
    return a + b;
}

console.log(sum.toString());
// expected output:
// "function sum(a, b)
// {
//     return a + b;
// }"
Can we do something similar on iOS? I tried String(describing:) in Swift, but that didn't work and gave me the output (Function) rather than the complete source like we get from JavaScript's .toString.
public func say_hello()
{
    print("Hello, World!")
}

print(String(describing: say_hello))
// Output: (Function)
Despite the many comments explaining why that's not possible (nor feasible in many cases), I want to point out that you can use JavaScript code in your Swift app and thus use the serialization mechanism of that language. Have a look at JSContext for details. This of course won't make things simpler, but it does give extra flexibility with injecting/changing/extending functionality at runtime.
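A minimal sketch of that idea, assuming JavaScriptCore is linked; the function name sum is just an illustration:

import JavaScriptCore

// Evaluate the JavaScript source in a JSContext, then use JavaScript's own
// Function.prototype.toString() to get the source back as a Swift String.
let context = JSContext()!
context.evaluateScript("function sum(a, b) { return a + b; }")

if let sum = context.objectForKeyedSubscript("sum") {
    print(sum.toString() ?? "")   // prints the function's JavaScript source
}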
This is not possible from Swift/Objective-C.

In Dart web projects, shouldn't type and reference warnings be errors?

In Dart, when developing a web application, if I invoke a method with the wrong number of arguments, the editor shows a warning message; the JavaScript compilation, however, succeeds, and an error is only raised at runtime. This is also the case, for example, if I reference a nonexistent variable or pass a method argument of the wrong type.
My question is: if the editor already knows that things won't work, why does compilation succeed? Why do we have types if they are not checked at compile time? I guess this behaviour has a reason, but I couldn't find it explained anywhere.
In Dart, many programming errors are warnings.
This is for two reasons.
The primary reason is that it allows you to run your program while you are developing it. If some of your code isn't complete yet, or it's only half refactored and still uses the old variable names, you can still test the other half. If you weren't allowed to run the program before it was perfect, that would not be possible.
The other reason is that warnings represent only static type checking, which doesn't know everything about your program. It might be that your program will work; it's just impossible for the analyser to determine that.
Example:
class C {
  int foo(int x) => x;
}

class D implements C {
  num foo(num x, [num defaultValue]) => x == null ? defaultValue : x;
}

void bar(C c) => print(c.foo(4.1, 42)); // Static warning: wrong argument count, bad type.

main() { bar(new D()); } // Program runs fine.
If your program works, it shouldn't be stopped by a pedantic analyser that only knows half the truth. You should still look at the warnings, and consider whether there is something to worry about, but it is perfectly fine to decide that you actually know better than the compiler.
There is no compilation stage; what you see are warnings based on type analysis. For example, this code will produce a warning:
void main() {
  var foo = "";
  foo.baz();
}
but this one won't:
void main() {
  var foo;
  foo.baz();
}
because the code analyzer can't deduce the type of foo.

Calling Lua from C

I'm trying to call a user-defined Lua function from C. I've seen some discussion on this, and the solution seems clear. I need to grab the index of the function with luaL_ref(), and save the returned index for use later.
In my case, I've saved the value with luaL_ref, and I'm at a point where my C code needs to invoke the Lua function saved with luaL_ref. For that, I'm using lua_rawgeti as follows:
lua_rawgeti(l, LUA_REGISTRYINDEX, fIndex);
This causes a crash in lua_rawgeti.
The fIndex I'm using is the value I received from luaL_ref, so I'm not sure what's going on here.
EDIT:
I'm running a Lua script as follows:
function errorFunc()
    print("Error")
end

function savedFunc()
    print("Saved")
end

mylib.save(savedFunc, errorFunc)
I've defined my own Lua library 'mylib', with a C function:
static int save(lua_State *L)
{
    int cIdx = myCIndex = luaL_ref(L, LUA_REGISTRYINDEX);
    int eIdx = luaL_ref(L, LUA_REGISTRYINDEX);
I save cIdx and eIdx away until a later point in time, when I receive some external event and want to invoke one of the functions passed as parameters in my Lua script. Here (on the same thread, using the same lua_State*), I call:
lua_rawgeti(L, LUA_REGISTRYINDEX, myCIndex);
Which is causing the crash.
My first suggestion is to get it working without storing the function in C at all. Just assign your function to a global in Lua, then in C use the Lua state (L) to get the global, push the args, call the function, and use the results. Once that's working and you know your function behaves, you can change the way you get at the function to use the registry. Good luck!
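A minimal sketch of that suggestion; the global name savedFunc is illustrative and error handling is kept short:

#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    /* The script defines a global function we can fetch later. */
    luaL_dostring(L, "function savedFunc() print('Saved') end");

    lua_getglobal(L, "savedFunc");        /* push the function onto the stack */
    if (lua_pcall(L, 0, 0, 0) != 0) {     /* call it: 0 args, 0 results */
        fprintf(stderr, "error: %s\n", lua_tostring(L, -1));
        lua_pop(L, 1);                    /* pop the error message */
    }

    lua_close(L);                         /* close only after the last call */
    return 0;
}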
As #Schollii mentioned, I was making this call after doing a lua_close(L).

How can I load an unnamed function in Lua?

I want users of my C++ application to be able to provide anonymous functions to perform small chunks of work.
Small fragments like this would be ideal.
function(arg) return arg*5 end
Now I'd like to be able to write something as simple as this for my C code,
// Push the function onto the Lua stack
lua_xxx(L, "function(arg) return arg*5 end");

// Store it away for later
int reg_index = luaL_ref(L, LUA_REGISTRYINDEX);
However, I don't think lua_loadstring will do "the right thing".
Am I left with what feels to me like a horrible hack?
void push_lua_function_from_string(lua_State *L, std::string code)
{
    // Wrap our string so that we can get something useful for luaL_loadstring
    std::string wrapped_code = "return " + code;
    luaL_loadstring(L, wrapped_code.c_str());
    lua_pcall(L, 0, 1, 0);
}
push_lua_function_from_string(L, "function(arg) return arg*5 end");
int reg_index = luaL_ref(L, LUA_REGISTRYINDEX);
Is there a better solution?
If you need access to parameters, the way you have written it is correct. luaL_loadstring compiles the chunk into a function and leaves it on the stack; if you want to actually get a function value out of that code, you have to return it. I also do this (in Lua) for little "expression evaluators", and I don't consider it a "horrible hack" :)
If you only need some callbacks, without any parameters, you can write the code directly and use the function produced by luaL_loadstring. You can even pass parameters to this chunk; they will be accessible through the ... expression. Then you can get the parameters as:
local arg1, arg2 = ...
-- rest of code
You decide what is better for you - "ugly code" inside your library codebase, or "ugly code" in your Lua functions.
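For the second option, a minimal sketch assuming a plain Lua 5.x state; the expression is just an illustration:

#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    /* Load the body as a chunk; arguments arrive through `...`. */
    luaL_loadstring(L, "local arg = ...; return arg * 5");
    lua_pushnumber(L, 7);                     /* the chunk's single argument */
    if (lua_pcall(L, 1, 1, 0) == 0) {         /* 1 arg in, 1 result out */
        printf("%g\n", lua_tonumber(L, -1));  /* prints 35 */
        lua_pop(L, 1);
    }

    lua_close(L);
    return 0;
}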
Have a look at my ae. It caches functions from expressions so you can simply say ae_eval("a*x^2+b*x+c") and it'll only compile it once.
