I wonder if it's possible to access memory that is used by another program? E.g. given the name (or handle) of program A, can I use Lua to read all memory allocated to A, and then modify the values at some addresses?
It is possible if you either use a C module providing functions for this or use LuaJIT instead of Lua. Take a look at e.g. https://github.com/nonchip/ljptrace, where I did pretty much exactly what you want.
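To give a flavour of the LuaJIT route, here is a minimal Linux-only sketch (not taken from ljptrace) that reads another process's memory through the FFI using the glibc function process_vm_readv. The function read_mem and its arguments are just illustrative names, and you need ptrace-level permission on the target PID (same user, or CAP_SYS_PTRACE):

local ffi = require("ffi")

ffi.cdef[[
typedef int pid_t;
struct iovec { void *iov_base; size_t iov_len; };
long process_vm_readv(pid_t pid,
                      const struct iovec *local_iov, unsigned long liovcnt,
                      const struct iovec *remote_iov, unsigned long riovcnt,
                      unsigned long flags);
]]

-- Read `len` bytes at address `addr` in process `pid`; returns a Lua string.
local function read_mem(pid, addr, len)
  local buf = ffi.new("uint8_t[?]", len)
  local local_iov = ffi.new("struct iovec")
  local_iov.iov_base = buf
  local_iov.iov_len = len
  local remote_iov = ffi.new("struct iovec")
  remote_iov.iov_base = ffi.cast("void *", addr)
  remote_iov.iov_len = len
  local n = ffi.C.process_vm_readv(pid, local_iov, 1, remote_iov, 1, 0)
  if tonumber(n) < 0 then return nil, "process_vm_readv failed" end
  return ffi.string(buf, tonumber(n))
end

Writing works the same way through process_vm_writev, subject to the same permissions.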
As per the documentation (https://www.lua.org/manual/5.3/manual.html#lua_pushliteral), which says:
This macro is equivalent to lua_pushstring, but should be used only when s is a literal string.
But I can't understand that explanation at all. As far as I can see from the macro definition for lua_pushliteral, there is no difference:
#define lua_pushliteral(L, s) lua_pushstring(L, "" s)
The documentation for lua_pushliteral in Lua 5.4 is the same as 5.3, except it adds "(Lua may optimize this case.)". So while it is currently the same as calling lua_pushstring, the Lua devs are giving themselves the option to optimize it in the future.
EDIT: As an example, the doc for lua_pushstring says:
Lua will make or reuse an internal copy of the given string, so the memory at s can be freed or reused immediately after the function returns.
But a C string literal is read-only, so it's impossible for the C code to free or reuse the memory. Also, Lua strings are immutable. It's basically useless to copy one immutable object to another immutable object when you could just refer to the same memory from both places. That means one possible way to optimize lua_pushliteral would be to just not make the copy that lua_pushstring does.
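To make the difference concrete, here is a small C sketch against the standard Lua 5.3/5.4 API showing where each call is applicable; the "" s trick in the macro is what enforces the "literal only" rule at compile time:

#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>

int main(void) {
    lua_State *L = luaL_newstate();

    /* Expands to lua_pushstring(L, "" "hello"); the "" s concatenation
       only compiles when the argument really is a string literal. */
    lua_pushliteral(L, "hello");

    const char *msg = "world";
    lua_pushstring(L, msg);        /* works for any C string */
    /* lua_pushliteral(L, msg); */ /* would not compile: "" msg is not valid C */

    printf("%s %s\n", lua_tostring(L, -2), lua_tostring(L, -1));
    lua_close(L);
    return 0;
}

In other words, the macro costs nothing today, but because it is guaranteed to receive only literals, a future Lua version is free to reference the literal directly instead of copying it.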
A u32 takes 4 bytes of memory; a String takes 3 pointer-sized integers (pointer, length, and capacity) on the stack, plus some amount on the heap.
This to me implies that Rust doesn't know, when the code is executed, what type is stored at a particular location, because that knowledge would require more memory.
But at the same time, does it not need to know what type is stored at 0xfa3d2f10, in order to be able to interpret the bytes at that location? For example, to know that the next bytes form the spec of a String on the heap?
How does Rust store types at runtime?
It doesn't, generally.
Rust doesn't know, when the code is executed, what type is stored at a particular location
Correct.
does it not need to know what type is stored
No, the bytes in memory should be correct, and the rest of the code assumes as much. The offsets of fields in a struct are baked into the generated machine code.
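A quick way to see this: sizes, and therefore field offsets, are fixed when the code is compiled, and nothing in the running process records what type lives at an address. A minimal sketch:

use std::mem::size_of;

fn main() {
    // These numbers are decided by the compiler; the running program just
    // reads bytes at known offsets with no type tag stored next to them.
    println!("u32:    {} bytes", size_of::<u32>());    // 4
    println!("String: {} bytes", size_of::<String>()); // 3 * pointer size (ptr, len, capacity)
}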
When does Rust store something like type information?
When performing dynamic dispatch, a fat pointer is used. This is composed of a pointer to the data and a pointer to a vtable, a collection of functions that make up the interface in question. The vtable could be considered a representation of the type, but it doesn't have a lot of the information that you might think goes into "a type" (unless the trait requires it). Dynamic dispatch isn't super common in Rust as most people prefer static dispatch when it's possible, but both techniques have their benefits.
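For illustration, a small sketch with a made-up trait, showing both the fat pointer and a dynamically dispatched call:

use std::mem::size_of;

trait Speak {
    fn speak(&self) -> String;
}

struct Dog;

impl Speak for Dog {
    fn speak(&self) -> String {
        "woof".to_string()
    }
}

fn main() {
    // A plain reference is one pointer wide; a trait object reference is two:
    // a data pointer plus a vtable pointer (the "fat pointer").
    println!("&Dog:       {} bytes", size_of::<&Dog>());
    println!("&dyn Speak: {} bytes", size_of::<&dyn Speak>());

    // Dynamic dispatch: the call goes through the vtable at runtime.
    let animal: &dyn Speak = &Dog;
    println!("{}", animal.speak());
}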
There's also concepts like TypeId, which can represent one specific type, but only of a subset of types. It also doesn't provide much capability besides "are these the same type or not".
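A short sketch of TypeId, plus the Any trait that builds on it, to show how little it actually tells you:

use std::any::{Any, TypeId};

fn main() {
    // TypeId can only answer "is this the same type or not?", and only for
    // types that don't contain non-'static references.
    assert_eq!(TypeId::of::<u32>(), TypeId::of::<u32>());
    assert_ne!(TypeId::of::<u32>(), TypeId::of::<String>());

    // Any uses TypeId internally to allow checked downcasting.
    let value: Box<dyn Any> = Box::new(42u32);
    if let Some(n) = value.downcast_ref::<u32>() {
        println!("got a u32: {}", n);
    }
}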
Isn't this all terribly brittle?
Yes, it can be, which is one of the things that makes Rust so interesting.
In a language like C or C++, there's not much that safeguards the programmer from making dumb mistakes that go out and mess up those bytes floating around in memory. Making those mistakes is what leads to memory-safety bugs: instead of interpreting your password as a password, it's interpreted as your username and printed out to an attacker (oops!).
Rust provides safeguards against that in the form of a strong type system and tools like the borrow checker, all still done at compile time. Unsafe Rust re-enables those dangerous operations, with the tradeoff that the programmer is now expected to uphold all the guarantees themselves, much as if they were writing C or C++ again.
See also:
When does type binding happen in Rust?
How does Rust implement reflection?
How do I print the type of a variable in Rust?
How to introspect all available methods and members of a Rust type?
Is there a way to identify the program name in the call stack?
I.e., I've got a PGM X that links to a PGM B, which in turn links to a PGM C; in C, I want to know which program originated the call (PGM X).
If you're in CICS, you can do EXEC CICS ASSIGN and get the name of the current program with the PROGRAM option, and the program that linked to it with INVOKINGPROG. That will give you Program C and Program B in this case.
Getting the original, highest-level program is more difficult. You can inquire on the current transaction (EIBTRNID) to get the program it runs, but if you've been routed somewhere then that won't be Program X but, for example, DFHMIRS.
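A minimal COBOL sketch of that ASSIGN call (the program and field names here are placeholders):

IDENTIFICATION DIVISION.
PROGRAM-ID. PGMC.
DATA DIVISION.
WORKING-STORAGE SECTION.
01  WS-CURRENT-PGM     PIC X(8).
01  WS-INVOKING-PGM    PIC X(8).
PROCEDURE DIVISION.
*> After this call, WS-CURRENT-PGM holds the running program (Program C)
*> and WS-INVOKING-PGM holds the program that linked to it (Program B).
    EXEC CICS ASSIGN
         PROGRAM(WS-CURRENT-PGM)
         INVOKINGPROG(WS-INVOKING-PGM)
    END-EXEC
    EXEC CICS RETURN END-EXEC.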
You can do this, but it will take a little assembler. Essentially, you need to chase up the save areas and execute a CSVQUERY on each return address; this will give you the name of the module that owns that save area.
There are a few quirks: you need to watch out for COBOL runtime modules (prefixed with IGZ) and/or Language Environment modules (prefixed with CEE). When you do a COBOL CALL, it invokes a runtime module that then calls the program you called.
Also, this will not identify programs that did an EXEC CICS LINK or EXEC CICS XCTL, only COBOL CALL invocations that use OS save-area conventions.
Example:
CSVQUERY SEARCH=JPALPA,INADDR=<R14_from_savearea>,OUTEPNM=<module_name_output>,MF=(E,PLIST)
Do that repeatedly for every return address on the save area chain and you will know all the callers.
There is no supported way to do that. Some people try chasing save areas and walking through the executable code to determine its name, but I suspect that all ends in tears.
One problem is that you have no guarantee of how LINK or XCTL have been implemented. In the case of dynamic calls you might be able to follow the save area chain, but then you need to figure out how to identify the module. Not something you're likely to be able to do in just COBOL.
I've recently started a project in Erlang after many years since I last touched it.
I need to use some POSIX calls that are not available in the stdlib or in third-party wrappers, for instance those in sys/mount.h.
The mount call (man 2 mount) uses some int flags for mount parameters.
They are defined in some headers.
Which approach is better: to use integer flags / defines in the Erlang wrappers, or is it safer to use a list of atoms for arguments like this and parse them in C?
Are there any active port/driver wrapper generators for Erlang?
I know about dryverl, ic, etc., but they look abandoned and
it's also inconvenient to write descriptions for functions in XML.
I think the better approach is to use a list of atoms in the API functions you provide for programmers, transform them to integer flags in the wrapper itself, and then pass them to C as an integer.
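A minimal sketch of that translation; the module and function names are made up, and the numeric values are the usual Linux MS_* constants, so on another platform take them from your local sys/mount.h:

-module(mount_flags).
-export([to_int/1]).

%% Map each documented atom to the corresponding C flag value.
flag(rdonly) -> 16#0001;   %% MS_RDONLY
flag(nosuid) -> 16#0002;   %% MS_NOSUID
flag(nodev)  -> 16#0004;   %% MS_NODEV
flag(noexec) -> 16#0008.   %% MS_NOEXEC

%% Turn [rdonly, noexec] into a single integer bitmask for the port/NIF call.
to_int(Atoms) ->
    lists:foldl(fun(A, Acc) -> Acc bor flag(A) end, 0, Atoms).

An unknown atom then fails with a function_clause error in the wrapper, which is easier to debug than a bad integer silently reaching the C side.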
This may be a naive question, and I suspect the answer is "yes," but I had no luck searching here and elsewhere on terms like "erlang compiler optimization constants" etc.
At any rate, can (will) the Erlang compiler create a data structure that is constant or literal at compile time, and use that instead of emitting code that builds the data structure over and over again? I will provide a simple toy example.
test() -> sets:from_list([usd, eur, yen, nzd, peso]).
Can (will) the compiler simply stick the set there at the output of the function instead of computing it every time?
The reason I ask is, I want to have a lookup table in a program I'm developing. The table is just constants that can be calculated (at least theoretically) at compile time. I'd like to just compute the table once, and not have to compute it every time. I know I could do this in other ways, such as compute the thing and store it in the process dictionary for instance (or perhaps an ets or mnesia table). But I always start simple, and to me the simplest solution is to do it like the toy example above, if the compiler optimizes it.
If that doesn't work, is there some other way to achieve what I want? (I guess I could look into parse transforms if they would work for this, but that's getting more complicated than I would like?)
THIS JUST IN. I used compile:file/2 with the 'S' option to produce the following. I'm no Erlang assembly expert, but it looks like the optimization isn't performed:
{function, test, 0, 5}.
{label,4}.
{func_info,{atom,exchange},{atom,test},0}.
{label,5}.
{move,{literal,[usd,eur,yen,nzd,peso]},{x,0}}.
{call_ext_only,1,{extfunc,sets,from_list,1}}.
No, the Erlang compiler doesn't perform partial evaluation of calls to external modules, and sets is such a module. You can use the ct_expand module of the well-known parse_trans library to achieve this effect.
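A minimal sketch of what that looks like, assuming parse_trans is available at compile time; ct_expand:term/1 evaluates the wrapped expression during compilation and embeds the result in the module as a literal:

-module(exchange).
-compile({parse_transform, ct_expand}).
-export([test/0]).

%% sets:from_list/1 runs once, when this module is compiled; the resulting
%% set term is stored in the .beam file and simply returned at runtime.
test() ->
    ct_expand:term(sets:from_list([usd, eur, yen, nzd, peso])).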
Given that a set is not a native datatype in Erlang and is, as a matter of fact, just a library written in Erlang, I don't think it's feasible for the compiler to create sets at compile time.
As you can see, calls to sets are not optimized away by the compiler (nor are calls to any other library written in Erlang).
The way to solve your problem is to compute the set once and pass it as a parameter to your functions, or to use ETS/Mnesia.