Does GraalVM use the same thread and heap space when calling polyglot functions?

If I call R code from Java within GraalVM (using GraalVM's polyglot API), do the R code and the Java code run on the same Java thread (i.e. there's no switching between OS or Java threads)? Also, is it the same "memory/heap" space? That is, in the example code below (which I took from https://www.baeldung.com/java-r-integration)
public double mean(int[] values) {
    Context polyglot = Context.newBuilder().allowAllAccess(true).build();
    String meanScriptContent = RUtils.getMeanScriptContent();
    polyglot.eval("R", meanScriptContent);
    Value rBindings = polyglot.getBindings("R");
    Value rInput = rBindings.getMember("c").execute(values);
    return rBindings.getMember("customMean").execute(rInput).asDouble();
}
does the call rBindings.getMember("c").execute(values) cause the values object (an array of ints) to be copied? Or is GraalVM smart enough to treat it as a pointer to the same memory? If it's a copy, is the copying time the same (or similar, i.e. within say 20%) as a normal Java clone() operation? Finally, does calling a polyglot function (in this case customMean, implemented in R) have the same overhead as calling a native Java function?
Bonus question: can the GraalVM JIT compiler even compile across the layers? E.g. say I had this:
final long sum = IntStream.range(0, 10000)
    .map(x -> x + 4)
    .map(x -> <<<FastR version of the following inverse operation: x - 4 >>>)
    .sum();
would the GraalVM compiler be as smart as, say, a normal Java JIT compiler and realize that the whole statement above can simply be written without the two map operations (since they cancel each other out)?
FYI: I'm considering using GraalVM to run both my Java code and my R code, once the issue I identified here is resolved (Why is FastR (i.e. the GraalVM version of R) 10x *slower* compared to normal R despite Oracle's claim of 40x *faster*?). One of the motivations is that I hope to eliminate the ~50% of time that calling R (using Rserve) from Java is spent on network IO (because Java communicates with Rserve over TCP/IP, and Rserve and Java run in different processes and memory spaces, etc.).

Do the R code and the Java code run on the same Java thread? Also, is it the same "memory/heap" space?
Yes and yes. You can even use GraalVM VisualVM to inspect the heap: it provides the standard Java view, where you can see instances of FastR internal representations like RIntVector mingled with the rest of the Java objects, as well as an R view, where you can see integer vectors, lists, environments, ...
does the call rBindings.getMember("c").execute(values) cause the values object (an array of ints) to be copied? Or is GraalVM smart enough to consider it a pointer to the same memory space?
In general yes: most objects are passed to R as-is. Inside R you have two choices:
Explicitly convert them to some concrete type, e.g. as.integer(arg), which does not make a copy, but tells R explicitly that you want the value to be treated as a "native" R type, including R's value semantics.
Leave it up to the default rules, which are applied once your object is passed to some R builtin; e.g. an int[] is treated as an integer vector (though treating it as a list would also be reasonable in some cases). Again, no copies here, and the object itself keeps its reference semantics. (Both choices are illustrated in the sketch after this list.)
However, sometimes FastR needs to make a copy:
some builtin functions cannot handle foreign objects yet
the R language often implicitly copies vectors, because of its value semantics, argument coercion, etc.
when a vector is passed to a native R extension, we need to move its data to off-heap memory
I would say that if you happen to have a very large vector, say GBs of data, you need to be very careful about it even in regular R. Note: FastR vectors are by default backed by Java arrays, so their size limitations apply to FastR vectors too.
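For illustration, here is a small self-contained sketch of those two choices. The R one-liners and the class name are made up for this example, and whether a given builtin accepts the foreign array directly is subject to the caveats above:
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class PassArrayToR {
    public static void main(String[] args) {
        try (Context polyglot = Context.newBuilder().allowAllAccess(true).build()) {
            int[] values = {1, 2, 3, 4};

            // Choice 1: convert explicitly inside R. as.integer() gives the argument
            // native R integer-vector semantics without copying the Java array up front.
            Value explicitMean = polyglot.eval("R", "function(x) mean(as.integer(x))");
            System.out.println(explicitMean.execute(values).asDouble());   // 2.5

            // Choice 2: rely on the default rules; the int[] is treated as an integer
            // vector once it reaches an R builtin (assuming that builtin handles
            // foreign objects, per the caveats above).
            Value defaultSum = polyglot.eval("R", "function(x) sum(x)");
            System.out.println(defaultSum.execute(values).asDouble());     // 10.0
        }
    }
}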
Finally, does calling a polyglot function (in this case customMean implemented in R) have the same overhead as calling a native Java function?
Mostly yes, except that the function cannot be pulled in and inlined into the surrounding Java code(+). The call itself is as fast as a regular Java call. For the example you give: it cannot be optimized as you suggest, because the R function cannot be inlined(+). That said, I would be very skeptical that any compiler could optimize this as you suggest even if both functions were pure Java code. In general, though, yes: some things the compiler could otherwise optimize, like eliminating useless computations that it can analyze well, will not happen, because code cannot be inlined across the Java <-> R boundary(+).
(+) Unless you'd run the Java code with Espresso (Java on Truffle), but then you would not be using the Context API but Espresso's interop support.

Related

Why do I always fail to load big files in Lua?

The overview is I am prototyping code to understand my problem space, and I am running into 'PANIC: unprotected error in call to Lua API (not enough memory)' errors. I am looking for ways to get around this limit.
The environment bottom line is Torch, a scientific computing framework that runs on LuaJIT, which implements Lua. I need Torch because I eventually want to hammer on my problem with neural nets on a GPU, but to get there I need a good representation of the problem to feed to the nets. I am (stuck) on CentOS Linux, and I suspect that trying to rebuild all the pieces from source in 32-bit mode (this is reported to extend the LuaJIT memory limit to 4GB) will be a nightmare if it works at all for all of the libraries.
The problem space itself is probably not particularly relevant, but in overview I have datafiles of points that I calculate distances between and then bin (i.e. make histograms of) these distances to try and work out the most useful ranges. Conveniently I can create complicated Lua tables with various sets of bins and torch.save() the mess of counts out, then pick it up later and inspect with different normalisations etc. -- so after one month of playing I am finding this to be really easy and powerful.
I can make it work looking at up to 3 distances with 15 bins each (15x15x15 plus overhead), but only by adding explicit collectgarbage() calls and using fork()/wait() for each datafile, so that the outer loop keeps running if one datafile (of several thousand) still blows the memory limit and crashes the child. This gets extra painful as each successful child process now has to read, modify and write the current set of bin counts -- and my largest files for this are currently 36MB. I would like to go larger (more bins), and would really prefer to just hold the counts in the 15GB of RAM I can't seem to access.
So, here are some paths I have thought of; please do comment if you can confirm or deny that any of them will or won't get me outside of the 1GB boundary, or will just improve my efficiency within it. Please also comment if you can suggest another approach that I have not thought of.
Am I missing a way to fire off a Lua process that I can read an arbitrary table back in from? No doubt I can break my problem into smaller pieces, but parsing a return table from stdio (as from a system call to another Lua script) seems error prone, and writing/reading small intermediate files will be a lot of disk I/O.
Am I missing a stash-and-access-table-in-high-memory module? This seems like what I really want, but I have not found it yet.
Can FFI C data structures be put outside the 1GB? It doesn't seem like that would be the case, but certainly I lack a full understanding of what is causing the limit in the first place. I suspect this will just get me an efficiency improvement over generic Lua tables for the few pieces that have moved beyond prototyping (unless I do a bunch of coding for each change).
Surely I can get out by writing an extension in C (Torch appears to support nets that should go outside of the limit), but my brief investigation there turns up references to 'lightuserdata' pointers -- does this mean that a more normal extension won't get outside 1GB either? This also seems like it has a heavy development cost for what should be a prototyping exercise.
I know C well so going the FFI or extension route doesn't bother me - but I know from experience that encapsulating algorithms in this way can be both really elegant and really painful with two places to hide bugs. Working through data structures containing tables within tables on the stack doesn't seem great either. Before I make this effort I would like to be certain that the end result really will solve my problem.
Thanks for reading the long post.
Only objects allocated by LuaJIT itself are limited to the first 2GB of memory. This means that tables, strings, full userdata (i.e. not lightuserdata), and FFI objects allocated with ffi.new count towards the limit, but objects allocated with malloc, mmap, etc. are not subject to this limit (regardless of whether they are allocated by a C module or via the FFI).
An example of allocating a structure with malloc:
local ffi = require("ffi")

ffi.cdef[[
typedef struct { int bar; } foo;
void* malloc(size_t);
void free(void*);
]]

local foo_t = ffi.typeof("foo")
local foo_p = ffi.typeof("foo*")

-- Allocates the struct with the system malloc, so it lives outside
-- LuaJIT's GC-managed (and size-limited) heap.
function alloc_foo()
    local obj = ffi.C.malloc(ffi.sizeof(foo_t))
    return ffi.cast(foo_p, obj)
end

-- Must be called explicitly; the GC does not track malloc'd memory.
function free_foo(obj)
    ffi.C.free(obj)
end
The new GC to be implemented in LuaJIT 3.0, IIRC, will not have this limit, but I haven't heard any news on its development recently.
Source: http://lua-users.org/lists/lua-l/2012-04/msg00729.html
Here is some follow-up information for those who find this question later:
The key information is, as posted by Colonel Thirty Two, that C module extensions and FFI code can easily get outside of the limit (and the referenced lua-l post reminds us that plain Lua tables that approach the limit will be very slow to garbage collect).
It took me some time to pull the pieces together to both access and save/load my objects, so here it is in one place:
I used lds at https://github.com/neomantra/lds as a starting point, in particular the 1-D Array code.
This broke torch.save(), as it doesn't know how to write the new objects. For each object I added the code below (using Array as the example):
function Array:load(inp)
    for i = 1, #inp do
        self._data[i-1] = tonumber(inp[i])
    end
    return self
end

function Array:serialize()
    local siz = tonumber(self._size)
    io.write(' lds.ArrayT( ffi.typeof("double"), lds.MallocAllocator )( ', siz, "):load({")
    for i = 0, siz-1 do
        io.write(string.format("%a,", self._data[i]))
    end
    io.write("})")
end
Note that my application specifically uses doubles and malloc(), so a better implementation would store and use these in self rather than hard coding above.
Then as discussed in PiL and elsewhere, I needed a serializer that would handle the object:
function serialize(o)
    if type(o) == "number" then
        io.write(o)
    elseif type(o) == "string" then
        io.write(string.format("%q", o))
    elseif type(o) == "table" then
        io.write("{\n")
        for k, v in pairs(o) do
            io.write(" ["); serialize(k); io.write("] = ")
            serialize(v)
            io.write(",\n")
        end
        io.write("}\n")
    elseif o.serialize then
        o:serialize()
    else
        error("cannot serialize a " .. type(o))
    end
end
and this needs to be wrapped with:
io.write('do local _ = ')
serialize( myWeirdTable )
io.write('; return _; end')
and then the output from that can be loaded back in with
local myWeirdTableReloaded = dofile('myWeirdTableSaveFile')
See PiL (Programming in Lua book) for dofile()
Hope that helps someone!
You can use the torch tds module. From the README:
Data structures which do not rely on Lua memory allocator, nor being limited by Lua garbage collector.
Only C types can be stored: supported types are currently number, strings, the data structures themselves (see nesting: e.g. it is possible to have a Hash containing a Hash or a Vec), and torch tensors and storages. All data structures can store heterogeneous objects, and support torch serialization.

How is a list constructed by the Erlang VM (BEAM)?

When I create a list in Erlang, such as in the Erlang shell:
1> [1, 2].
From what I understand, in the VM this list will be represented as a singly linked list.
How is this structure created by the Erlang runtime? For example, is it constructed something like this:
create a structure in memory to hold the empty list that terminates the list
create a structure in memory to hold the item '2', and a reference to the empty list.
create a structure in memory to hold the item '1', and a reference to item '2'.
Am I correct in thinking the following C and Erlang code is where the bulk of the work is done?
https://github.com/erlang/otp/blob/maint/lib/stdlib/src/lists.erl
https://github.com/erlang/otp/blob/maint/erts/emulator/beam/erl_bif_lists.c
https://github.com/erlang/otp/blob/maint/erts/emulator/beam/erl_term.h
https://github.com/erlang/otp/blob/maint/erts/emulator/beam/erl_term.c
erl_term.h contains a macro make_list but I haven't been able to find the implementation yet...
The Erlang VM implementation, BEAM, uses a technique which dates back to the first Lisp implementations in the 60s or early 70s. It is sometimes referred to as tagged or typed pointers. This technique does not store the type of a target in the target object (the list CONS cell in this case) but in the pointer itself, or it stores a scalar value in the place where a pointer would usually go. It saves a fair amount of memory, especially in dynamically typed languages such as Lisp or Erlang. (It mattered in the old days, when memory was very expensive, and it has become important again nowadays, when CPUs are much faster than memory and cache misses/hits determine the speed of algorithms.) As a drawback, it also leads to slightly confusing code.
The whole part which deals with list construction starts at line 216 of erl_term.h. You can note there is a macro
#define _unchecked_make_list(x) ((Uint) COMPRESS_POINTER(x) + TAG_PRIMARY_LIST)
which is macro you are looking for. It is your make_list. The line
_ET_DECLARE_CHECKED(Eterm,make_list,const Eterm*)
would declare a checked version of it when compiled with ET_DEBUG. The macro make_list
#define make_list(x) _ET_APPLY(make_list,(x))
would then call either the checked or the unchecked version of make_list. The macros which actually construct the list are
#define CONS(hp, car, cdr) \
(CAR(hp)=(car), CDR(hp)=(cdr), make_list(hp))
#define CAR(x) ((x)[0])
#define CDR(x) ((x)[1])
The list cell structure is simply two consecutive Uint values on the heap (hp) whose address is compressed and tagged (see _unchecked_make_list). I hope this description helps you.
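To make the tagged-pointer idea concrete outside of the BEAM sources, here is a toy Java sketch. The tag values, the heap layout and all the names are invented for illustration; BEAM's real encoding (COMPRESS_POINTER, header words, and so on) is more involved:
final class TaggedTerms {
    // Invented tag scheme: the low two bits of a machine word describe the term.
    static final long TAG_SMALL_INT = 0x0;  // value stored directly in the word
    static final long TAG_LIST      = 0x1;  // word holds the index of a 2-slot cons cell
    static final long TAG_NIL       = 0x3;  // the empty list []

    static final long NIL = TAG_NIL;

    private final long[] heap = new long[1024];  // cons cells live here, two slots each
    private int top = 0;

    static long makeSmall(long v)     { return (v << 2) | TAG_SMALL_INT; }
    static long smallValue(long term) { return term >> 2; }

    // CONS: allocate a cell on the heap, store head (CAR) and tail (CDR),
    // and return the cell's location with the list tag in the low bits.
    long cons(long head, long tail) {
        int cell = top;
        heap[cell]     = head;
        heap[cell + 1] = tail;
        top += 2;
        return ((long) cell << 2) | TAG_LIST;
    }

    long car(long listTerm) { return heap[(int) (listTerm >> 2)]; }
    long cdr(long listTerm) { return heap[(int) (listTerm >> 2) + 1]; }

    public static void main(String[] args) {
        TaggedTerms vm = new TaggedTerms();
        // Build [1, 2]: the cells are created back to front, as described in the question.
        long list = vm.cons(makeSmall(1), vm.cons(makeSmall(2), NIL));
        System.out.println(smallValue(vm.car(list)));           // 1
        System.out.println(smallValue(vm.car(vm.cdr(list))));   // 2
    }
}
Note that the cons cell itself stores no type information at all; the type lives entirely in the low bits of the word that refers to it, which is the whole point of the technique.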

boost lockfree spsc_queue cache memory access

I need to be extremely concerned with speed/latency in my current multi-threaded project.
Cache access is something I'm trying to understand better. And I'm not clear on how lock-free queues (such as the boost::lockfree::spsc_queue) access/use memory on a cache level.
I've seen queues used where the pointer of a large object that needs to be operated on by the consumer core is pushed into the queue.
If the consumer core pops an element from the queue, I presume that means the element (a pointer in this case) is already loaded into the consumer core's L2 and L1 cache. But to access the element, does it not need to follow the pointer, finding and loading the pointed-to object either from the L3 cache or across the interconnect (if the other thread is on a different CPU socket)? If so, would it maybe be better to simply send a copy of the object that could be disposed of by the consumer?
Thank you.
C++ is principally a pay-for-what-you-need ecosystem.
Any regular queue will let you choose the storage semantics (by value or by reference).
However, this time you ordered something special: you ordered a lock free queue.
In order to be lock free, it must be able to perform all the observable modifying operations as atomic operations. This naturally restricts the types that can be used in these operations directly.
You might doubt whether it's even possible to have a value-type that exceeds the system's native register size (say, int64_t).
Good question.
Enter Ringbuffers
Indeed, any node based container would just require pointer swaps for all modifying operations, which is trivially made atomic on all modern architectures.
But does anything that involves copying multiple distinct memory areas, in non-atomic sequence, really pose an unsolvable problem?
No. Imagine a flat array of POD data items. Now, if you treat the array as a circular buffer, one would just have to maintain the index of the buffer front and end positions atomically. The container could, at its leisure, update an internal 'dirty front index' while it copies ahead of the external front (the copy can use relaxed memory ordering). Only once the whole copy is known to have completed is the external front index updated. This update needs to be in acq_rel/cst memory order[1].
As long as the container is able to guard the invariant that the front never fully wraps around and reaches back, this is a sweet deal. I think this idea was popularized in the Disruptor Library (of LMAX fame). You get mechanical resonance from
linear memory access patterns while reading/writing
even better if you can make the record size aligned with (a multiple of) physical cache lines
all the data is local unless the POD contains raw references outside that record
How Does Boost's spsc_queue Actually Do This?
Yes, spsc_queue stores the raw element values in a contiguous aligned block of memory (e.g. from compile_time_sized_ringbuffer, which underlies spsc_queue when the maximum capacity is supplied at compile time):
typedef typename boost::aligned_storage<max_size * sizeof(T),
                                        boost::alignment_of<T>::value
                                       >::type storage_type;

storage_type storage_;

T * data()
{
    return static_cast<T*>(storage_.address());
}
(The element type T need not even be POD, but it needs to be both default-constructible and copyable).
Yes, the read and write pointers are atomic integral values. Note that the boost devs have taken care to apply enough padding to avoid False Sharing on the cache line for the reading/writing indices: (from ringbuffer_base):
static const int padding_size = BOOST_LOCKFREE_CACHELINE_BYTES - sizeof(size_t);
atomic<size_t> write_index_;
char padding1[padding_size]; /* force read_index and write_index to different cache lines */
atomic<size_t> read_index_;
In fact, as you can see, there are only the "internal" indices on either the read or write side. This is possible because there is only one writing thread and only one reading thread, which means that there could only be more space at the end of a write operation than anticipated.
Several other optimizations are present:
branch prediction hints for platforms that support it (unlikely())
it's possible to push/pop a range of elements at once. This should improve throughput in case you need to siphon from one buffer/ringbuffer into another, especially if the raw element size is not equal to (a whole multiple of) a cacheline
use of std::uninitialized_copy where possible
The calling of trivial constructors/destructors will be optimized out at instantiation time
the uninitialized_copy will be optimized into memcpy on all major standard library implementations (meaning that e.g. SSE instructions will be employed if your architecture supports it)
All in all, we see a best-in-class implementation of the ring-buffer idea.
What To Use
Boost has given you all the options. You can elect to make your element type a pointer to your message type. However, as you already raised in your question, this level of indirection reduces locality of reference and might not be optimal.
On the other hand, storing the complete message type in the element type could become expensive if copying is expensive. At the very least try to make the element type fit nicely into a cache line (typically 64 bytes on Intel).
So in practice you might consider storing frequently used data right there in the value, and referencing the less-often-used data using a pointer (the cost of the pointer is low unless it's traversed).
If you need that "attachment" model, consider using a custom allocator for the referred-to data so you can achieve memory access patterns there too.
Let your profiler guide you.
[1] I suppose that for SPSC, acq_rel should work, but I'm a bit rusty on the details. As a rule, I make it a point not to write lock-free code myself, and I recommend everyone else follow my example :)
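To make the two-index ring-buffer mechanics concrete in another language, here is a minimal single-producer/single-consumer sketch in Java. It is an illustrative analog, not boost's implementation; all names are made up, and it stores references rather than raw element values:
import java.util.concurrent.atomic.AtomicLong;

// One thread only calls offer(), a different single thread only calls poll().
final class SpscRingBuffer<T> {
    private final Object[] buffer;
    private final int mask;                                   // capacity is a power of two
    private final AtomicLong writeIndex = new AtomicLong();   // written only by the producer
    // boost pads between the indices so they land on different cache lines (avoiding
    // false sharing); in Java one could insert dummy long fields here for the same effect.
    private final AtomicLong readIndex = new AtomicLong();    // written only by the consumer

    SpscRingBuffer(int capacityPowerOfTwo) {
        buffer = new Object[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    boolean offer(T value) {                                  // producer thread only
        long w = writeIndex.get();
        if (w - readIndex.get() == buffer.length) return false;   // full
        buffer[(int) (w & mask)] = value;                     // copy the element in first...
        writeIndex.set(w + 1);                                // ...then publish with a volatile store
        return true;
    }

    @SuppressWarnings("unchecked")
    T poll() {                                                // consumer thread only
        long r = readIndex.get();
        if (r == writeIndex.get()) return null;               // empty
        T value = (T) buffer[(int) (r & mask)];               // safe: published before writeIndex moved
        buffer[(int) (r & mask)] = null;
        readIndex.set(r + 1);                                 // hand the slot back to the producer
        return value;
    }
}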

Does functional programming take up more memory?

Warning! possibly a very dumb question
Does functional programming eat up more memory than procedural programming?
I mean... if your objects (data structures, whatever) are all immutable, don't you end up having more objects in memory at any given time?
Doesn't this eat up more memory?
It depends on what you're doing. With functional programming you don't have to create defensive copies, so for certain problems it can end up using less memory.
Many functional programming languages also have good support for laziness, which can further reduce memory usage as you don't create objects until you actually use them. This is arguably something that's only correlated with functional programming rather than a direct cause, however.
Persistent values, which functional languages encourage but which can also be implemented in an imperative language, make sharing a no-brainer.
The generally accepted idea is that with a garbage collector there is some amount of wasted space at any given time (already-unreachable but not-yet-collected blocks). But without a garbage collector, you very often end up copying values that are immutable and could be shared, just because it's too much of a mess to decide who is responsible for freeing the memory after use.
These ideas are expanded on a bit in this experience report which does not claim to be an objective study but only anecdotal evidence.
Apart from avoiding defensive copies by the programmer, a very smart implementation of pure functional programming languages like Haskell or Standard ML (which lack physical pointer equality) can actively recover sharing of structurally equal values in memory, e.g. as part of the memory management and garbage collection.
Thus you can have automatic hash consing provided by your programming language runtime-system.
Compare this with objects in Java: object identity is an integral part of the language definition. Even just exchanging one immutable String for another poses semantic problems.
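A small Java snippet makes that identity point concrete: two structurally equal immutable strings remain distinguishable by ==, so the runtime cannot silently merge them, although intern() lets you opt into exactly the kind of hash consing described above (the class name is made up for this example):
public class IdentityVsEquality {
    public static void main(String[] args) {
        // Two structurally equal, immutable strings built at runtime:
        String a = new StringBuilder("im").append("mutable").toString();
        String b = new StringBuilder("im").append("mutable").toString();

        System.out.println(a.equals(b));              // true  - structurally equal
        System.out.println(a == b);                   // false - distinct identities the program may rely on
        System.out.println(a.intern() == b.intern()); // true  - explicit sharing via the string pool
    }
}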
There is indeed at least a tendency to regard memory as an abundant resource (which, in fact, it really is in most cases), but this applies to modern programming as a whole.
With multiple cores, parallel garbage collectors and available RAM in the gigabytes, one tends to concentrate on different aspects of a program than in earlier times, when every byte one could save counted. Remember when Bill Gates supposedly said "640K should be enough for every program"?
I know that I'm very late to this question.
Functional languages do not, in general, use more memory than imperative or OO languages; it depends more on the code you write. Yes, F#, SML, Haskell and the like have immutable values (not variables), but for all of them it goes without saying that if you update, for example, a singly linked list, only what is necessary is rebuilt.
Say you have a list of 5 elements, and you remove the first 3 and add two new ones in front of what remains. The new list simply takes the pointer to the fourth element and points at that data, i.e. the data is reused, as seen below.
old list
[x0,x1,x2]
          \
           [x3,x4]
new list  /
[y0,y1]
If it were an imperative language we could not do this, because the values x3 and x4 could very well change over time, and so the list [x3,x4] could change too. And if the 3 removed elements are not used afterwards, the memory they use can be cleaned up right away, in contrast to unused space in an array.
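The same sharing can be sketched in Java with a hand-rolled immutable cons list (the class name and the null-as-empty-list convention are just for this illustration):
final class ImmutableList<T> {
    final T head;
    final ImmutableList<T> tail;   // null plays the role of the empty list in this sketch

    private ImmutableList(T head, ImmutableList<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    static <T> ImmutableList<T> cons(T head, ImmutableList<T> tail) {
        return new ImmutableList<>(head, tail);
    }

    public static void main(String[] args) {
        ImmutableList<Integer> x3x4 = cons(3, cons(4, null));             // [x3, x4]
        ImmutableList<Integer> oldList = cons(0, cons(1, cons(2, x3x4))); // [x0, x1, x2, x3, x4]
        ImmutableList<Integer> newList = cons(10, cons(11, x3x4));        // [y0, y1, x3, x4]

        // Both lists end in the very same cells; nothing was copied.
        System.out.println(oldList.tail.tail.tail == newList.tail.tail);  // true
    }
}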
That all data is immutable (except IO) is a strength. It turns data-flow analysis from a non-trivial problem into a trivial one. This, combined with an often very strong type system, gives the compiler a lot of information about the code, which it can use for optimizations it normally could not do because it could not prove them safe. Most often the compiler turns values that are recomputed recursively and discarded after each iteration into a mutable computation. These two things give you some assurance that if your program compiles it will work (with some assumptions).
If you look at the language Rust (not functional), just by learning about its "borrow system" you will understand more about how and when things can be shared safely. It is a language that is painful to write code in unless you like to see your compiler scream at you that you are wrong. Rust is for the most part the distillation of more than 40 years of research on programming languages and type theory. I mention Rust because, despite the pain of writing in it, it rules out whole classes of errors such as dangling pointers and data races at compile time, even in multi-threaded programs. This is because it builds on much of the research done on functional programming languages.
For a more complex example of when functional programming uses less memory: I have written a lexer/parser interpreter (the same as a generator but without the need to generate a code file). When computing the states of the DFA (deterministic finite automaton) it uses immutable sets, because it computes new sets from already-computed sets; my code allocates less memory simply because it borrows already-known data points instead of copying them into a new set.
To wrap it up: yes, functional programming can use more memory than imperative programming, but most likely that is because you are using the wrong abstraction to mirror the problem, i.e. if you try to do it the imperative way in a functional language, it will hurt you.
Try this book; it does not have much on memory management, but it is a good book to start with if you want to learn about compiler theory, and yes, it is legal to download. I have asked Torben; he is my old professor.
http://hjemmesider.diku.dk/~torbenm/Basics/
I'll throw my hat in the ring here. The short answer to the question is no, and this is because being immutable does not mean a value has to stay in memory. For example, let's take this toy program:
x = 2
x = x * 3
x = x * 2
print(x)
This uses mutation to compute new values. Compare it to the same program written without mutation:
x = 2
y = x * 3
z = y * 2
print(z)
At first glance, it appears this requires 3x the memory of the first program! However, just because a value is immutable doesn't mean it needs to be stored in memory. In the case of the second program, after y is computed, x is no longer necessary, because it isn't used for the rest of the program, and can be garbage collected, or removed from memory. Similarly, after z is computed, y can be garbage collected. So, in principle, with a perfect garbage collector, after we execute the third line of code, I only need to have stored z in memory.
Another oft-worried-about source of memory consumption in functional languages is deep recursion, for example computing a large factorial:
calc_fact(x):
    if x > 1:
        return x * calc_fact(x-1)
    else:
        return x
If I run calc_fact(100000), I could implement this in a way which requires keeping 100000 stack frames in memory, or I could use tail-call elimination (basically keeping only the most recently computed value instead of every pending call). Note that as written the multiplication happens after the recursive call returns, so the function first has to be rewritten with an accumulator to make the recursive call a tail call. For less straightforward recursion you can resort to trampolining. So for functional languages which support this, recursion does not need to be a source of massive memory consumption, either. However, not all nominally functional languages do (for example, JavaScript does not).
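Here is a sketch of that accumulator rewrite in Java; the names are made up, Java itself does not eliminate tail calls, so the loop that a TCE-capable compiler would effectively produce is shown alongside, and BigInteger is used because large factorials overflow long:
import java.math.BigInteger;

final class Factorial {
    // Tail-recursive (accumulator) form: the recursive call is the last thing the
    // function does, so a runtime with tail-call elimination can reuse the stack frame.
    static BigInteger factTail(long x, BigInteger acc) {
        if (x <= 1) return acc;
        return factTail(x - 1, acc.multiply(BigInteger.valueOf(x)));
    }

    // The loop such a compiler effectively turns the tail call into.
    static BigInteger factLoop(long x) {
        BigInteger acc = BigInteger.ONE;
        for (long i = x; i > 1; i--) acc = acc.multiply(BigInteger.valueOf(i));
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(factTail(20, BigInteger.ONE));  // 2432902008176640000
        System.out.println(factLoop(20));                  // same value
    }
}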

How does a virtual machine work?

I've been looking into how programming languages work, and some of them have a so-called virtual machine. I understand that this is some form of emulation of the programming language within another programming language, and that it works similarly to how a compiled language is executed, with a stack. Did I get that right?
With the proviso that I did, what bamboozles me is that many non-compiled languages allow variables with "liberal" type systems. In Python for example, I can write this:
x = "Hello world!"
x = 2**1000
Strings and big integers are completely unrelated and occupy different amounts of space in memory, so how can this code even be represented in a stack-based environment? What exactly happens here? Is x pointed to a new place on the stack and the old string data left unreferenced? Do these languages not use a stack? If not, how do they represent variables internally?
Probably, your question should be titled "How do dynamic languages work?"
That's simple: they store the type information along with the value in memory. And this is not only done in interpreted or JIT-compiled languages but also in natively compiled languages such as Objective-C.
In most VM languages, variables can be conceptualized as pointers (or references) to memory in the heap, even if the variable itself is on the stack. For languages that have primitive types (int and bool in Java, for example) those may be stored on the stack as well, but they can not be assigned new types dynamically.
Ignoring primitive types, all variables that exist on the stack have their actual values stored in the heap. Thus, if you dynamically reassign a value to them, the original value is abandoned (and the memory cleaned up via some garbage collection algorithm), and the new value is allocated in a new bit of memory.
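As a sketch of what the answers above describe, a dynamic-language runtime written in Java might represent every value as a heap object carrying its own type tag, while the variable itself is just a reference slot that can be repointed. The class and tag names here are invented for illustration:
final class DynValue {
    enum Tag { INT, BIGINT, STRING }

    final Tag tag;        // the type information stored alongside the value
    final Object payload; // the actual data, whatever its size

    DynValue(Tag tag, Object payload) { this.tag = tag; this.payload = payload; }

    public static void main(String[] args) {
        // x = "Hello world!"  ...then...  x = 2**1000
        DynValue x = new DynValue(Tag.STRING, "Hello world!");
        x = new DynValue(Tag.BIGINT, java.math.BigInteger.valueOf(2).pow(1000));
        // The variable x is only a reference slot; the differently sized objects it
        // points to live on the heap, each carrying its own tag. The old string
        // becomes unreachable and is left for the garbage collector.
        System.out.println(x.tag + ": " + x.payload);
    }
}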
The VM has nothing to do with the language. Any language can run on top of a VM (the Java VM has hundreds of languages already).
A VM enables a different kind of "assembly language" to be run, one that is more fit to adapting a compiler to. Everything done in a VM could be done in a CPU, so think of the VM like a CPU. (Some actually are implemented in hardware).
It's extremely low level, and in many cases heavily stack based: instead of registers, machine-level math is all done on locations relative to the current stack pointer.
With normal compiled languages, many instructions are required for a single step. A + might look like "grab the item at a point relative to the stack pointer into reg a, grab another into reg b, add reg a and b, put reg a back into a place relative to the stack pointer".
The VM does all this with a single, short instruction, possibly one or two bytes instead of 4 or 8 bytes PER INSTRUCTION in machine language (depending on 32- or 64-bit architecture), which (guessing) should mean around 16 or 32 bytes of x86 for 1-2 bytes of VM code. (I could be wrong, my last x86 coding was in the 80286 era.)
Microsoft used (probably still uses) VMs in their office products to reduce the amount of code.
The procedure for creating the VM code is the same as creating machine language, just a different processor type essentially.
VMs can also implement their own security, error recovery and memory mechanisms that are very tightly related to the language.
Some of my description here is summary and from memory. If you want to explore the bytecode definition yourself, it's kinda fun:
http://java.sun.com/docs/books/jvms/second_edition/html/Instructions2.doc.html
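As a rough illustration of such a stack-based instruction set, here is a toy interpreter in Java. The opcodes are invented, not JVM bytecode, but they show why an addition needs only a single short instruction: the operands are implicitly the top two stack slots, so no registers or addresses have to be encoded:
final class TinyStackVm {
    // Invented opcodes; operands live on the operand stack.
    static final int PUSH = 0, ADD = 1, PRINT = 2, HALT = 3;

    static void run(int[] code) {
        int[] stack = new int[64];
        int sp = 0;   // operand-stack pointer
        int pc = 0;   // program counter
        while (true) {
            switch (code[pc++]) {
                case PUSH:  stack[sp++] = code[pc++]; break;  // push the immediate operand
                case ADD: {                                   // one instruction: pop two, push the sum
                    int b = stack[--sp];
                    int a = stack[--sp];
                    stack[sp++] = a + b;
                    break;
                }
                case PRINT: System.out.println(stack[--sp]); break;
                case HALT:  return;
            }
        }
    }

    public static void main(String[] args) {
        // Computes 2 + 3: the '+' is a single ADD with no registers or addresses encoded.
        run(new int[] { PUSH, 2, PUSH, 3, ADD, PRINT, HALT });
    }
}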
The key to many of the "how do VMs handle variables like this or that" questions really comes down to metadata... The meta information stored, and then updated, gives the VM a much better handle on how to allocate and then do the right thing with variables.
In many cases this is the type of overhead that can really get in the way of performance. However, modern day implementations, etc have come a long way in doing the right thing.
As for your specific questions: treating variables as vanilla objects comes down to reassigning/re-evaluating the meta information on each new assignment - that's why x can look one way and then the next.
To answer part of your question, I'd recommend a Google tech talk about Python, where some of your questions concerning dynamic languages are answered; for example, what a variable is (it is not a pointer, nor a reference, but in Python's case a label).
