What is the main difference between MutationObserver and addEventListener?

I don't want to write out my own understanding of the difference, so as not to bias the question, so I'll just ask: what is the main difference between MutationObserver and addEventListener?
I ask because I'm not sure I understand it correctly, and the two seem quite similar to me.

How to recover a valuation from a satisfiable formula, a question about models

I'm using Z3 with the ML interface. I created a formula
f(x_i)
that is satisfiable, according to the solver
Solver.mk_simple_solver ctx.
The problem is: I can get a model, but it gives me values for only some of the variables of the formula, not all of them (some of my Model.get_const_interp_e calls return None).
How can it be possible that the model gives me only a part of the x_i? In my understanding, if the formula was satisfiable (and in my case, it is), then the model should be able to give me a value for every variable...
I must be misunderstanding something.
Thanks for reading!
You should always post full examples so people can help with actual coding issues; without seeing your code, it's impossible to know the real reason.
Having said that, this sounds very much like the following question: Why Z3Py does not provide all possible solutions. So perhaps the answer given there will help you.
Long story short: Z3 models will only contain values for variables that matter to the model. For anything that is not explicitly assigned, any value will do. There are ways to get "full" models, as explained in that answer, of course; and I'm sure that is also possible from the ML interface.
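As an illustration, here is a minimal z3py sketch of that behavior (the question uses the ML interface, but the idea carries over; model_completion is the mechanism that answer describes):

from z3 import Ints, Solver, sat

x, y = Ints('x y')
s = Solver()
s.add(x > 0)  # y plays no role in satisfiability

assert s.check() == sat
m = s.model()

print(m[y])                              # None: y got no interpretation
print(m.eval(y, model_completion=True))  # forces a concrete value for y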

How to use incremental solving with z3py

I am using the Python API of the Z3 solver to search for optimized schedules. It works pretty well, apart from the fact that it is sometimes very slow even for small graphs (though sometimes it's very quick). The reason is probably that the constraints of my scheduling problem are quite complex.
I am trying to speed things up and stumbled on some articles about incremental solving.
As far as I understand, you can use incremental solving to prune some of the search space by only applying parts of the constraints.
So my original code looked like this:
for constraint in constraint_set:
    self._opt_solver.add(constraint)
self._opt_solver.minimize(some_objective)
self._opt_solver.check()
model = self._opt_solver.model()
I changed it now to the following:
for constraint in constraint_set:
    self._opt_solver.push(constraint)
    self._opt_solver.check()
self._opt_solver.minimize(some_objective)
self._opt_solver.check()
model = self._opt_solver.model()
I basically substituted the add call with the push call and added a check() after each push.
So first of all: is my general approach correct?
Furthermore, I get an exception which I can't get rid of:
self._opt_solver.push(constraint)
TypeError: push() takes 1 positional argument but 2 were given
Can anyone give me a hint as to what I am doing wrong? Also, is there a z3py tutorial that explains (ideally with some examples) how to use incremental solving with the Python API?
My last question is: is this the right way to reduce the execution time of the solver at all, or is there a different/better way?
The function push doesn't take an argument. It creates a "backtracking" point that you can pop to later on. See here: http://z3prover.github.io/api/html/classz3py_1_1_solver.html#abc4ae989afee7ad164844640537107d9
So, it seems push isn't really what you want/need here at all. You should simply add your constraints one-by-one and call check. However, I very much doubt checking after each addition is going to speed anything up significantly. The optimizing solver (as opposed to the regular one), in particular, usually solves everything from scratch. (See the relevant discussion here: https://github.com/Z3Prover/z3/issues/1577)
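For reference, a minimal sketch of how the optimizing solver is typically driven in z3py (add the constraints with add(), state the objective once, then check; the variable names are illustrative):

from z3 import Int, Optimize, sat

x = Int('x')
opt = Optimize()
opt.add(x >= 3)             # constraints go through add(), not push()
handle = opt.minimize(x)    # returns a handle for the objective

assert opt.check() == sat
print(opt.model()[x])       # 3
print(opt.lower(handle))    # the proven lower bound of the objective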
Regarding incremental: the Python API is automatically "incremental." Incremental simply means the ability to call check() multiple times without the solver forgetting what it has seen before (i.e., call check, assert more facts, call check again; the second check will take into account all the assertions from the very beginning). You shouldn't assume that this will be faster than calling check just once at the very end: it depends entirely on the heuristics and the decision procedures involved, which in turn depend on the problem at hand.
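To make the push/pop mechanics concrete, here is a small sketch with the regular Solver (push() takes no argument; constraints always go through add()):

from z3 import Int, Solver

x = Int('x')
s = Solver()
s.add(x > 0)
print(s.check())   # sat

s.push()           # create a backtracking point
s.add(x < 0)       # contradicts x > 0
print(s.check())   # unsat
s.pop()            # drop x < 0, keep x > 0

s.add(x < 10)
print(s.check())   # sat; earlier work can be reused across calls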

Do record_info and tuple_to_list return the same key order in Erlang?

I.e., if I have a record
-record(one, {frag, left}).
is record_info(fields, one) always going to return [frag, left]?
Is tl(tuple_to_list(#one{frag = "Frag", left = "Left"})) always going to be ["Frag", "Left"]?
Is this an implementation detail?
Thanks a lot!
The short answer is: yes, as of this writing it will work. The better answer is: it may not work that way in the future, and the nature of the question concerns me.
It's safe to use record_info/2, although relying on the order may be risky, and frankly I can't think of a situation where doing so makes sense, which suggests you may be solving the problem the wrong way. Can you share more details about what exactly you are trying to accomplish, so we can help you choose a better method? It could be that simple pattern matching is all you need.
As for the example with tuple_to_list/1, I'll quote from "Erlang Programming" by Cesarini and Thompson:
"... whatever you do, never, ever use the tuple representations of records in your programs. If you do, the authors of this book will disown you and deny any involvement in helping you learn Erlang."
There are several good reasons why, including:
Your code will become brittle - if you later change the number of fields or their order, your code will break.
There is no guarantee that the internal representation of records will continue to work this way in future versions of Erlang.
Yes, the order is always the same, because records are represented by tuples, for which order is an essential property. See also my other answer about records, with examples: Syntax Error while accessing a field in a record
Yes, in both cases Erlang will retain the 'original' order. And yes, it's an implementation detail, as it's not specifically addressed in the function specs or documentation, though it's a pretty safe bet it will stay like that.

Basic Ruby/Rails question about understanding blocks & block variables

I'm starting to get comfortable with Ruby/Rails but must admit I still look askance when I see an unfamiliar block. Take the following code:
(5..10).reduce(0) do |sum, value|
  sum + value
end
I know what it does...but, how does one know the order of the parameters passed into a block in Ruby? Are they taken in order? How do you quickly know what they represent?
I'm assuming one must look at the source (or documentation) to uncover what's being yielded...but is there a shortcut? I guess I'm wondering how the old vets quickly discern what a block is doing?!? How should one approach looking at/interpreting blocks?
You just have to look it up in the documentation until you have it memorized. I still have trouble with reduce and a couple others. It's just like trying to remember the argument order for ordinary methods. Programmers have to deal with this problem in pretty much every language.
When you write code, there's no other way than checking the documentation, even if Ruby is quite consistent and coherent in this kind of thing, so you can often just expect things to work a particular way.
On the other hand, when you read code, you can just hope that the coder has been smart and kind enough to use consistent variable names. In your example
(5..10).reduce(0) do |sum, value|
  sum + value
end
There is a reason the variables are called sum and value! :-) Something like
(5..10).reduce(0) { |i, j| i + j }
is of course the same, but much less readable. So the lesson here is: write good code, and you'll convey more information than just instructions to a computer!
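The same accumulator-first convention shows up in other languages too, which can help the pattern stick; for comparison, a small Python analogue of the snippet above (purely illustrative):

from functools import reduce

# As with Ruby's |sum, value|, the accumulator comes first, then the element.
total = reduce(lambda total, value: total + value, range(5, 11), 0)
print(total)  # 45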

Does functional programming avoid state?

According to Wikipedia: functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data (emphasis mine).
Is this really true? My personal understanding is that it makes the state more explicit, in the sense that programming is essentially applying functions (transforms) to a given state to get a transformed state. In particular, constructs like monads let you carry the state explicitly through functions. I also don't think that any programming paradigm can avoid state altogether.
So, is the Wikipedia definition right or wrong? And if it is wrong what is a better way to define functional programming?
Edit: I guess a central point in this question is: what is state? Do you understand state to be variables or object attributes (mutable data), or is immutable data also state? To take an example (in F#):
let x = 3
let double n = 2 * n
let y = double x
printfn "%A" y
would you say this snippet contains state or not?
Edit 2: Thanks to everyone for participating. I now understand the issue to be more of a linguistic discrepancy, with the use of the word state differing from one community to another, as Brian mentions in his comment. In particular, many in the functional programming community (mainly Haskellers) take state to imply some kind of dynamism, like a signal varying with time. Other uses of state, in things like finite state machines, Representational State Transfer, and stateless network protocols, may mean different things.
I think you're just using the term "state" in an unusual way. If you consider adding 1 and 1 to get 2 as being stateful, then you could say functional programming embraces state. But when most people say "state," they mean storing and changing values, such that calling a function might leave things different than before the function was called, and calling the function a second time with the same input might not have the same result.
Essentially, stateless computations are things like this:
The result of 1 + 1
The string consisting of 's' prepended to "pool"
The area of a given rectangle
Stateful computations are things like this (a short sketch contrasting the two styles follows after this list):
Increment a counter
Remove an element from the array
Set the width of a rectangle to be twice what it is now
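A minimal Python sketch of the contrast (the function names are purely illustrative):

# Stateless: the result depends only on the inputs; nothing is modified.
def rectangle_area(width, height):
    return width * height

# Stateful: each call changes stored data, so repeated calls can differ.
counter = 0
def increment_counter():
    global counter
    counter += 1
    return counter

print(rectangle_area(3, 4))   # 12, every time
print(increment_counter())    # 1
print(increment_counter())    # 2: same call, different result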
Rather than "avoids state", think of it like this:
It avoids changing state, hence that word "mutable".
Think in terms of a C# or Java object. Normally you would call a method on the object and you might expect it to modify its internal state as a result of that method call.
With functional programming, you still have data, but it's just passed through each function, creating output that corresponds to the input and the operation.
At least in theory. In reality, not everything you do actually works functionally so you frequently end up hiding state to make things work.
Edit:
Of course, hiding state also frequently leads to some spectacular bugs, which is why you should only use functional programming languages for purely functional situations. I've found that the best languages are the ones that are object oriented and functional, like Python or C#, giving you the best of both worlds and the freedom to move between each as necessary.
The Wikipedia definition is right. Functional programming avoids state. Functional programming means that you apply a function to a given input and get a result. However, it is guaranteed that your input will not be modified in any way. It doesn't mean that you cannot have state at all. Monads are a perfect example of that.
The Wikipedia definition is correct. It may seem puzzling at first, but if you start working with, say, Haskell, you'll note that you don't have any variables lying around containing values.
The state can still be sort of represented using state monads.
At some point, all apps must have state (some data that changes). Otherwise, you would start the app up, it wouldn't be able to do anything, and the app would finish.
You can have immutable data structures as part of your state, but you will swap those for other instances of other immutable data structures.
The point of immutability is not that data does not change over time. The point is that you can create pure functions that don't have side effects.
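A small Python sketch of that swapping idea: instead of mutating a value, a pure function returns a new instance (the Rect type here is illustrative, not from any particular library):

from typing import NamedTuple

class Rect(NamedTuple):  # an immutable value
    width: int
    height: int

def double_width(r: Rect) -> Rect:
    # Pure function: returns a new Rect instead of mutating r.
    return Rect(r.width * 2, r.height)

r1 = Rect(3, 4)
r2 = double_width(r1)    # the old value is swapped for a new one
print(r1)  # Rect(width=3, height=4), untouched
print(r2)  # Rect(width=6, height=4)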
