I am verifying a very small model, but I receive a memory-exhausted message. I have changed the model several times, but I keep getting the same problem.
I thought the problem might be due to using a user-defined function, or to using the select statement to obtain a random number. I then changed the model so that it neither calls the function nor uses select, but the problem persists.
I am wondering whether this is an UPPAAL issue or a problem in my model. There is no error other than memory exhaustion. Once the values of "r1" and "r2" are changed, the CTL property no longer works.
The CTL property works for all values of r1 and r2 before the increment.
The model increments several variables (r1, r2 and cntr): if there is no upper bound on them (and it seems there is not, though I cannot see all the functions), then the state space is going to be huge (all possible values, multiplied by the number of locations, multiplied by the clock zones) and will thus exhaust all the memory.
Either make those variables bounded (do not allow increments past some value), or declare them as meta (do not do this if you do not understand the consequences). For example:
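A minimal sketch of the declarations (the bound 10 is arbitrary and the names are taken from the question; adjust them to whatever your model actually needs):

int[0,10] r1, r2;   // bounded range keeps the reachable values finite; guard updates so they stay within bounds
meta int cntr;      // meta variables are not part of state comparison; only safe if cntr does not affect the checked properties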
The following exercise comes from p. 234 of Ierusalimschy's Programming in Lua (4th edition). (NB: Earlier in the book, the author explicitly rejects the word memoization, and insists on using memorization instead. Keep this in mind as you read the excerpt below.)
Exercise 23.3: Imagine you have to implement a memorizing table for a function from strings to strings. Making the table weak will not do the removal of entries, because weak tables do not consider strings as collectable objects. How can you implement memorization in that case?
I am stumped!
Part of my problem is that I have not been able to devise a way to bring about the (garbage) collection of a string.
In contrast, with a table, I can equip it with a finalizer that will report when the table is about to be collected. Is there a way to confirm that a given string (and only that string) has been garbage-collected?
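(For concreteness, this is the sort of thing I mean for the table case; I have no analogue for strings:)

local t = setmetatable({}, {__gc = function() print("table collected") end})
t = nil
collectgarbage()   -- prints "table collected" once the table becomes unreachable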
Another difficulty is simply figuring out what the desired function's specification is. The best I can do is to figure out what it isn't. Earlier in the book (p. 225), the author gave the following example of a "memorizing" function:
Imagine a generic server that takes requests in the form of strings with Lua code. Each time it gets a request, it runs load on the string, and then calls the resulting function. However, load is an expensive function, and some commands to the server may be quite frequent. Instead of calling load repeatedly each time it receives a common command like "closeconnection()", the server can memorize the results from load using an auxiliary table. Before calling load, the server checks in the table whether the given string already has a translation. If it cannot find a match then (and only then) the server calls load and stores the result into the table. We can pack this behavior in a new function:
[standard memo(r)ized implementation omitted; see variant using a weak-value table below]
The savings with this scheme can be huge. However, it may also cause unsuspected waste. Although some commands repeat over and over, many other commands happen only once. Gradually, the ["memorizing"] table results accumulates all commands the server has ever received plus their respective codes; after enough time, this behavior will exhaust the server's memory.
A weak table provides a simple solution to this problem. If the results table has weak values, each garbage-collection cycle will remove all translations not in use at that moment (which means virtually all of them)1:
local results = {}
setmetatable(results, {__mode = "v"}) -- make values weak
function mem_loadstring (s)
  local res = results[s]
  if res == nil then        -- results not available?
    res = assert(load(s))   -- compute new results
    results[s] = res        -- save for later reuse
  end
  return res
end
As the original problem statement notes, this scheme won't work when the function to be memo(r)ized returns strings, because the garbage collector does not treat strings as "collectable".
Of course, if one is allowed to change the desired function's interface so that instead of returning a string, it returns a singleton table whose sole item is the real result string, then the problem becomes almost trivial, but I find it hard to believe that the author had such a crude "solution" in mind2.
In case it matters, I am using Lua 5.3.
1 As an aside, if the rationale for memo(r)ization is to avoid invoking load more often than necessary, the scheme proposed by the author does not make sense to me. It seems to me that this scheme is based on the assumption (a heuristic, really) that a translation that is used frequently, and thus would pay to memo(r)ize, is also one that is always reachable (and hence not collectable). I don't see why this should necessarily, or even likely, be the case.
2 One may be able to put lipstick on this pig in the form of a __tostring method that would allow the table (the one returned by the memo(r)ized function) to masquerade as a string in certain contexts; it's still a pig, though.
Your idea is correct: wrap the string in a table (because a table is collectable).
function memormoize (func_from_string_to_string)
  local cached = {}
  setmetatable(cached, {__mode = "v"})
  return function(s)
    local c = cached[s] or {func_from_string_to_string(s)}
    cached[s] = c
    return c[1]
  end
end
And I see no pigs in this solution :-)
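A hypothetical usage sketch (the function and the counter here are mine, just to make the caching visible):

local calls = 0
local function expensive(s) calls = calls + 1; return s:upper() end

local f = memormoize(expensive)
print(f("hello"), calls)  --> HELLO  1
print(f("hello"), calls)  --> HELLO  1  (the wrapper table is still in the weak cache)
collectgarbage()          -- a full cycle may clear the weak-valued entry
print(f("hello"), calls)  --> HELLO  1 or 2, depending on whether the entry was collected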
"... one that is always reachable (and hence not collectable). I don't see why this should necessarily, or even likely, be the case."
There will be no "always reachable" items in a weak table.
But even the most frequently used items will be recalculated only once per GC cycle.
The ideal solution (never collect frequently used items) would require a more complex implementation.
For example, you could move items from a normal cache to a weak cache when an item's "inactivity timer" reaches some threshold, as in the sketch below.
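A minimal sketch of that idea (the name make_memoizer and the sweep-counting policy are mine, not from the book; a real implementation would tune the policy):

local function make_memoizer(f, sweep_every)
  sweep_every = sweep_every or 1000
  local strong, used = {}, {}
  local weak = setmetatable({}, {__mode = "v"})
  local calls = 0

  local function sweep()
    for k, v in pairs(strong) do
      if not used[k] then   -- idle since the last sweep: demote to the weak cache
        weak[k] = v
        strong[k] = nil
      end
    end
    used = {}               -- start a fresh activity record
  end

  return function(s)
    calls = calls + 1
    if calls % sweep_every == 0 then sweep() end
    local v = strong[s] or weak[s]
    if v == nil then
      v = {f(s)}            -- wrap the result so the cache entry is collectable
    end
    strong[s] = v           -- recently used entries are (re)promoted to the strong cache
    weak[s] = nil
    used[s] = true
    return v[1]
  end
end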
It is known that the modifications of a single atomic variable form a total order. Suppose we have an atomic read operation on some atomic variable v at wall-clock time T. Is this read then guaranteed to acquire the current value of v, that is, the value written by the last write in the modification order of v at time T? To put it another way: if an atomic write is done before an atomic read in wall-clock time, and there are no other writes in between, is the read guaranteed to return the value just written?
My accepted answer is the 6th comment made by Cubbi to his answer.
Wall-clock time is irrelevant. However, what you're describing sounds like the write-read coherence guarantee:
§1.10 [intro.multithread]/20
If a side effect X on an atomic object M happens before a value computation B of M, then the evaluation B shall take its value from X or from a side effect Y that follows X in the modification order of M.
(translating the standardese, "value computation" is a read, and "side effect" is a write)
In particular, if your relaxed write and your relaxed read are in different statements of the same function, they are connected by a sequenced-before relationship, therefore they are connected by a happens-before relationship, therefore the guarantee holds.
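For instance, a minimal sketch of that same-thread case (relaxed operations, single thread; the assertion is added here purely for illustration):

#include <atomic>
#include <cassert>

std::atomic<int> v{0};

int main() {
    v.store(42, std::memory_order_relaxed);     // side effect X on v
    int r = v.load(std::memory_order_relaxed);  // value computation B of v
    // The store is sequenced before the load, hence happens before it, so
    // write-read coherence forbids the load from returning the initial 0
    // (no other thread writes to v here).
    assert(r == 42);
}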
It depends on the memory order that you specify for the load() operation.
By default it is std::memory_order_seq_cst, and the answer is yes: it guarantees the current value stored by another thread (if stored at all, i.e. the store must use at least std::memory_order_release, otherwise the store's visibility is not guaranteed).
But if you specify std::memory_order_relaxed for the load operation, the documentation says: "Relaxed ordering: there are no synchronization or ordering constraints, only atomicity is required of this operation." I.e. the program could end up not reading from the memory at all.
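A sketch of the two load forms this answer contrasts (the function names are placeholders):

#include <atomic>

std::atomic<int> flag{0};

int read_default() { return flag.load(); }                           // std::memory_order_seq_cst
int read_relaxed() { return flag.load(std::memory_order_relaxed); }  // atomicity only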
Is a read on an atomic variable guaranteed to acquire the current value of it
No
Even though each atomic variable has a single modification order (which is observed by all threads), that does not mean that all threads observe modifications at the same time scale.
Consider this code:
std::atomic<int> g{0};
// thread 1
g.store(42);
// thread 2
int a = g.load();
// do stuff with a
int b = g.load();
A possible outcome is the following timeline:
thread 1: 42 is stored at time T1
thread 2: the first load returns 0 at time T2
thread 2: the store from thread 1 becomes visible at time T3
thread 2: the second load returns 42 at time T4.
This outcome is possible even though the first load at time T2 occurs after the store at T1 (in clock time).
The standard says:
Implementations should make atomic stores visible to atomic loads within a reasonable amount of time.
It does not require a store to become visible right away and it even allows room for a store to remain invisible (e.g. on systems without cache-coherency).
In that case, an atomic read-modify-write (RMW) is required to access the last value.
Atomic read-modify-write operations shall always read the last value (in the modification order) written
before the write associated with the read-modify-write operation.
Needless to say, RMWs are more expensive to execute (they lock the bus) and that is why a regular atomic load is allowed to return an older (cached) value.
If a regular load were required to return the last value, performance would be horrible while there would be hardly any benefit.
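For illustration, a hedged sketch of using an RMW purely as a read (the choice of fetch_add(0) is mine; exchange or compare_exchange would work too):

#include <atomic>

std::atomic<int> g{0};

int read_latest() {
    // fetch_add(0) does not change g, but as a read-modify-write it must read
    // the last value in g's modification order, unlike a plain load which may
    // still return an older value.
    return g.fetch_add(0, std::memory_order_relaxed);
}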
Is there a safe way to ask if a single assignment variable in OZ is bound or not?
Using an unassigned dataflow variable in a way that requires its value will cause the program to wait until a value is assigned; in a sequential environment this means the program hangs. Assigning a different value to an already-bound variable will cause the program to fail. So both ways "tell" me whether the variable was bound, but not in a safe way.
I'm looking for some function "Bound" where
local X Y=1 Xbound Ybound in
   Xbound={Bound? X}
   Ybound={Bound? Y}
end
gives false and true for Xbound and Ybound respectively.
My use case involves processing a list where values are added incrementally, with the last value always being unbound. I want to use the last bound item (the one before the unbound one), and I'm trying to work within the Oz paradigm with the fewest concepts added (so no mutable variables or exceptions).
You can check whether a variable is bound with the function IsDet.
See here: http://mozart.github.io/mozart-v1/doc-1.4.0/base/node4.html (also works in Mozart 1.3.0)
A word of caution: if you are using multiple threads, this opens the door for race conditions.
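Applied to the example from the question, a minimal sketch (using Browse just to display the results) might look like:

local X Y=1 Xbound Ybound in
   Xbound = {IsDet X}   % false: X is still unbound
   Ybound = {IsDet Y}   % true: Y is bound to 1
   {Browse Xbound#Ybound}
end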
Is it preferable to store redundant information (which could otherwise be generated from existing data), or to derive it from the existing data each time you need it?
I've simplified my specific problem as best as I can below, hoping that the provided answers are useful as future-reference material.
Example:
Let's say we've developed a program that places data into Squares on a grid (like a super-descriptive game of Tic-Tac-Toe or something) and assigns various details, plus a unique identification number, to each square.
Throughout our program, we often perform logic based on a square's X and/or Y coordinates (checking for 3 in a row) and other times we only need the ID (perhaps to access a string at "SquareName[ID]") - We aren't exactly certain which of these two is accessed more often, but it's a rather close competition.
Up until now we've simply stored the ID inside the square class, and converted it with some simple formulas whenever just the X or Y are needed. Say we want to get coordinates for one square in particular:
int CurrentX = ((this.Square.ID - 1) % 3) + 1; // X coordinate, 1 through 3
int CurrentY = ((this.Square.ID - 1) / 3) + 1; // Y coordinate, 1 through 3
Since the squares don't move around or change ID after setup, part of me believes it would be simpler just to store all 3 values inside the Square class, but my other part cringes at the redundancy since access to X and Y is already easy enough to calculate from the existing ID.
(Note, This program itself is not very memory or resource intensive, nor does the size of the grid get much larger, so it mostly comes down to which option is a better practice or rule of thumb.)
What would you do?
As a rule of thumb, for a system where the data is read/write, store your basic data without redundancy.
When performance or other considerations become a practical issue, then you should denormalize as necessary. (i.e. wait for it to be a problem, don't pre-optimize overly much).
Your goal should be the most maintainable code possible. That usually means writing the least code possible. Having extra code to maintain redundant copies of data points will make your code more brittle.
If those are values that can be determined at the moment of creation and never change afterwards, I would go for fields populated in the constructor. It's not redundant info insofar as it isn't stored anywhere else, but that's not my main point. When reading code, I usually expect that whatever is computed at the time of a request might change from request to request. It is easy to find the point in the source where a field is populated and where it is changed, especially if it never changes; but you might end up slightly confused when looking at a calculation that always returns the same result, because its inputs can't change, and wonder whether you are just missing a case or whether it really is static.
Also, by using descriptive field names, you can get rid of the comments. Not that I generally aim at not commenting, but source code that doesn't even need comments is a pretty safe sign of easy-to-understand code, which might (and should) be your aim.
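A minimal sketch of that constructor-populated approach (C++ here purely for illustration; the field names are assumptions):

struct Square {
    int id;
    int column;   // 1 through 3, fixed at construction
    int row;      // 1 through 3, fixed at construction

    explicit Square(int id)
        : id(id),
          column(((id - 1) % 3) + 1),
          row(((id - 1) / 3) + 1) {}
};

// At the use site no comment is needed, e.g.:
//   if (current.column == other.column) { ... }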
I am developing a billing system using PowerBuilder 12.5 Classic and I need to set a textbox to 0, like in VB.NET: txtchange.Text = 0
I have two drop-down list boxes and a single line edit:
ddlb_price (defines the price value of an item)
ddlb_cash (the cash amount given by the customer)
sle_change (the change that the cashier is to give the customer)
The system should set the value of sle_change when the cashier inputs the cash.
1. This gives me a syntax error:
if cash=price then
sle_fare.settext=0
end if
2. This gives 'incompatible types in assignment':
if cash=price then
sle_fare.text=0
end if
The single line edit (sle) control is designed to hold text. You're trying to assign it a numeric value. You will have to change the number into a string if you want the sle to display it:
sle_fare.text = "0"
or
sle_fare.text = string(variableHere)
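And for the change calculation you describe, a rough PowerScript sketch (control names taken from your question; dec() and string() do the conversions):

decimal ld_price, ld_cash, ld_change
ld_price  = dec(ddlb_price.text)
ld_cash   = dec(ddlb_cash.text)
ld_change = ld_cash - ld_price
sle_change.text = string(ld_change)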
Once again, I'm going to step back, ignore the actual questions, and look at how a DataWindow would help as an alternative.
You seem to want a control with a data type behind it. The DataWindow has those types of controls. Don't forget that a DataWindow doesn't have to have a SELECT statement behind it; it can have a stored procedure, web service, or nothing at all (external DataWindow) behind the data set. Once you have a control with a numeric data type behind it, you get (for free) some basic editing controls, such as not allowing alpha characters in the field and making sure the entered value is really a number (e.g. "0-.2.1" would fail).
A step beyond that is looking at one of your coming requirements: calculating change. On a DataWindow, you can create a compute with an expression that will automagically calculate your change for you, once price and cash are entered.
I certainly don't want to say you can't do things the way you're proceeding, but there are many issues that a DataWindow would remove over some other approach. The strength of PowerBuilder is in the DataWindow.
Good luck,
Terry