Add flags on condition - Highcharts

How can I add a flag where the value at x - 1 is different than the value at x? Can it be done directly on the chart?
x is [1, 1, 1, 3, 3], so when I get the pair (1, 3) I want to put a flag on the 3 and say that something has changed.
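One way to approach this is to precompute the change points before building the chart. Below is a minimal Python sketch of just the comparison logic; the dict shape is illustrative, not the Highcharts API, and the resulting points would feed a flags-type series (available in Highstock) in the chart config:
values = [1, 1, 1, 3, 3]
# collect an illustrative flag point wherever a value differs from its predecessor
flags = [
    {"x": i, "title": "changed"}
    for i in range(1, len(values))
    if values[i] != values[i - 1]
]
print(flags)  # [{'x': 3, 'title': 'changed'}]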

Related

Parsing a dimension field with variable formatting in Teradata?

I have a dimension field that holds data in the format below. I am using Teradata to query this field.
10 x 10 x 10
5.0x6x7
10 x 12x 1
6.0 x6.0 x6.0
0 X 0 X 0
I was wondering how I should go about parsing this field to extract just the numbers into 3 different columns.
Something like this should work or at least get you close.
REGEXP_SUBSTR(DATA, '(.*?)(x ?|$)', 1, 1, 'i', 1) AS length,
REGEXP_SUBSTR(DATA, '(.*?)(x ?|$)', 1, 2, 'i', 1) AS width,
REGEXP_SUBSTR(DATA, '(.*?)(x ?|$)', 1, 3, 'i', 1) AS height
This returns the first captured group: a run of characters followed by a case-insensitive 'x' plus an optional space, or by the end of the line. The 4th argument is which occurrence of the match to return (1 for length, 2 for width, 3 for height).
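To sanity-check the parse outside the database, here is a small Python sketch of the same idea, using a split on the separator rather than occurrence-indexed matching (the sample rows are taken from the question):
import re

rows = ["10 x 10 x 10", "5.0x6x7", "10 x 12x 1", "6.0 x6.0 x6.0", "0 X 0 X 0"]
for row in rows:
    # split on 'x' or 'X' with optional surrounding spaces
    length, width, height = re.split(r"\s*[xX]\s*", row)
    print(length, width, height)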

How to pass multiple arguments to dask.distributed.Client().map?

import dask.distributed
def f(x, y):
    return x, y
client = dask.distributed.Client()
client.map(f, [(1, 2), (2, 3)])
This does not work:
[<Future: status: pending, key: f-137239e2f6eafbe900c0087f550bc0ca>,
<Future: status: pending, key: f-64f918a0c730c63955da91694fcf7acc>]
distributed.worker - WARNING - Compute Failed
Function: f
args: ((1, 2))
kwargs: {}
Exception: TypeError("f() missing 1 required positional argument: 'y'",)
distributed.worker - WARNING - Compute Failed
Function: f
args: ((2, 3))
kwargs: {}
Exception: TypeError("f() missing 1 required positional argument: 'y'",)
You do not quite have the signature right - perhaps the doc is not clear (suggestions welcome). Client.map() takes one iterable per positional argument of the function (like the builtin map), not a single iterable of argument tuples. You should phrase this as
client.map(f, (1, 2), (2, 3))
or, if you wanted to stay closer to your original pattern
client.map(f, *[(1, 2), (2, 3)])
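As a quick check (a sketch assuming a local Client as in the question), gathering the futures shows that each tuple became one call:
futures = client.map(f, (1, 2), (2, 3))
print(client.gather(futures))  # [(1, 2), (2, 3)] -- i.e. f(1, 2) and f(2, 3)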
OK, the documentation is definitely a bit confusing on this one, and I couldn't find an example that clearly demonstrated the problem. So let me break it down below:
def test_fn(a, b, c, d, **kwargs):
    return a + b + c + d + kwargs["special"]
futures = client.map(test_fn, *[[1, 2, 3, 4], (1, 2, 3, 4), (1, 2, 3, 4), (1, 2, 3, 4)], special=100)
output = [f.result() for f in futures]
# output = [104, 108, 112, 116]
futures = client.map(test_fn, [1, 2, 3, 4], (1, 2, 3, 4), (1, 2, 3, 4), (1, 2, 3, 4), special=100)
output = [f.result() for f in futures]
# output = [104, 108, 112, 116]
Things to note:
It doesn't matter whether you use lists or tuples, and as I did above, you can mix them.
You have to group arguments by their position. So if you're passing in 4 sets of arguments, the first list will contain the first argument from all 4 sets. (In this case, the "first" call to test_fn gets a=b=c=d=1.)
Extra **kwargs (like special) are passed through to the function. But it'll be the same value for all function calls.
Now that I think about it, this isn't that surprising. I think it's just following Python's concurrent.futures.ProcessPoolExecutor.map() signature.
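If your data naturally arrives as a list of per-call argument tuples, a transpose with zip converts it into the grouped-by-position shape that map expects (a small sketch reusing f from above):
pairs = [(1, 2), (2, 3)]
# zip(*pairs) transposes: all first arguments together, then all second arguments
futures = client.map(f, *zip(*pairs))  # same as client.map(f, (1, 2), (2, 3))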
PS. Note that even though the documentation says "Returns: List, iterator, or Queue of futures, depending on the type of the inputs.", you can actually get this error: "Dask no longer supports mapping over Iterators or Queues. Consider using a normal for loop and Client.submit".

How can I configure Maxima to index lists from 0 instead of 1?

If I create a list in Maxima:
(%i1) a: [2, 3, 5, 7, 11];
(a) [2, 3, 5, 7, 11]
Then if I index into that list and ask for element 4:
(%i2) a[4];
(%o2) 7
This shows that Maxima uses 1-indexing rather than 0-indexing. I would prefer to use 0-indexing for lists rather than 1-indexing. How can I do this?
It is not possible to change the indexing for lists in Maxima; it always starts at 1.

Dijkstra algorithm under constraint

I have N vertices, one of which is the source. I would like to find the shortest path that connects all the vertices (so an N-step path), with the constraint that not every vertex can be visited at every step.
A network is defined by N (the number of vertices), the source, the cost to travel between each pair of vertices, and, for each step, the list of vertices that can be visited.
For example, if N = 5 and the vertices are 1 (the source), 2, 3, 4 and 5, the list [[2, 3, 4], [2, 3, 4, 5], [2, 3, 4, 5], [3, 4, 5]] means that at step 2 only vertices 2, 3 and 4 can be visited, and so forth.
I can't figure out how to adapt Dijkstra's algorithm to my problem and would really appreciate some ideas. Or maybe a better approach is something else entirely: are there other algorithms that can handle this problem?
Note: I posted the same question at math.stackexchange; I apologize if it is considered a duplicate.
You don't need any adaptation: Dijkstra's algorithm will work fine under these constraints.
Following your example:
Starting from vertex 1 we can get to 2 (let's suppose distance d = 2), 3 (d = 7) and 4 (d = 11) - the current distance values are [0, 2, 7, 11, N/A].
Next, pick the unvisited vertex with the shortest distance (vertex 2). From it we can get back to 1 (already visited, shouldn't be counted), or to 3 (edge d = 3), 4 (d = 4) or 5 (d = 9). We see that we can reach vertex 3 with distance 2 + 3 = 5 < 7, so update its value; the same holds for vertex 4 (2 + 4 = 6 < 11) and vertex 5 (2 + 9 = 11) - the current values are [0, 2, 5, 6, 11].
Mark each vertex as visited once it is selected, and follow the algorithm until all the vertices have been selected.
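To make the idea concrete, here is a minimal Python sketch of Dijkstra run over (vertex, step) states, where allowed[k] filters which vertices may be relaxed at each step. The cost matrix and the shape of allowed are assumptions matching the question's description; note that enforcing that every vertex is visited exactly once would additionally require carrying the visited set in the state, which this sketch does not do:
import heapq

def step_constrained_dijkstra(cost, source, allowed):
    """cost[u][v]: travel cost between vertices u and v (0-indexed).
    allowed[k]: iterable of vertices that may be visited at step k."""
    n_steps = len(allowed)
    best = {(source, 0): 0}
    heap = [(0, source, 0)]          # (distance, vertex, steps taken)
    while heap:
        d, u, k = heapq.heappop(heap)
        if k == n_steps:             # all steps done: cheapest completion
            return d
        if d > best.get((u, k), float("inf")):
            continue                 # stale heap entry
        for v in allowed[k]:
            nd = d + cost[u][v]
            if nd < best.get((v, k + 1), float("inf")):
                best[(v, k + 1)] = nd
                heapq.heappush(heap, (nd, v, k + 1))
    return None                      # no feasible walk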

How to compute the mean over rows till a variable changes and repeat?

Given a very large table in the following format (snippet):
Subject, Condition, VPH, Task, Round, Item, Decision, Self, Other, RT
1, 1, 1, SVO, 0, 0, 4, 2.5, 2.0, 8.598
1, 1, 1, SVO, 1, 5, 3, 4.1, 3.4, 7.785
1, 1, 1, SVO, 2, 4, 3, 3.2, 3.4, 15.713
2, 2, 1, SVO, 0, 0, 4, 2.5, 2.0, 15.439
2, 2, 1, SVO, 1, 2, 7, 4.9, 2.3, 30.777
2, 2, 1, SVO, 2, 3, 8, 4.3, 4.3, 13.549
3, 3, 1, SVO, 0, 0, 5, 2.8, 1.5, 9.066
... (And so on)
Needed: Compute the mean over all rounds for self and others for each subject.
What I have so far:
I sorted the roughly 100 MB .txt file using bash sort so that each subject and its related rounds appear one after another (as the example shows). After that I imported the .txt file into SPSS 24. Right now I have no idea how to write a function that computes, for each subject, the mean of the variables Self and Other over the three rounds. E.g. (some pseudo-code):
for n = 1 to last_subject do:
    get the Self values from rows where Subject = n
    compute the mean over these values
    write the result as a new variable Self_mean after variable RT at line n
    increase n by one
As I am totally new to SPSS, I would really appreciate detailed help. I would also be satisfied with references that specifically address computation across rows (I found lots of material on computation across columns).
Thank you very much!
Edit: example output
After computing the table should look like this:
Subject, Mean_Self, Mean_Others
1, 3.27, 2.9
2, ..., ...
3,
... (And so on)
So the Mean_Self above was computed from the top example like so:
mean(2.5, 4.1, 3.2) = (2.5 + 4.1 + 3.2) / 3 ≈ 3.27
where:
2.5 was used from line 1 of Variable Self
4.1 was used from line 2 of Variable Self
3.2 was used from line 3 of Variable Self
2.5 was not used from line 4 of variable Self because variable Subject changed; therefore we want to repeat the process with the new Subject (here 2) until it changes again. The results should produce a table like the one above. Same procedure for variable Other.
If I understand right, what you need is the AGGREGATE command. AGGREGATE can create a new dataset/file with your aggregated data, or add the aggregated data to your active dataset, as you described above:
AGGREGATE
/OUTFILE=* MODE=ADDVARIABLES
/BREAK=Subject
/Self_mean=MEAN(Self)
/Other_mean=MEAN(Other).
In order to get the new variables in a new, separate table, look up the other AGGREGATE options: e.g. /OUTFILE=* (removing MODE=ADDVARIABLES) will replace the original file in the window with the aggregated data, while /OUTFILE="path/filename" will save the aggregated data to a file.
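If you want to cross-check the result outside SPSS, here is a small pandas sketch of the same aggregation, assuming the data sits in a comma-separated file laid out like the snippet above (the file name is hypothetical):
import pandas as pd

# skipinitialspace handles the ", "-separated layout shown in the question
df = pd.read_csv("data.txt", skipinitialspace=True)
means = df.groupby("Subject", as_index=False)[["Self", "Other"]].mean()
means.columns = ["Subject", "Mean_Self", "Mean_Other"]
print(means)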
