How to extract the basis vectors from nullspace - Maxima

Is there a way to extract the basis vectors from nullspace(A)? For example, when I ran
A : matrix([1,2,3,4], [2,2,4,4]);
nullspace(A);
I got
span(v1, v2)
where v1 and v2 are the transposes of [0, -4, 0, 2] and [2, 2, -2, 0], respectively.
What I want to do is to use v1 & v2 to create another variable, e.g.
B : matrix(v1, v2)
Is there a way to do this, so that I don't need to read the screen and then manually enter v1 & v2 to create matrix B? Thanks a lot!

addcol pastes together columns. Try this:
foo : nullspace (A);
B : apply (addcol, args (foo));
args(foo) returns the list of columns from the span expression (what you have labeled v1 and v2 above).
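If you would rather have the basis vectors as rows of B (closer to your matrix(v1, v2) idea), transposing that result should do it; this is a small, untested variation on the same recipe:
B : transpose (apply (addcol, args (nullspace (A))));
With the columns shown above, the inner addcol builds the 4x2 matrix whose columns are v1 and v2, and the transpose turns it into a 2x4 matrix with one basis vector per row.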

Related

How to get a path for script-fu function: gimp-drawable-edit-stroke-item

I am struggling with getting the actual path (or vectors) object from an id. I want to stroke a path, and the currently advised way of doing so seems to be the method gimp-drawable-edit-stroke-item, which needs an item as input. By the way, I tried to find a list of all predefined types in script-fu but didn't find anything either, so I am not sure what the type Item really is, but it looks like you can pass a vector to it.
All I can find so far to identify a path is (cadr (gimp-image-get-vectors p-image)), which seems to only give me an id, and the following (gimp-drawable-edit-stroke-item p-drawable (cadr (gimp-image-get-vectors p-image))) leads to an "Error: Invalid type for argument 2 to gimp-drawable-edit-stroke-item".
Having to navigate lists instead of using names for fields is the reason why I never bothered with Scheme/script-fu since Gimp can be scripted in Python.
This said, with my limited Lisp knowledge:
(gimp-image-get-vectors p-image) returns a (count (v1 v2 v3 ...)) list
so (cadr (gimp-image-get-vectors p-image)) returns the list, and not a single item of the list.
You can get the "active path" directly with (gimp-image-get-active-vectors p-image) (using the paths list doesn't tell you which path in the list is meant by the user anyway).
"I want to stroke a path"
Rather than gimp-drawable-edit-stroke-item, gimp-pencil (or gimp-paintbrush) can do it:
(define (stroke-path drawable color width . path)
  (gimp-context-set-line-miter-limit 5)          ; default: 10, default mitre up to 60 pixels
  (gimp-context-set-stroke-method STROKE-LINE)   ; default STROKE-PAINT-METHOD
  (gimp-context-set-line-cap-style CAP-BUTT)     ; CAP-ROUND, CAP-SQUARE
  (gimp-context-set-line-join-style JOIN-ROUND)  ; JOIN-MITER, JOIN-ROUND, JOIN-BEVEL
  (gimp-context-set-foreground color)
  (gimp-context-set-line-width width)            ; default: 6
  (let ((vec (apply vector path)))
    (gimp-pencil drawable (vector-length vec) vec)
  )
)
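A hypothetical call (the drawable variable, the color, the width and the coordinates below are made up for illustration) would then look like:
(stroke-path drawable '(255 0 0) 3 10 10 200 10 200 150)
This should draw a red open polyline through the three points (10,10), (200,10) and (200,150) on drawable.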

Recoding a variable based on a list or set of values?

Is there syntax in SPSS that checks whether a value is in a list or set of values? I have been unable to find syntax that will use a list/set as a reference.
One example use case is recode:
DATASET ACTIVATE DataSet1.
STRING V2 (A8).
RECODE V1 ('a' = 'group1') ('b' = 'group1') ('c' = 'group1') INTO V2.
EXECUTE.
Instead of typing each value as above, I would like to use a function like SQL's IN, if such a thing exists.
Logic:
if V1 IN (a,b,c,e...) then V2 = "group1"...
Thank you!
Here are some possibilities and examples to get you started:
Your recode command could be more compact, like this:
recode V1 ('a' 'b' 'c'='group1') ('d' 'e' 'f'='group2') INTO V2.
The any function gives you a logical value. For example:
if any(V1,'a', 'b', 'c') [do something]. /* can also use DO IF.
or
compute group1=any(V1,'a', 'b', 'c').
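As the comment above hints, the same test also works inside DO IF; a minimal sketch, assuming V2 has already been declared as a string (as with your STRING command):
do if any(V1,'a','b','c').
compute V2='group1'.
else if any(V1,'d','e','f').
compute V2='group2'.
end if.
execute.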
If you want to search for strings within values, you can use char.index this way (in this example the search string 'abc' is split into one-character strings, so V1 is tested for containing each of 'a', 'b' and 'c' separately):
if char.index(V1,'abc',1)>0 V2='group1'.
For more complex options you can loop over values with LOOP or DO REPEAT. For example, this DO REPEAT gives V2 the value 'grp1' for every value of V1 that contains 'a', 'b' or 'c', and 'grp2' if V1 contains 'd', 'e' or 'f':
do repeat MyStr='a' 'b' 'c' 'd' 'e' 'f'/grp='grp1' 'grp1' 'grp1' 'grp2' 'grp2' 'grp2'.
if char.index(V1,Mystr)>0 v2=grp.
end repeat.

How to modify the data in a column using Wolfram Mathematica?

I am working on a Dataset object with one column, named Property.
The data is given as shown in the following picture:
Based on the range, I would like to assign a new value and eventually replace the whole column in question. For example, if the range is 500-5000 I would like to assign the value 1, for 5000-50000 the value 2, and so on.
As I understand it, you want to recode one column of a dataset by modifying the dataset. To my knowledge, datasets are not really designed to be mutable types. If you can accept that, here are two ways to proceed.
First, let's get some artificial data.
ds = Dataset[<|"x" -> RandomInteger[10],
      "y" -> Interval[{10^#, 10^(# + 1)}]|> & /@ Range[5]]
Now suppose we want to recode the second column with a function f:
ds[All, {2 -> f}]
Note that the original dataset is unchanged. (Usually a good thing.)
Here's an example function to try out.
f[x_Interval] := Log[10, x[[1, 1]]]
ds[All, {2 -> f}]
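If you want exactly the coding from your question (500-5000 -> 1, 5000-50000 -> 2, and so on), a variant of f based on the number of digits of the lower bound should work; f2 is just an illustrative name, and this assumes the column really holds Interval objects:
f2[x_Interval] := IntegerLength[Min[x]] - 2
f2[Interval[{500, 5000}]]   (* gives 1 *)
f2[Interval[{5000, 50000}]] (* gives 2 *)
You can plug it in the same way with ds[All, {2 -> f2}].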
Now a big problem with this is that your new dataset has a column with exactly the same name but entirely different interpretation. If this bothers you, you can instead append to the dataset with a new name.
Append[#, "y2" -> f[#y]] & /@ ds
Edit:
What about those dollar signs? Unless you show us the full form of an entry, I'll have to guess. So I'll guess that the following artificial data gets us close enough to be useful:
ds = Dataset[<|"x" -> RandomInteger[10],
      "y" -> Quantity[Interval[{10^#, 10^(# + 1)}], "USDollars"]|> & /@ Range[5]]
This just means we need to make a small change in f:
f[Quantity[Interval[{x_, _}], _]] := Log[10, x]
Then we can replace or append as before:
ds[All, {2 -> f}]
Append[#, "y2" -> f[#y]] & /@ ds
If we have a grid stuff whose column x (counting from 1, as Mathematica does) holds the "Property" ranges, the code below (written here for x = 1) pulls out that column and transforms the ranges into what I think you want:
Replace[#1[[1]] & /@ stuff, x_ :> IntegerLength[x[[1, 1]]] - 2, {1}]
It takes every range in that column and returns the number of digits of the lower bound of the range minus 2, which gives the coding you are after.
For example, if we take your sample ranges:
stuff = {{$Interval[{500, 50000}], things, things},
{$Interval[{5000, 5000000}], things, things}}
And run it through our Replace:
Replace[#1[[1]] & /@ stuff, x_ :> IntegerLength[x[[1, 1]]] - 2, {1}]
We get as output:
{1, 2}
You can then easily modify the Replace above to put the transformed column back in place within stuff, for example as sketched below.
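Here is one such modification (newStuff is just an illustrative name, and I'm assuming each row keeps its range in the first position, as in the sample above):
newStuff = Replace[stuff, {r_, rest___} :> {IntegerLength[r[[1, 1]]] - 2, rest}, {1}]
This rewrites the first entry of every row of stuff and leaves the remaining columns untouched.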

Spark: join key-tuple pairs into key-list value

I have many RDDs (let's say 4) of this kind: K,(v1,v2,..,vN), and I have to join them, so I simply run
r1.join(r2).join(r3).join(r4)
The result will be something like (K, (((v1,v2,..,vN), (v1,v2,...,vN)), (v1,v2,...,vN))) and so on. Basically, I will get a nested structure of tuples, one level deeper for each join operation.
I was wondering if there exists a way to tell Spark to output as result of the join a union of the values of each RDD. In other words, I would like to get something like:
K, [ v1,v2,...,vN, v1,v2,...,vN, v1,v2,...,vN, v1,v2,...,vN ]
You could do a multi-join (and flatten the nested tuples afterwards; see the sketch at the end of this answer), or you could save yourself the nested syntax and apply a version of cogroup instead. However, since cogroup() only allows you to group up to 4 RDDs, you can kind of monkey-patch it to group more. Below is an example of a multiCogroup() function:
def multiCogroup[K : ClassTag, V : ClassTag](numPartitions: Int, inputRDDs: RDD[(K, V)]*): RDD[(K, Seq[V])] = {
  val cg = new CoGroupedRDD[K](inputRDDs.toSeq, new HashPartitioner(numPartitions))
  cg.mapValues { iterables =>
    iterables.foldLeft(Seq[V]())(_ ++ _.asInstanceOf[Iterable[V]].toSeq)
  }
}
Run on an example, you can see the following:
import org.apache.spark.rdd._
import org.apache.spark.HashPartitioner
import scala.reflect.ClassTag
val rdd1 = sc.parallelize(Seq(("a", 1),("b", 2),("c", 3),("d", 4)))
val rdd2 = sc.parallelize(Seq(("a", 4),("b", 3),("c", 2),("d", 1)))
val rdd3 = sc.parallelize(Seq(("c", 0),("d", 0),("e", 0)))
val rdd4 = sc.parallelize(Seq(("a", 5),("b", 5),("e", 5)))
val rdd5 = sc.parallelize(Seq(("b", -1),("c", -1),("d", -1)))
val combined = multiCogroup[String, Int](2, rdd1, rdd2, rdd3, rdd4, rdd5)
combined.foreach(println)
// (d,List(4, 1, 0, -1))
// (b,List(2, 3, 5, -1))
// (e,List(0, 5))
// (a,List(1, 4, 5))
// (c,List(3, 2, 0, -1))
A few things to note:
If your input RDD value types are not all the same, you could widen the output type V to a common supertype (e.g. Int and Long to AnyVal, String and Int to Any). This might not be advisable, though, as it could cause ambiguity issues later in your program. In general, I think the best use case for this is when all input value types are the same.
I've defined the function to use a HashPartitioner with the number of partitions given by the parameter numPartitions. It may make sense to pass in your own Partitioner by replacing the numPartitions argument; you can then hand that partitioner directly to CoGroupedRDD[K](), similar to what the implementation of cogroup does.
I would be cautious about using this method on large RDDs. Joins themselves can be tricky depending on the size of the input data and the distribution of the key set, and expanding this to grouping many RDDs in a single cogroup can run into similar memory issues more quickly.
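If you prefer the plain multi-join from your question, you can also flatten the nested tuples after the fact with mapValues. A minimal sketch, reusing the r1..r4 names from the question (note that, unlike the cogroup approach, an inner join only keeps keys present in all four RDDs):
// r1..r4 are assumed to be pair RDDs of type RDD[(K, V)], as in the question
val flat = r1.join(r2).join(r3).join(r4)
  .mapValues { case (((a, b), c), d) => Seq(a, b, c, d) }
If each value is itself a tuple, you can flatten one level further, e.g. with productIterator, to get the single flat list you describe.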

SPSS: Looping over Several Variables

I am working in SPSS and have a large number of variables, call them v1 to v7000.
I want to perform a series of "complex operations" on each variable to create a new set of variables: t1 to t7000.
For the sake of illustration, let's just say the "complex operation" is to have t1 be the square of v1, t2 be the square of v2, etc.
My thought is to write some code like this.
do repeat t=t1 to t7000
compute t = v*v;
end repeat.
But, I don't think this will work.
What is the right way to do this? Thanks so much in advance.
Multiple stand-in variables can be specified on a DO REPEAT command.
do repeat t = t1 to t7000
/v = v1 to v7000.
compute t = v**2.
end repeat.
