SAS: print variables from cluster analysis

I have made a cluster analysis in SAS using PROC CLUSTER.
How do I get SAS to print the chosen number of clusters?
If I have chosen clusters = 7, I want to print the 7 clusters with the observations that lie in each cluster.
How do I do this?

Use the OUTTREE= option on PROC CLUSTER to create a tree data set, and use PROC TREE (with its OUT= option) to assign the source records to the number of clusters you want. Then you can sort the result and print by cluster:
proc tree data=Tree   /* tree data set created by PROC CLUSTER (OUTTREE=) */
   out=New            /* new data set to create */
   nclusters=7        /* number of clusters you want */
   noprint;
   id idvar;          /* ID variable from PROC CLUSTER */
   copy a b c;        /* other variables from the input data */
run;

proc sort data=New;
   by cluster idvar;
run;

proc print data=New;
   by cluster;
   id cluster;
run;
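For completeness, the upstream step that creates the Tree data set might look like the sketch below; the input data set, method, and variable names (MyData, a b c, idvar) are assumptions for illustration:

proc cluster data=MyData method=ward outtree=Tree;
   var a b c;      /* variables to cluster on */
   id idvar;       /* ID variable carried into the tree data set */
run;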
See this example in the SAS documentation for much more info.


How to merge labels of two Prometheus queries?

I am using the consul exporter to ingest the health and status of my services into Prometheus. I'd like to fire alerts when the status of services and nodes in Consul is critical and then use tags extracted from Consul when routing those alerts.
I understand from this discussion that service tags are likely to be exported as a separate metric, but I'm not sure how to join one series with another so I can leverage the tags with the health status.
For example, the following query:
max(consul_health_service_status{status="critical"}) by (service_name, status,node) == 1
could return:
{node="app-server-02",service_name="app-server",status="critical"} 1
but I'd also like 'env' from this series:
consul_service_tags{node="app-server-02",service_name="app-server",env="prod"} 1
to get joined along node and service_name to pass the following to the Alertmanager as a single series:
{node="app-server-02",service_name="app-server",status="critical",env="prod"} 1
I could then match 'env' in my routing.
Is there any way to do this? It doesn't look to me like any operations or functions give me the ability to group or join like this. As far as I can see, the tags would already need to be labels on the consul_health_service_status metric.
You can use the argument list of group_left to include extra labels from the right operand (parentheses and indents for clarity):
(
  max(consul_health_service_status{status="critical"})
    by (service_name, status, node) == 1
)
+ on(service_name, node) group_left(env)
(
  0 * consul_service_tags
)
The important part here is the operation + on(service_name,node) group_left(env):
- the + is "abused" as a join operator; this is safe because 0 * consul_service_tags always has the value 0, so the addition leaves the left-hand values unchanged
- group_left(env) is the modifier that includes the extra label env from the right operand (consul_service_tags)
The answer above is accurate. I also want to share a clearer explanation of joining two metrics while preserving the SAME labels (it might not directly answer the question). Both metrics carry the following label:
name (e.g. aaa, bbb, ccc)
I have a metric named metric_a, and if it returns no data for some of the label values, I want to fetch the data from metric_b instead. That is:
metric_a has values for {name="aaa"} and {name="bbb"}
metric_b has values for {name="ccc"}
I want the output to cover all three name labels. The solution is Prometheus's or operator.
sum by (name) (increase(metric_a[1w]))
or
sum by (name) (increase(metric_b[1w]))
The result of this will have values for {name="aaa"}, {name="bbb"} and {name="ccc"}.
It is good practice in the Prometheus ecosystem to expose additional labels, which can be joined to multiple metrics, via a separate info-like metric, as explained in this article. For example, the consul_service_tags metric exposes a set of tags that can be joined to metrics via the (service_name, node) labels.
The join is usually performed via the on() and group_left() modifiers applied to the * operation. The * doesn't modify values for time series on the left side, because info-like metrics usually have the constant value 1. The on() modifier limits the labels used for finding matching time series on the left and right sides of *. The group_left() modifier adds labels from the time series on the right side of *. See these docs for details.
For example, the following PromQL query adds env label from consul_service_tags metric to consul_health_service_status metric with the same set of (service_name, node) labels:
consul_health_service_status
* on(service_name, node) group_left(env)
consul_service_tags
Additional label filters can be added to consul_health_service_status if needed. For example, the following query returns only time series with status="critical" label:
consul_health_service_status{status="critical"}
* on(service_name, node) group_left(env)
consul_service_tags

Google DataFlow: Read from BigQuery, combine three string fields, write key/value fields to Google Cloud Spanner

None of the provided Dataflow templates match what I need to do, so I'm trying to write my own. I managed to run example code like the word count example without issue, but when I tried to stitch together parts of separate examples that read from BigQuery and write to Spanner, there were so many things in the source code I didn't understand that I couldn't adapt them to my own problem.
I'm REALLY lost on this and any help is greatly appreciated!
The goal is to use Dataflow and the Apache Beam SDK to read from a BigQuery table with 3 string fields and 1 integer field, concatenate the 3 string fields into one string and put that string in a new field called "key", and then write the key field and the (unchanged) integer field to a Spanner table that already exists, ideally appending rows with a new key and updating the integer field of rows whose key already exists.
I'm trying to do this in Java because there is no I/O connector for Python. Any advice on doing this with Python is much appreciated.
For now I would be super happy if I could just read a table from BigQuery and write whatever I get to a table in Spanner, but I can't even make that happen.
Problems:
- I'm using Maven and I don't know what dependencies to put in the pom file
- I don't know which packages to import at the beginning of my Java file
- I don't know whether to use readTableRows() or read(SerializableFunction) to read from BigQuery
- I have no idea how to access the string fields in the PCollection to concatenate them, or how to make a new PCollection with only the key and integer fields
- I somehow need to turn the PCollection into a Mutation to write to Spanner
- I want an insert-or-update style write to the Spanner table, which doesn't seem to be an option in the Spanner I/O connector
Honestly, I'm too embarrassed to even show the code I'm trying to run.
public class SimpleTransfer {
    public static void main(String[] args) {
        // Create and set your PipelineOptions.
        DataflowPipelineOptions options = PipelineOptionsFactory.as(DataflowPipelineOptions.class);
        // For Cloud execution, set the Cloud Platform project, staging location, and DataflowRunner.
        options.setProject("myproject");
        options.setStagingLocation("gs://mybucket");
        options.setRunner(DataflowRunner.class);
        // Create the Pipeline with the specified options.
        Pipeline p = Pipeline.create(options);
        String tableSpec = "database.mytable";
        // Read the whole table from BigQuery.
        PCollection<TableRow> rowsFromBigQuery =
            p.apply(
                BigQueryIO.readTableRows()
                    .from(tableSpec));
        // Hopefully some day add a transform.
        // Somehow make a Mutation -- this line doesn't compile, which is my problem:
        PCollection<Mutation> mutation = rowsFromBigQuery;
        // Only way I found to write to Spanner, not even sure if it works.
        SpannerWriteResult result = mutation.apply(
            SpannerIO.write().withInstanceId("myinstance").withDatabaseId("mydatabase").grouped());
        p.run().waitUntilFinish();
    }
}
It's intimidating to deal with these strange data types, but once you get used to the TableRow and Mutation types, you'll be able to code robust pipelines.
The first thing you need to do is take your PCollection of TableRows and convert those into an intermediate format that is convenient for you. Let's use Beam's KV, which defines a key-value pair. In the following snippet, we're extracting the values from the TableRow and concatenating the strings you want:
rowsFromBigQuery
    .apply(
        MapElements.into(TypeDescriptors.kvs(TypeDescriptors.strings(),
                                             TypeDescriptors.integers()))
            .via(tableRow -> KV.of(
                (String) tableRow.get("myKey1")
                    + (String) tableRow.get("myKey2")
                    + (String) tableRow.get("myKey3"),
                (Integer) tableRow.get("myIntegerField"))));
Finally, to write to Spanner, we use Mutation-type objects, which define the kind of mutation that we want to apply to a row in Spanner. We'll do it with another MapElements transform, which takes N inputs, and returns N outputs. We define the insert or update mutations there:
myKvPairsPCollection
    .apply(
        MapElements.into(TypeDescriptor.of(Mutation.class))
            .via(elm -> Mutation.newInsertOrUpdateBuilder("myTableName")
                .set("key").to(elm.getKey())
                .set("value").to(elm.getValue())
                .build()));
And then you can pass the output of that to SpannerIO.write. The whole pipeline looks something like this:
Pipeline p = Pipeline.create(options);
String tableSpec = "database.mytable";

// Read the whole table from BigQuery.
PCollection<TableRow> rowsFromBigQuery =
    p.apply(
        BigQueryIO.readTableRows().from(tableSpec));

// Take in a TableRow, and convert it into a key-value pair.
PCollection<Mutation> mutations = rowsFromBigQuery
    // First we turn the TableRows into the appropriate key-value
    // pairs of string key and integer value.
    .apply(
        MapElements.into(TypeDescriptors.kvs(TypeDescriptors.strings(),
                                             TypeDescriptors.integers()))
            .via(tableRow -> KV.of(
                (String) tableRow.get("myKey1")
                    + (String) tableRow.get("myKey2")
                    + (String) tableRow.get("myKey3"),
                (Integer) tableRow.get("myIntegerField"))))
    // Now we construct the mutations.
    .apply(
        MapElements.into(TypeDescriptor.of(Mutation.class))
            .via(elm -> Mutation.newInsertOrUpdateBuilder("myTableName")
                .set("key").to(elm.getKey())
                .set("value").to(elm.getValue())
                .build()));

// Now we pass the mutations to Spanner. SpannerIO.write() applies directly
// to a PCollection<Mutation>; .grouped() is for PCollection<MutationGroup>.
SpannerWriteResult result = mutations.apply(
    SpannerIO.write()
        .withInstanceId("myinstance")
        .withDatabaseId("mydatabase"));
p.run().waitUntilFinish();
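For reference, since the question also asks about imports and dependencies: the classes above would typically come from the packages below (assuming a reasonably recent Apache Beam SDK), and Maven-wise they live in the beam-sdks-java-core, beam-runners-google-cloud-dataflow-java, and beam-sdks-java-io-google-cloud-platform artifacts:

// Beam core and the Dataflow runner
import org.apache.beam.runners.dataflow.DataflowRunner;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
// I/O connectors
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.spanner.SpannerIO;
import org.apache.beam.sdk.io.gcp.spanner.SpannerWriteResult;
// Transforms and value types
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptor;
import org.apache.beam.sdk.values.TypeDescriptors;
// Data types from the Google Cloud client libraries
import com.google.api.services.bigquery.model.TableRow;
import com.google.cloud.spanner.Mutation;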

Dataflow: How to create a pipeline from an already existing PCollection spewed by another pipeline

I am trying to split my pipeline into many smaller pipelines so they execute faster. I am partitioning a PCollection of Google Cloud Storage blobs (PCollection<Blob>) so that I get a
PCollectionList<Blob> collectionList
from there I would love to be able to do something like:
Pipeline p2 = Pipeline.create(collectionList.get(0))
    .apply(stuff)
    .apply(stuff);

Pipeline p3 = Pipeline.create(collectionList.get(1))
    .apply(stuff)
    .apply(stuff);
But I haven't found any documentation about creating an initial PCollection from an already existing PCollection. I'd be very grateful if anyone can point me in the right direction.
Thanks!
You should look into the Partition transform to split a PCollection into N smaller ones. You can provide a PartitionFn to define how the split is done. You can find below an example from the Beam programming guide:
// Provide an int value with the desired number of result partitions, and a
// PartitionFn that represents the partitioning function. In this example,
// we define the PartitionFn in-line. Returns a PCollectionList containing
// each of the resulting partitions as individual PCollection objects.
PCollection<Student> students = ...;
// Split students up into 10 partitions, by percentile:
PCollectionList<Student> studentsByPercentile =
    students.apply(Partition.of(10, new PartitionFn<Student>() {
        public int partitionFor(Student student, int numPartitions) {
            return student.getPercentile()  // 0..99
                * numPartitions / 100;
        }
    }));
// You can extract each partition from the PCollectionList using the get method:
PCollection<Student> fortiethPercentile = studentsByPercentile.get(4);
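Note that you don't create a new Pipeline from a PCollection. Everything stays inside the one pipeline, and you branch by applying further transforms to each partition; the runner executes the branches in parallel. A sketch for your case might look like the following, where stuff1 and stuff2 stand in for your own PTransforms and Blob is assumed to expose getName():

PCollectionList<Blob> collectionList =
    blobs.apply(Partition.of(2, new PartitionFn<Blob>() {
        public int partitionFor(Blob blob, int numPartitions) {
            // Placeholder rule; replace with your own partitioning logic.
            return Math.floorMod(blob.getName().hashCode(), numPartitions);
        }
    }));

// Branch within the same pipeline; no new Pipeline objects needed.
collectionList.get(0)
    .apply("BranchA", stuff1);
collectionList.get(1)
    .apply("BranchB", stuff2);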

Is it possible to read a message from a PubSub and separate its data in different elements of a PCollection<String>? If so, how?

Now, I have the below code:
PCollection<String> input_data =
pipeline
.apply(PubsubIO
.Read
.withCoder(StringUtf8Coder.of())
.named("ReadFromPubSub")
.subscription("/subscriptions/project_name/subscription_name"));
Looks like you want to read some messages from Pub/Sub, convert each of them into multiple parts by splitting on space characters, and then feed the parts to the rest of your pipeline. No special configuration of PubsubIO is needed, because this is not a "reading data" problem - it's a "transforming data you have already read" problem. You simply insert a ParDo which takes your "composite" record and breaks it down the way you want, e.g.:
PCollection<String> input_data =
    pipeline
        .apply(PubsubIO
            .Read
            .withCoder(StringUtf8Coder.of())
            .named("ReadFromPubSub")
            .subscription("/subscriptions/project_name/subscription_name"))
        .apply(ParDo.of(new DoFn<String, String>() {
            public void processElement(ProcessContext c) {
                String composite = c.element();
                for (String part : composite.split(" ")) {
                    c.output(part);
                }
            }
        }));
I take it you mean that the data you want is spread across different elements of the PCollection and that you want to extract and group it somehow.
A possible approach is to write a DoFn function that processes each String in the PCollection. You output a key value pair for each piece of data you want to group. You can then use the GroupByKey transform to group all the relevant data together.
For example you have the following messages from pubsub in your PCollection:
User 1234 bought item A
User 1234 bought item B
The DoFn function will output a key value pair with the user id as key and the item bought as value. ( <1234,A> , <1234, B> ).
Using the GroupByKey transform you group the two values together in one element. You can then perform further processing on that element.
This is a very common pattern in big data, known as MapReduce; a minimal sketch follows.
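Here is a sketch of that pattern in the same old-SDK DoFn style as the snippets above; the message format and token positions are assumptions for illustration:

// Parse "User 1234 bought item A" into a KV of (userId, item),
// then group all items bought by the same user into one element.
PCollection<KV<String, Iterable<String>>> itemsByUser =
    input_data
        .apply(ParDo.of(new DoFn<String, KV<String, String>>() {
            public void processElement(ProcessContext c) {
                // Assumed format: "User <id> bought item <item>"
                String[] tokens = c.element().split(" ");
                c.output(KV.of(tokens[1], tokens[4]));
            }
        }))
        .apply(GroupByKey.<String, String>create());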
You can output an Iterable<A> and then use Flatten to squash it. Unsurprisingly, this is termed flatMap in many next-gen data processing platforms, cf. Spark / Flink.
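In the newer Beam API, a sketch of that approach might look like this (splitting on spaces as in the earlier example; Arrays is java.util.Arrays):

PCollection<String> parts =
    input_data
        // Emit one Iterable<String> per message...
        .apply(MapElements
            .into(TypeDescriptors.iterables(TypeDescriptors.strings()))
            .via(line -> Arrays.asList(line.split(" "))))
        // ...then flatten the iterables into individual elements.
        .apply(Flatten.iterables());

Beam also provides FlatMapElements, which combines the two steps into a single transform.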

Can a Lua script that is run on one node get keys from another node in a Redis cluster?

Can a Lua script that is run on one node get keys from another node in a Redis cluster?
Example:
Node A
key1 val1
key2 val2
Node B
key3 val3
Script
return redis.call('get', 'key1') + redis.call('get', 'key2')
Furthermore, are there any attempts to support map-reduce in Redis Cluster?
Unfortunately it is not possible to operate on keys from multiple shards in a Lua script. You have to define your sharding rules so that all keys involved in a script are guaranteed to live on a single shard; in Redis Cluster you can force this with hash tags, e.g. {user42}.key1 and {user42}.key2 hash to the same slot. Otherwise you will have to apply the reduce phase yourself in your client-side code, as sketched below.
http://grokbase.com/t/gg/redis-db/136q7m853y/atomicity-of-lua-scripts-against-cluster
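A minimal client-side reduce sketch in Java, assuming the Jedis client, numeric values, and a placeholder host/port:

import java.util.Set;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClientSideReduce {
    public static void main(String[] args) {
        // JedisCluster routes each GET to the shard that owns the key.
        JedisCluster cluster = new JedisCluster(Set.of(new HostAndPort("127.0.0.1", 7000)));
        // Fetch each key individually, then reduce on the client.
        long sum = Long.parseLong(cluster.get("key1"))
                 + Long.parseLong(cluster.get("key2"));
        System.out.println(sum);
        cluster.close();
    }
}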
