Gumbo Apache Storm Monitoring Metrics

I am trying to configure Gumbo for Storm topology monitoring from here.
There is no clear example or usage given on the site, so I need some clarification on what the parameters are and where to add this code, which is given on the site above:
MonitorClient mclient = MonitorClient.forConfig(conf);
// There are multiple metric groups, each with multiple metrics.
// Components have names and multiple instances, each of which has an integer ID
mclient.declare(metricGroup,metric,task_id,component_id);
mclient.increment(metricGroup,metric, 1L , task_id);
TaskHook.registerTo(config);
Now, what values do we need to provide for metricGroup, metric, task_id and component_id? If we need to find them from each Spout and Bolt, how can we do that? Where should this code be placed: in the topology builder before submitting the topology, in the individual Spout/Bolt classes under the open/prepare methods, or somewhere else? Appreciate any help on this question.

I tried a few options and below is the config that worked for me. The group name can be anything, the metric name is the name of the stream going out from one component to another, and the task id can be any unique task number.
conf.put("gumbo.server.kind", "local");
conf.put("gumbo.local.port", 8086); //Any port it must be same in the html file
conf.put("gumbo.start", System.currentTimeMillis()); // should be the same for all calls
conf.put("gumbo.bucketSize", 1000L);
conf.put("gumbo.enabled", true);
conf.put("gumbo.http.host", "hostname");
conf.put("gumbo.http.port", 8086);//Any port it must be same in the html file
conf.put("gumbo.http.app", "gumbo");
conf.put("gumbo.enabled", true);
conf.put("gumbo.server.key", topology_id);
MonitorClient mclient = MonitorClient.connect(conf);
GumboTaskHook.registerTo(conf);
mclient.declare("Backlog",RTConstants.MATCH_LEFT_STREAM,3,RTConstants.TRANSFORM_LEFT_BOLT);
mclient.increment("Backlog",RTConstants.MATCH_LEFT_STREAM, 1L , 3);
mclient.declare("Backlog",RTConstants.MATCH_RIGHT_STREAM,4,RTConstants.TRANSFORM_RIGHT_BOLT);
mclient.increment("Backlog",RTConstants.MATCH_RIGHT_STREAM, 1L , 4);

Related

Is it possible to call a URL passing website parameters?

I am writing code for a custom SAP program regarding some Vendor information. In my program flow, there is a possibility of me trying to use a Vendor VAT Number that belongs to an unknown Vendor. There is a Web site (EU Based - https://ec.europa.eu/taxation_customs/vies/) for such purposes that requires a country key and the specified VAT Number in order for it to provide an answer with the available Company information (only works for company VAT numbers of course). My problem is that I cannot seem to find any way to pass those parameters dynamically to the Web site without needing the user to interfere during this process. Manually, the process would be to select a country key, type in a VAT number and press 'Verify'.
Is there any way for me to call this specific Web site URL and "bypass" this process so that only the result page is displayed? For now, I'm using the following function module to just call the specified URL, for lack of any better choices.
call function 'CALL_INTERNET_ADRESS'
  exporting
    pi_adress     = 'https://ec.europa.eu/taxation_customs/vies/'
  exceptions
    no_input_data = 1
    others        = 2.
You can use the CL_HTTP_CLIENT class or the HTTP_POST/HTTP_GET function modules.
You need to install the web page's SSL root certificate in your system with the STRUST t-code.
Example usage of CL_HTTP_CLIENT below.
DATA: lv_url     TYPE string VALUE 'http://mkysoft.com/ip.php'.
DATA: o_client   TYPE REF TO if_http_client.
DATA: lv_http_rc TYPE i.
DATA: lv_reason  TYPE string.
DATA: lt_fields  TYPE tihttpnvp.

TRY.
    cl_http_client=>create_by_url( EXPORTING
                                     url    = lv_url
                                   IMPORTING
                                     client = o_client
                                   EXCEPTIONS
                                     OTHERS = 0 ).

    o_client->request->get_header_fields( CHANGING fields = lt_fields ).
    o_client->request->set_header_field( name = '~request_uri' value = '/ip.php' ).
    o_client->request->set_header_field( name = '~host' value = 'mkysoft.com' ).
    o_client->request->set_method( if_http_request=>co_request_method_get ).

    o_client->send( ).
    o_client->receive( ).

    o_client->response->get_status( IMPORTING
                                      code   = lv_http_rc
                                      reason = lv_reason ).

* Error check
    IF lv_http_rc = 200.
      DATA(lv_xml) = o_client->response->get_cdata( ).
    ELSE.
* Handle error
      WRITE: / 'Error: ', lv_http_rc.
    ENDIF.

    o_client->close( ).

  CATCH cx_root INTO DATA(e_txt).
    WRITE: / e_txt->get_text( ).
ENDTRY.
The EU Commission has a SOAP service for VAT numbers.
See the info page:
https://ec.europa.eu/taxation_customs/vies/technicalInformation.html
It even supports plain HTTP:
http://ec.europa.eu/taxation_customs/vies/checkVatTestService.wsdl
So you have a proper interface rather than a screen-scrape method; that is what you should look at.
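A rough sketch of calling that SOAP service with the same CL_HTTP_CLIENT approach shown above. The production endpoint URL, the envelope namespace and the sample country code / VAT number are my assumptions taken from the public WSDL, so verify them against the technical information page; error handling is omitted.
DATA(lv_endpoint) = 'http://ec.europa.eu/taxation_customs/vies/services/checkVatService'. " assumed endpoint, check the WSDL
DATA(lv_envelope) =
  '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"' &&
  ' xmlns:urn="urn:ec.europa.eu:taxud:vies:services:checkVat:types">' &&
  '<soapenv:Body><urn:checkVat>' &&
  '<urn:countryCode>DE</urn:countryCode>' &&      " example country key
  '<urn:vatNumber>123456789</urn:vatNumber>' &&   " example VAT number
  '</urn:checkVat></soapenv:Body></soapenv:Envelope>'.

DATA lo_client TYPE REF TO if_http_client.

cl_http_client=>create_by_url( EXPORTING url    = lv_endpoint
                               IMPORTING client = lo_client
                               EXCEPTIONS OTHERS = 1 ).

lo_client->request->set_method( if_http_request=>co_request_method_post ).
lo_client->request->set_header_field( name = 'Content-Type' value = 'text/xml; charset=utf-8' ).
lo_client->request->set_cdata( lv_envelope ).

lo_client->send( ).
lo_client->receive( ).

" the response is a SOAP envelope with the company name/address (or an error element)
DATA(lv_response) = lo_client->response->get_cdata( ).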
On the other point of avoiding SSL:
Make a basic guide for customers on adding the European Commission certificate to their SAP system. If someone complains about that, they are hardly a serious user of the internet. Every SAP on-premise user that needs to call the internet adds certificates.
HTTP is dead....

Prometheus returns nothing after a while

We are using Prometheus and Grafana for our monitoring and we have a panel for response time. However, I noticed that after a while the metrics go missing and there are a lot of gaps in the panel (only in the response time panel), and they come back as soon as I restart the app (redeploying it in OpenShift). The service is written in Go and the logic for gathering the response time is quite simple.
We declared the metric:
var (
    responseTime = promauto.NewSummaryVec(prometheus.SummaryOpts{
        Namespace: "app",
        Subsystem: "rest",
        Name:      "response_time",
    }, []string{
        "path",
        "code",
        "method",
    })
)
and record it in our handler:
func handler(.......) {
    start := time.Now()
    // do stuff
    ....
    code := "200"
    path := r.URL.Path
    method := r.Method
    elapsed := float64(time.Since(start)) / float64(time.Second)
    responseTime.WithLabelValues(path, code, method).Observe(elapsed)
}
and the query in the Grafana panel is like:
sum(rate(app_rest_response_time_sum{path='/v4/content'}[5m]) /
rate(app_rest_response_time_count{path='/v4/content'}[5m])) by (path)
but the result looks like this (gaps in the panel)!
Can anyone explain what we are doing wrong or how to fix this issue? Is it possible that we are facing some kind of overflow issue (the average RPS is about 250)? I suspect this because it happens more often on the routes with higher RPS and response times!
Normally Prometheus records the metrics continuously, and if you query it, it returns all the metrics it collected for the time range you queried.
If there is no metric when you query, there are typically three reasons:
The metric was not there. This happens when the instance restarts, you have a dynamic set of labels, and there has not yet been a request for the label value you queried (in your case, no request for path='/v4/content'). In that case you should still see other metrics of the same job (at least up).
Prometheus had problems storing the metrics (see the Prometheus log files for that timeframe).
Prometheus was down for that timeframe and therefore did not collect any metrics (in that case you should have no metrics at all for that timeframe).
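Two quick queries help tell these cases apart. This is only a sketch: the job label value here is an assumption, use whatever label your scrape config actually attaches to this service.
# was the target scraped at all during the gap? if this is missing or 0 too, look at Prometheus itself (cases 2 and 3)
up{job="app"}

# does the specific series exist during the gap? if only this one is missing, it is case 1
app_rest_response_time_count{path="/v4/content"}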

Variable to change label on thingsboard

I would like to know if you can help me with creating a variable with which I can change the labels of the MQTT message that is sent from my IoT devices, in order to make it easier to select the correct parameters when creating a dashboard.
Example:
This is the message received to my server.
[{"n": "model", "d": "iot-zigbee1783"}, {"n": "Relay", "ap": true}, {"t": "gateway", "ma": "0035DDf45VAIoT215"}]
What I want is to change the label "d" to "deviceIoT" and "ap" to "door sensor", and also, if possible, to change the true or false of the door sensor to open and closed.
You can do this with the help of ThingsBoard's rule chain.
There is also an official tutorial for this:
https://thingsboard.io/docs/user-guide/rule-engine-2-0/tutorials/transform-incoming-telemetry/
They use the transformation rule node called script to convert temperatures from [°F] to [°C].
While this is not your use case, it shows you how to handle incoming telemetry before it is saved to the database.
You could do a mapping of value-keys like this:
var theCustomizedMessage = {};
theCustomizedMessage['customizedKey'] = msg['originalIncomingKey'];
return {msg: theCustomizedMessage, metadata: metadata, msgType: msgType};
Keep in mind that this might be counter-productive, since you have to update the rule node scripts whenever something changes.
As an alternative option you can rename the key labels in the widget configuration. This will not help your dashboard developers, but a documentation document will do :)
I strongly recommend against the replacement of boolean values with strings ('closed', 'opened'). This is a job for the widgets (e.g. their value format functions).
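For example, the display-side mapping could look roughly like this. This is only a sketch: the exact place to put it and the function signature depend on the widget you use, I am assuming a value format function that receives the raw telemetry value.
// hypothetical value format function for the door sensor widget
function formatDoorState(value) {
    // map the boolean telemetry value to a readable label, leave other values untouched
    if (value === true) {
        return 'open';
    }
    if (value === false) {
        return 'closed';
    }
    return value;
}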

Is it possible to apply [Rule Chain] after [Data Converter]?

I am currently working on a POC using ThingsBoard PE.
Our raw data contains [Asset] [Attributes].
Data flow:
[IoT cloud] --https webhook carry raw data--> [ThingsBoard PE HTTP INTEGRATION] --uplink--> [ThingsBoard PE Data Converter]
My question is: is it possible to apply a [Rule Chain] after the [ThingsBoard PE Data Converter], so that the device can auto-create a relationship with the [Asset] by the [Attribute] instead of the [AssetName]?
Example data after data converter process:
{
  "deviceName": "ABC",
  "deviceType": "temperature",
  "attributes": {
    "asset_id": 6   // <-- this id is used in the asset attribute
  },
  "telemetry": {
    "temperature": 39.43
  }
}
Answering your two, separate questions:
is it possible to apply [Rule Chain] after [ThingsBoard PE Data Converter]?
Yes it is possible. Once your data is successfully integrated and you are receiving it, you can access it using the [Input] Rule Node (the default green one that is always there when you create a Rule) and route it to any other node you need.
Therefore, the device can auto create relationship with [Asset] by the [Attribute], instead of [AssetName].
So, you want the relationship to take your custom attribute and use that as the pattern that identifies the Asset you want to create the relationship from.
The PE edition already has the Create Relation Node. However, it seems that, as it is, one is not able to do what you seek (it has no option to specify a custom Asset id).
However, two options you got are:
Create a Custom Rule Node that does what you want. For that, try checking the Rule Node Development page from ThingsBoard. You can use the Create Relation Node as a base and work from there. This can be a longer solution than doing...
Enrich your incoming message's metadata, adding your desired attribute. The Create Relation Node allows you to use variables from your message's metadata in its Name and Type patterns, as can be seen in that node's configuration:
This allows us a workaround to what you want to do: Add a Script Transformation Node that adds attributes.asset_id to the metadata, for example as metadata.asset_id, so you can then use it as ${asset_id} on your Name and Type patterns.
For example, your Transform() method of such Script Transformation Node should look something like this:
function Transform(msg, metadata, msgType) {
    // assuming your desired id is msg.attributes.asset_id, add it to the metadata
    metadata.asset_id = msg.attributes.asset_id;
    // return the message, in your case to the Create Relation Node
    return {msg: msg, metadata: metadata, msgType: msgType};
}
Finally, your Rule should be connected like this:
[Input] -> [Script Node] -> [Create Relation Node] -> [...whatever else you like]

DBF Large Char Field

I have a database file that I believe was created with Clipper but can't say for sure (I have .ntx files for indexes, which I understand is what Clipper uses). I am trying to create a C# application that will read this database using the System.Data.OleDb namespace.
For the most part I can successfully read the contents of the tables, but there is one field that I cannot. This field, called CTRLNUMS, is defined as CHAR(750). I have read various articles found through Google searches suggesting that fields larger than 255 chars have to be read through a different process than the normal assignment to a string variable. So far I have not been successful with any approach I have found.
The following is a sample code snippet I am using to read the table; it includes the two options I used to read the CTRLNUMS field. Both options resulted in 238 characters being returned even though there are 750 characters stored in the field.
Here is my connection string:
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\datadir;Extended Properties=DBASE IV;
Can anyone tell me the secret to reading larger fields from a DBF file?
using (OleDbConnection conn = new OleDbConnection(connectionString))
{
    conn.Open();
    using (OleDbCommand cmd = new OleDbCommand())
    {
        cmd.Connection = conn;
        cmd.CommandType = CommandType.Text;
        cmd.CommandText = string.Format("SELECT ITEM,CTRLNUMS FROM STUFF WHERE ITEM = '{0}'", stuffId);
        using (OleDbDataReader dr = cmd.ExecuteReader())
        {
            if (dr.Read())
            {
                stuff.StuffId = dr["ITEM"].ToString();

                // OPTION 1
                string ctrlNums = dr["CTRLNUMS"].ToString();

                // OPTION 2
                char[] buffer = new char[750];
                int index = 0;
                int readSize = 5;
                while (index < 750)
                {
                    long charsRead = dr.GetChars(dr.GetOrdinal("CTRLNUMS"), index, buffer, index, readSize);
                    index += (int)charsRead;
                    if (charsRead < readSize)
                    {
                        break;
                    }
                }
            }
        }
    }
}
You can find a description of the DBF structure here: http://www.dbf2002.com/dbf-file-format.html
What I think Clipper used to do was modify the Field structure so that, in Character fields, the Decimal Places held the high-order byte of the size, so Character field sizes were really 256*Decimals+Size.
I may have a C# class that reads dbfs (natively, not ADO/DAO), it could be modified to handle this case. Let me know if you're interested.
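To illustrate the hack described above, here is a rough sketch of reading the field descriptors straight from the file and computing the real length. It assumes the layout from the dbf-file-format page linked above (32-byte file header, then 32-byte field descriptors terminated by 0x0D); all names are my own and this is not the class mentioned here.
using System;
using System.IO;
using System.Text;

class DbfFieldLengths
{
    static void Main(string[] args)
    {
        using (FileStream fs = File.OpenRead(args[0]))
        using (BinaryReader br = new BinaryReader(fs))
        {
            byte[] header = br.ReadBytes(32);                 // fixed file header
            short headerSize = BitConverter.ToInt16(header, 8);

            // field descriptors run from offset 32 up to the 0x0D terminator
            for (int offset = 32; offset < headerSize - 1; offset += 32)
            {
                byte[] d = br.ReadBytes(32);
                if (d.Length == 0 || d[0] == 0x0D)
                {
                    break;                                    // end of descriptor array
                }

                string name = Encoding.ASCII.GetString(d, 0, 11).TrimEnd('\0');
                char type = (char)d[11];
                int length = d[16];
                int decimals = d[17];

                // Clipper trick: for 'C' fields the decimal byte is the high-order byte
                // of the length, so a CHAR(750) shows up as length 238, decimals 2
                int realLength = (type == 'C') ? decimals * 256 + length : length;

                Console.WriteLine("{0} ({1}) length={2}", name, type, realLength);
            }
        }
    }
}
In the CTRLNUMS case this would report 2 * 256 + 238 = 750, which matches the 238 characters the OLE DB provider returns.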
Are you still looking for an answer? Is this a one-off job or something that needs doing regularly?
I have a Python module that is primarily intended to extract data from all kinds of DBF files ... it doesn't yet handle the length_high_byte = decimal_places hack, but it's a trivial change. I'd be quite happy to (a) share this with you and/or (b) get a copy of such a DBF file for testing.
Added later: Extended-length feature added, and tested against files I've created myself. Offer to share code with anyone who would like to test it still stands. Still interested in getting some "real" files myself for testing.
3 suggestions that might be worth a shot...
1 - use Access to create a linked table to the DBF file, then use .Net to hit the table in the access database instead of going direct to the DBF.
2 - try the FoxPro OLEDB provider (see the connection string sketch after this list)
3 - parse the DBF file by hand. Example is here.
My guess is that #1 should work the easiest, and #3 will give you the opportunity to fine tune your cussing skills. :)
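For suggestion 2, the connection string would look roughly like this (a sketch; it assumes the Visual FoxPro OLE DB provider, VFPOLEDB, is installed, and you point it at the directory containing the DBF):
Provider=VFPOLEDB.1;Data Source=c:\datadir;
Note, though, that I do not know whether the VFP provider honours the Clipper extended-length trick either, so test it against the CTRLNUMS field first.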
