How can I use native Flume sinks with fiware-cygnus?

Fiware-cygnus documentation mentions that it is based on Apache Flume. However, it is not clear whether I can use native Flume sinks to persist events arriving from Orion Context Broker. Is this something I can easily do, with little (or ideally zero) coding? If not -- would be good to know why (and whether this can be supported going forward). Thanks!

You can use native Flume sinks by simply configuring them. Nothing has changed in Cygnus in terms of configuration management, thus you can configure an Orion-like sink or a native one.
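For illustration, here is a minimal sketch of a Flume agent configuration that routes the notifications received by the HTTP source to a native HDFS sink; the agent, channel and path names are assumptions, adapt them to your deployment:

# Sketch: wire the notification source to a native Flume HDFS sink
# (agent/channel names and the HDFS path are illustrative assumptions)
cygnusagent.sources = http-source
cygnusagent.sinks = native-hdfs-sink
cygnusagent.channels = hdfs-channel

cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnusagent.sources.http-source.channels = hdfs-channel
cygnusagent.sources.http-source.port = 5050

# Native sink: persists the raw notified events, no Orion-specific structure
cygnusagent.sinks.native-hdfs-sink.type = hdfs
cygnusagent.sinks.native-hdfs-sink.channel = hdfs-channel
cygnusagent.sinks.native-hdfs-sink.hdfs.path = hdfs://namenode:8020/flume/events

cygnusagent.channels.hdfs-channel.type = memory
cygnusagent.channels.hdfs-channel.capacity = 1000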
Nevertheless, there are differences between Orion-like and native Flume sinks.
The first one is that Orion-like sinks store the relevant data with a certain structure, while native Flume sinks store the raw notified data. For instance, if you receive a JSON-based notification such as:
{
    "subscriptionId" : "51c0ac9ed714fb3b37d7d5a8",
    "originator" : "localhost",
    "contextResponses" : [
        {
            "contextElement" : {
                "attributes" : [
                    {
                        "name" : "speed",
                        "type" : "float",
                        "value" : "112.9",
                        "metadatas": []
                    },
                    {
                        "name" : "oil_level",
                        "type" : "float",
                        "value" : "74.6",
                        "metadatas": []
                    }
                ],
                "type" : "car",
                "isPattern" : "false",
                "id" : "car1"
            },
            "statusCode" : {
                "code" : "200",
                "reasonPhrase" : "OK"
            }
        }
    ]
}
OrionHDFSSink will store something like:
{"recvTimeTs":"1429535775","recvTime":"2015-04-20T12:13:22.41.124Z","fiware-servicePath":"4wheels","entityId":"car1","entityType":"car","attrName":"speed","attrType":"float","attrValue":"112.9","attrMd":[]}
A native HDFS sink (or any other native sink) will instead persist the entire notified JSON.
The second main difference is the handling of the notified fiware-service and fiware-servicePath headers. Cygnus's sinks are able to use these values to map the notified data into specific data structures (folders, databases, tables, resources, queues...). This is very important for multi-tenancy purposes.
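For instance (the exact layout depends on the Cygnus version and sink configuration), OrionHDFSSink typically maps a notification with fiware-service "vehicles" and fiware-servicePath "4wheels" for entity car1 of type car to an HDFS file along the lines of /user/<hdfs_user>/vehicles/4wheels/car1_car/car1_car.txt, keeping each tenant's data separate.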
Third, Cygnus adds sinks for storage backends not covered by native Flume, such as CKAN, STH, MongoDB, MySQL or DynamoDB.
There are many other differences:
The usage of the Grouping Rules.
The Management Interface.
OAuth2 authentication, which is FIWARE's official mechanism.
...

Related

MQTT - How to change the behaviour of the default parser for JSON on the CE (SenML)?

I have a gateway from the manufacturer Khomp which delivers packets in the following format (SenML):
message: [
    {
        "bn": "000D6FFFFE642E09",
        "bt": 1611339204
    },
    {
        "n": "model",
        "vs": "nir21z"
    },
    {
        "n": "Geladeira Temp",
        "u": "Cel",
        "v": 4.0
    }
]
When I connect to the Thingsboard platform, the internal GW/Parser breaks the message up as an array before the input to the Root Rule Chain and treats each element as an individual packet. But since the first position in this array corresponds to the device ID (MAC), I need the whole message to be parsed in one script. Does anyone know a way to get the information before the GW parses the message?
If you're using Thingsboard CE, then I think you will need to first forward the data to a middleware service that restructures the payload. If you are familiar with AWS Lambda, you can do it there.
It would just be a simple script that takes an input payload, restructures it, and then forwards it to your Thingsboard deployment, as sketched below.
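As an illustration, a minimal sketch of that restructuring step in Java (using the org.json library; the output shape is an assumption, adapt it to whatever your Thingsboard device API expects):

import org.json.JSONArray;
import org.json.JSONObject;

public class SenmlRestructure {
    // Restructure a SenML array so the device MAC ("bn" in the first
    // record) travels together with all measurements in one object.
    public static JSONObject restructure(String senmlMessage) {
        JSONArray records = new JSONArray(senmlMessage);
        // The base record carries "bn" (device MAC) and "bt" (base time).
        JSONObject base = records.getJSONObject(0);
        JSONObject out = new JSONObject();
        out.put("deviceId", base.getString("bn"));
        out.put("ts", base.optLong("bt"));
        JSONObject values = new JSONObject();
        for (int i = 1; i < records.length(); i++) {
            JSONObject r = records.getJSONObject(i);
            // Per SenML, "vs" carries string values and "v" numeric ones.
            values.put(r.getString("n"), r.has("vs") ? r.get("vs") : r.opt("v"));
        }
        out.put("values", values);
        // Forward 'out' to your Thingsboard deployment from here.
        return out;
    }
}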
If you're using Thingsboard PE, then you can use Integrations/Data Converters to do this.

Retrieve parameter from a Jenkins REST query

The following REST query will return parameters of the last successful build of a job:
https://localhost/job/test1/lastSuccessfulBuild/api/json
I'd be interested to retrieve one of the parameters of this build, the BUILD_VERSION:
{
    "_class": "org.jenkinsci.plugins.workflow.job.WorkflowRun",
    "actions": [
        {
            "_class": "hudson.model.CauseAction",
            "causes": [
                {
                    "_class": "hudson.model.Cause$UpstreamCause",
                    "shortDescription": "Started by upstream project \"continuous-testing-pipeline-for-nightly\" build number 114",
                    "upstreamBuild": 114,
                    "upstreamProject": "continuous-testing-pipeline-for-nightly",
                    "upstreamUrl": "job/continuous-testing-pipeline-for-nightly/"
                }
            ]
        },
        { },
        {
            "_class": "hudson.model.ParametersAction",
            "parameters": [
                {
                    "_class": "hudson.model.StringParameterValue",
                    "name": "BUILD_VERSION",
                    "value": "1.1.15"
Is there a way to retrieve BUILD_VERSION (1.1.15) directly using the REST API, or do I have to parse the JSON string manually?
Thanks
Yes, you can get the value, but it will only work with the XML API :(
The JSON API will return a simplified JSON object using tree :)
Jenkins provides you with APIs (XML, JSON, Python) from which you can read the Jenkins-related data of any project. Detailed documentation is provided at https://localhost/job/test1/lastSuccessfulBuild/api
In that it clearly states that:
XML API - Use XPath to control the fragment you want. For example, ../api/xml?xpath=//[0]
JSON API - Use tree
Python API - Use ast.literal_eval(urllib.urlopen("...").read())
All of the above can be used to get a specific fragment/piece from the entire messy data that you get from the API.
In your case, we will use tree for obvious reasons :)
Syntax : tree=keyname[field1,field2,subkeyname[subfield1]]
In order to retrieve BUILD_VERSION, i.e. its value:
//jenkins/job/myjob/../api/json?tree=lastSuccessfulBuild[parameters[value]]
The above should get you what you want, but a bit of trial and error is required :)
You can also refer here for a better understanding of how to use Tree in JSON API
https://www.cloudbees.com/blog/taming-jenkins-json-api-depth-and-tree
Hope it helps :)
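If you'd rather do the extraction in code, here is a minimal sketch in Java (Java 11+, org.json library; the tree expression here queries the build URL directly and is an assumption, since the parameters usually sit under actions as shown in the JSON above):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.json.JSONArray;
import org.json.JSONObject;

public class BuildVersion {
    public static void main(String[] args) throws Exception {
        // Ask only for the parameter names/values of the build.
        String url = "https://localhost/job/test1/lastSuccessfulBuild/api/json"
                + "?tree=actions[parameters[name,value]]";
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        JSONArray actions = new JSONObject(resp.body()).getJSONArray("actions");
        for (int i = 0; i < actions.length(); i++) {
            // Only the ParametersAction entry has a "parameters" array.
            JSONArray params = actions.getJSONObject(i).optJSONArray("parameters");
            if (params == null) continue;
            for (int j = 0; j < params.length(); j++) {
                JSONObject p = params.getJSONObject(j);
                if ("BUILD_VERSION".equals(p.getString("name"))) {
                    System.out.println(p.getString("value")); // prints 1.1.15
                }
            }
        }
    }
}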
Short answer: no.
The easiest way to programmatically access any attribute exposed via the JSON API is to take the JSON from one of Jenkins' supported JSON APIs (in your case: https://localhost/job/<jobname>/lastSuccessfulBuild/api/json) and then:
1. Copy the resultant JSON into http://json2csharp.com.
2. Generate the corresponding C# code. Don't forget to create a meaningful name for the top-level class.
3. Call the REST API programmatically from C# using RestSharp.
4. Deserialise the JSON to the C# class you defined in step 2.
Whammo, you have access to the entire object tree and all its values.
I used this approach to write an MVC5 ASP.NET site I called "BuildDashboard" to provide all the information a development team could want, and it answered every question they had.
Here is an example with a public Jenkins instance and one of its builds, in order to get the "candidate_revision" parameter of the "lastSuccessfulBuild" build:
https://jenkins.qa.ubuntu.com/view/All/job/account-plugins-vivid-i386-ci/lastSuccessfulBuild/parameters/
https://jenkins.qa.ubuntu.com/view/All/job/account-plugins-vivid-i386-ci/lastSuccessfulBuild/api/xml?xpath=/freeStyleBuild/action/parameter[name=%22candidate_revision%22]/value

Use Container Metrics from Prometheus

I deployed Prometheus on my cluster, as well as cAdvisor and Grafana. It works tremendously well: I get all the data I need in Grafana's UI.
I started using the Prometheus Java API in order to use this data, for example to get the CPU usage and do something once it reaches a certain value.
What I display in Grafana is the CPU usage for each container. Now I would like to get that information with the Java API if possible (or by some other means if not). But of course PromQL queries aren't usable from a Java program, from what I have tried (although I may be wrong).
I thought of several ways:
Clone the cAdvisor project and directly implement what I want to do in Go
Create a bash script with the docker stats command that would get me each container and its associated CPU usage
Or maybe there is actually a way to send PromQL queries.
For instance, we can get a metric by its name via Java or the Prometheus interface:
e.g. node_cpu would get me some data.
But if I want something more precise, I need to send a full query expression, for example irate(node_cpu{job="prometheus"}[5m]), which is not possible via Java.
Is there a way for me to get more precise metrics ?
Prometheus supports REST API requests, which are language-agnostic. You just need to send an HTTP request with your query and process the response.
See the example below, copied from their site.
The following HTTP GET request:
http://localhost:9090/api/v1/query?query=up&time=2015-07-01T20:10:51.781Z
returns something like this:
{
    "status" : "success",
    "data" : {
        "resultType" : "vector",
        "result" : [
            {
                "metric" : {
                    "__name__" : "up",
                    "job" : "prometheus",
                    "instance" : "localhost:9090"
                },
                "value": [ 1435781451.781, "1" ]
            },
            {
                "metric" : {
                    "__name__" : "up",
                    "job" : "node",
                    "instance" : "localhost:9100"
                },
                "value" : [ 1435781451.781, "0" ]
            }
        ]
    }
}
There are lots more details in the Prometheus HTTP API documentation.
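So, from Java, you can simply issue that HTTP request yourself; a minimal sketch (Java 11+, no external dependencies; the Prometheus address is an assumption):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class PromQuery {
    public static void main(String[] args) throws Exception {
        // Any PromQL expression works here, including functions like irate().
        String promql = "irate(node_cpu{job=\"prometheus\"}[5m])";
        String url = "http://localhost:9090/api/v1/query?query="
                + URLEncoder.encode(promql, StandardCharsets.UTF_8);
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        // The body is a JSON document like the one shown above;
        // feed it to any JSON parser and read data.result[].value.
        System.out.println(resp.body());
    }
}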

how to use odata services to create model dynamically from manifest.json

I am very new to UI5. I am working on an application which requires me to create models based on the request made from the browser (client).
If I consume all the OData services beforehand and use them according to the request made, it will become unnecessarily heavy.
Is there any way this can be done dynamically?
I think your question title and the question content might be contradictory so I am placing my suggestions separately.
how to use odata services to create model dynamically from manifest.json
In your manifest.json file, locate the "sap.app" section/property and then add a datasource as follows:
"dataSources": { //used data sources -> ui5-related information stored in sap.ui5 namespace (unique inside the app)
"modelalias": { //key is alias which is used below in e.g. sap.ui5 ...
"uri": "/sap/opu/odata/snce/PO_S_SRV;v=2/" , //mandatory; version is part of uri, e.g. ";v=2", default is 1
"type": "OData" , //OData (default)|ODataAnnotation|INA|XML|JSON
"settings": { //data-source-type-specific attributes (key, value pairs)
"odataVersion": "2.0" , //possible values: 2.0 (default), 4.0
"annotations": [ "equipmentanno" ], //filled e.g. for Smart Template
"localUri": "model/metadata.xml" //relative url to local metadata
"maxAge": 360 //time in seconds
}
}
To instantiate this model with the alias "mymodel", you can add an entry into the manifest.json under "sap.ui5" as follows:
"models": {
...
"mymodel": { //empty string "" is the default model
"preload": true; //indicator that the model will be created immediately after the manifest is loaded by component factory and before the component instance is created
"dataSource": "modelalias", //reference of dataSource under sap.app - only enhance it with more settings for UI5 if needed
"settings": {
}
}
},
Now the manifest file will instantiate "mymodel" based on your OData URI in "dataSources" and set the model on your Component.js. So when your application starts, you can access the model in any controller using:
this.getOwnerComponent().getModel("mymodel")
If I consume all the odata services beforehand & use them according to
the request made, it will become too heavy unnecessarily. is there any
way, this can be done dynamically?
Your assumption is that creating a model will slow down the app startup. This may not always be true, since:
The model creation itself is very quick
It is reading the data that takes time, not the model instantiation
ODataModels work asynchronously by default, so .read and .write calls can be managed asynchronously
Special case: if you wish to pre-fetch all your data in advance (at startup), I would advise that you use query options like $select, $top and $skip on your Gateway service to implement growing-list-like behavior, as in the example request below.
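For illustration (the entity set and field names here are invented), such a growing-list request against the data source above could look like:
/sap/opu/odata/snce/PO_S_SRV;v=2/POSet?$select=PoNumber,Supplier&$top=20&$skip=0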
Hope that helps you.

How do I connect my database to API.AI?

Making every sentence into an intent and creating entities for each doesn't seem like a good idea. So what is the best possible way to go about it?
As far as I know it is not possible yet, but you can switch to raw mode and paste your entities in CSV or JSON format, or import a JSON/CSV file containing all your entities.
The file should look like below (JSON format):
[
    {
        "value": "val1",
        "synonyms": [
            "syn1",
            "syn2"
        ]
    },
    {
        "value": "val2",
        "synonyms": [
            "syn21",
            "syn22"
        ]
    }
]
So you can imagine writing a small job that reads the entities from your DB and produces a JSON/CSV file in the wanted format, as sketched below.
Once the job is done, this process may dramatically facilitate the creation of your entities on api.ai.
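A minimal sketch of such a job in Java (JDBC plus the org.json library; the JDBC URL, table and column names are assumptions for illustration):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Arrays;
import org.json.JSONArray;
import org.json.JSONObject;

public class EntitiesExport {
    public static void main(String[] args) throws Exception {
        JSONArray entities = new JSONArray();
        try (Connection c = DriverManager.getConnection(
                     "jdbc:mysql://localhost/mydb", "user", "pass");
             Statement s = c.createStatement();
             // Hypothetical table: one row per entity value,
             // synonyms stored comma-separated.
             ResultSet rs = s.executeQuery("SELECT value, synonyms FROM entities")) {
            while (rs.next()) {
                JSONObject e = new JSONObject();
                e.put("value", rs.getString("value"));
                e.put("synonyms", new JSONArray(
                        Arrays.asList(rs.getString("synonyms").split(","))));
                entities.put(e);
            }
        }
        // Writes an api.ai-importable entities file shaped like the one above.
        Files.write(Paths.get("entities.json"), entities.toString(2).getBytes());
    }
}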
If you use a webhook for an intent, you can pass params to your endpoint, where you can do all the queries to your DB.
I did a demo where I was querying news (cheating, as I was getting them from the web, but I could have plugged in a DB).
It was getting requests such as:
"What are the latest news about France"
latest and France would be params that I send through to the webhook endpoint.
You would get the following JSON sent to your endpoint by API.AI:
"result": {
"source": "agent",
"resolvedQuery": "latest news about France",
"action": "show.news",
"actionIncomplete": false,
"parameters": {
"adjective": "latest",
"subject": "France"
}
Then you can query all the news for France and order them by latest.
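A minimal sketch of the endpoint side in Java (org.json library; the method signature and the news table are assumptions for illustration):

import org.json.JSONObject;

public class NewsWebhook {
    // Called with the raw JSON body API.AI posts to the webhook.
    public static String handle(String requestBody) {
        JSONObject result = new JSONObject(requestBody).getJSONObject("result");
        JSONObject params = result.getJSONObject("parameters");
        String adjective = params.getString("adjective"); // e.g. "latest"
        String subject = params.getString("subject");     // e.g. "France"
        // Query your DB with these values, e.g.:
        // SELECT * FROM news WHERE subject = ? ORDER BY published_at DESC
        return "Here are the " + adjective + " news about " + subject;
    }
}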
In my understanding, the idea is to create entities that are "placeholders" for the values you need to query.
Then you teach the AI with a few examples by tagging, in the sample requests, what the person asked for. Let's say someone asks:
"What is the oldest news about France?"
The AI may not know what oldest is, so you tell it that it is an adjective, and from then on you can get oldest as a param.
