Telegraf splits input data into different outputs - influxdb

I want to write a Telegraf config file which will:
Use the openweathermap input or a custom HTTP request result:
{
"fields": {
...
"humidity": 97,
"temperature": -11.34,
...
},
"name": "weather",
"tags": {...},
"timestamp": 1675786146
}
Split the result into two similar JSONs:
{
"sensorID": "owm",
"timestamp": 1675786146,
"value": 97,
"type": "humidity"
}
and
{
"sensorID": "owm",
"timestamp": 1675786146,
"value": -11.34,
"type": "temperature"
}
Send these JSONs to an MQTT queue.
Is this possible, or do I have to create two different configs and make two API calls?

I found the following configuration, which solves my problem:
[[outputs.mqtt]]
servers = ["${MQTT_URL}", ]
topic_prefix = "owm/data"
data_format = "json"
json_transformation = '{"sensorID":"owm","type":"temperature","value":fields.main_temp,"timestamp":timestamp}'
[[outputs.mqtt]]
servers = ["${MQTT_URL}", ]
topic_prefix = "owm/data"
data_format = "json"
json_transformation = '{"sensorID":"owm","type":"humidity","value":fields.main_humidity,"timestamp":timestamp}'
[[inputs.http]]
urls = [
"https://api.openweathermap.org/data/2.5/weather?lat={$LAT}&lon={$LON}2&appid=${API_KEY}&units=metric"
]
data_format = "json"
Here we:
Retrieve data from OWM in the input plugin.
Transform the received data structure into the needed structure in two different output plugins, using the JSONata language (https://jsonata.org/) for this.
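For reference, here is a minimal Python sketch (outside Telegraf, just for checking expectations) of the same mapping that the two JSONata expressions perform; the main_temp and main_humidity field names follow the config above, and the sample values are assumptions:
import json

# sample metric roughly as Telegraf's JSON parser would flatten the OWM response
# (field names follow the json_transformation above; values are made up)
metric = {
    "name": "weather",
    "fields": {"main_temp": -11.34, "main_humidity": 97},
    "timestamp": 1675786146,
}

def split_metric(m):
    # produce the two per-sensor payloads that the two MQTT outputs publish
    for kind, field in (("temperature", "main_temp"), ("humidity", "main_humidity")):
        yield {
            "sensorID": "owm",
            "type": kind,
            "value": m["fields"][field],
            "timestamp": m["timestamp"],
        }

for payload in split_metric(metric):
    print(json.dumps(payload))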

Related

Need to print certain letters after a particular word using shell script

I am new to shell scripting and I want to print the IPs inside the current CIDR range and the proposed CIDR range. Below is the output:
{
"acknowledgeRequiredBy": xxxxxx000,
"acknowledged": false,
"acknowledgedBy": "name#email.com",
"acknowledgedOn": xxxxxx00000,
"contacts": [
"dl#email.com",
"dl2#gmail.com"
],
"currentCidrs": [
"1.2.3.4/24",
"5.6.7.8/24",
"9.10.11.12/24",
"13.14.15.16/24",
"17.18.19.20/24",
"21.22.23.24/24",
"25.26.27.28/24",
],
"id": 1x4x3xx,
"latestTicketId": 0000,
"mapAlias": "sample name",
"mcmMapRuleId": 111xxx,
"proposedCidrs": [
"25.26.27.28/24",
"1.2.3.4/24",
"5.6.7.8/24",
"9.10.11.12/24",
],
"ruleName": "namerule.com",
"service": "S",
"shared": na,
"sureRouteName": "example",
I want the output to be only the IPs. Please help me with the logic.
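As a rough sketch of the logic: since the pasted output is not strictly valid JSON (trailing commas, the bare na value), the quoted CIDRs can be pulled out of each named array with a regular expression, for example in Python (the file name output.txt is an assumption):
import re

# read the pasted API output; the file name is an assumption
raw = open("output.txt").read()

def cidrs(block_name, text):
    # grab the bracketed array that follows the given key, then the quoted CIDRs inside it
    block = re.search(block_name + r'"\s*:\s*\[(.*?)\]', text, re.S)
    return re.findall(r'"([\d./]+)"', block.group(1)) if block else []

print("current:", cidrs("currentCidrs", raw))
print("proposed:", cidrs("proposedCidrs", raw))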

How to maintain data type in respond after connecting with Datastax Astra DB?

I am using the Datastax Astra database. I am uploading a CSV file and setting the data type of each column to float, as appropriate. I connected to my DB via the Python http_methods.
res = astra_client.request(
method=http_methods.GET,
path=f"/api/rest/v2/keyspaces/{ASTRA_DB_KEYSPACE}/{ASTRA_DB_COLLECTION}/rows")
This gives me all the rows, but the data types are not maintained in the response. All the floats are converted to strings. Why, and how can I solve this?
{'PetalWidthCm': '0.2', 'Id': 23, 'PetalLengthCm': '1.0', 'SepalWidthCm': '3.6', 'Species': 'Iris-setosa', 'SepalLengthCm': '4.6'}
How are you creating your table and adding data? For example, the following works for me.
import http.client
import json
ASTRA_TOKEN = ""
ASTRA_DB_ID = ""
ASTRA_DB_REGION = ""
ASTRA_DB_KEYSPACE = "testks"
ASTRA_DB_COLLECTION = "float_test"
conn = http.client.HTTPSConnection(f"{ASTRA_DB_ID}-{ASTRA_DB_REGION}.apps.astra.datastax.com")
headers = {
'X-Cassandra-Token': ASTRA_TOKEN,
'Content-Type': 'application/json'
}
# Create the table
createTablePayload = json.dumps({
"name": f"{ASTRA_DB_COLLECTION}",
"columnDefinitions": [
{"name": "id", "typeDefinition": "text"},
{"name": "floatval", "typeDefinition": "float"},
{"name": "intval", "typeDefinition": "int"},
{"name": "doubleval", "typeDefinition": "double"}
],
"primaryKey": {"partitionKey": ["id"]},
"ifNotExists": True
})
conn.request("POST", f"/api/rest/v2/schemas/keyspaces/{ASTRA_DB_KEYSPACE}/tables", createTablePayload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
# Add some data
addRowPayload = json.dumps({
"id": "af2603d2-8c03-11eb-a03f-0ada685e0000",
"floatval": 1.1,
"intval": 2,
"doubleval": 3.3
})
conn.request("POST", f"/api/rest/v2/keyspaces/{ASTRA_DB_KEYSPACE}/{ASTRA_DB_COLLECTION}", addRowPayload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
# Read back the data
conn.request("GET", f"/api/rest/v2/keyspaces/{ASTRA_DB_KEYSPACE}/{ASTRA_DB_COLLECTION}/rows", "", headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
The response from this is
$ python3 get_rows.py
{"name":"float_test"}
{"id":"af2603d2-8c03-11eb-a03f-0ada685e0000"}
{"count":1,"data":[{"id":"af2603d2-8c03-11eb-a03f-0ada685e0000","doubleval":3.3,"intval":2,"floatval":1.1}]}

Is there a way to use OCR to extract specific data from a CAD technical drawing?

I'm trying to use OCR to extract only the base dimensions of a CAD model, but there are other associative dimensions that I don't need (like angles, length from baseline to hole, etc). Here is an example of a technical drawing. (The numbers in red circles are the base dimensions, the rest in purple highlights are the ones to ignore.) How can I tell my program to extract only the base dimensions (the height, length, and width of a block before it goes through the CNC)?
The issue is that the drawings I get are not in a specific format, so I can't tell the OCR where the dimensions are. It has to figure that out on its own, contextually.
Should I train the program through machine learning by running several iterations and correcting it? If so, what methods are there? The only thing I can think of is OpenCV cascade classifiers.
Or are there other methods for solving this problem?
Sorry for the long post. Thanks.
I feel you... it's a very tricky problem, and we spent the last three years finding a solution for it. Forgive me for mentioning our own solution, but it will certainly solve your problem: pip install werk24
from werk24 import Hook, W24AskVariantMeasures
from werk24.models.techread import W24TechreadMessage
from werk24.utils import w24_read_sync
from . import get_drawing_bytes # define your own
def recv_measures(message: W24TechreadMessage) -> None:
    for cur_measure in message.payload_dict.get('measures'):
        print(cur_measure)

if __name__ == "__main__":
    # define what information you want to receive from the API
    # and what shall be done when the info is available.
    hooks = [Hook(ask=W24AskVariantMeasures(), function=recv_measures)]
    # submit the request to the Werk24 API
    w24_read_sync(get_drawing_bytes(), hooks)
In your example it will return, for instance, the following measure:
{
"position": <STRIPPED>
"label": {
"blurb": "ø30 H7 +0.0210/0",
"quantity": 1,
"size": {
"blurb": "30",
"size_type":" "DIAMETER",
"nominal_size": "30.0",
},
"unit": "MILLIMETER",
"size_tolerance": {
"toleration_type": "FIT_SIZE_ISO",
"blurb": "H7",
"deviation_lower": "0.0",
"deviation_upper": "0.0210",
"fundamental_deviation": "H",
"tolerance_grade": {
"grade":7,
"warnings":[]
},
"thread": null,
"chamfer": null,
"depth":null,
"test_dimension": null,
},
"warnings": [],
"confidence": 0.98810
}
}
or for a GD&T
{
"position": <STRIPPED>,
"frame": {
"blurb": "[⟂|0.05|A]",
"characteristic": "⟂",
"zone_shape": null,
"zone_value": {
"blurb": "0.05",
"width_min": 0.05,
"width_max": null,
"extend_quantity": null,
"extend_shape": null,
"extend": null,
"extend_angle": null
},
"zone_combinations": [],
"zone_offset": null,
"zone_constraint": null,
"feature_filter": null,
"feature_associated": null,
"feature_derived": null,
"reference_association": null,
"reference_parameter": null,
"material_condition": null,
"state": null,
"data": [
{
"blurb": "A"
}
]
}
}
Check the documentation on Werk24 for details.
Although a managed offering, Mixpeek is one free option:
pip install mixpeek
from mixpeek import Mixpeek
mix = Mixpeek(
api_key="my-api-key"
)
mix.upload(file_name="design_spec.dwg", file_path="s3://design_spec_1.dwg")
This /upload endpoint will extract the contents of your DWG file, then when you search for terms it will include the file_path so you can render it in your HTML.
Behind the scenes it uses the open source LibreDWG library to run a number of AutoCAD native commands such as DATAEXTRACTION.
Now you can search for a term and the relevant DWG file (in addition to the context in which it exists) will be returned:
mix.search(query="retainer", include_context=True)
[
{
"file_id": "6377c98b3c4f239f17663d79",
"filename": "design_spec.dwg",
"context": [
{
"texts": [
{
"type": "text",
"value": "DV-34-"
},
{
"type": "hit",
"value": "RETAINER"
},
{
"type": "text",
"value": "."
}
]
}
],
"importance": "100%",
"static_file_url": "s3://design_spec_1.dwg"
}
]
More documentation here: https://docs.mixpeek.com/

Array to JSON Array in Swift

I have two arrays of integers and I would like to send them to my MongoDB database. I send them to the database with Alamofire as parameters; in the code, data_array1 and data_array2 refer to the Int arrays.
let parameters_post: Parameters = [
"sensor_id": "ecg_raw",
"member_id": "58d3f509e48f4ca90dd218e4",
"esignal": "3.5V",
"ts": "emre",
"value1" : data_array1,
"value2" : data_array2
]
Alamofire.request("https://api.mlab.com/api/1/databases/mysignal/collections/Cecgraw?apiKey=2ABdhQTy1GAWiwfvsKfJyeZVfrHeloQI", method: .post, parameters: parameters_post,encoding: JSONEncoding.default, headers: nil).responseData{ response in
print(response.request)
print(response.response)
print(response.result)
}
However, it appears like this in MongoDB, which I think is incorrect:
{
"_id": {
"$oid": "58f9d0e7c2ef162ad3000cb6"
},
"sensor_id": "ecg_raw",
"member_id": "58d3f509e48f4ca90dd218e4",
"value2": [
[
240,
279,
555,
547,
504
]
],
"value1": [
[
135,
91,
101,
115,
106
]
],
"esignal": "3.5V",
"ts": "emre"
}
As you said, you want to send the value for the key value as [(Int, Int)]. But you are actually sending [[(Int, Int)]], which means an array of arrays of tuples (I assume you need to send an array of tuples).
Try to send the list below:
let parameters_post: Parameters = [
"sensor_id": "ecg_raw",
"member_id": "58d3f509e48f4ca90dd218e4",
"esignal": "3.5V",
"ts": "emre",
"value" : data_array
]
Thanks.

Handling groups of key, value pairs while parsing lxc container file with python

I have an lxc configuration file that looks like an ini/properties file, but it contains duplicate key-value pairs that I need to group. I wish to convert it into a dict and JSON. Here is the sample:
lxc.tty = 4
lxc.pts = 1024
lxc.rootfs = /opt/mail0/rootfs/
lxc.network.type = veth
lxc.network.name = eth7
lxc.network.link = br7
lxc.network.ipv4 = 192.168.144.215/24
lxc.network.type = veth
lxc.network.name = eth9
lxc.network.link = br9
lxc.network.ipv4 = 10.10.9.215/24
Here is the desired Python data, which also converts the cfg keys into key paths:
data = {
"lxc":{ "tty": 4, "pts": 1024 , "rootfs": "/opt/mail0/rootfs",
"network": [
{"type": "veth", "name": "eth7", "link": "br7", "ipv4":"192.168.144.215/24"},
{"type": "veth", "name": "eth9", "link": "br9", "ipv4":"10.10.9.215/24"}
]
}
}
Can you share or suggest your method and rules for handling such groups of key-value pairs? Thank you in advance!
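A minimal parsing sketch in Python, assuming that a repeated lxc.network.type line starts a new network group and that integer-looking values such as tty and pts should become ints:
import json

def parse_lxc_config(text):
    # group flat lxc.* key/value lines into the nested structure shown above
    data = {"lxc": {"network": []}}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        parts = key.split(".")                      # e.g. ['lxc', 'network', 'type']
        if parts[:2] == ["lxc", "network"]:
            field = parts[2]
            # a new 'type' entry opens a new network group
            if field == "type" or not data["lxc"]["network"]:
                data["lxc"]["network"].append({})
            data["lxc"]["network"][-1][field] = value
        else:
            data["lxc"][parts[1]] = int(value) if value.isdigit() else value
    return data

with open("container.conf") as fh:                  # the path is an assumption
    print(json.dumps(parse_lxc_config(fh.read()), indent=2))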
