Handling groups of key/value pairs while parsing an LXC container config file with Python

I have an LXC configuration file that looks like an INI/properties file, but it contains duplicate key-value pairs that I need to group. I wish to convert it to a dict and then to JSON. Here is a sample:
lxc.tty = 4
lxc.pts = 1024
lxc.rootfs = /opt/mail0/rootfs/
lxc.network.type = veth
lxc.network.name = eth7
lxc.network.link = br7
lxc.network.ipv4 = 192.168.144.215/24
lxc.network.type = veth
lxc.network.name = eth9
lxc.network.link = br9
lxc.network.ipv4 = 10.10.9.215/24
Here is the desired Python data; note that it also converts the flat config keys into key paths:
data = {
    "lxc": {
        "tty": 4, "pts": 1024, "rootfs": "/opt/mail0/rootfs",
        "network": [
            {"type": "veth", "name": "eth7", "link": "br7", "ipv4": "192.168.144.215/24"},
            {"type": "veth", "name": "eth9", "link": "br9", "ipv4": "10.10.9.215/24"}
        ]
    }
}
Can you share or suggest a method and rules for handling such groups of key/value pairs? Thank you in advance!
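One possible approach, as a minimal sketch. The grouping rule is an assumption: a field that repeats under the lxc.network.* prefix (e.g. a second "type") starts a new group; numeric coercion is deliberately crude.

import json

def parse_lxc(text):
    """Parse LXC key/value lines, grouping repeated lxc.network.* keys."""
    data = {"lxc": {}}
    networks = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        parts = key.strip().split(".")
        value = value.strip()
        if value.isdigit():
            value = int(value)  # crude type coercion for numeric values
        if parts[:2] == ["lxc", "network"]:
            field = parts[2]
            # A field already present in the current group starts a new one.
            if not networks or field in networks[-1]:
                networks.append({})
            networks[-1][field] = value
        else:
            data["lxc"][parts[1]] = value
    if networks:
        data["lxc"]["network"] = networks
    return data

cfg = """\
lxc.tty = 4
lxc.pts = 1024
lxc.rootfs = /opt/mail0/rootfs/
lxc.network.type = veth
lxc.network.name = eth7
lxc.network.link = br7
lxc.network.ipv4 = 192.168.144.215/24
lxc.network.type = veth
lxc.network.name = eth9
lxc.network.link = br9
lxc.network.ipv4 = 10.10.9.215/24
"""
print(json.dumps(parse_lxc(cfg), indent=2))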


Telegraf splits input data into different outputs

I want to write a Telegraf config file which will:
Use the OpenWeatherMap input or a custom HTTP request result:
{
  "fields": {
    ...
    "humidity": 97,
    "temperature": -11.34,
    ...
  },
  "name": "weather",
  "tags": {...},
  "timestamp": 1675786146
}
Split the result into two similar JSONs:
{
  "sensorID": "owm",
  "timestamp": 1675786146,
  "value": 97,
  "type": "humidity"
}
and
{
  "sensorID": "owm",
  "timestamp": 1675786146,
  "value": -11.34,
  "type": "temperature"
}
Send these JSONs to an MQTT queue.
Is this possible, or must I create two different configs and make two API calls?
I found the following configuration, which solves my problem:
[[outputs.mqtt]]
  servers = ["${MQTT_URL}", ]
  topic_prefix = "owm/data"
  data_format = "json"
  json_transformation = '{"sensorID":"owm","type":"temperature","value":fields.main_temp,"timestamp":timestamp}'

[[outputs.mqtt]]
  servers = ["${MQTT_URL}", ]
  topic_prefix = "owm/data"
  data_format = "json"
  json_transformation = '{"sensorID":"owm","type":"humidity","value":fields.main_humidity,"timestamp":timestamp}'

[[inputs.http]]
  urls = [
    "https://api.openweathermap.org/data/2.5/weather?lat=${LAT}&lon=${LON}&appid=${API_KEY}&units=metric"
  ]
  data_format = "json"
Here we:
Retrieve data from OWM in the input plugin.
Transform the received data structure into the needed structure in two different output plugins, using the JSONata language (https://jsonata.org/) for this.
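For illustration, a minimal Python sketch of the same split that the two json_transformation expressions perform. This mimics Telegraf rather than using it, and the field names come from the sample metric above (the real OWM response uses flattened names like main_temp / main_humidity, as in the config):

import json

metric = {
    "fields": {"humidity": 97, "temperature": -11.34},
    "name": "weather",
    "tags": {},
    "timestamp": 1675786146,
}

# Each output plugin picks one field and reshapes the metric into its own JSON.
for field in ("humidity", "temperature"):
    payload = {
        "sensorID": "owm",
        "type": field,
        "value": metric["fields"][field],
        "timestamp": metric["timestamp"],
    }
    print(json.dumps(payload))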

Folium Choropleth issues with key_on: does not overlay the choropleth map

I have a strange issue with the following piece of code:
m10 = folium.Map(location=[41.9027835, 12.4963655], tiles='openstreetmap', zoom_start=5)
df.reset_index(inplace=True)
folium.Choropleth(
    geo_data=df.to_json(),
    data=df,
    columns=['TERRITORIO', var],
    key_on='feature.properties.TERRITORIO',
    fill_color='YlGnBu',
    fill_opacity=0.6,
    line_opacity=1,
    nan_fill_color='black',
    legend_name=get_title_(file_name),
    smooth_factor=0).add_to(m10)
folium.features.GeoJson(df,
    name='Labels',
    style_function=lambda x: {'color': 'transparent', 'fillColor': 'transparent', 'weight': 0},
    tooltip=folium.features.GeoJsonTooltip(fields=[var],
        aliases=[indicator],
        labels=True,
        sticky=False)
).add_to(m10)
I use the same piece of code on two different geodataframes. With the first (smaller) dataframe I have no issues.
However, when I try to do the same with the other one, I do not see the choropleth map layer.
This is the first dataset (after the reset of the index):
TERRITORIO ... geometry
0 Nord ... MULTIPOLYGON (((9.85086 44.02340, 9.85063 44.0...
1 Centro ... MULTIPOLYGON (((10.31417 42.35043, 10.31424 42...
2 Mezzogiorno ... MULTIPOLYGON (((8.41112 38.86296, 8.41127 38.8...
This is the second dataset (after the reset of the index):
TERRITORIO ... geometry
0 Abruzzo ... MULTIPOLYGON (((930273.425 4714737.743, 930147...
1 Basilicata ... MULTIPOLYGON (((1073851.435 4445828.604, 10738...
2 Calabria ... MULTIPOLYGON (((1083350.847 4416684.239, 10833...
3 Campania ... MULTIPOLYGON (((1037266.901 4449456.848, 10372...
4 Emilia-Romagna ... MULTIPOLYGON (((618335.211 4893983.160, 618329...
These, instead, are the JSON files:
first:
{"type": "FeatureCollection", "features": [{"id": "0", "type": "Feature", "properties": {"INDICATORE": "Densit\\u00e0 di verde storico", "NOTE": null, "Shape_Area": 57926800546.7, "Shape_Leng": 2670893.51269, "TERRITORIO": "Nord", "UNITA_MISURA": "per 100 m2", "V_2004": null, "V_2005": null, "V_2006": null, "V_2007": null, "V_2008": null, "V_2009": null, "V_2010": null, "V_2011": 2.4, "V_2012": 2.4, "V_2013": 2.4, "V_2014": 2.4, "V_2015": 2.4, "V_2016": 2.4, "V_2017": 2.4, "V_2018": 2.4, "V_2019": null, "index": 0}, "geometry": {"type": "MultiPolygon", "coordinates": ...
second:
{"type": "FeatureCollection", "features": [{"id": "0", "type": "Feature", "properties": {"INDICATORE": "Densit\\u00e0 e rilevanza del patrimonio museale", "NOTE": null, "Shape_Area": 10831496151.0, "Shape_Leng": 664538.009079, "TERRITORIO": "Abruzzo", "UNITA_MISURA": "per 100 km2", "V_2004": null, "V_2005": null, "V_2006": null, "V_2007": null, "V_2008": null, "V_2009": null, "V_2010": null, "V_2011": null, "V_2012": null, "V_2013": null, "V_2014": null, "V_2015": 0.22, "V_2016": null, "V_2017": 0.13, "V_2018": 0.11, "V_2019": null}, "geometry": {"type": "MultiPolygon", "coordinates":...
I really do not understand why one works and the other does not.
Do you have any suggestion?
Thank you in advance for your time!
I can't find any geometry for Nord, Centro and Mezzogiorno Italy, so I have synthesized it by dissolving the regions' geometry.
I have set up the functions and variables used by your code to make this an MWE.
You can switch between the two geometries via the if True: toggle (# regions==False, north/central/south==True); both generate appropriate folium maps.
It's clear from your question that your two sets of geometry are using different CRS. The first data set looks like EPSG:4326 (hence it works). The second looks like a UTM CRS (points in meters, not degrees) that would need to be projected to EPSG:4326.
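Before the full MWE below, a quick standalone way to confirm this (a generic geopandas check with a hypothetical file path, not part of the original answer):

import geopandas as gpd

gdf = gpd.read_file("your_data.geojson")  # hypothetical path to the failing dataset
print(gdf.crs)                # a projected CRS (meters) would show here, not EPSG:4326
gdf = gdf.to_crs(epsg=4326)   # folium expects latitude/longitude degrees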
import folium
import geopandas as gpd
import numpy as np

# make SO code runnable, get some geometry and set columns / variables used by code
df = gpd.read_file(
    "https://github.com/openpolis/geojson-italy/raw/master/geojson/limits_IT_regions.geojson"
).sort_values("reg_name")
df["var"] = np.random.randint(1, 10, len(df))
df["TERRITORIO"] = df["reg_istat_code_num"]
df["NCM"] = np.where(
    df["reg_istat_code_num"] < 9,
    "Nord",
    np.where(df["reg_istat_code_num"] < 15, "Centro", "Mezzogiorno"),
)
var = "var"
file_name = "regions"
indicator = "some number"

def get_title_(file_name):
    return file_name

# regions==False, north/central/south==True
if True:
    df = df.dissolve("NCM")
    file_name = "ncm"

# unchanged code
m10 = folium.Map(location=[41.9027835, 12.4963655], tiles="openstreetmap", zoom_start=5)
df.reset_index(inplace=True)
folium.Choropleth(
    geo_data=df.to_json(),
    data=df,
    columns=["TERRITORIO", var],
    key_on="feature.properties.TERRITORIO",
    fill_color="YlGnBu",
    fill_opacity=0.6,
    line_opacity=1,
    nan_fill_color="black",
    legend_name=get_title_(file_name),
    smooth_factor=0,
).add_to(m10)
folium.features.GeoJson(
    df,
    name="Labels",
    style_function=lambda x: {
        "color": "transparent",
        "fillColor": "transparent",
        "weight": 0,
    },
    tooltip=folium.features.GeoJsonTooltip(
        fields=[var], aliases=[indicator], labels=True, sticky=False
    ),
).add_to(m10)
m10
Modifying the CRS, I was able to overcome the above issue.
# Create the folium map
m10 = folium.Map(location=[41.9027835, 12.4963655], tiles='openstreetmap', zoom_start=5)
# Data
df.to_crs(crs=4326, inplace=True)
df.reset_index(inplace=True)
folium.Choropleth(
    geo_data=df.to_json(),
    data=df,
    columns=['TERRITORIO', var],
    key_on='feature.properties.TERRITORIO',
    fill_color='YlGnBu',
    fill_opacity=0.6,
    line_opacity=1,
    nan_fill_color='black',
    legend_name=get_title_(file_name),
    smooth_factor=0).add_to(m10)

OPA Rego: How to find all items not matching another dictionary?

Given input as follows:
{
  "source": "serverA",
  "destination": "serverB",
  "rules": {
    "tcp": {
      "ssh": [22],
      "https": [8443]
    },
    "udp": [53]
  }
}
and data source:
{
  "allowedProtocolTypeToPortMapping": {
    "tcp": {
      "ssh": [22],
      "https": [443]
    },
    "udp": {
      "dns": [53]
    },
    "icmp": {
      "type": [0]
    }
  }
}
I want to create a policy that checks all rules and shows the ones that are not compliant with the data source. In this example that would be port 8443 with protocol type https (only 443 is allowed). What is the best way to achieve this using the Rego language?
Here's one way of doing it.
package play

violations[msg] {
    # Traverse input object stopping only at array values
    walk(input.rules, [path, value])
    is_array(value)

    # Get corresponding list of allowed ports from matching path
    allowedPorts := select(data.allowedProtocolTypeToPortMapping, path)

    # At this point, we have one array of ports from the input (value)
    # and one array of allowed ports. Converting both to sets would allow
    # us to compare the two using simple set intersection - i.e. checking
    # that all ports in the input are also in the allowed ports.
    ip := {port | port := value[_]}
    ap := {port | port := allowedPorts[_]}

    # Unless all ports in the input array are in both the input and the
    # allowed ports, it's a violation
    count(ip) != count(ip & ap)

    msg := concat(".", path)
}

select(o, path) = value {
    walk(o, [path, value])
}
Rego Playground example available here.
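For comparison, a rough Python rendering of the same walk-and-intersect idea (an illustrative sketch, not part of the original answer):

def violations(rules, allowed, path=()):
    """Walk the rules; at each array leaf, intersect with the allowed
    ports found at the same path. Yields dotted paths that violate."""
    if isinstance(rules, list):
        allowed_ports = set(allowed) if isinstance(allowed, list) else set()
        if not set(rules) <= allowed_ports:
            yield ".".join(path)
    elif isinstance(rules, dict):
        for key, value in rules.items():
            sub = allowed.get(key, {}) if isinstance(allowed, dict) else {}
            yield from violations(value, sub, path + (key,))

rules = {"tcp": {"ssh": [22], "https": [8443]}, "udp": [53]}
allowed = {"tcp": {"ssh": [22], "https": [443]}, "udp": {"dns": [53]}}
print(list(violations(rules, allowed)))  # ['tcp.https', 'udp']

Note that this sketch flags "udp" as well, because the input nests its ports as a bare array while the data source nests them under a protocol name ("dns"): the two shapes have to agree for a path-for-path comparison to treat port 53 as allowed.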

How to maintain data types in the response after connecting with DataStax Astra DB?

I am using the DataStax Astra database. I am uploading a CSV file and setting the data type of each column to float, as appropriate per column. I connected to my DB from Python via http_methods.
res = astra_client.request(
    method=http_methods.GET,
    path=f"/api/rest/v2/keyspaces/{ASTRA_DB_KEYSPACE}/{ASTRA_DB_COLLECTION}/rows")
This gives me all the rows, but the data types are not maintained in the response: all the floats are converted to strings. Why, and how can I solve this?
{'PetalWidthCm': '0.2', 'Id': 23, 'PetalLengthCm': '1.0', 'SepalWidthCm': '3.6', 'Species': 'Iris-setosa', 'SepalLengthCm': '4.6'}
How are you creating your table and adding data? For example, the following works for me.
import http.client
import json

ASTRA_TOKEN = ""
ASTRA_DB_ID = ""
ASTRA_DB_REGION = ""
ASTRA_DB_KEYSPACE = "testks"
ASTRA_DB_COLLECTION = "float_test"

conn = http.client.HTTPSConnection(f"{ASTRA_DB_ID}-{ASTRA_DB_REGION}.apps.astra.datastax.com")
headers = {
    'X-Cassandra-Token': ASTRA_TOKEN,
    'Content-Type': 'application/json'
}

# Create the table
createTablePayload = json.dumps({
    "name": f"{ASTRA_DB_COLLECTION}",
    "columnDefinitions": [
        {"name": "id", "typeDefinition": "text"},
        {"name": "floatval", "typeDefinition": "float"},
        {"name": "intval", "typeDefinition": "int"},
        {"name": "doubleval", "typeDefinition": "double"}
    ],
    "primaryKey": {"partitionKey": ["id"]},
    "ifNotExists": True
})
conn.request("POST", f"/api/rest/v2/schemas/keyspaces/{ASTRA_DB_KEYSPACE}/tables", createTablePayload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))

# Add some data
addRowPayload = json.dumps({
    "id": "af2603d2-8c03-11eb-a03f-0ada685e0000",
    "floatval": 1.1,
    "intval": 2,
    "doubleval": 3.3
})
conn.request("POST", f"/api/rest/v2/keyspaces/{ASTRA_DB_KEYSPACE}/{ASTRA_DB_COLLECTION}", addRowPayload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))

# Read back the data
conn.request("GET", f"/api/rest/v2/keyspaces/{ASTRA_DB_KEYSPACE}/{ASTRA_DB_COLLECTION}/rows", "", headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
The response from this is:
$ python3 get_rows.py
{"name":"float_test"}
{"id":"af2603d2-8c03-11eb-a03f-0ada685e0000"}
{"count":1,"data":[{"id":"af2603d2-8c03-11eb-a03f-0ada685e0000","doubleval":3.3,"intval":2,"floatval":1.1}]}

Metric math alarms: How can I use a for_each expression to loop over metrics within a dynamic block?

I am trying to create dynamic metric math alarms that are configurable with JSON.
I am struggling with looping over the metrics with a for_each expression, as this is a loop within a loop.
Here is an example of what I am trying to do:
resource "aws_cloudwatch_metric_alarm" "Percentage_Alert" {
for_each = var.percentage_error_details
locals { alarm_details = each.value }
alarm_name = "${terraform.workspace}-${each.key}"
comparison_operator = local.alarm_details["Comparison_Operator"]
evaluation_periods = "1"
threshold = local.alarm_details["Threshold"]
metric_query {
id = "e1"
expression = local.alarm_details["Expression"]
label = local.alarm_details["Label"]
return_data = "true"
}
dynamic "metric_query" {
for metric in each.value["Metrics"]{
id = metric.key
metric_name = metric.value
period = local.alarm_details["Period"]
stat = local.alarm_details["Statistic"]
namespace = local.full_namespace
unit = "Count"
}
}
}
And this is the sample JSON:
{
  "locals": {
    "Name": {
      "Name": "metric_math",
      "Metrics": {
        "m1": "Sucess",
        "m2": "Failure"
      },
      "Expression": "100*(m2/(m1+m2))",
      "Threshold": 1,
      "Period": 25,
      "Priority": "critical",
      "Statistic": "Sum",
      "Label": "label",
      "Comparison_Operator": "GreaterThanOrEqualToThreshold"
    }
  }
}
And this is the error message I'm getting:
Error: Invalid block definition
On ../modules/cloudwatch/metriclogfilter/main.tf line 89: Either a quoted
string block label or an opening brace ("{") is expected here.
Any help would be much appreciated.
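A sketch of the likely fix (untested against this exact module, and keeping the question's names such as var.percentage_error_details and local.full_namespace): a dynamic block takes its own for_each argument plus a content block, with values accessed through the iterator (named after the block by default), and locals cannot be declared inside a resource, so each.value is referenced directly:

resource "aws_cloudwatch_metric_alarm" "Percentage_Alert" {
  for_each            = var.percentage_error_details
  alarm_name          = "${terraform.workspace}-${each.key}"
  comparison_operator = each.value["Comparison_Operator"]
  evaluation_periods  = "1"
  threshold           = each.value["Threshold"]

  metric_query {
    id          = "e1"
    expression  = each.value["Expression"]
    label       = each.value["Label"]
    return_data = "true"
  }

  # One inner metric_query per entry in the Metrics map ("m1", "m2", ...).
  dynamic "metric_query" {
    for_each = each.value["Metrics"]
    content {
      id          = metric_query.key
      metric_name = metric_query.value
      period      = each.value["Period"]
      stat        = each.value["Statistic"]
      namespace   = local.full_namespace
      unit        = "Count"
    }
  }
}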
