Rename subkey in json output - fluentd

Is it possible to rename a subkey in JSON or add a new subkey?
For example, I have this log output:
{
"kubernetes": {
"pod_name": "kube-apiserver-tst",
"namespace_name": "kube-system",
"pod_id": "93a2b43a-46e6-4539-8674-06dede2619fa",
"labels": {
"component": "kube-apiserver",
"tier": "control-plane"
}
}
I know that with record_transformer I can add a new key:
<record>
pod_labels "something ..."
</record>
but it seems that it can only create a new key at the root of JSON:
{
"kubernetes": {
"pod_name": "kube-apiserver-tst",
"namespace_name": "kube-system",
"pod_id": "93a2b43a-46e6-4539-8674-06dede2619fa",
"labels": {
"component": "kube-apiserver",
"tier": "control-plane"
}
"pod_labels": "something ..."
}
But can I make it look like this?
{
"kubernetes": {
"pod_name": "kube-apiserver-tst",
"namespace_name": "kube-system",
"pod_id": "93a2b43a-46e6-4539-8674-06dede2619fa",
"labels": {
"component": "kube-apiserver",
"tier": "control-plane"
}
"pod_labels": {
"component": "kube-apiserver",
"tier": "control-plane"
}
}
or this:
{
"kubernetes": {
"pod_name": "kube-apiserver-tst",
"namespace_name": "kube-system",
"pod_id": "93a2b43a-46e6-4539-8674-06dede2619fa",
"pod_labels": {
"component": "kube-apiserver",
"tier": "control-plane"
}
}

The JSON is not valid. A closing curly brace is missing.
Here's the valid JSON:
{
"kubernetes": {
"pod_name": "kube-apiserver-tst",
"namespace_name": "kube-system",
"pod_id": "93a2b43a-46e6-4539-8674-06dede2619fa",
"labels": {
"component": "kube-apiserver",
"tier": "control-plane"
}
}
}
Minified JSON (echo '{JSON}' | jq -c .):
{"kubernetes":{"pod_name":"kube-apiserver-tst","namespace_name":"kube-system","pod_id":"93a2b43a-46e6-4539-8674-06dede2619fa","labels":{"component":"kube-apiserver","tier":"control-plane"}}}
The record_transformer filter plugin can be used with Ruby support enabled (via its enable_ruby option) to manipulate an existing key, and the unwanted keys can then be removed with its remove_keys option.
Here's the sample config:
<filter debug.test>
@type record_transformer
enable_ruby true
<record>
temp ${ l = record["kubernetes"]["labels"]; record["kubernetes"]["pod_labels"] = l; nil; }
</record>
remove_keys temp, $.kubernetes.labels
</filter>
Here's the complete test:
fluent.conf
<source>
@type forward
</source>
<filter debug.test>
@type record_transformer
enable_ruby true
<record>
temp ${ l = record["kubernetes"]["labels"]; record["kubernetes"]["pod_labels"] = l; nil; }
</record>
remove_keys temp, $.kubernetes.labels
</filter>
<match debug.test>
@type stdout
</match>
Start fluentd with this config:
fluentd -c fluent.conf
On another terminal, send an event with fluent-cat (echo '{JSON}' | fluent-cat debug.test):
echo '{"kubernetes":{"pod_name":"kube-apiserver-tst","namespace_name":"kube-system","pod_id":"93a2b43a-46e6-4539-8674-06dede2619fa","labels":{"component":"kube-apiserver","tier":"control-plane"}}}' | fluent-cat debug.test
In fluentd logs, you should see the desired output:
2022-02-16 23:08:25.919967225 +0500 debug.test: {"kubernetes":{"pod_name":"kube-apiserver-tst","namespace_name":"kube-system","pod_id":"93a2b43a-46e6-4539-8674-06dede2619fa","pod_labels":{"component":"kube-apiserver","tier":"control-plane"}}}
Formatted output with jq (echo '{JSON}' | jq .):
echo '{"kubernetes":{"pod_name":"kube-apiserver-tst","namespace_name":"kube-system","pod_id":"93a2b43a-46e6-4539-8674-06dede2619fa","pod_labels":{"component":"kube-apiserver","tier":"control-plane"}}}' | jq .
Output:
{
"kubernetes": {
"pod_name": "kube-apiserver-tst",
"namespace_name": "kube-system",
"pod_id": "93a2b43a-46e6-4539-8674-06dede2619fa",
"pod_labels": {
"component": "kube-apiserver",
"tier": "control-plane"
}
}
}

Related

How can JAVA_OPTIONS be added in deployconfig in OpenshiftContainer

I am trying to add the JAVA_OPTIONS below in a deployconfig in an OpenShift container, but it is throwing a syntax error. Could anyone please help me with how to add these parameters in the OpenShift container?
JAVA_OPTIONS:
-Djavax.net.ssl.trustStore={KEYSTORE_PATH}/cacerts.ts
-Djavax.net.ssl.trustStorePassword=changeit
-Djavax.net.ssl.keyStore=${KEYSTORE_PATH}/keystore.pkcs12
-Djavax.net.ssl.keyStorePassword=${KEYSTORE_PASS}
-Djava.awt.headless=true
deploymentConfig as json:
{
"apiVersion": "apps.openshift.io/v1",
"kind": "DeploymentConfig",
"metadata": {
"labels": {
"app": "${APP_NAME}"
},
"name": "${APP_NAME}"
},
"spec": {
"replicas": 1,
"selector": {
"app": "${APP_NAME}",
"deploymentconfig": "${APP_NAME}"
},
"strategy": null,
"template": {
"metadata": {
"labels": {
"app": "${APP_NAME}",
"deploymentconfig": "${APP_NAME}"
}
},
"spec": {
"containers": [
{
"env": [
{
"name": "SPRING_PROFILE",
"value": "migration"
},
{
"name": "JAVA_MAIN_CLASS",
"value": "com.agcs.Application"
},
{
"name": "JAVA_OPTIONS",
"value":"-Djavax.net.ssl.trustStore={KEYSTORE_PATH}/cacerts.ts",
"-Djavax.net.ssl.trustStorePassword=changeit",
-Djavax.net.ssl.keyStore=${KEYSTORE_PATH}/keystore.pkcs12
-Djavax.net.ssl.keyStorePassword=${KEYSTORE_PASS}
-Djava.awt.headless=true,
},
{
"name": "MONGO_AUTH_DB",
"valueFrom": {
"secretKeyRef": {
"key": "spring.data.mongodb.authentication-database",
"name": "mongodb-secret"
}
}
},
],
"image": "${IMAGE_NAME}",
"imagePullPolicy": "Always",
"name": "${APP_NAME}",
"ports": [
{
"containerPort": 8103,
"protocol": "TCP"
}
],
"resources": {
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "500m",
"memory": "500Mi"
}
},
"volumeMounts":[
{
"name": "secret-volume",
"mountPath": "/mnt/secrets",
"readOnly": true
}
]
}
],
"volumes": [
{
"name": "secret-volume",
"secret": {
"secretName": "keystore-new"
}
}
]
}
}
}
}
{
"name": "JAVA_OPTIONS",
"value":"-Djavax.net.ssl.trustStore={KEYSTORE_PATH}/cacerts.ts",
"-Djavax.net.ssl.trustStorePassword=changeit",
-Djavax.net.ssl.keyStore=${KEYSTORE_PATH}/keystore.pkcs12
-Djavax.net.ssl.keyStorePassword=${KEYSTORE_PASS}
-Djava.awt.headless=true,
},
This is invalid JSON: the value key can only hold a single string, but you have provided multiple comma-separated strings.
JAVA_OPTIONS isn't a standard environment variable, so we don't know how it's processed, but maybe this will work:
{
"name": "JAVA_OPTIONS",
"value":"-Djavax.net.ssl.trustStore={KEYSTORE_PATH}/cacerts.ts -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.keyStore=${KEYSTORE_PATH}/keystore.pkcs12 -Djavax.net.ssl.keyStorePassword=${KEYSTORE_PASS} -Djava.awt.headless=true"
},
But there's still probably an issue, because it seems like {KEYSTORE_PATH} is supposed to be a variable. That's not defined or expanded in this file. For a first attempt, probably just hardcode the values of all these variables.
For secrets (such as passwords) you can hardcode some value for initial testing, but please use OpenShift Secrets for formal testing and the actual deployment.
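As a rough sketch of what that could look like (the names keystore-secret and keystore-password below are made up for illustration), a password can be injected as an environment variable from a Secret, mirroring the existing MONGO_AUTH_DB entry in your container spec:
{
    "name": "KEYSTORE_PASSWORD",
    "valueFrom": {
        "secretKeyRef": {
            "key": "keystore-password",
            "name": "keystore-secret"
        }
    }
}
The environment variable and secret names here are placeholders; adjust them to whatever Secret you actually create.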

Get the average of data coming from thousands of sensors

I've been trying to build a Dataflow pipeline that takes in data from Pub/Sub and publishes it to Bigtable or BigQuery. I can write the raw data for one sensor, but I can't do that for thousands of sensors once I try to calculate the mean of a 60-second window of data.
To illustrate the scenario:
My data payload
data = {
"timestamp": "2021-01-27 13:56:01.634717+08:00",
"location": "location1",
"name" : "name1",
"datapoint1" : "Some Integer",
"datapoint2" : "Some Integer",
"datapoint3" : "Some String",
.....
"datapointN" : "Some Integer",
}
In my example there will be thousands of sensors, each with the full name "{location}_{name}". For each sensor, I would like to window the data into 60-second windows and calculate the average of that data.
The final form I am expecting
I will take this final form, which exists as one element, and insert it into Bigtable and BigQuery:
finalform = {
"timestamp": "2021-01-27 13:56:01.634717+08:00",
"location": "location1",
"name" : "name1",
"datapoint1" : "Integer That Has Been Averaged",
"datapoint2" : "Integer That Has Been Averaged",
"datapoint3" : "String that has been left alone",
.....
"datapointN" : "Integer That Has Been Averaged",
}
My solution so far, which needs help:
p = beam.Pipeline()
rawdata = p | "Read" >> beam.io.ReadFromPubSub(topic=topic)
jsonData = rawdata | "Parse Json" >> beam.Map(json.loads)
windoweddata = jsonData|beam.WindowInto(window.FixedWindows(60))
groupedData = windoweddata | beam.GroupBy(location=lambda s: s["location"], name=lambda s: s["name"])
Now, after the last line, I am stuck. I want to be able to apply CombineValues in order to use Mean.
However, after applying GroupBy I get a (namedkey, value) tuple. When I then run a ParDo to split the JSON up into (key, value) tuples to prepare it for CombineValues, all the data is mixed up again, and sensor data from various locations ends up mixed together in the PCollection.
My challenges
So, in its clearest form, I have two main challenges:
How do I apply CombineValues to my pipeline?
How do I apply Mean to the pipeline while ignoring the "string" type entries?
Any help will be greatly welcomed.
My partial solution so far with help from chamikara
import apache_beam as beam
import apache_beam.runners.interactive.interactive_beam as ib
from apache_beam import window
class AverageFn(beam.CombineFn):
    def create_accumulator(self):
        print(dir(self))
        return (1,2,3,4,5,6)

    def add_input(self, sum_count, input):
        print("add input",sum_count,input)
        return sum_count

    def merge_accumulators(self, accumulators):
        print(accumulators)
        data = zip(*accumulators)
        return data

    def extract_output(self, sum_count):
        print("extract_output",sum_count)
        data = sum_count
        return data
with beam.Pipeline() as pipeline:
    total = (
        pipeline
        | 'Create plant counts' >> beam.Create([
            {
                "timestamp": "2021-01-27 13:55:41.634717+08:00",
                "location":"L1",
                "name":"S1",
                "data1":1,
                "data2":"STRING",
                "data3":3,
            },
            {
                "timestamp": "2021-01-27 13:55:41.634717+08:00",
                "location":"L1",
                "name":"S2",
                "data1":1,
                "data2":"STRING",
                "data3":3,
            },
            {
                "timestamp": "2021-01-27 13:55:41.634717+08:00",
                "location":"L2",
                "name":"S3",
                "data1":1,
                "data2":"STRING",
                "data3":3,
            },
            {
                "timestamp": "2021-01-27 13:55:51.634717+08:00",
                "location":"L1",
                "name":"S1",
                "data1":1,
                "data2":"STRING",
                "data3":3,
            },
            {
                "timestamp": "2021-01-27 13:55:51.634717+08:00",
                "location":"L1",
                "name":"S2",
                "data1":1,
                "data2":"STRING",
                "data3":3,
            },
            {
                "timestamp": "2021-01-27 13:55:51.634717+08:00",
                "location":"L2",
                "name":"S3",
                "data1":1,
                "data2":"STRING",
                "data3":3,
            },
            {
                "timestamp": "2021-01-27 13:56:01.634717+08:00",
                "location":"L1",
                "name":"S1",
                "data1":1,
                "data2":"STRING",
                "data3":3,
            },
            {
                "timestamp": "2021-01-27 13:56:01.634717+08:00",
                "location":"L1",
                "name":"S2",
                "data1":1,
                "data2":"STRING",
                "data3":3,
            },
            {
                "timestamp": "2021-01-27 13:56:01.634717+08:00",
                "location":"L2",
                "name":"S3",
                "data1":1,
                "data2":"STRING",
                "data3":3,
            },
        ])
        | beam.GroupBy(location=lambda s: s["location"], name=lambda s: s["name"])
        | beam.CombinePerKey(AverageFn())
        | beam.Map(print))
Please see Combine section (particularly, CombinePerKey) here. You should first arrange your data into a PCollection of KVs with an appropriate key (for example a combination of location and name). This PCollection can be followed by a CombinePerKey with a CombineFn implementation that combines given data objects (by averaging respective fields).
This should be done within your CombineFn implementation, where you should combine the relevant fields and ignore the string fields.
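For illustration, here is a minimal, self-contained sketch of that arrangement, assuming a plain (location, name) tuple as the key; MeanNumericFieldsFn is a made-up placeholder CombineFn that averages numeric fields and passes other values through unchanged:
import apache_beam as beam

class MeanNumericFieldsFn(beam.CombineFn):
    # Accumulator: (per-field running sums or last seen non-numeric value, element count)
    def create_accumulator(self):
        return {}, 0

    def add_input(self, accumulator, row):
        sums, count = accumulator
        for key, value in row.items():
            if isinstance(value, (int, float)):
                sums[key] = sums.get(key, 0) + value
            else:
                sums[key] = value  # non-numeric fields are passed through unchanged
        return sums, count + 1

    def merge_accumulators(self, accumulators):
        merged, total = {}, 0
        for sums, count in accumulators:
            total += count
            for key, value in sums.items():
                if isinstance(value, (int, float)):
                    merged[key] = merged.get(key, 0) + value
                else:
                    merged[key] = value
        return merged, total

    def extract_output(self, accumulator):
        sums, count = accumulator
        return {key: value / count if isinstance(value, (int, float)) else value
                for key, value in sums.items()}

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | beam.Create([
            {"location": "L1", "name": "S1", "data1": 1, "data2": "STRING"},
            {"location": "L1", "name": "S1", "data1": 3, "data2": "STRING"},
        ])
        # Key each record by a (location, name) tuple, then combine per key
        | "KeyBySensor" >> beam.Map(lambda d: ((d["location"], d["name"]), d))
        | "AveragePerSensor" >> beam.CombinePerKey(MeanNumericFieldsFn())
        | beam.Map(print))
In the streaming case, a windowing step such as beam.WindowInto(window.FixedWindows(60)) would sit before the keying step, as in the original pipeline.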
The final answer is below. The breakthrough for me was to realise not to use GroupBy but instead to use beam.Map, because beam.Map is a 1-to-1 transformation. I transform each row of my data into a (key, data) tuple, where the key is whatever I specify to be the unique identifier for that row, built with beam.Row(); later I collect and act on it using CombinePerKey.
import apache_beam as beam
import apache_beam.runners.interactive.interactive_beam as ib
from apache_beam import window
DATA = [
{
"timestamp": "2021-01-27 13:55:41.634717+08:00",
"location":"L1",
"name":"S1",
"data1":1,
"data2":"STRING",
"data3":5,
"data4":5,
},
{
"timestamp": "2021-01-27 13:55:41.634717+08:00",
"location":"L1",
"name":"S2",
"data1":9,
"data2":"STRING",
"data3":2,
"data4":2,
},
{
"timestamp": "2021-01-27 13:55:41.634717+08:00",
"location":"L2",
"name":"S3",
"data1":10,
"data2":"STRING",
"data3":4,
"data4":1,
},
{
"timestamp": "2021-01-27 13:55:51.634717+08:00",
"location":"L1",
"name":"S1",
"data1":11,
"data2":"STRING",
"data3":2,
"data4":7,
},
{
"timestamp": "2021-01-27 13:55:51.634717+08:00",
"location":"L1",
"name":"S2",
"data1":1,
"data2":"STRING",
"data3":4,
"data4":8,
},
{
"timestamp": "2021-01-27 13:55:51.634717+08:00",
"location":"L2",
"name":"S3",
"data1":9,
"data2":"STRING",
"data3":7,
"data4":8,
},
{
"timestamp": "2021-01-27 13:56:01.634717+08:00",
"location":"L1",
"name":"S1",
"data1":2,
"data2":"STRING",
"data3":3,
"data4":5,
},
{
"timestamp": "2021-01-27 13:56:01.634717+08:00",
"location":"L1",
"name":"S2",
"data1":6,
"data2":"STRING",
"data3":7,
"data4":6,
},
{
"timestamp": "2021-01-27 13:56:01.634717+08:00",
"location":"L2",
"name":"S3",
"data1":8,
"data2":"STRING",
"data3":1,
"data4":2,
},
]
class AverageFn2(beam.CombineFn):
    def create_accumulator(self):
        accumulator = {}, 0  # Set accumulator to be payload and count
        return accumulator

    def add_input(self, accumulator, input):
        rowdata, count = accumulator
        # Go through each item and try to add it if it is a float; if not, it is a string
        for key, value in input.items():
            if key in rowdata:
                try:
                    rowdata[key] += float(value)
                except:
                    rowdata[key] = None
            else:
                rowdata[key] = value
        return rowdata, count + 1

    def merge_accumulators(self, accumulators):
        rowdata, counts = zip(*accumulators)
        payload = {}
        # Combine all the accumulators
        for dictionary in rowdata:
            for key, value in dictionary.items():
                if key in payload:
                    try:
                        payload[key] += float(value)
                    except:
                        payload[key] = None
                else:
                    payload[key] = value
        return payload, sum(counts)

    def extract_output(self, accumulator):
        rowdata, count = accumulator
        for key, value in rowdata.items():
            try:
                float(value)
                rowdata[key] = rowdata[key] / count
            except:
                pass
        return rowdata

with beam.Pipeline() as pipeline:
    total = (
        pipeline
        | 'Create plant counts' >> beam.Create(DATA)
        | beam.Map(lambda item: (beam.Row(location=item["location"], name=item["name"]), item))
        | beam.CombinePerKey(AverageFn2())
        | beam.Map(print))
Hope this helps another Dataflow newbie like myself.

ARM template integration with Azure Key Vault

I am trying to retrieve Key Vault values within my ARM template.
I have enabled my Key Vault for ARM template retrieval.
My parameter file looks like this:
"postleadrequesturl": {
"reference": {
"keyVault": {
"id": "/subscriptions/e0f18fe9-181d-4a38-90bc-f2e0101f8f05/resourceGroups/RG-DEV-SHAREDSERVICES/providers/Microsoft.KeyVault/vaults/MMSG-APIManagement"
},
"secretName": "DEV-POSTLEADREQUEST-URL"
}
}
My deploy file looks like this
{
"properties": {
"authenticationSettings": {
"subscriptionKeyRequired": false
},
"subscriptionKeyParameterNames": {
"header": "Ocp-Apim-Subscription-Key",
"query": "subscription-key"
},
"apiRevision": "1",
"isCurrent": true,
"subscriptionRequired": true,
"displayName": "MMS.CRM.PostLeadRequest",
"serviceUrl": "[parameters('postleadrequesturl')]",
"path": "CRMAPI/PostLeadRequest",
"protocols": [
"https"
]
},
"name": "[concat(variables('ApimServiceName'), '/mms-crm-postleadrequest')]",
"type": "Microsoft.ApiManagement/service/apis",
"apiVersion": "2019-01-01",
"dependsOn": []
},
The error I receive is:
Error converting value "#{keyVault=; secretName=DEV-POSTLEADREQUEST-URL}" to type 'Microsoft.WindowsAzure.ResourceStack.Frontdoor.Data.Entities.Deployments.KeyVaultParameterReference
Any thoughts?
According to my test, if you want to integrate Azure Key Vault into your Resource Manager template deployment, please refer to the following steps.
Create an Azure Key Vault
New-AzResourceGroup -Name $resourceGroupName -Location $location
New-AzKeyVault `
-VaultName $keyVaultName `
-resourceGroupName $resourceGroupName `
-Location $location `
-EnabledForTemplateDeployment
$secretvalue = ConvertTo-SecureString 'hVFkk965BuUv' -AsPlainText -Force
$secret = Set-AzKeyVaultSecret -VaultName $keyVaultName -Name 'ExamplePassword' -SecretValue $secretvalue
$userPrincipalName = "<Email Address of the deployment operator>"
Set-AzKeyVaultAccessPolicy `
-VaultName $keyVaultName `
-UserPrincipalName $userPrincipalName `
-PermissionsToSecrets set,delete,get,list
Grant access to the key vault
The user who deploys the template must have the Microsoft.KeyVault/vaults/deploy/action permission for the scope of the resource group and key vault. The Owner and Contributor roles both grant this access.
a. Create a custom role definition JSON file
{
"Name": "Key Vault resource manager template deployment operator",
"IsCustom": true,
"Description": "Lets you deploy a resource manager template with the access to the secrets in the Key Vault.",
"Actions": [
"Microsoft.KeyVault/vaults/deploy/action"
],
"NotActions": [],
"DataActions": [],
"NotDataActions": [],
"AssignableScopes": [
"/subscriptions/00000000-0000-0000-0000-000000000000"
]
}
b. Create the new role using the JSON file:
New-AzRoleDefinition -InputFile "<PathToRoleFile>"
New-AzRoleAssignment `
-ResourceGroupName $resourceGroupName `
-RoleDefinitionName "Key Vault resource manager template deployment operator" `
-SignInName $userPrincipalName
Create ARM template
template.json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"service_testapi068_name": {
"defaultValue": "testapi068",
"type": "String"
},
"postleadrequesturl": {
"type": "String"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.ApiManagement/service",
"apiVersion": "2019-01-01",
"name": "[parameters('service_testapi068_name')]",
"location": "Southeast Asia",
"sku": {
"name": "Developer",
"capacity": 1
},
"properties": {
"publisherEmail": "v-wenxu#microsoft.com",
"publisherName": "test",
"notificationSenderEmail": "apimgmt-noreply#mail.windowsazure.com",
"hostnameConfigurations": [
{
"type": "Proxy",
"hostName": "[concat(parameters('service_testapi068_name'), '.azure-api.net')]",
"negotiateClientCertificate": false,
"defaultSslBinding": true
}
],
"customProperties": {
"Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Tls10": "False",
"Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Tls11": "False",
"Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Ssl30": "False",
"Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TripleDes168": "False",
"Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls10": "False",
"Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls11": "False",
"Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Ssl30": "False",
"Microsoft.WindowsAzure.ApiManagement.Gateway.Protocols.Server.Http2": "False"
},
"virtualNetworkType": "None"
}
},
{
"type": "Microsoft.ApiManagement/service/apis",
"apiVersion": "2019-01-01",
"name": "[concat(parameters('service_testapi068_name'), '/demo-conference-api')]",
"dependsOn": [
"[resourceId('Microsoft.ApiManagement/service', parameters('service_testapi068_name'))]"
],
"properties": {
"displayName": "Demo Conference API",
"apiRevision": "1",
"description": "A sample API with information related to a technical conference. The available resources include *Speakers*, *Sessions* and *Topics*. A single write operation is available to provide feedback on a session.",
"serviceUrl": "[parameters('postleadrequesturl')]",
"path": "conference",
"protocols": [
"http",
"https"
],
"isCurrent": true
}
}
],
"outputs":{
"postleadrequesturl" :{
"type":"String",
"value":"[parameters('postleadrequesturl')]"
}
}
}
parameters.json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"postleadrequesturl": {
"reference": {
"keyVault": {
"id": "/subscriptions/e5b0fcfa-e859-43f3-8d84-5e5fe29f4c68/resourceGroups/testkeyandstorage/providers/Microsoft.KeyVault/vaults/testkey08"
},
"secretName": "postleadrequesturl"
}
}
}
}
Deploy
$name = ""
$password = ""
$secpasswd = ConvertTo-SecureString $password -AsPlainText -Force
$mycreds = New-Object System.Management.Automation.PSCredential ($name, $secpasswd)
Connect-AzAccount -Credential $mycreds
New-AzResourceGroupDeployment -ResourceGroupName "testapi06" -TemplateFile "E:\template.json" -TemplateParameterFile "E:\parameters.json"
For more details, please refer to
https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-keyvault-parameter#grant-access-to-the-secrets
https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-tutorial-use-key-vault

Filter in logstash a log4j2-Jsonlayout from docker gelf

Hello (I hope my English doesn't fail). I want to filter the JSON message in Logstash so that the JSON (all of its tags) inside "message" can be used as fields in Kibana.
How do I set up my Logstash filter so that all the JSON inside "message" is included in Elasticsearch and shown in Kibana as fields?
I'm using Log4j2 in my app to output the message to the console with JsonLayout, then the Docker gelf log driver to ship it to Logstash, then to Elasticsearch to show it in Kibana (that's the requirement). It is set up this way because I need the ThreadContext and the Docker container information.
This is my complete log in Kibana:
{
"_index": "logstash-2019.04.30-000001",
"_type": "_doc",
"_id": "YRagb2oBpwGypU5SDzwG",
"_version": 1,
"_score": null,
"_source": {
"#version": "1",
"command": "/WildFlyUser.sh",
"#timestamp": "2019-04-30T19:02:01.550Z",
"type": "gelf",
"message": "\u001b[0m\u001b[0m19:02:01,549 INFO [stdout] (default task-1) {\"thread\":\"default task-1\",\"level\":\"DEBUG\",\"loggerName\":\"com.corporation.app.configuration.LoggerInterceptor\",\"message\":\"thread=INI\",\"endOfBatch\":false,\"loggerFqcn\":\"org.apache.logging.log4j.spi.AbstractLogger\",\"instant\":{\"epochSecond\":1556650921,\"nanoOfSecond\":548899000},\"contextMap\":{\"path\":\"/appAPI/v2/operation/a661e1c6-01df-4fb6-bf35-0b07fc429f5d\",\"threadId\":\"54419181-ce43-4d06-b9f1-564e5092183d\",\"userIp\":\"127.17.0.1\"},\"threadId\":204,\"threadPriority\":5}\r",
"created": "2019-04-30T18:54:09.6802872Z",
"tag": "14cb73fd827b",
"version": "1.1",
"source_host": "172.17.0.1",
"container_id": "14cb73fd827b5d0dc0c9a991131f55b43a302539364bfc2b7fa0cd4431855ebf",
"image_id": "sha256:6af0623e35cedc362aadd875d2232d113be73fda3b1cb6dcd09b12d41cdadc70",
"host": "linuxkit-00155d0cba2d",
"image_name": "corporation/appapi:2.1",
"container_name": "appapi",
"level": 6
},
"fields": {
"created": [
"2019-04-30T18:54:09.680Z"
],
"#timestamp": [
"2019-04-30T19:02:01.550Z"
]
},
"sort": [
1556650921550
]
}
This is the JSON inside "message"; I want to include all of its fields:
{
"thread": "default task-1",
"level": "DEBUG",
"loggerName": "com.corporation.app.configuration.LoggerInterceptor",
"message": "thread=INI",
"endOfBatch": false,
"loggerFqcn": "org.apache.logging.log4j.spi.AbstractLogger",
"instant": {
"epochSecond": 1556650921,
"nanoOfSecond": 548899000
},
"contextMap": {
"path": "/appAPI/v2/operation/a661e1c6-01df-4fb6-bf35-0b07fc429f5d",
"threadId": "54419181-ce43-4d06-b9f1-564e5092183d",
"userIp": "127.17.0.1"
},
"threadId": 204,
"threadPriority": 5
}
Thank you
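One possible approach (a minimal, untested sketch; the field names json_payload and log4j2 are made up): isolate the JSON part of the line with grok, then parse it with the json filter so that its keys become fields:
filter {
  grok {
    # Capture everything from the first "{" to the last "}" of the line
    match => { "message" => "(?<json_payload>\{.*\})" }
  }
  json {
    # Parse the captured JSON; its keys (thread, level, contextMap, ...) become fields
    source => "json_payload"
    target => "log4j2"
    remove_field => ["json_payload"]
  }
}
With target set, the parsed keys show up in Kibana under log4j2.*; omit target if you want them at the top level of the event.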

Persisting EventType with Serilog

I'm having trouble getting the EventType feature working in Serilog, as blogged about here.
I am using the following Nuget packages:
Serilog 2.8
Serilog.Settings.Configuration 3.0.1
Serilog.Sinks.File 4.0.0
Serilog.Sinks.MSSqlServer 5.1.2
First up, I created an EventTypeEnricher:
public class EventTypeEnricher : ILogEventEnricher
{
public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
{
var crypto = new SimpleCrypto.PBKDF2();
var hash = crypto.Compute(logEvent.MessageTemplate.Text);
var numericHash = BitConverter.ToUInt32(Encoding.UTF8.GetBytes(hash), 0);
var eventId = propertyFactory.CreateProperty("EventType", numericHash);
logEvent.AddPropertyIfAbsent(eventId);
}
}
This seems to work (more on that later): at the end of that method a property named EventType is added, and its value can be observed in the eventId variable while debugging.
I created an extension method which adds this enricher:
public static LoggerConfiguration WithEventType(this LoggerEnrichmentConfiguration enrichmentConfiguration)
{
if (enrichmentConfiguration == null) throw new ArgumentNullException(nameof(enrichmentConfiguration));
return enrichmentConfiguration.With<EventTypeEnricher>();
}
I then use that when I configure the Logger:
Log.Logger = new LoggerConfiguration()
.Enrich.WithEventType()
.ReadFrom.Configuration(configuration)
.CreateLogger();
I go to write the error like this:
logger.Write(LogEventLevel.Error,
contextFeature.Error,
MessageTemplates.LogEntryDetailMessageTemplate,
new LogEntryDetail
{
Exception = contextFeature.Error,
Message = "Bad Stuff",
Timestamp = DateTime.UtcNow,
MessageTemplate = MessageTemplates.LogEntryDetailMessageTemplate,
Severity = LogEventLevel.Error
});
My Serilog appsettings section is as follows:
"Serilog": {
"Using": [ "Serilog.Sinks.File", "Serilog.Sinks.MSSqlServer", "MyAssembly" ],
"Enrich": [ "EventTypeEnricher" ],
"MinimumLevel": {
"Default": "Information",
"Override": {
"Microsoft": "Warning",
"System": "Warning"
}
},
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "Logs//Errorlog.log",
"fileSizeLimitBytes": 1073741824,
"retainedFileCountLimit": 30,
"rollingInterval": "Day",
"rollOnFileSizeLimit": true
},
"restrictedToMinimumLevel": "Verbose"
},
{
"Name": "MSSqlServer",
"Args": {
"connectionString": "Data Source=(local);Initial Catalog=ADb;User Id=Serilog;Password=securepwd;",
"tableName": "ErrorLogs",
"autoCreateSqlTable": false,
"period": 30,
"columnOptionsSection": {
"disableTriggers": true,
"clusteredColumnstoreIndex": false,
"primaryKeyColumnName": "Id",
"addStandardColumns": [ "LogEvent" ],
"removeStandardColumns": [ "Properties" ],
"additionalColumns": [
{
"ColumnName": "EventType",
"DataType": "int",
"AllowNull": true
}
],
"id": { "nonClusteredIndex": true },
"level": {
"columnName": "Level",
"storeAsEnum": false
},
"timeStamp": {
"columnName": "Timestamp",
"convertToUtc": true
},
"logEvent": {
"excludeAdditionalProperties": true,
"excludeStandardColumns": true
},
"message": { "columnName": "Message" },
"exception": { "columnName": "Exception" },
"messageTemplate": { "columnName": "MessageTemplate" }
}
},
"restrictedToMinimumLevel": "Verbose"
}
]
}
My database table looks like this:
CREATE TABLE [dbo].[ErrorLogs](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[EventType] [int] NULL,
[Message] [nvarchar](max) NULL,
[MessageTemplate] [nvarchar](max) NULL,
[Level] [nvarchar](128) NULL,
[TimeStamp] [datetime] NOT NULL,
[Exception] [nvarchar](max) NULL,
[Properties] [nvarchar](max) NULL,
[LogEvent] [nvarchar](max) NULL,
CONSTRAINT [PK_ErrorLogs] PRIMARY KEY NONCLUSTERED
The EventType column in the database is always null, despite the code in the custom enricher running.
It is not written to the file sink either.
Can anyone see what I am doing wrong or missing?
Cheers
Updating to Serilog.Sinks.MSSqlServer version 5.1.3 fixed the issue, as the current stable version 5.1.2 does not read the whole columnOptionsSection section:
Install-Package Serilog.Sinks.MSSqlServer -Version 5.1.3
And the updated configuration below will fix your issue, as you are missing the table mapping for the EventType field:
"Serilog": {
"Using": [ "Serilog.Sinks.File", "Serilog.Sinks.MSSqlServer", "MyAssembly" ],
"Enrich": [ "WithEventType" ],
"MinimumLevel": {
"Default": "Information",
"Override": {
"Microsoft": "Warning",
"System": "Warning"
}
},
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "Logs//Errorlog.log",
"fileSizeLimitBytes": 1073741824,
"retainedFileCountLimit": 30,
"rollingInterval": "Day",
"rollOnFileSizeLimit": true
},
"restrictedToMinimumLevel": "Verbose"
},
{
"Name": "MSSqlServer",
"Args": {
"connectionString": "Data Source=(local);Initial Catalog=ADb;User Id=Serilog;Password=securepwd;",
"tableName": "ErrorLogs",
"autoCreateSqlTable": false,
"columnOptionsSection": {
"disableTriggers": true,
"clusteredColumnstoreIndex": false,
"primaryKeyColumnName": "Id",
"addStandardColumns": [ "LogEvent" ],
"additionalColumns": [
{
"ColumnName": "EventType",
"DataType": "int",
"AllowNull": true
}
],
"id": {
"columnName": "Id",
"nonClusteredIndex": true
},
"eventType": {
"columnName": "EventType"
},
"message": {
"columnName": "Message"
},
"messageTemplate": {
"columnName": "MessageTemplate"
},
"level": {
"columnName": "Level",
"storeAsEnum": false
},
"timeStamp": {
"columnName": "TimeStamp",
"convertToUtc": true
},
"exception": {
"columnName": "Exception"
},
"properties": {
"columnName": "Properties"
},
"logEvent": {
"columnName": "LogEvent"
}
}
}
}
]
}
And the logger configuration is as below:
Log.Logger = new LoggerConfiguration()
.ReadFrom.Configuration(configuration)
.CreateLogger();
