I'd like to send my logs to Elasticsearch via Fluent Bit. I've configured values.yaml as follows:
parsers:
  enabled: true
  json:
    - name: docker
      timeKey: time
      timeFormat: "%Y-%m-%dT%H:%M:%S.%L"
      timeKeep: on
      decodeFieldAs: json
backend:
  type: es
  es:
    host: myhost
    port: 9243
    http_user: elastic
    http_passwd: elastic
    tls: "on"
Logs are arriving in Elasticsearch, but the log field is not decoded as JSON. Can you please help me fix this YAML so that the log field gets decoded as JSON?
Sample log/document generated by Fluent Bit:
{
  "_index": "kubernetes_cluster-2019.03.30",
  "_type": "flb_type",
  "_id": "xdTVzGkBmTc6-uH5QzgK",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2019-03-30T04:09:02.259Z",
    "log": "{\"time\":\"2019-03-30T04:09:02.258+00:00\",\"@version\":1,\"logger_name\":\"com.org.activemq.ActiveMQQueueUtility\",\"thread_name\":\"ORDER RESYNC TASK-0\",\"level\":\"INFO\",\"eventName\":\"syncOrder\",\"requestId\":\"LadyXhg0Hy8m7jJSQ2f\",\"eventMessage\":\"{\"sendEmail\":false,\"storeId\":61549}\",\"childRequestId\":\"LbBmKjCDuyaXQ-HwKL_\",\"action\":\"messagePublished\",\"isSent\":true,\"elapseTime\":101,\"queue\":\"HPT.SYNC.SYNC_O\",\"caller_class_name\":\"com.org.activemq.ActiveMQQueueUtility$ActiveMQProducer\",\"caller_method_name\":\"produce\",\"caller_file_name\":\"ActiveMQQueueUtility.java\",\"caller_line_number\":202}\n",
    "stream": "stdout",
    "time": "2019-03-30T04:09:02.259158471Z",
    "kubernetes": {
      "pod_name": "backend-c88bbb8f9-jtpfr",
      "namespace_name": "dev",
      "pod_id": "8700ba57-4d51-11e9-a90b-06fcff7cc9aa",
      "labels": {
        "app": "backend",
        "pod-template-hash": "744666495",
        "release": "dev"
      },
      "annotations": {
        "checksum/config": "ceb71980bda81a95c3175a83f3d5cbe622c7e712d2c399a36d8045c8c4bcd467",
        "checksum/secret": "eca5e141d20b020ec66cd82d784347e9550d01a139e494f9010ebd4e790538f1"
      },
      "host": "ip-xxx-xx-xx-xx.us-east-2.compute.internal",
      "container_name": "backend",
      "docker_id": "a2b8d61d0bd35e61f42a2524be8e1d04be96a2e7ce74b4620ce058cac2101357"
    }
  },
  "fields": {
    "@timestamp": [
      "2019-03-30T04:09:02.259Z"
    ],
    "time": [
      "2019-03-30T04:09:02.259Z"
    ]
  },
  "sort": [
    1553918942259
  ]
}
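For reference, in a raw Fluent Bit configuration the usual way to decode the log field is the Kubernetes filter's Merge_Log option. The Helm chart exposes this through its own values, so the snippet below is only a sketch of the underlying config, not the chart syntax:

[FILTER]
    Name          kubernetes
    Match         kube.*
    # parse the "log" field as JSON and merge its keys into the record
    Merge_Log     On
    # drop the original raw "log" string once it has been merged
    Keep_Log      Off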
I'm using Filebeat 7.10.1 installed on the host system (not in a Docker container), running as a service as root,
following https://www.elastic.co/guide/en/beats/filebeat/current/add-docker-metadata.html
and https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-container.html.
Filebeat config, filebeat.yml:
filebeat.inputs:
- type: container
  enabled: true
  paths:
    - '/var/lib/docker/containers/*/*.log'
  processors:
    - add_docker_metadata: ~

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

setup.kibana:

output.logstash:
  hosts: ["<logstash_host>:5044"]
Started container:
docker run --rm -d -l my-label --label com.example.foo=bar -p 80:80 nginx
Filebeat picks up the logs and successfully sends them to the endpoint (in my case to Logstash, which forwards them to Elasticsearch), but the JSON generated by Filebeat contains only container.id, without container.name, container.labels, or container.image.
It looks like this (copied from Kibana):
{
  "_index": "logstash-2021.02.10",
  "_type": "_doc",
  "_id": "s4a4i3cB8j0XLXFVuyMm",
  "_version": 1,
  "_score": null,
  "_source": {
    "@version": "1",
    "ecs": {
      "version": "1.6.0"
    },
    "@timestamp": "2021-02-10T11:33:54.000Z",
    "host": {
      "name": "<some_host>"
    },
    "input": {
      "type": "container"
    },
    "tags": [
      "beats_input_codec_plain_applied"
    ],
    "log": {
      .....
    },
    "stream": "stdout",
    "container": {
      "id": "15facae2115ea57c9c99c13df815427669e21053791c7ddd4cd0c8caf1fbdf8c-json.log"
    },
    "agent": {
      "version": "7.10.1",
      "ephemeral_id": "adebf164-0b0d-450f-9a50-11138e519a27",
      "id": "0925282e-319e-49e0-952e-dc06ba2e0c43",
      "name": "<some_host>",
      "type": "filebeat",
      "hostname": "<some_host>"
    }
  },
  "fields": {
    "log.timestamp": [
      "2021-02-10T11:33:54.000Z"
    ],
    "@timestamp": [
      "2021-02-10T11:33:54.000Z"
    ]
  },
  "highlight": {
    "log.logger_name": [
      "@kibana-highlighted-field@gw_nginx@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1612956834000
  ]
}
What am I doing wrong? How do I configure Filebeat to send container.name, container.labels, and container.image?
After looking at the Filebeat debug output and at the paths on the filesystem, the issue is closed.
Reason: the symlink /var/lib/docker -> /data/docker produces unexpected behavior; add_docker_metadata extracts the container ID from the log file path, so the path layout matters.
Solution:
filebeat.inputs:
- type: container
  enabled: true
  paths:
    - '/data/docker/containers/*/*.log'   # use the real path, not the symlink
  processors:
    - add_docker_metadata:
        match_source_index: 3   # index of the path element holding the container id
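For context (this is my reading of the add_docker_metadata docs, so verify against your Filebeat version): the processor splits the log file path on "/" and takes the container ID from the element at match_source_index. The default of 4 matches the standard /var/lib/docker layout; under the real /data path the ID sits one element earlier:

# /var/lib/docker/containers/<id>/<id>-json.log  ->  var(0) lib(1) docker(2) containers(3) <id>(4)
# /data/docker/containers/<id>/<id>-json.log     ->  data(0) docker(1) containers(2) <id>(3)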
Trying to start up RabbitMQ in K8s while attaching a ConfigMap gives me the following error:
/usr/local/bin/docker-entrypoint.sh: line 367: rabbitmq-plugins: command not found
/usr/local/bin/docker-entrypoint.sh: line 405: exec: rabbitmq-server: not found
Exactly the same setup works fine with docker-compose, so I am a bit lost. I am using rabbitmq:3.8.3.
Here is a snippet from my deployment:
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "rabbitmq"
}
},
"spec": {
"volumes": [
{
"name": "rabbitmq-configuration",
"configMap": {
"name": "rabbitmq-configuration",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "rabbitmq",
"image": "rabbitmq:3.8.3",
"ports": [
{
"containerPort": 5672,
"protocol": "TCP"
}
],
"env": [
{
"name": "RABBITMQ_DEFAULT_USER",
"value": "guest"
},
{
"name": "RABBITMQ_DEFAULT_PASS",
"value": "guest"
},
{
"name": "RABBITMQ_ENABLED_PLUGINS_FILE",
"value": "/opt/enabled_plugins"
}
],
"resources": {},
"volumeMounts": [
{
"name": "rabbitmq-configuration",
"mountPath": "/opt/"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
And here is the configuration:
{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": {
    "name": "rabbitmq-configuration",
    "namespace": "e360",
    "selfLink": "/api/v1/namespaces/default/configmaps/rabbitmq-configuration",
    "uid": "28071976-98f6-11ea-86b2-0244a03303e1",
    "resourceVersion": "1034540",
    "creationTimestamp": "2020-05-18T10:55:58Z"
  },
  "data": {
    "enabled_plugins": "[rabbitmq_management].\n"
  }
}
That's because you're mounting a volume at /opt, which shadows the RabbitMQ home path (/opt/rabbitmq in the official image).
So the entrypoint script cannot find any of the RabbitMQ binaries.
You can see the rabbitmq Dockerfile here
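A sketch of one way out (the mount path /rabbitmq-config is an arbitrary name of mine; anything that doesn't shadow /opt/rabbitmq works): mount the ConfigMap elsewhere and point RABBITMQ_ENABLED_PLUGINS_FILE at it:

"env": [
  {
    "name": "RABBITMQ_ENABLED_PLUGINS_FILE",
    "value": "/rabbitmq-config/enabled_plugins"
  }
],
"volumeMounts": [
  {
    "name": "rabbitmq-configuration",
    "mountPath": "/rabbitmq-config/"
  }
]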
Hello (I hope my English doesn't fail me). I want to filter the JSON message in Logstash so that all the fields of the JSON inside "message" become fields in Kibana.
How do I set up my Logstash filter so that all the JSON in "message" is indexed into Elasticsearch and shows up in Kibana as fields?
I'm using Log4j2 in my app to print the message to the console with JsonLayout, then the Docker GELF logging driver to ship it to Logstash, and from there to Elasticsearch to show it in Kibana (that's the requirement). It is done this way because I need the ThreadContext and the Docker container information.
This is my complete log in Kibana:
{
  "_index": "logstash-2019.04.30-000001",
  "_type": "_doc",
  "_id": "YRagb2oBpwGypU5SDzwG",
  "_version": 1,
  "_score": null,
  "_source": {
    "@version": "1",
    "command": "/WildFlyUser.sh",
    "@timestamp": "2019-04-30T19:02:01.550Z",
    "type": "gelf",
    "message": "\u001b[0m\u001b[0m19:02:01,549 INFO [stdout] (default task-1) {\"thread\":\"default task-1\",\"level\":\"DEBUG\",\"loggerName\":\"com.corporation.app.configuration.LoggerInterceptor\",\"message\":\"thread=INI\",\"endOfBatch\":false,\"loggerFqcn\":\"org.apache.logging.log4j.spi.AbstractLogger\",\"instant\":{\"epochSecond\":1556650921,\"nanoOfSecond\":548899000},\"contextMap\":{\"path\":\"/appAPI/v2/operation/a661e1c6-01df-4fb6-bf35-0b07fc429f5d\",\"threadId\":\"54419181-ce43-4d06-b9f1-564e5092183d\",\"userIp\":\"127.17.0.1\"},\"threadId\":204,\"threadPriority\":5}\r",
    "created": "2019-04-30T18:54:09.6802872Z",
    "tag": "14cb73fd827b",
    "version": "1.1",
    "source_host": "172.17.0.1",
    "container_id": "14cb73fd827b5d0dc0c9a991131f55b43a302539364bfc2b7fa0cd4431855ebf",
    "image_id": "sha256:6af0623e35cedc362aadd875d2232d113be73fda3b1cb6dcd09b12d41cdadc70",
    "host": "linuxkit-00155d0cba2d",
    "image_name": "corporation/appapi:2.1",
    "container_name": "appapi",
    "level": 6
  },
  "fields": {
    "created": [
      "2019-04-30T18:54:09.680Z"
    ],
    "@timestamp": [
      "2019-04-30T19:02:01.550Z"
    ]
  },
  "sort": [
    1556650921550
  ]
}
This is the JSON inside "message"; I want to include all of its fields:
{
  "thread": "default task-1",
  "level": "DEBUG",
  "loggerName": "com.corporation.app.configuration.LoggerInterceptor",
  "message": "thread=INI",
  "endOfBatch": false,
  "loggerFqcn": "org.apache.logging.log4j.spi.AbstractLogger",
  "instant": {
    "epochSecond": 1556650921,
    "nanoOfSecond": 548899000
  },
  "contextMap": {
    "path": "/appAPI/v2/operation/a661e1c6-01df-4fb6-bf35-0b07fc429f5d",
    "threadId": "54419181-ce43-4d06-b9f1-564e5092183d",
    "userIp": "127.17.0.1"
  },
  "threadId": 204,
  "threadPriority": 5
}
Thank you
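The usual approach here is to pull the JSON part out of message with grok and parse it with Logstash's json filter. A rough sketch (the grok pattern is a guess that simply grabs everything from the first { to the last }, and json_message is a scratch field name I made up):

filter {
  # capture the JSON portion of the WildFly console line into a scratch field
  grok {
    match => { "message" => "(?<json_message>\{.*\})" }
  }
  # parse the scratch field into top-level event fields, then drop it
  json {
    source => "json_message"
    remove_field => ["json_message"]
  }
}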
I am using Kubernetes v1.10.3. I have one external NFS server which I am able to mount anywhere (on any physical machine). I want to mount this NFS share directly into a pod/container. I tried, but every time I get an error. I don't want to use privileged mode; kindly help me fix this.
ERROR: MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/d65eb963-68be-11e8-8181-00163eeb9788/volumes/kubernetes.io~nfs/nfs --scope -- mount -t nfs 10.225.241.137:/stagingfs/alt/ /var/lib/kubelet/pods/d65eb963-68be-11e8-8181-00163eeb9788/volumes/kubernetes.io~nfs/nfs
Output: Running scope as unit run-43393.scope.
mount: wrong fs type, bad option, bad superblock on 10.225.241.137:/stagingfs/alt/, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try dmesg | tail or so.
The NFS share mounts fine when done manually: mount -t nfs 10.X.X.137:/stagingfs/alt /alt
I added the two volume-related sections below, but I get the error every time.
First:
"volumeMounts": [
{
"name": "nfs",
"mountPath": "/alt"
}
],
Second:
"volumes": [
{
"name": "nfs",
"nfs": {
"server": "10.X.X.137",
"path": "/stagingfs/alt/"
}
}
],
--------------------- complete manifest --------------------------------
{
  "kind": "Deployment",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "jboss",
    "namespace": "staging",
    "selfLink": "/apis/extensions/v1beta1/namespaces/staging/deployments/jboss",
    "uid": "6a85e235-68b4-11e8-8181-00163eeb9788",
    "resourceVersion": "609891",
    "generation": 2,
    "creationTimestamp": "2018-06-05T11:34:32Z",
    "labels": {
      "k8s-app": "jboss"
    },
    "annotations": {
      "deployment.kubernetes.io/revision": "2"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "k8s-app": "jboss"
      }
    },
    "template": {
      "metadata": {
        "name": "jboss",
        "creationTimestamp": null,
        "labels": {
          "k8s-app": "jboss"
        }
      },
      "spec": {
        "volumes": [
          {
            "name": "nfs",
            "nfs": {
              "server": "10.X.X.137",
              "path": "/stagingfs/alt/"
            }
          }
        ],
        "containers": [
          {
            "name": "jboss",
            "image": "my.abc.com/alt:7.1_1.1",
            "resources": {},
            "volumeMounts": [
              {
                "name": "nfs",
                "mountPath": "/alt"
              }
            ],
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "IfNotPresent",
            "securityContext": {
              "privileged": true
            }
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": {
        "maxUnavailable": "25%",
        "maxSurge": "25%"
      }
    },
    "revisionHistoryLimit": 10,
    "progressDeadlineSeconds": 600
  },
  "status": {
    "observedGeneration": 2,
    "replicas": 1,
    "updatedReplicas": 1,
    "readyReplicas": 1,
    "availableReplicas": 1,
    "conditions": [
      {
        "type": "Available",
        "status": "True",
        "lastUpdateTime": "2018-06-05T11:35:45Z",
        "lastTransitionTime": "2018-06-05T11:35:45Z",
        "reason": "MinimumReplicasAvailable",
        "message": "Deployment has minimum availability."
      },
      {
        "type": "Progressing",
        "status": "True",
        "lastUpdateTime": "2018-06-05T11:35:46Z",
        "lastTransitionTime": "2018-06-05T11:34:32Z",
        "reason": "NewReplicaSetAvailable",
        "message": "ReplicaSet \"jboss-8674444985\" has successfully progressed."
      }
    ]
  }
}
Regards
Anupam Narayan
As stated in the error log:
for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program
According to this question, you might be missing the nfs-common package, which you can install with sudo apt install nfs-common
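A quick sketch for Debian/Ubuntu nodes (on RHEL/CentOS the equivalent package is nfs-utils); it needs to go on every node the pod can be scheduled to, because the kubelet performs the mount on the node itself:

# on each Kubernetes worker node
sudo apt update
sudo apt install -y nfs-common
# the mount helper the kubelet invokes should now be present
ls -l /sbin/mount.nfs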
I have a service with the name mongodb. According to the documentation, the service host and port should be available to other pods in the same cluster through $MONGODB_SERVICE_HOST and $MONGODB_SERVICE_PORT.
However, neither of these are set in my frontend pods. What are the requirements for this to work?
frontend-controller.json
{
  "id": "frontend",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 1,
    "replicaSelector": {"name": "spatula", "role": "frontend"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "frontend",
          "containers": [{
            "name": "frontend",
            "image": "gcr.io/crafty_apex_841/spatula_frontend",
            "cpu": 100,
            "ports": [{"name": "spatula-server", "containerPort": 80}]
          }]
        }
      },
      "labels": { "name": "spatula", "role": "frontend" }
    }
  },
  "labels": { "name": "spatula", "role": "frontend" }
}
frontend-service.json
{
  "apiVersion": "v1beta1",
  "kind": "Service",
  "id": "frontend",
  "port": 80,
  "containerPort": "spatula-server",
  "labels": { "name": "spatula", "role": "frontend" },
  "selector": { "name": "spatula", "role": "frontend" },
  "createExternalLoadBalancer": true
}
mongodb-service.json
{
  "apiVersion": "v1beta1",
  "kind": "Service",
  "id": "mongodb",
  "port": 27017,
  "containerPort": "mongodb-server",
  "labels": { "name": "spatula", "role": "mongodb" },
  "selector": { "name": "spatula", "role": "mongodb" }
}
mongodb-controller.json
{
  "id": "mongodb",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 1,
    "replicaSelector": {"name": "spatula", "role": "mongodb"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "mongodb",
          "containers": [{
            "name": "mongodb",
            "image": "dockerfile/mongodb",
            "cpu": 100,
            "ports": [{"name": "mongodb-server", "containerPort": 27017}]
          }]
        }
      },
      "labels": { "name": "spatula", "role": "mongodb" }
    }
  },
  "labels": { "name": "spatula", "role": "mongodb" }
}
The service:
$ gcloud preview container services list
NAME LABELS SELECTOR IP PORT
mongodb name=spatula,role=mongodb name=spatula,role=mongodb 10.111.240.154 27017
The pod:
$ gcloud preview container pods list
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS
9ffd980f-ab56-11e4-ad76-42010af069b6 10.108.0.11 mongodb dockerfile/mongodb k8s-spatula-node-1.c.crafty-apex-841.internal/104.154.44.77 name=spatula,role=mongodb Running
Because environment variables for pods are only created when the pod is started, the service has to exist before a given pod in order for that pod to see the service's environment variables. You should be able to see them from all new pods you create.
If you'd like to learn more, additional explanation of how services work can be found in the documentation.
Alternatively, all newly created clusters in Container Engine (version 0.9.2 and above) have a SkyDNS service running in the cluster that you can use to access services from pods, even those without the environment variables.
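In other words, recreating the frontend pods now that the mongodb service exists should make the variables appear, and with DNS you can avoid the ordering concern entirely. A sketch (the connection string is illustrative, assuming the cluster DNS resolves service names):

# inside a pod created after the service exists:
#   $MONGODB_SERVICE_HOST -> 10.111.240.154
#   $MONGODB_SERVICE_PORT -> 27017
# with DNS, independent of pod start order:
mongodb://mongodb:27017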