Drop irrelevant filebeat docker metadata before shipping to Logstash - docker

I am using filebeat to ship container logs to ELK. I have metadata that I don't need.
The Docker metadata includes container.id, container.name, container.labels and container.image. The only metadata I need is container.name, so I am trying to drop the rest.
This is my configuration file, but I can still see the metadata in Kibana.
filebeat.inputs:
  - type: filestream
    id: my-filestream-id
    enabled: true
    paths:
      - /var/log/**/*.log
  - type: container
    enabled: true
    paths:
      - "/var/lib/docker/containers/*/*.log"

filebeat.config.modules:
  path: /etc/filebeat/modules.d/*.yml
  reload.enabled: false

output.logstash:
  hosts: ["${logstashIP}:5044"]

processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"
  - drop_fields:
      fields: ["container.id", "container.labels", "container.image"]
  - decode_json_fields:
      fields: ["message"]
      target: "json"
      overwrite_keys: true
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - drop_fields:
      fields: ["host", "input", "tags", "log", "agent", "ecs"]

Related

Drone Pipeline : Drone Cache mount path for Maven Repository not able to resolve

I'm new to Drone pipelines and am interested in using them in my current project for CI/CD.
My project tech stack is as follows:
Java
Spring Boot
Maven
I have created a sample Drone pipeline, but I am not able to cache the Maven dependencies, which are downloaded and stored in the .m2 folder.
It always says the mount path is not available or not found. Please find the screenshot for the same:
Drone mount path issue
I am not sure of the path to provide here. Can someone help me understand the mount path we need to provide to cache all the dependencies in the .m2 path?
Adding the pipeline information below:
kind: pipeline
type: docker
name: config-server

steps:
  - name: restore-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      restore: true
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./target
        - /root/.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache

  - name: build
    image: maven:3.8.3-openjdk-17
    pull: if-not-exists
    environment:
      M2_HOME: /usr/share/maven
      MAVEN_CONFIG: /root/.m2
    commands:
      - mvn clean install -DskipTests=true -B -V
    volumes:
      - name: cache
        path: /tmp/cache

  - name: rebuild-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      rebuild: true
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./target
        - /root/.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache

trigger:
  branch:
    - main
  event:
    - push

volumes:
  - name: cache
    host:
      path: /var/lib/cache
Thanks in advance..
Resolved the issue. Please find the solution and the working Drone pipeline below.
kind: pipeline
type: docker
name: data-importer

steps:
  - name: restore-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      restore: true
      ttl: 1
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache

  - name: maven-build
    image: maven:3.8.6-amazoncorretto-11
    pull: if-not-exists
    commands:
      - mvn clean install -DskipTests=true -Dmaven.repo.local=.m2/repository -B -V
    volumes:
      - name: cache
        path: /tmp/cache

  - name: rebuild-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      rebuild: true
      cache_key: "volume"
      archive_format: "gzip"
      ttl: 1
      mount:
        - ./.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache

trigger:
  branch:
    - main
    - feature/*
  event:
    - push

volumes:
  - name: cache
    host:
      path: /var/lib/cache
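The change that makes the cache effective is keeping the Maven repository inside the build workspace, so the directory drone-cache archives and the directory Maven writes to are the same workspace-relative path. Extracted from the working pipeline above, the two settings that have to line up are:

# drone-cache archives paths relative to the pipeline workspace
mount:
  - ./.m2/repository

# Maven is pointed at that same workspace-relative repository
commands:
  - mvn clean install -DskipTests=true -Dmaven.repo.local=.m2/repository -B -V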

ECS fargate deployment got stuck and failed to deploy ECS Service

I am new to ECS, and I am trying to deploy it with CloudFormation.
I put together the following CloudFormation template by looking at the documentation and some examples I found in blogs and articles.
However, for some reason, it got stuck in updating one of the resources and eventually timed out.
I am not sure why it gets stuck and fails.
Can someone spot the mistake I am making?
For now, my goal is to deploy and see the app on the internet. I am not really looking for the advanced configuration.
I also pass the ECR URL to this upon deployment with the AWS CLI.
Thank you in advance.
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Description: >
  ECS Service

Parameters:
  Environment:
    Type: String
    Default: alpha
    AllowedValues:
      - alpha
      - beta
      - production
  ECRDockerUri:
    Type: String
    Default: <url for ecr repo>
  ContainerPort:
    Type: Number
    Default: 8080

Resources:
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub "${Environment}-fake-user-api-logGroup"
      RetentionInDays: 30

  ECSCluster:
    Type: 'AWS::ECS::Cluster'
    Properties:
      ClusterName: !Sub "${Environment}-MyFargateCluster"

  ExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub "${Environment}-${AWS::AccountId}-ExecutionRole"
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'

  ECSService:
    Type: AWS::ECS::Service
    Properties:
      ServiceName: !Sub "${Environment}-${AWS::AccountId}-ECSService"
      Cluster: !Ref ECSCluster
      TaskDefinition: !Ref TaskDefinition
      DesiredCount: 1

  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      TaskRoleArn: !Ref ExecutionRole
      ContainerDefinitions:
        - Name: !Sub "${Environment}-${AWS::AccountId}-Container"
          Image: !Ref ECRDockerUri
          Memory: 1024
          Essential: true
          DisableNetworking: false
          Privileged: true
          ReadonlyRootFilesystem: true
          Environment:
            - Name: SPRING_PROFILES_ACTIVE
              Value: !Ref Environment
          PortMappings:
            - ContainerPort: !Ref ContainerPort
              HostPort: !Ref ContainerPort
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Ref LogGroup
              awslogs-region: ca-central-1
I went through your CFN stack and found some things missing. I noticed that your cluster name is ENV-MyFargateCluster, so I am assuming your goal is to create a Fargate service. To run a Fargate service, you need to provide the networking configuration and indicate that you want a Fargate service by specifying the launch type. Also, Fargate tasks cannot be privileged:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-containerdefinitions.html#cfn-ecs-taskdefinition-containerdefinition-privileged
Below is my snippet of the code:
AWSTemplateFormatVersion: 2010-09-09
Transform: 'AWS::Serverless-2016-10-31'
Description: |
  ECS Service

Parameters:
  Environment:
    Type: String
    Default: alpha
    AllowedValues:
      - alpha
      - beta
      - production
  ECRDockerUri:
    Type: String
    Default: 'image'
  ContainerPort:
    Type: Number
    Default: 80

Resources:
  LogGroup:
    Type: 'AWS::Logs::LogGroup'
    Properties:
      LogGroupName: !Sub '${Environment}-fake-user-api-logGroup'
      RetentionInDays: 30

  ECSCluster:
    Type: 'AWS::ECS::Cluster'
    Properties:
      ClusterName: !Sub '${Environment}-MyFargateCluster'

  ExecutionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Sub '${Environment}-${AWS::AccountId}-ExecutionRole'
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'

  ECSService:
    Type: 'AWS::ECS::Service'
    Properties:
      ServiceName: !Sub '${Environment}-${AWS::AccountId}-ECSService'
      LaunchType: FARGATE
      Cluster: !Ref ECSCluster
      TaskDefinition: !Ref TaskDefinition
      DesiredCount: 1
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED
          SecurityGroups:
            - sg-XXXXXXXXXX
          Subnets:
            - subnet-XXXXXXXXXX

  TaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    Properties:
      RequiresCompatibilities:
        - FARGATE
      TaskRoleArn: !Ref ExecutionRole
      ExecutionRoleArn: !Ref ExecutionRole
      ContainerDefinitions:
        - Name: !Sub '${Environment}-${AWS::AccountId}-Container'
          Image: !Ref ECRDockerUri
          Memory: 1024
          Essential: true
          DisableNetworking: false
          Privileged: false
          ReadonlyRootFilesystem: true
          Environment:
            - Name: SPRING_PROFILES_ACTIVE
              Value: !Ref Environment
          PortMappings:
            - ContainerPort: !Ref ContainerPort
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Ref LogGroup
              awslogs-region: ca-central-1
              awslogs-stream-prefix: test
      Cpu: '1024'
      Memory: '2048'
      NetworkMode: awsvpc

solution for filebeat paths

I have a problem.
This is my filebeat config, and I have 6 containers in /var/lib/docker/containers:
filebeat.inputs:
  - type: container
    enabled: true
    paths:
      - '/var/lib/docker/containers/*/*.log'
    json.message_key: log
    json.keys_under_root: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
And this is a part of my docker-compose.yml:
filebeat:
  user: "root"
  image: "docker.elastic.co/beats/filebeat:7.10.2"
  command:
    - "-e"
    - "--strict.perms=false"
  volumes:
    - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    - /var/lib/docker:/var/lib/docker:ro
    - /var/run/docker.sock:/var/run/docker.sock
Now I want to get the logs from just one of these containers, but its container ID will change, so how can I solve this problem?
(I can't combine "*" with a specific container ID.)
I found a solution and changed my filebeat.yml. This is my new filebeat.yml file:
filebeat:
  autodiscover.providers:
    - type: docker
      templates:
        - condition.contains:
            docker.container.image: logger
          config:
            - module: nginx
              access:
                input:
                  type: container
                  paths:
                    - "/var/lib/docker/containers/${data.docker.container.id}/*.log"
                  json.message_key: log
                  json.keys_under_root: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
logger is my container name.
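Note that the condition above matches on docker.container.image rather than the container name. If matching on the container name is the intent, a sketch of that condition would look like the following (this is an assumption on my part that the docker autodiscover provider's docker.container.name field is the one to match on here):

templates:
  - condition.contains:
      docker.container.name: logger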

Unable to start envoy in docker-compose

I have this envoy.yaml
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ['*']
                      routes:
                        - match: { prefix: '/' }
                          route:
                            cluster: echo_service
                            timeout: 0s
                            max_stream_duration:
                              grpc_timeout_header_max: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: '*'
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: '1728000'
                        expose_headers: custom-header-1,grpc-status,grpc-message
                http_filters:
                  - name: envoy.filters.http.grpc_web
                  - name: envoy.filters.http.cors
                  - name: envoy.filters.http.router
  clusters:
    - name: echo_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      load_assignment:
        cluster_name: cluster_0
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: node-server
                      port_value: 9090
This file is copied from this official example.
But when I try to go ahead with the docs
$ docker-compose pull node-server envoy commonjs-client
$ docker-compose up node-server envoy commonjs-client
I get this:
ERROR: No such service: node-server
If I run docker-compose pull envoy, I get ERROR: No such service: envoy.
What did I miss?
It seems like I made a wrong assumption in my comments. This repository contains Dockerfiles with a build context to create said images when running the docker compose command.
In your example, the command:
docker-compose pull node-server envoy commonjs-client
should check if the images are available locally. If not, they should be able to build them.
What is confusing to me is that you pointed to a docker-compose.yaml file stashed away deep in the examples folder. If you ran the commands from there, I can see why you'd get the error: the relative path to the envoy Dockerfile is ./net/grpc/gateway/docker/envoy/Dockerfile, which is not accessible from the echo example's location.
It should, however, be accessible from the project root (i.e. the directory of this file: https://github.com/grpc/grpc-web/blob/master/docker-compose.yml). Have you tried running it from there?
FYI: what should happen after a pull is that Compose notifies you that the image cannot be found in your local repository and proceeds to create it based on the Dockerfile it finds at the path relative to the root (./net/grpc/gateway/docker/envoy/Dockerfile).
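For illustration, a minimal sketch of what such a service entry looks like when the compose file sits at the repository root; the service name and published ports here are assumptions, only the Dockerfile path comes from the repository layout mentioned above:

# docker-compose.yml at the repository root (sketch; ports are assumptions)
services:
  envoy:
    build:
      context: .
      dockerfile: ./net/grpc/gateway/docker/envoy/Dockerfile
    ports:
      - "8080:8080"
      - "9901:9901"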

Spring Cloud Data Flow Grafana (Prometheus) not showing stream data

Installed Spring Cloud Data Flow on Kubernetes (running on Docker Desktop).
Configured Grafana and Prometheus as per the install guide https://dataflow.spring.io/docs/installation/kubernetes/kubectl/
Created and deployed a simple stream with time (source) and log (sink) from the starter apps.
On selecting the stream's dashboard icon in the UI, it navigates to the Grafana dashboard, but I DON'T see the stream and related metrics.
Am I missing any configuration here?
I don't see any activity in the Prometheus proxy log since it started.
scdf-server ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
  name: scdf-server
  namespace: default
  selfLink: /api/v1/namespaces/default/configmaps/scdf-server
  uid: ce23d5a3-1cb9-4580-ba1a-bf51b09850dc
  resourceVersion: '53607'
  creationTimestamp: '2020-04-29T01:28:36Z'
  labels:
    app: scdf-server
data:
  application.yaml: |-
    spring:
      cloud:
        dataflow:
          applicationProperties:
            stream:
              management:
                metrics:
                  export:
                    prometheus:
                      enabled: true
                      rsocket:
                        enabled: true
                        host: prometheus-proxy
                        port: 7001
            task:
              management:
                metrics:
                  export:
                    prometheus:
                      enabled: true
                      rsocket:
                        enabled: true
                        host: prometheus-proxy
                        port: 7001
          grafana-info:
            url: 'http://localhost:3000'
          task:
            platform:
              kubernetes:
                accounts:
                  default:
                    limits:
                      memory: 1024Mi
      datasource:
        url: jdbc:mysql://${MYSQL_SERVICE_HOST}:${MYSQL_SERVICE_PORT}/mysql
        username: root
        password: ${mysql-root-password}
        driverClassName: org.mariadb.jdbc.Driver
        testOnBorrow: true
        validationQuery: "SELECT 1"
[The following fixed the issue]
I updated the stream definition to set the property below in Applications.Properties, and it started working fine.
management.metrics.export.prometheus.rsocket.host=prometheus-proxy
The metrics collection flow diagram from https://github.com/spring-cloud/spring-cloud-dataflow-samples/tree/master/monitoring-samples helped to spot the issue quickly. Thanks
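For context, a hedged sketch of how the same settings can be supplied as application properties when deploying the stream, using the app.* wildcard so they apply to every app in it; the property keys are the ones already present in the ConfigMap above, and whether all of them need to be repeated at deploy time depends on the setup:

# deployment/application properties sketch (assumption: app.* applies them to all apps in the stream)
app.*.management.metrics.export.prometheus.enabled=true
app.*.management.metrics.export.prometheus.rsocket.enabled=true
app.*.management.metrics.export.prometheus.rsocket.host=prometheus-proxy
app.*.management.metrics.export.prometheus.rsocket.port=7001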
