I have a problem.
This is my filebeat.yml, and I have six containers in /var/lib/docker/containers:
filebeat.inputs:
  - type: container
    enabled: true
    paths:
      - '/var/lib/docker/containers/*/*.log'
    json.message_key: log
    json.keys_under_root: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
and this is a part of my docker-compose.yml:
filebeat:
  user: "root"
  image: "docker.elastic.co/beats/filebeat:7.10.2"
  command:
    - "-e"
    - "--strict.perms=false"
  volumes:
    - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    - /var/lib/docker:/var/lib/docker:ro
    - /var/run/docker.sock:/var/run/docker.sock
Now I want to get the logs from just one of these containers, but its container ID changes every time it is recreated. How can I solve this problem?
(I can't just keep "*" in place of the container ID.)
I found a solution and changed my filebeat.yml. This is the new file:
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition.contains:
            docker.container.image: logger
          config:
            - module: nginx
              access:
                input:
                  type: container
                  paths:
                    - "/var/lib/docker/containers/${data.docker.container.id}/*.log"
                  json.message_key: log
                  json.keys_under_root: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
Here, logger is my container name (the condition matches it against the image name).
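The condition above matches on the image name rather than the container ID, so it keeps working when the container is recreated, as long as the image name stays fixed. A minimal compose sketch of the monitored service (the service name and tag here are hypothetical):

```yaml
services:
  logger:
    image: logger:latest  # condition.contains on docker.container.image
                          # matches this stable name, not the changing ID
```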
Related
Unable to route external IP/port
I'm following this guide and am unable to route the external IP to the subdomain; it resolves to nothing, while the internal port (Portainer installed in Docker) seems to work. Exposing it to the public is my next plan. Following is my YAML config.
docker-compose.yml
I exposed ports 80 and 443 from the router, and the DNS nameserver points to my DigitalOcean account, with traefik-dashboard.local.example.com -> public-IP set up via an A record.
version: '3'

services:
  traefik:
    image: traefik:latest
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - proxy
    ports:
      - 80:80
      - 443:443
    environment:
      - DO_AUTH_TOKEN=TOKEN
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /root/container/traefik/data/traefik.yml:/traefik.yml:ro
      - /root/container/traefik/data/acme.json:/acme.json
      - /root/container/traefik/data/config.yml:/config.yml:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`traefik-dashboard.local.example.com`)"
      - "traefik.http.middlewares.traefik-auth.basicauth.users=admin:password"
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`traefik-dashboard.local.example.com`)"
      - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=digitalocean"
      - "traefik.http.routers.traefik-secure.tls.domains[0].main=local.example.com"
      - "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.local.example.com"
      - "traefik.http.routers.traefik-secure.service=api@internal"
    expose:
      - 80

networks:
  proxy:
    external: true
data/traefik.yml
api:
  dashboard: true
  debug: true
  insecure: true

entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"

serversTransport:
  insecureSkipVerify: true

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  file:
    filename: /config.yml

log:
  level: debug

certificatesResolvers:
  digitalocean:
    acme:
      email: email@example.com
      storage: acme.json
      dnsChallenge:
        provider: digitalocean
        delayBeforeCheck: 0
        #resolvers:
        #  - "1.1.1.1:53"
        #  - "1.0.0.1:53"

pilot:
  dashboard: false

metrics:
  prometheus:
    entryPoint: traefik

accessLog:
  filePath: "/var/log/traefik/access.log"
  filters:
    statusCodes:
      - "400-600"
The dashboard appears, and the server shows as healthy.
data/config.yml
I've tried toggling this config on and off, and the problem persists.
http:
  routers:
    pihole:
      entryPoints:
        - "https"
      rule: "Host(`pihole.local.example.com`)"
      middlewares:
        - default-headers
        - addprefix-pihole
      tls: {}
      service: pihole

  services:
    pihole:
      loadBalancer:
        servers:
          - url: "http://192.168.0.20:80"
        passHostHeader: true

  middlewares:
    addprefix-pihole:
      addPrefix:
        prefix: "/admin"
    https-redirect:
      redirectScheme:
        scheme: https
    default-headers:
      headers:
        frameDeny: true
        sslRedirect: true
        browserXssFilter: true
        contentTypeNosniff: true
        forceSTSHeader: true
        stsIncludeSubdomains: true
        stsPreload: true
        stsSeconds: 15552000
        customFrameOptionsValue: SAMEORIGIN
        customRequestHeaders:
          X-Forwarded-Proto: https
I've already assigned an A record for Pi-hole in DNS, but it doesn't seem to do anything; pihole.local.example.com still comes up blank.
Let me know if there's anything wrong with my config.
Attached below is my log:
https://pastebin.com/raw/R1w9jR7U
I am running WSO2 IS version 5.8.0 in Docker Swarm. I wrote a compose file for this, mapping the files deployment.toml and wso2carbon.jks and a directory into the server.
After changing the keystore, I receive this error on admin login:
System error while Authenticating/Authorizing User : Error when handling event : PRE_AUTHENTICATION
If I remove the mapping, the SSL cert is not valid, but I can log in.
PS: I use Traefik to redirect to the container.
The stack deploy file:
#IS#
is-hml:
  image: wso2/wso2is:5.8.0
  ports:
    - 4763:4763
    - 4443:9443
  volumes:
    #- /docker/release-hml/wso2/full-identity-server-volume:/home/wso2carbon/wso2is-5.8.0
    - /docker/release-hml/wso2/identity-server:/home/wso2carbon/wso2-config-volume
  extra_hosts:
    - "wso2-hml.valecard.com.br:127.0.0.1"
  networks:
    traefik_traefik:
      aliases:
        - is-hml
  configs:
    #- source: deployment.toml
    #  target: /home/wso2carbon/wso2is-5.8.0/repository/conf/deployment.toml
    - source: wso2carbon.jks
      target: /home/wso2carbon/wso2is-5.8.0/repository/resources/security/wso2carbon.jks
    #- source: catalina-server.xml
    #  target: /home/wso2carbon/wso2is-5.8.0/repository/conf/tomcat/catalina-server.xml
    - source: carbon.xml
      target: /home/wso2carbon/wso2is-5.8.0/repository/conf/carbon.xml
  #environment:
  #  - "CATALINA_OPTS=-Xmx2g -Xms2g -XX:MaxPermSize=1024m"
  #  - "JVM_OPTS=-Xmx2g -Xms2g -XX:MaxPermSize=1024m"
  #  - "JAVA_OPTS=-Xmx2g -Xms2g"
  deploy:
    #endpoint_mode: dnsrr
    resources:
      limits:
        cpus: '2'
        memory: '4096M'
    replicas: 1
    labels:
      - "traefik.docker.network=traefik_traefik"
      - "traefik.backend=is-hml"
      - "traefik.port=4443"
      - "traefik.frontend.entryPoints=http,https"
      - "traefik.frontend.rule=Host:wso2-hml.valecard.com.br"

configs:
  deployment.toml:
    file: ./wso2-config/deployment.toml
  catalina-server.xml:
    file: ./wso2-config/catalina-server.xml
  wso2carbon.jks:
    file: ../../certs/wso2carbon-valecard.jks
  carbon.xml:
    file: ./wso2-config/carbon.xml

networks:
  traefik_traefik:
    external: true
The password comes from the deployment.toml.
Thanks.
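A PRE_AUTHENTICATION error right after swapping the keystore is often a mismatch between the mounted JKS and the keystore password or alias configured in deployment.toml. A quick hedged check (the password shown is a placeholder, not taken from the question):

```shell
# List the keystore entries using the store password from deployment.toml;
# if this command fails, the server will also fail to open the keystore.
# The path matches the compose file above; replace the placeholder password.
keytool -list -keystore ../../certs/wso2carbon-valecard.jks -storepass <keystore-password>
```

The alias listed here should also match the one referenced in deployment.toml.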
I work with a compose file which looks like this:
version: '3.7'
services:
  shinyproxy:
    build: /home/shinyproxy
    deploy:
      #replicas: 3
    user: root:root
    hostname: shinyproxy
    image: shinyproxy-example
    networks:
      - sp-example-net
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
      - type: bind
        source: /home/shinyproxy/application.yml
        target: /opt/shinyproxy/application.yml
....
networks:
  sp-example-net:
    driver: overlay
    attachable: true
The ShinyProxy application uses the following .yml file:
proxy:
  port: 5000
  template-path: /opt/shinyproxy/templates/2col
  authentication: keycloak
  admin-groups: admins
  users:
    - name: jack
      password: password
      groups: admins
    - name: jeff
      password: password
  container-backend: docker-swarm
  docker:
    internal-networking: true
    container-network: sp-example-net
  specs:
    - id: 01_hello
      display-name: Hello Application
      description: Application which demonstrates the basics of a Shiny app
      container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
      container-image: openanalytics/shinyproxy-demo
      container-network: "${proxy.docker.container-network}"
      access-groups: test
    - id: euler
      display-name: Euler's number
      container-cmd: ["R", "-e", "shiny::runApp('/root/euler')"]
      container-image: euler-docker
      container-network: "${proxy.docker.container-network}"
      access-groups: test
To deploy the stack I run the following command:
docker stack deploy -c docker-compose.yml test
This results in the following output: Creating network test_sp-example-net
So instead of sp-example-net, my network's name is test_sp-example-net.
Is there a way to prevent this prefixing of my network name?
Thank you!
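docker stack deploy always prefixes the networks it creates with the stack name. A common way around this (a sketch, assuming you create the network yourself beforehand with docker network create --driver overlay --attachable sp-example-net) is to declare the network as external in the compose file, so the stack reuses the fixed name instead of creating a prefixed one:

```yaml
networks:
  sp-example-net:
    external: true  # reuse the pre-created network; no stack-name prefix
```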
I want to run Filebeat as a sidecar container next to my main application container to collect application logs. I'm using docker-compose to start both services together, with Filebeat depending on the application container.
This is all working fine. I'm using a shared volume for the application logs.
However, I would also like to collect the Docker container logs (stdout, JSON logging driver) in Filebeat.
Filebeat provides a docker/container input module for this purpose. Here is my configuration: the first part gets the application logs, and the second part should get the Docker logs:
filebeat.inputs:
  - type: log
    paths:
      - /path/to/my/application/*.log.json
    exclude_lines: ['DEBUG']

  - type: docker
    containers.ids: '*'
    json.message_key: message
    json.keys_under_root: true
    json.add_error_key: true
    json.overwrite_keys: true
    tags: ["docker"]
What I don't like is the containers.ids: '*'. I would rather point Filebeat at the application container directly, ignoring all the others.
Since I don't know the container ID before I run docker-compose up to start both containers, I was wondering if there is an easy way to get the container ID of my application container into my Filebeat container (via docker-compose?) so I can filter on it.
I think you can work around the problem: first send all the logs from the container to a syslog server, using the service's logging driver:
logging:
  driver: "syslog"
  options:
    syslog-address: "tcp://localhost:9000"
then configure Filebeat to receive the logs from that syslog endpoint (the protocol here must match the tcp:// scheme used in the driver's address):
filebeat.inputs:
  - type: syslog
    protocol.tcp:
      host: "localhost:9000"
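To confirm the pipeline end to end, you can push a test message at the syslog listener by hand; this is a sketch using util-linux logger (use -T for TCP or -d for UDP, matching whichever protocol your Filebeat syslog input is configured with):

```shell
# Send one test message to the syslog listener on localhost:9000 over TCP;
# it should then show up in Filebeat's output.
logger -n localhost -P 9000 -T "hello from syslog test"
```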
This also doesn't really answer the question directly, but it should work as a solution as well.
The main idea is to use a label within the Filebeat autodiscover filter.
Taken from this post: https://discuss.elastic.co/t/filebeat-autodiscovery-filtering-by-container-labels/120201/5
filebeat.yml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.labels.somelabel: "somevalue"
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"

output.console:
  pretty: true
docker-compose.yml:
version: '3'
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:6.2.1
    command: "--strict.perms=false -v -e -d autodiscover,docker"
    user: root
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml
      - /var/lib/docker/containers:/var/lib/docker/containers
      - /var/run/docker.sock:/var/run/docker.sock
  test:
    image: alpine
    command: "sh -c 'while true; do echo test; sleep 1; done'"
    depends_on:
      - filebeat
    labels:
      somelabel: "somevalue"
Old Question
Is it possible to automate a Jenkins installation (Jenkins binaries, plugins, credentials) using a configuration-management tool like Ansible?
Edited
Since asking this question I have learned of and found many ways to achieve a Jenkins installation. I found docker-compose an interesting way to automate it. So my question is: is there a better way to automate a Jenkins installation than what I am doing, and is there any risk in the way I am handling this automation?
I have taken advantage of the Jenkins Docker image and done the automation with docker-compose.
Dockerfile
FROM jenkinsci/blueocean
RUN jenkins-plugin-cli --plugins kubernetes workflow-aggregator git configuration-as-code blueocean matrix-auth
docker-compose.yaml
version: '3.7'
services:
  dind:
    image: docker:dind
    privileged: true
    networks:
      jenkins:
        aliases:
          - docker
    expose:
      - "2376"
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      - type: volume
        source: jenkins-home
        target: /var/jenkins_home
      - type: volume
        source: jenkins-docker-certs
        target: /certs/client
  jcac:
    image: nginx:latest
    volumes:
      - type: bind
        source: ./jcac.yml
        target: /usr/share/nginx/html/jcac.yml
    networks:
      - jenkins
  jenkins:
    build: .
    ports:
      - "8080:8080"
      - "50000:50000"
    environment:
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_TLS_VERIFY=1
      - JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
      - CASC_JENKINS_CONFIG=http://jcac/jcac.yml
      - GITHUB_ACCESS_TOKEN=${GITHUB_ACCESS_TOKEN:-fake}
      - GITHUB_USERNAME=${GITHUB_USERNAME:-fake}
    volumes:
      - type: volume
        source: jenkins-home
        target: /var/jenkins_home
      - type: volume
        source: jenkins-docker-certs
        target: /certs/client
        read_only: true
    networks:
      - jenkins

volumes:
  jenkins-home:
  jenkins-docker-certs:

networks:
  jenkins:
jcac.yml
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              id: "github"
              password: ${GITHUB_PASSWORD:-fake}
              scope: GLOBAL
              username: ${GITHUB_USERNAME:-fake}
          - usernamePassword:
              id: "slave"
              password: ${SSH_PASSWORD:-fake}
              username: ${SSH_USERNAME:-fake}

jenkins:
  globalNodeProperties:
    - envVars:
        env:
          - key: "BRANCH"
            value: "hello"
  systemMessage: "Welcome to (one click) Jenkins Automation!"
  agentProtocols:
    - "JNLP4-connect"
    - "Ping"
  crumbIssuer:
    standard:
      excludeClientIPFromCrumb: true
  disableRememberMe: false
  markupFormatter: "plainText"
  mode: NORMAL
  myViewsTabBar: "standard"
  numExecutors: 4
  # nodes:
  #   - permanent:
  #       labelString: "slave01"
  #       launcher:
  #         ssh:
  #           credentialsId: "slave"
  #           host: "worker"
  #           port: 22
  #           sshHostKeyVerificationStrategy: "nonVerifyingKeyVerificationStrategy"
  #       name: "slave01"
  #       nodeDescription: "SSH Slave 01"
  #       numExecutors: 3
  #       remoteFS: "/home/jenkins/workspace"
  #       retentionStrategy: "always"
  securityRealm:
    local:
      allowsSignup: false
      enableCaptcha: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD:-admin123}"
        - id: "user"
          password: "${DEFAULTUSER_PASSWORD:-user123}"
  authorizationStrategy:
    globalMatrix:
      permissions:
        - "Agent/Build:user"
        - "Job/Build:user"
        - "Job/Cancel:user"
        - "Job/Read:user"
        - "Overall/Read:user"
        - "View/Read:user"
        - "Overall/Read:anonymous"
        - "Overall/Administer:admin"
        - "Overall/Administer:root"

unclassified:
  globalLibraries:
    libraries:
      - defaultVersion: "master"
        implicit: false
        name: "jenkins-shared-library"
        retriever:
          modernSCM:
            scm:
              git:
                remote: "https://github.com/samitkumarpatel/jenkins-shared-libs.git"
                traits:
                  - "gitBranchDiscovery"
The commands to start and stop Jenkins are:
# start Jenkins
docker-compose up -d
# stop Jenkins
docker-compose down
Sure it is :) For Ansible, you can always check Ansible Galaxy whenever you want to automate the installation of something. Here is the most popular role for installing Jenkins, and here is its GitHub repo.
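As a sketch of that Ansible route (the role name is an assumption: it is the widely used community Jenkins role, so check that it matches the one linked above; the playbook and inventory file names are hypothetical):

```shell
# Install the community Jenkins role from Ansible Galaxy...
ansible-galaxy install geerlingguy.jenkins
# ...then apply it from a playbook that lists the role.
ansible-playbook -i inventory.ini jenkins.yml
```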