Syslog from AKS cluster nodes - Using AppArmor & Seccomp - azure-aks

How can I retrieve /var/log/syslog logs from AKS cluster nodes?
Azure recommends using the Linux security features AppArmor and seccomp. Both produce log entries in /var/log/syslog on each cluster node that runs a workload with a profile attached.
I've run a test with an nginx container using both, and I can see the corresponding log entries on my node:
Apr 29 10:37:45 aks-agentpool-31529777-vmss000000 kernel: [ 6248.505152] audit: type=1326 audit(1651228665.090:23650): auid=4294967295 uid=1001 gid=2000 ses=4294967295 pid=2016 comm="docker-entrypoi" exe="/bin/dash" sig=0 arch=c000003e syscall=3 compat=0 ip=0x7f90c03fda67 code=0x7ffc0000
Apr 29 10:37:45 aks-agentpool-31529777-vmss000000 kernel: [ 6248.505154] audit: type=1326 audit(1651228665.090:23651): auid=4294967295 uid=1001 gid=2000 ses=4294967295 pid=2016 comm="docker-entrypoi" exe="/bin/dash" sig=0 arch=c000003e syscall=257 compat=0 ip=0x7f90c03fdb84 code=0x7ffc0000
Apr 29 10:42:46 aks-agentpool-31529777-vmss000000 kernel: [ 6550.189592] audit: type=1326 audit(1651228966.775:25032): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=9028 comm="runc:[2:INIT]" exe="/" sig=0 arch=c000003e syscall=257 compat=0 ip=0x561a3e78fa2a code=0x7ffc0000
Syslog entries are supposed to be retrievable by configuring the monitoring agent at the destination Log Analytics workspace and enabling the relevant syslog facilities:
Agents Configuration.
I've also enabled Container Insights on the cluster and Diagnostic Settings for every available log category. Still, I have no way to see those logs without connecting directly to the node, and the node VM appears as a not-connected source for the Log Analytics workspace: Workspace Data Sources
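For reference, a quick way to inspect those entries on a node without SSH is an ephemeral debug container against the node itself (the busybox image is just an example; any image with a shell works):
$ kubectl debug node/aks-agentpool-31529777-vmss000000 -it --image=busybox
# inside the debug pod the node's filesystem is mounted under /host
$ chroot /host
$ grep "type=1326" /var/log/syslog | tail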
Env:
Kubernetes v1.21.9
Node: Linux aks-agentpool-31529777-vmss000000 5.4.0-1074-azure #77~18.04.1-Ubuntu SMP
1 pod with one nginx container + two daemonsets and a configMap to deploy the profiles.
apiVersion: v1
kind: Namespace
metadata:
  name: testing
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: seccomp-profiles-map
  namespace: testing
data:
  mynginx: |-
    {
      "defaultAction": "SCMP_ACT_LOG"
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: apparmor-profiles
  namespace: testing
data:
  mynginx: |-
    # vim:syntax=apparmor
    #include <tunables/global>
    profile mynginx flags=(complain) {
      #include <abstractions/base>
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: seccomp-profiles-loader
  # Namespace must match that of the ConfigMap.
  namespace: testing
  labels:
    daemon: seccomp-profiles-loader
spec:
  selector:
    matchLabels:
      daemon: seccomp-profiles-loader
  template:
    metadata:
      name: seccomp-profiles-loader
      labels:
        daemon: seccomp-profiles-loader
    spec:
      automountServiceAccountToken: false
      initContainers:
      - name: seccomp-profile-loader
        image: busybox
        command: ["/bin/sh"]
        # args: ["/profiles/*", "/var/lib/kubelet/seccomp/profiles/"]
        args: ["-c", "cp /profiles/* /var/lib/kubelet/seccomp/profiles/"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: node-profiles-folder
          mountPath: /var/lib/kubelet/seccomp/profiles
          readOnly: false
        - name: profiles
          mountPath: /profiles
          readOnly: true
      containers:
      - name: seccomp-profiles-pause
        # https://github.com/kubernetes/kubernetes/tree/master/build/pause
        image: gcr.io/google_containers/pause
        securityContext:
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
      volumes:
      - name: node-profiles-folder
        hostPath:
          path: /var/lib/kubelet/seccomp/profiles
      - name: profiles
        configMap:
          name: seccomp-profiles-map
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: apparmor-loader
  # Namespace must match that of the ConfigMap.
  namespace: testing
  labels:
    daemon: apparmor-loader
spec:
  selector:
    matchLabels:
      daemon: apparmor-loader
  template:
    metadata:
      name: apparmor-loader
      labels:
        daemon: apparmor-loader
    spec:
      containers:
      - name: apparmor-loader
        image: google/apparmor-loader:latest
        args:
        # Tell the loader to poll the /profiles directory every 60 seconds.
        - -poll
        - 60s
        - /profiles
        securityContext:
          # The loader requires root permissions to actually load the profiles.
          privileged: true
        volumeMounts:
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: apparmor-includes
          mountPath: /etc/apparmor.d
          readOnly: true
        - name: profiles
          mountPath: /profiles
          readOnly: true
      volumes:
      # The /sys directory must be mounted to interact with the AppArmor module.
      - name: sys
        hostPath:
          path: /sys
      # The /etc/apparmor.d directory is required for most apparmor include templates.
      - name: apparmor-includes
        hostPath:
          path: /etc/apparmor.d
      # Map in the profile data.
      - name: profiles
        configMap:
          name: apparmor-profiles
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: testing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        container.apparmor.security.beta.kubernetes.io/nginx: localhost/mynginx
    spec:
      automountServiceAccountToken: false
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: nginx
        securityContext:
          seccompProfile:
            type: Localhost
            localhostProfile: profiles/mynginx
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        env:
        - name: PORT
          value: '3000'
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 3000
  selector:
    app: nginx
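A couple of sanity checks that the loaders above actually put the profiles in place (sketches; the exec assumes the loader image ships cat):
# if the Localhost seccomp profile were missing on the node, the nginx pod would fail to start
$ kubectl get pods -n testing -l app=nginx
# the AppArmor loader mounts the node's /sys, so the loaded profile should be listed there
$ kubectl exec -n testing daemonset/apparmor-loader -- cat /sys/kernel/security/apparmor/profiles | grep mynginx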

Related

Chaincode Build Failed in Hyperledger Fabric on Kubernetes

Deploying Hyperledger Fabric v2.0 in Kubernetes
I am trying to deploy a sample chaincode in a private Kubernetes cluster running in Azure. After creating the nodes, the install-chaincode operation fails and throws the error below. I am only using a single Kubernetes cluster.
Error:
chaincode install failed with status: 500 - failed to invoke backing implementation of 'InstallChaincode': could not build chaincode: docker build failed: docker image inspection failed: cannot connect to Docker endpoint
command terminated with exit code 1
Below is the peer configuration template for Deployment, Service & ConfigMap
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: ${PEER}
name: ${PEER}
namespace: ${ORG}
spec:
replicas: 1
selector:
matchLabels:
app: ${PEER}
strategy: {}
template:
metadata:
labels:
app: ${PEER}
spec:
containers:
- name: couchdb
image: blockchainpractice.azurecr.io/hyperledger/fabric-couchdb
env:
- name: COUCHDB_USER
value: couchdb
- name: COUCHDB_PASSWORD
value: couchdb
ports:
- containerPort: 5984
- name: fabric-peer
image: blockchainpractice.azurecr.io/hyperledger/fabric-peer:2.0
resources: {}
envFrom:
- configMapRef:
name: ${PEER}
volumeMounts:
- name: dockersocket
mountPath: "/host/var/run/docker.sock"
- name: ${PEER}
mountPath: "/etc/hyperledger/fabric-peer"
- name: client-root-tlscas
mountPath: "/etc/hyperledger/fabric-peer/client-root-tlscas"
volumes:
- name: dockersocket
hostPath:
path: "/var/run/docker.sock"
- name: ${PEER}
secret:
secretName: ${PEER}
items:
- key: key.pem
path: msp/keystore/key.pem
- key: cert.pem
path: msp/signcerts/cert.pem
- key: tlsca-cert.pem
path: msp/tlsca/tlsca-cert.pem
- key: ca-cert.pem
path: msp/cacerts/ca-cert.pem
- key: config.yaml
path: msp/config.yaml
- key: tls.crt
path: tls/tls.crt
- key: tls.key
path: tls/tls.key
- key: orderer-tlsca-cert.pem
path: orderer-tlsca-cert.pem
- key: core.yaml
path: core.yaml
- name: client-root-tlscas
secret:
secretName: client-root-tlscas
---
apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: null
name: ${PEER}
namespace: ${ORG}
data:
CORE_PEER_ADDRESSAUTODETECT: "true"
CORE_PEER_ID: ${PEER}
CORE_PEER_LISTENADDRESS: 0.0.0.0:7051
CORE_PEER_PROFILE_ENABLED: "true"
CORE_PEER_LOCALMSPID: ${ORG_MSP}
CORE_PEER_MSPCONFIGPATH: /etc/hyperledger/fabric-peer/msp
# Gossip
CORE_PEER_GOSSIP_BOOTSTRAP: peer0.${ORG}:7051
CORE_PEER_GOSSIP_EXTERNALENDPOINT: "${PEER}.${ORG}:7051"
CORE_PEER_GOSSIP_ORGLEADER: "false"
CORE_PEER_GOSSIP_USELEADERELECTION: "true"
# TLS
CORE_PEER_TLS_ENABLED: "true"
CORE_PEER_TLS_CERT_FILE: "/etc/hyperledger/fabric-peer/tls/tls.crt"
CORE_PEER_TLS_KEY_FILE: "/etc/hyperledger/fabric-peer/tls/tls.key"
CORE_PEER_TLS_ROOTCERT_FILE: "/etc/hyperledger/fabric-peer/msp/tlsca/tlsca-cert.pem"
CORE_PEER_TLS_CLIENTAUTHREQUIRED: "false"
ORDERER_TLS_ROOTCERT_FILE: "/etc/hyperledger/fabric-peer/orderer-tlsca-cert.pem"
CORE_PEER_TLS_CLIENTROOTCAS_FILES: "/etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.${ORG}-cert.pem"
CORE_PEER_TLS_CLIENTCERT_FILE: "/etc/hyperledger/fabric-peer/tls/tls.crt"
CORE_PEER_TLS_CLIENTKEY_FILE: "/etc/hyperledger/fabric-peer/tls/tls.key"
# Docker
CORE_PEER_NETWORKID: ${ORG}-fabnet
CORE_VM_ENDPOINT: unix:///host/var/run/docker.sock
CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE: "bridge"
# CouchDB
CORE_LEDGER_STATE_STATEDATABASE: CouchDB
CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS: 0.0.0.0:5984
CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME: couchdb
CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD: couchdb
# Logging
CORE_LOGGING_PEER: "info"
CORE_LOGGING_CAUTHDSL: "info"
CORE_LOGGING_GOSSIP: "info"
CORE_LOGGING_LEDGER: "info"
CORE_LOGGING_MSP: "info"
CORE_LOGGING_POLICIES: "debug"
CORE_LOGGING_GRPC: "info"
GODEBUG: "netdns=go"
---
apiVersion: v1
kind: Service
metadata:
name: ${PEER}
namespace: ${ORG}
spec:
selector:
app: ${PEER}
ports:
- name: request
port: 7051
targetPort: 7051
- name: event
port: 7053
targetPort: 7053
type: LoadBalancer
Can anyone help me out? Thanks in advance.
I'd suggest looking at the K8S test network deployment in fabric-samples (https://github.com/hyperledger/fabric-samples/tree/main/test-network-k8s).
Note that the classic way the peer builds chaincode is to create a new Docker container via the Docker daemon. This really doesn't sit well with K8S, so the chaincode-as-a-service approach is strongly recommended.
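If you do stay on the Docker-daemon build for now, it is worth checking that the socket the peer points at (CORE_VM_ENDPOINT above) is actually present and reachable from the fabric-peer container; note that newer AKS node pools run containerd, so /var/run/docker.sock may simply not exist on the host. A rough check (${ORG} and ${PEER} are the template variables from the manifests above):
$ kubectl exec -n ${ORG} deploy/${PEER} -c fabric-peer -- ls -l /host/var/run/docker.sock
$ kubectl logs -n ${ORG} deploy/${PEER} -c fabric-peer | grep -i docker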

Unable to reach Jenkins Pod from outside of kubernetes cluster under Metallb

I have a Kubernetes cluster that is running a Jenkins pod with a service set up for MetalLB. Currently, when I try to hit the loadBalancerIP for the pod from outside my cluster, I am unable to reach it. I also have a kube-verify pod running on the cluster with a service that also uses MetalLB. When I try to hit that pod from outside my cluster, I can reach it with no problem.
When I switch the service for the Jenkins pod to type NodePort it works, but as soon as I switch it back to type LoadBalancer it stops working. Both the Jenkins pod and the working kube-verify pod are running on the same node.
Cluster Details:
The master node is running and is connected to my router wirelessly. On the master node I have dnsmasq set up along with iptables rules that forward the connection from the wireless port to the Ethernet port. The nodes are connected to each other through a switch via Ethernet. MetalLB is set up in layer 2 mode with an address pool on the same subnet as the IP address of the master node's wireless port. kube-proxy is set to use strictARP and ipvs mode.
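Since MetalLB in layer 2 mode with kube-proxy in IPVS mode requires strictARP, one quick sanity check (assuming a kubeadm-style cluster where the kube-proxy configuration lives in the kube-proxy ConfigMap) is:
$ kubectl get configmap kube-proxy -n kube-system -o yaml | grep -E "mode|strictARP"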
Jenkins Manifest:
apiVersion: v1
kind: ServiceAccount
metadata:
name: jenkins-sa
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
---
apiVersion: v1
kind: Secret
metadata:
name: jenkins-secret
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
type: Opaque
data:
jenkins-admin-password: ***************
jenkins-admin-user: ********
---
apiVersion: v1
kind: ConfigMap
metadata:
name: jenkins
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
data:
jenkins.yaml: |-
jenkins:
authorizationStrategy:
loggedInUsersCanDoAnything:
allowAnonymousRead: false
securityRealm:
local:
allowsSignup: false
enableCaptcha: false
users:
- id: "${jenkins-admin-username}"
name: "Jenkins Admin"
password: "${jenkins-admin-password}"
disableRememberMe: false
mode: NORMAL
numExecutors: 0
labelString: ""
projectNamingStrategy: "standard"
markupFormatter:
plainText
clouds:
- kubernetes:
containerCapStr: "10"
defaultsProviderTemplate: "jenkins-base"
connectTimeout: "5"
readTimeout: 15
jenkinsUrl: "jenkins-ui:8080"
jenkinsTunnel: "jenkins-discover:50000"
maxRequestsPerHostStr: "32"
name: "kubernetes"
serverUrl: "https://kubernetes"
podLabels:
- key: "jenkins/jenkins-agent"
value: "true"
templates:
- name: "default"
#id: eeb122dab57104444f5bf23ca29f3550fbc187b9d7a51036ea513e2a99fecf0f
containers:
- name: "jnlp"
alwaysPullImage: false
args: "^${computer.jnlpmac} ^${computer.name}"
command: ""
envVars:
- envVar:
key: "JENKINS_URL"
value: "jenkins-ui:8080"
image: "jenkins/inbound-agent:4.11-1"
ttyEnabled: false
workingDir: "/home/jenkins/agent"
idleMinutes: 0
instanceCap: 2147483647
label: "jenkins-agent"
nodeUsageMode: "NORMAL"
podRetention: Never
showRawYaml: true
serviceAccount: "jenkins-sa"
slaveConnectTimeoutStr: "100"
yamlMergeStrategy: override
crumbIssuer:
standard:
excludeClientIPFromCrumb: true
security:
apiToken:
creationOfLegacyTokenEnabled: false
tokenGenerationOnCreationEnabled: false
usageStatisticsEnabled: true
unclassified:
location:
adminAddress:
url: jenkins-ui:8080
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: jenkins-pv-volume
labels:
type: local
spec:
storageClassName: local-storage
claimRef:
name: jenkins-pv-claim
namespace: devops-tools
capacity:
storage: 16Gi
accessModes:
- ReadWriteMany
local:
path: /mnt
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- heine-cluster1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-pv-claim
namespace: devops-tools
labels:
app: jenkins
version: v1
tier: backend
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 8Gi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: jenkins-cr
rules:
- apiGroups: [""]
resources: ["*"]
verbs: ["*"]
---
# This role is used to allow Jenkins scheduling of agents via Kubernetes plugin.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: jenkins-role-schedule-agents
namespace: devops-tools
labels:
app: jenkins
version: v1
tier: backend
rules:
- apiGroups: [""]
resources: ["pods", "pods/exec", "pods/log", "persistentvolumeclaims", "events"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods", "pods/exec", "persistentvolumeclaims"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
---
# The sidecar container which is responsible for reloading configuration changes
# needs permissions to watch ConfigMaps
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: jenkins-casc-reload
namespace: devops-tools
labels:
app: jenkins
version: v1
tier: backend
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: jenkins-crb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: jenkins-cr
subjects:
- kind: ServiceAccount
name: jenkins-sa
namespace: "devops-tools"
---
# We bind the role to the Jenkins service account. The role binding is created in the namespace
# where the agents are supposed to run.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: jenkins-schedule-agents
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins-role-schedule-agents
subjects:
- kind: ServiceAccount
name: jenkins-sa
namespace: "devops-tools"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: jenkins-watch-configmaps
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins-casc-reload
subjects:
- kind: ServiceAccount
name: jenkins-sa
namespace: "devops-tools"
---
apiVersion: v1
kind: Service
metadata:
name: jenkins
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
annotations:
metallb.universe.tf/address-pool: default
spec:
type: LoadBalancer
loadBalancerIP: 172.16.1.5
ports:
- name: ui
port: 8080
targetPort: 8080
externalTrafficPolicy: Local
selector:
app: jenkins
---
apiVersion: v1
kind: Service
metadata:
name: jenkins-agent
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
spec:
ports:
- name: agents
port: 50000
targetPort: 50000
selector:
app: jenkins
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
version: v1
tier: backend
annotations:
checksum/config: c0daf24e0ec4e4cb59c8a66305181a17249770b37283ca8948e189a58e29a4a5
spec:
securityContext:
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
containers:
- name: jenkins
image: "heineza/jenkins-master:2.323-jdk11-1"
imagePullPolicy: Always
args: [ "--httpPort=8080"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: JAVA_OPTS
value: -Djenkins.install.runSetupWizard=false -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/Chicago
- name: JENKINS_SLAVE_AGENT_PORT
value: "50000"
ports:
- containerPort: 8080
name: ui
- containerPort: 50000
name: agents
resources:
limits:
cpu: 2000m
memory: 4096Mi
requests:
cpu: 50m
memory: 256Mi
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkins-home
readOnly: false
- name: jenkins-config
mountPath: /var/jenkins_home/jenkins.yaml
- name: admin-secret
mountPath: /run/secrets/jenkins-admin-username
subPath: jenkins-admin-user
readOnly: true
- name: admin-secret
mountPath: /run/secrets/jenkins-admin-password
subPath: jenkins-admin-password
readOnly: true
serviceAccountName: "jenkins-sa"
volumes:
- name: jenkins-cache
emptyDir: {}
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-pv-claim
- name: jenkins-config
configMap:
name: jenkins
- name: admin-secret
secret:
secretName: jenkins-secret
This Jenkins manifest is a modified version of what the Jenkins helm-chart generates. I redacted my secret but in the actual manifest there are base64 encoded strings. Also, the docker image I created and use in the deployment uses the Jenkins 2.323-jdk11 image as a base image and I just installed some plugins for Configuration as Code, kubernetes, and Git. What could be preventing the Jenkins pod from being accessible outside of my cluster when using Metallb?
By default, MetalLB doesn't allow re-using/sharing the same LoadBalancerIP address across services.
According to MetalLB documentation:
MetalLB respects the spec.loadBalancerIP parameter, so if you want your service to be set up with a specific address, you can request it by setting that parameter.
If MetalLB does not own the requested address, or if the address is already in use by another service, assignment will fail and MetalLB will log a warning event visible in kubectl describe service <service name>.[1]
If you need to have several services on a single IP, you can enable selective IP sharing. To do so you have to add the metallb.universe.tf/allow-shared-ip annotation to the services (an example follows the conditions below).
The value of the annotation is a “sharing key.” Services can share an IP address under the following conditions:
They both have the same sharing key.
They request the use of different ports (e.g. tcp/80 for one and tcp/443 for the other).
They both use the Cluster external traffic policy, or they both point to the exact same set of pods (i.e. the pod selectors are identical). [2]
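For example, putting the same sharing key on both services would look roughly like this (the key value is arbitrary):
$ kubectl annotate service <first-service> -n <namespace> metallb.universe.tf/allow-shared-ip="shared-key"
$ kubectl annotate service <second-service> -n <namespace> metallb.universe.tf/allow-shared-ip="shared-key"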
UPDATE
I tested your setup successfully with one minor difference:
I needed to remove externalTrafficPolicy: Local from the Jenkins Service spec.
Try this solution; if it still doesn't work, then it's a problem with your cluster environment.
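If you prefer patching the live Service over re-applying the manifest, something like this should reset it to the default policy (Cluster):
$ kubectl patch service jenkins -n devops-tools -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'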

How to push logs to a fluentd daemonset using logrus

I have a simple golang application running inside Kubernetes which just logs whatever text we place in the request params. Code:
package main

import (
    "github.com/sirupsen/logrus"
    "net/http"
    "os"
)

var (
    logger = logrus.New()
)

func main() {
    //logFile, err := os.OpenFile("/var/log/app.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
    //if err != nil {
    //    panic(err)
    //}
    logger.Out = os.Stdout
    http.HandleFunc("/get", func(w http.ResponseWriter, r *http.Request) {
        content := r.URL.Query().Get("m")
        if content == "" {
            logger.Warn("m is empty")
            content = "nothing"
        }
        logger.Info("content is - ", content)
        w.Write([]byte(content))
    })
    logger.Info("starting server")
    http.ListenAndServe(":9999", nil)
}
I have a fluentd daemonset running on my local cluster which is connected to Elasticsearch (also running locally, but not in Kubernetes). Now I want my logs to go to Elasticsearch and ultimately appear in Kibana, but that's not happening and I am not able to debug it.
My fluentd yaml looks like -
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: kube-system
labels:
k8s-app: fluentd-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
selector:
matchLabels:
k8s-app: fluentd-logging
template:
metadata:
labels:
k8s-app: fluentd-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:elasticsearch
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "minikube-host"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENT_UID
value: "0"
# X-Pack Authentication
# =====================
- name: FLUENT_ELASTICSEARCH_USER
value: "elastic"
- name: FLUENT_ELASTICSEARCH_PASSWORD
value: "MyPw123"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
Here minikube-host is just a proxy service that fluentd uses to connect to the host machine (Elasticsearch runs directly on the host).
fluentd rbac looks like -
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
namespace: kube-system
labels:
k8s-app: fluentd-logging
version: v1
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: fluentd
namespace: kube-system
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: kube-system
fluentd is able to push logs to ES, because I can see logs from the fluentd pod in Kibana. However, I am able to see logs from the fluentd pod only (no app logs, no logs from pods running in kube-system).
Not sure if it is relevant: my application runs inside docker with dockerfile
FROM golang:1.15 as base_image
WORKDIR /go/src/demo
ADD . .
RUN go get -d ./...
RUN go build -o /build/main-linux main.go
WORKDIR /build
CMD ["./main-linux"]
and my application deployment file is -
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo-deployment
spec:
replicas: 1
selector:
matchLabels:
app: demo
template:
metadata:
labels:
app: demo
spec:
containers:
- name: demo
image: yashschandra/demo
imagePullPolicy: Always
ports:
- containerPort: 9999
volumes:
- name: varlog
hostPath:
path: /var/log
My question is:
What am I missing in my golang application that prevents fluentd from receiving its logs (i.e. why don't I see "starting server" in Kibana)?
I have tried logging to stdout as well as logging to a file, but neither works.
I feel it has something to do with piping logs correctly to fluentd, but almost none of the articles I have read on the topic do anything special.
References:
https://www.metricfire.com/blog/logging-for-kubernetes-fluentd-and-elasticsearch/
https://medium.com/kubernetes-tutorials/cluster-level-logging-in-kubernetes-with-fluentd-e59aa2b6093a
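One way to narrow this down is to check whether the app's stdout actually lands where this fluentd image tails from; with the hostPath mounts above that means the files under /var/log/containers (which ultimately point into /var/lib/docker/containers). A rough check, assuming the fluentd image has a shell:
$ kubectl logs deploy/demo-deployment
$ kubectl exec -n kube-system daemonset/fluentd -- ls /var/log/containers | grep demo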

Why can't the configmap be found in Kubernetes?

I have defined a K8S configuration which deploys a metricbeat container. Below is the configuration file. But I get an error when running kubectl describe pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m24s default-scheduler Successfully assigned default/metricbeat-6467cc777b-jrx9s to ip-192-168-44-226.ap-southeast-2.compute.internal
Warning FailedMount 3m21s kubelet Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[default-token-4dxhl config modules]: timed out waiting for the condition
Warning FailedMount 74s (x10 over 5m24s) kubelet MountVolume.SetUp failed for volume "config" : configmap "metricbeat-daemonset-config" not found
Warning FailedMount 67s kubelet Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[config modules default-token-4dxhl]: timed out waiting for the condition
Based on the error message, the configmap "metricbeat-daemonset-config" is not found, but it does exist in the configuration file below. Why does it report this error?
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-daemonset-config
namespace: kube-system
labels:
k8s-app: metricbeat
data:
metricbeat.yml: |-
metricbeat.config.modules:
# Mounted `metricbeat-daemonset-modules` configmap:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
processors:
- add_cloud_metadata:
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-daemonset-modules
labels:
k8s-app: metricbeat
data:
aws.yml: |-
- module: aws
access_key_id: 'xxxx'
secret_access_key: 'xxxx'
period: 600s
regions:
- ap-southeast-2
metricsets:
- elb
- natgateway
- rds
- transitgateway
- usage
- vpn
- cloudwatch
metrics:
- namespace: "*"
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: Deployment
metadata:
name: metricbeat
labels:
k8s-app: metricbeat
spec:
selector:
matchLabels:
k8s-app: metricbeat
template:
metadata:
labels:
k8s-app: metricbeat
spec:
terminationGracePeriodSeconds: 30
hostNetwork: true
containers:
- name: metricbeat
image: elastic/metricbeat:7.11.1
env:
- name: ELASTICSEARCH_HOST
value: es-entrypoint
- name: ELASTICSEARCH_PORT
value: '9200'
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/metricbeat.yml
readOnly: true
subPath: metricbeat.yml
- name: modules
mountPath: /usr/share/metricbeat/modules.d
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0640
name: metricbeat-daemonset-config
- name: modules
configMap:
defaultMode: 0640
name: metricbeat-daemonset-modules
There's a good chance that the other resources are ending up in namespace default because they do not specify a namespace, but the config does (kube-system). You should probably put all of this in its own metricbeat namespace.
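A quick way to confirm the namespace mismatch (the Deployment carries no namespace, so its pod lands in default, while the manifest puts the config ConfigMap in kube-system):
$ kubectl get pods -o wide | grep metricbeat
$ kubectl get configmap metricbeat-daemonset-config -n default
$ kubectl get configmap metricbeat-daemonset-config -n kube-system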

Kubernetes: Modeling Jobs/Cron tasks for Postgres + Tomcat application

I work on an open source system that is composed of a Postgres database and a Tomcat server. I have docker images for each component. We currently use docker-compose to test the application.
I am attempting to model this application with kubernetes.
Here is my first attempt.
apiVersion: v1
kind: Pod
metadata:
name: dspace-pod
spec:
volumes:
- name: "pgdata-vol"
emptyDir: {}
- name: "assetstore"
emptyDir: {}
- name: my-local-config-map
configMap:
name: local-config-map
containers:
- image: dspace/dspace:dspace-6_x
name: dspace
ports:
- containerPort: 8080
name: http
protocol: TCP
volumeMounts:
- mountPath: "/dspace/assetstore"
name: "assetstore"
- mountPath: "/dspace/config/local.cfg"
name: "my-local-config-map"
subPath: local.cfg
#
- image: dspace/dspace-postgres-pgcrypto
name: dspacedb
ports:
- containerPort: 5432
name: http
protocol: TCP
volumeMounts:
- mountPath: "/pgdata"
name: "pgdata-vol"
env:
- name: PGDATA
value: /pgdata
I have a configMap that is setting the hostname to the name of the pod.
apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T19:14:38Z
name: local-config-map
namespace: default
data:
local.cfg: |-
dspace.dir = /dspace
db.url = jdbc:postgresql://dspace-pod:5432/dspace
dspace.hostname = dspace-pod
dspace.baseUrl = http://dspace-pod:8080
solr.server=http://dspace-pod:8080/solr
This application has a number of tasks that are run from the command line.
I have created a 3rd Docker image that contains the jars that are needed on the command line.
I am interested in modeling these command line tasks as Jobs in Kubernetes. Assuming that is an appropriate way to handle these tasks, how do I specify that a job should run within a Pod that is already running?
Here is my first attempt at defining a job.
apiVersion: batch/v1
kind: Job
#https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
metadata:
name: dspace-create-admin
spec:
template:
spec:
volumes:
- name: "assetstore"
emptyDir: {}
- name: my-local-config-map
configMap:
name: local-config-map
containers:
- name: dspace-cli
image: dspace/dspace-cli:dspace-6_x
command: [
"/dspace/bin/dspace",
"create-administrator",
"-e", "test#test.edu",
"-f", "test",
"-l", "admin",
"-p", "admin",
"-c", "en"
]
volumeMounts:
- mountPath: "/dspace/assetstore"
name: "assetstore"
- mountPath: "/dspace/config/local.cfg"
name: "my-local-config-map"
subPath: local.cfg
restartPolicy: Never
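On the question above: a Job always creates its own pod, so it cannot be told to run inside a pod that already exists. If the goal is literally to run one of these CLI tasks inside the running dspace container, kubectl exec is the closer fit (a sketch; the subcommand is only an example and assumes the bin/dspace launcher is present in the running image):
$ kubectl exec dspace-pod -c dspace -- /dspace/bin/dspace database info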
The following configuration has allowed me to start my services (tomcat and postgres) as I hoped.
apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T19:14:38Z
name: local-config-map
namespace: default
data:
# example of a simple property defined using --from-literal
#example.property.1: hello
#example.property.2: world
# example of a complex property defined using --from-file
local.cfg: |-
dspace.dir = /dspace
db.url = jdbc:postgresql://dspacedb-service:5432/dspace
dspace.hostname = dspace-service
dspace.baseUrl = http://dspace-service:8080
solr.server=http://dspace-service:8080/solr
---
apiVersion: v1
kind: Service
metadata:
name: dspacedb-service
labels:
app: dspacedb-app
spec:
type: NodePort
selector:
app: dspacedb-app
ports:
- protocol: TCP
port: 5432
# targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dspacedb-deploy
labels:
app: dspacedb-app
spec:
selector:
matchLabels:
app: dspacedb-app
template:
metadata:
labels:
app: dspacedb-app
spec:
volumes:
- name: "pgdata-vol"
emptyDir: {}
containers:
- image: dspace/dspace-postgres-pgcrypto
name: dspacedb
ports:
- containerPort: 5432
name: http
protocol: TCP
volumeMounts:
- mountPath: "/pgdata"
name: "pgdata-vol"
env:
- name: PGDATA
value: /pgdata
---
apiVersion: v1
kind: Service
metadata:
name: dspace-service
labels:
app: dspace-app
spec:
type: NodePort
selector:
app: dspace-app
ports:
- protocol: TCP
port: 8080
targetPort: 8080
name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dspace-deploy
labels:
app: dspace-app
spec:
selector:
matchLabels:
app: dspace-app
template:
metadata:
labels:
app: dspace-app
spec:
volumes:
- name: "assetstore"
emptyDir: {}
- name: my-local-config-map
configMap:
name: local-config-map
containers:
- image: dspace/dspace:dspace-6_x-jdk8-test
name: dspace
ports:
- containerPort: 8080
name: http
protocol: TCP
volumeMounts:
- mountPath: "/dspace/assetstore"
name: "assetstore"
- mountPath: "/dspace/config/local.cfg"
name: "my-local-config-map"
subPath: local.cfg
After applying the configuration above, I have the following results.
$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dspace-service NodePort 10.104.224.245 <none> 8080:32459/TCP 3s app=dspace-app
dspacedb-service NodePort 10.96.212.9 <none> 5432:30947/TCP 3s app=dspacedb-app
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h <none>
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10
I was pleased to see that the service name can be used for port forwarding.
$ kubectl port-forward service/dspace-service 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
I am also able to run the following job using the defined service names in the configMap.
apiVersion: batch/v1
kind: Job
metadata:
name: dspace-create-admin
spec:
template:
spec:
volumes:
- name: "assetstore"
emptyDir: {}
- name: my-local-config-map
configMap:
name: local-config-map
containers:
- name: dspace-cli
image: dspace/dspace-cli:dspace-6_x
command: [
"/dspace/bin/dspace",
"create-administrator",
"-e", "test#test.edu",
"-f", "test",
"-l", "admin",
"-p", "admin",
"-c", "en"
]
volumeMounts:
- mountPath: "/dspace/assetstore"
name: "assetstore"
- mountPath: "/dspace/config/local.cfg"
name: "my-local-config-map"
subPath: local.cfg
restartPolicy: Never
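Assuming the manifest above is saved as dspace-create-admin.yaml, the job can be run and inspected like this:
$ kubectl apply -f dspace-create-admin.yaml
$ kubectl wait --for=condition=complete job/dspace-create-admin --timeout=120s
$ kubectl logs job/dspace-create-admin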
Results
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-create-admin-kl6wd 0/1 Completed 0 5m
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10m
I still have some work to do on persisting the volumes.
