AKHQ ON AKS: Failed to read configuration file - docker

When I try to deploy AKHQ on AKS I get the following error:
2022-01-27 14:07:06,353 ERROR main i.m.runtime.Micronaut Error starting Micronaut server: Failed to read configuration file: /app/application.yml
The configuration file (application.yml) looks like this:
micronaut:
  security:
    enabled: false
akhq:
  server:
    access-log: # Access log configuration (optional)
      enabled: true # true by default
      name: org.akhq.log.access # Logger name
      format: "[Date: {}] [Duration: {} ms] [Url: {} {}] [Status: {}] [Ip: {}] [User: {}]" # Logger format
  # list of kafka cluster available for akhq
  connections:
    kafka-cluster-ssl:
      properties:
        bootstrap.servers: "FQN-Address-01:9093,FQN-Address-02:9093,FQN-Address-03:909"
        security.protocol: SSL
        ssl.truststore.location: /app/truststore.jks
        ssl.truststore.password: truststor-pwd
        ssl.keystore.location: /app/keystore.jks
        ssl.keystore.password: keystore-pwd
        ssl.key.password: key-pwd
I also added read permission to the file in the Dockerfile, but that didn't help.
Dockerfile
FROM tchiotludo/akhq:0.20.0
# add ssl producer/consumer config and root ca file
ADD ./resources/ /app/
USER root
RUN chmod +r application.yml
RUN chmod +x gen-certs.sh
RUN ./gen-certs.sh
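(A quick way to confirm what the container actually sees once the pod is running, with <akhq-pod> as a placeholder pod name:
kubectl exec -it <akhq-pod> -- ls -l /app/application.yml
kubectl exec -it <akhq-pod> -- cat /app/application.yml
If the file is missing, empty, or not readable by the user the process runs as, the "Failed to read configuration file" error above is the expected symptom.)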

I used an older version of the AKHQ docker image and AKHQ is running now.
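As an alternative to baking application.yml into the image, here is a minimal sketch of mounting it from a ConfigMap on AKS and pointing Micronaut at it explicitly. The akhq-config name, labels, and Deployment shape are hypothetical; MICRONAUT_CONFIG_FILES is Micronaut's environment-variable form of micronaut.config.files.
# kubectl create configmap akhq-config --from-file=application.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: akhq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: akhq
  template:
    metadata:
      labels:
        app: akhq
    spec:
      containers:
        - name: akhq
          image: tchiotludo/akhq:0.20.0
          env:
            - name: MICRONAUT_CONFIG_FILES   # tell Micronaut exactly which file to load
              value: /app/application.yml
          volumeMounts:
            - name: config
              mountPath: /app/application.yml   # mount only this one file over the image's path
              subPath: application.yml
      volumes:
        - name: config
          configMap:
            name: akhq-config
With this layout the truststore/keystore can stay in the image (or come from a Secret mounted the same way), and the config file no longer depends on Dockerfile permissions.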

Related

Setting airflow connections using Values.yaml in Helm (Kubernetes)

Airflow Version - 2.3.0
Helm Chart - Apache-airflow/airflow
I have been working on setting up airflow using helm on kubernetes.
Currently, I am planning to set Airflow connections using the values.yaml file and env variables instead of configuring them in the web UI.
I believe the settings to tweak to set the connections are:
extraSecrets: {}
# eg:
# extraSecrets:
#   '{{ .Release.Name }}-airflow-connections':
#     type: 'Opaque'
#     data: |
#       AIRFLOW_CONN_GCP: 'base64_encoded_gcp_conn_string'
#       AIRFLOW_CONN_AWS: 'base64_encoded_aws_conn_string'
#     stringData: |
#       AIRFLOW_CONN_OTHER: 'other_conn'
#   '{{ .Release.Name }}-other-secret-name-suffix':
#     data: |
#       ...
I am not sure how to set all the key-value pairs for a Databricks/EMR connection, or how to use the Kubernetes secrets (already set up as env vars in the pods) to get the values:
#extraSecrets:
#  '{{ .Release.Name }}-airflow-connections':
#    type: 'Opaque'
#    data:
#      AIRFLOW_CONN_DATABRICKS_DEFAULT_two:
#        conn_type: "emr"
#        host: <host_url>
#        extra:
#          token: <token string>
#          host: <host_url>
It would be great to get some insights on how to resolve this issue.
I looked up this link: managing_connection on airflow.
Changes tried in the values.yaml file:
#extraSecrets:
#  '{{ .Release.Name }}-airflow-connections':
#    type: 'Opaque'
#    data:
#      AIRFLOW_CONN_DATABRICKS_DEFAULT_two:
#        conn_type: "emr"
#        host: <host_url>
#        extra:
#          token: <token string>
#          host: <host_url>
Error occurred while updating the helm release:
extraSecrets.{{ .Release.Name }}-airflow-connections expects string, got object
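For reference, a hedged sketch of the shape the chart seems to expect here, based on the commented example above (data:/stringData: are multi-line strings, not nested maps), with the connection value as a placeholder in Airflow's URI format:
extraSecrets:
  '{{ .Release.Name }}-airflow-connections':
    type: 'Opaque'
    stringData: |
      AIRFLOW_CONN_DATABRICKS_DEFAULT: 'databricks://@<DATABRICKS_HOST>?token=<DATABRICKS_TOKEN>'
This avoids the "expects string, got object" error because the secret payload is passed as a literal block string.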
Airflow connections can be set using Kubernetes secrets and env variables.
To set secrets directly from the CLI, the easiest way is to:
1. Create a Kubernetes secret. The secret value (the connection string) has to be in the URI format suggested by Airflow:
my-conn-type://login:password@host:port/schema?param1=val1&param2=val2
2. Create an env variable in the Airflow-suggested format. The Airflow format for a connection env variable is AIRFLOW_CONN_{connection_name in all CAPS}.
3. Set the value of the connection env variable using the secret.
How to manage Airflow connections: here
Example:
To set the default Databricks connection (databricks_default) in Airflow -
Create the secret:
kubectl create secret generic airflow-connection-databricks \
--from-literal=AIRFLOW_CONN_DATABRICKS_DEFAULT='databricks://@<DATABRICKS_HOST>?token=<DATABRICKS_TOKEN>'
In Helm's values.yaml, add a new env variable using the secret:
- envName: "AIRFLOW_CONN_DATABRICKS_DEFAULT"
  secretName: "airflow-connection-databricks"
  secretKey: "AIRFLOW_CONN_DATABRICKS_DEFAULT"
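For context, in the apache-airflow chart that envName/secretName/secretKey entry usually sits under the top-level secret: key; a minimal sketch, assuming the secret created with kubectl above:
secret:
  - envName: "AIRFLOW_CONN_DATABRICKS_DEFAULT"
    secretName: "airflow-connection-databricks"
    secretKey: "AIRFLOW_CONN_DATABRICKS_DEFAULT"
After helm upgrade, the env variable shows up in the Airflow pods and Airflow resolves it as the databricks_default connection.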
Some useful links:
Managing Airflow connections
Databricks connection

Filebeat failing to send logs to logstash with log/harvester error

I'm following this tutorial to get logs from my docker containers stored in Elasticsearch via Filebeat and Logstash: Link to tutorial
However, nothing is shown in Kibana, and when I run docker logs on my Filebeat container I get the following error:
2019-03-30T22:22:40.353Z ERROR log/harvester.go:281 Read line error: parsing CRI timestamp: parsing time "-03-30T21:59:16,113][INFO" as "2006-01-02T15:04:05Z07:00": cannot parse "-03-30T21:59:16,113][INFO" as "2006"; File: /usr/share/dockerlogs/data/2f3164397450efdd5851c3fad67fe405ab3dd822bbea1d807a993844e9143d5e/2f3164397450efdd5851c3fad67fe405ab3dd822bbea1d807a993844e9143d5e-json.log
My containers are hosted on a Linux virtual machine, and the virtual machine runs on a Windows machine (not sure if this could be causing the error due to the locations specified).
Below I'll describe what's running, along with the relevant files, in case the article is deleted in the future.
One container is running, which simply executes the following command, printing lines that Filebeat should be able to read:
CMD while true; do sleep 2 ; echo "{\"app\": \"dummy\", \"foo\": \"bar\"}"; done
My filebeat.yml file is as follows:
filebeat.inputs:
- type: docker
  combine_partial: true
  containers:
    path: "/usr/share/dockerlogs/data"
    stream: "stdout"
    ids:
      - "*"
  exclude_files: ['\.gz$']
  ignore_older: 10m

processors:
# decode the log field (sub JSON document) if JSON encoded, then map its fields to elasticsearch fields
- decode_json_fields:
    fields: ["log", "message"]
    target: ""
    # overwrite existing target elasticsearch fields while decoding json fields
    overwrite_keys: true
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

# setup filebeat to send output to logstash
output.logstash:
  hosts: ["logstash"]

# Write Filebeat's own logs only to file, to avoid catching them with itself in docker log files
logging.level: error
logging.to_files: false
logging.to_syslog: false
logging.metrics.enabled: false
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
ssl.verification_mode: none
Any suggestions on why filebeat is failing to forward my logs and how to fix it would be appreciated. Thanks
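One hedged guess (not confirmed anywhere in the post): the rejected line, "[2019-03-30T21:59:16,113][INFO ...", looks like an Elasticsearch/Logstash log, so the harvester may be picking up the ELK containers' own log files alongside the dummy container. A sketch of restricting the docker input to just the dummy container, with <dummy-container-id> as a placeholder:
filebeat.inputs:
- type: docker
  combine_partial: true
  containers:
    path: "/usr/share/dockerlogs/data"
    stream: "stdout"
    ids:
      - "<dummy-container-id>"   # harvest only this container instead of "*"
  exclude_files: ['\.gz$']
  ignore_older: 10m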

IBM Cloud Private 2.1.0.3 The conditional check failed

I am trying to install IBM Cloud Private Community Edition but am struggling with the execution of the sudo docker run command from the installation instructions:
> sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
When I execute it, it returns the following output with an error message (below):
user@kim:/opt/ibm-cloud-private-ce-2.1.0.3/cluster$ sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
PLAY [Checking Python interpreter] *********************************************
TASK [Checking Python interpreter] *********************************************
changed: [127.0.0.1]
PLAY [Checking prerequisites] **************************************************
TASK [Gathering Facts] *********************************************************
ok: [127.0.0.1]
TASK [docker-engine-check : Getting Docker engine version] *********************
changed: [127.0.0.1]
TASK [docker-engine-check : Checking docker engine if installed] ***************
changed: [127.0.0.1]
TASK [docker-engine : include] *************************************************
TASK [docker-engine : include] *************************************************
TASK [containerd-engine-check : Getting containerd version] ********************
TASK [containerd-engine-check : Checking cri-containerd if installed] **********
TASK [containerd-engine : include] *********************************************
TASK [containerd-engine : include] *********************************************
TASK [network-check : Checking for the network pre-check file] *****************
ok: [127.0.0.1 -> localhost]
TASK [network-check : include_tasks] *******************************************
included: /installer/playbook/roles/network-check/tasks/calico.yaml for 127.0.0.1
TASK [network-check : Calico Validation - Verifying hostname for lowercase] ****
TASK [network-check : Calico Validation - Initializing interface list to be verified] ***
ok: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is first-found] ***
TASK [network-check : Calico Validation - Updating regex string to match interfaces to be excluded] ***
TASK [network-check : Calico Validation - Getting list of interfaces to be considered] ***
TASK [network-check : Calico Validation - Excluding default interface if defined] ***
TASK [network-check : Calico Validation - Finding Interface reg-ex when autodetection_method is interface(reg-ex)] ***
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is interface(reg-ex)] ***
TASK [network-check : Calico Validation - Finding Domain/IP when autodetection_method is can-reach] ***
ok: [127.0.0.1]
TASK [network-check : Calico Validation - Finding IP for the Domain when autodetection_method is can-reach] ***
changed: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is can-reach] ***
changed: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when lo is found] ***
changed: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is can-reach] ***
ok: [127.0.0.1]
TASK [network-check : Calico Validation - Finding MTU for the detected Interface(s)] ***
fatal: [127.0.0.1]: FAILED! => {"msg": "The conditional check 'hostvars[inventory_hostname]['ansible_'~item]['mtu'] is defined' failed. The error was: error while evaluating conditional (hostvars[inventory_hostname]['ansible_'~item]['mtu'] is defined): 'dict object' has no attribute u'ansible_'\n\nThe error appears to have been in '/installer/playbook/roles/network-check/tasks/calico.yaml': line 86, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Calico Validation - Finding MTU for the detected Interface(s)\n ^ here\n"}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
127.0.0.1 : ok=12 changed=6 unreachable=0 failed=1
Playbook run took 0 days, 0 hours, 0 minutes, 13 seconds
user@kim:/opt/ibm-cloud-private-ce-2.1.0.3/cluster$
I am working on Ubuntu 14.04 with Docker version 17.12.1-ce, build 7390fc6.
My hosts file looks like this:
[master]
127.0.0.1 ansible_user="user" ansible_ssh_pass="6CEd29CN" ansible_become=true ansible_become_pass="6CEd29CN" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
[worker]
127.0.0.1 ansible_user="user" ansible_ssh_pass="6CEd29CN" ansible_become=true ansible_become_pass="6CEd29CN" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
[proxy]
127.0.0.1 ansible_user="user" ansible_ssh_pass="6CEd29CN" ansible_become=true ansible_become_pass="6CEd29CN" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
#[management]
#4.4.4.4
#[va]
#5.5.5.5
The config.yaml file looks like this:
# Licensed Materials - Property of IBM
# IBM Cloud private
# # Copyright IBM Corp. 2017 All Rights Reserved
# US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
---
###### docker0: 172.17.0.1
###### eth0: 192.168.240.14
## Network Settings
network_type: calico
# network_helm_chart_path: < helm chart path >
## Network in IPv4 CIDR format
network_cidr: 127.0.0.1/8
## Kubernetes Settings
service_cluster_ip_range: 127.0.0.1/24
## Makes the Kubelet start if swap is enabled on the node. Remove
## this if your production env want to disble swap.
kubelet_extra_args: ["--fail-swap-on=false"]
# cluster_domain: cluster.local
# cluster_name: mycluster
# cluster_CA_domain: "{{ cluster_name }}.icp"
# cluster_zone: "myzone"
# cluster_region: "myregion"
## Etcd Settings
etcd_extra_args: ["--grpc-keepalive-timeout=0", "--grpc-keepalive-interval=0", "--snapshot-count=10000"]
## General Settings
# wait_for_timeout: 600
# docker_api_timeout: 100
## Advanced Settings
default_admin_user: user
default_admin_password: 6CEd29CN
# ansible_user: <username>
# ansible_become: true
# ansible_become_password: <password>
## Kubernetes Settings
# kube_apiserver_extra_args: []
# kube_controller_manager_extra_args: []
# kube_proxy_extra_args: []
# kube_scheduler_extra_args: []
## Enable Kubernetes Audit Log
# auditlog_enabled: false
## GlusterFS Settings
# glusterfs: false
## GlusterFS Storage Settings
# storage:
# - kind: glusterfs
# nodes:
# - ip: <worker_node_m_IP_address>
# device: <link path>/<symlink of device aaa>,<link path>/<symlink of device bbb>
# - ip: <worker_node_n_IP_address>
# device: <link path>/<symlink of device ccc>
# - ip: <worker_node_o_IP_address>
# device: <link path>/<symlink of device ddd>
# storage_class:
# name:
# default: false
# volumetype: replicate:3
## Network Settings
## Calico Network Settings
# calico_ipip_enabled: true
# calico_tunnel_mtu: 1430
calico_ip_autodetection_method: can-reach=127.0.0.1
## IPSec mesh Settings
## If user wants to configure IPSec mesh, the following parameters
## should be configured through config.yaml
# ipsec_mesh:
# enable: true
# interface: <interface name on which IPsec will be enabled>
# subnets: []
# exclude_ips: "<list of IP addresses separated by a comma>"
# kube_apiserver_secure_port: 8001
## External loadbalancer IP or domain
## Or floating IP in OpenStack environment
# cluster_lb_address: none
## External loadbalancer IP or domain
## Or floating IP in OpenStack environment
# proxy_lb_address: none
## Install in firewall enabled mode
# firewall_enabled: false
## Allow loopback dns server in cluster nodes
# loopback_dns: false
## High Availability Settings
# vip_manager: etcd
## High Availability Settings for master nodes
# vip_iface: eth0
# cluster_vip: 127.0.1.1
## High Availability Settings for Proxy nodes
# proxy_vip_iface: eth0
# proxy_vip: 127.0.1.1
## Federation cluster Settings
# federation_enabled: false
# federation_cluster: federation-cluster
# federation_domain: cluster.federation
# federation_apiserver_extra_args: []
# federation_controllermanager_extra_args: []
# federation_external_policy_engine_enabled: false
## vSphere cloud provider Settings
## If user wants to configure vSphere as cloud provider, vsphere_conf
## parameters should be configured through config.yaml
# kubelet_nodename: hostname
# cloud_provider: vsphere
# vsphere_conf:
# user: <vCenter username for vSphere cloud provider>
# password: <password for vCenter user>
# server: <vCenter server IP or FQDN>
# port: [vCenter Server Port; default: 443]
# insecure_flag: [set to 1 if vCenter uses a self-signed certificate]
# datacenter: <datacenter name on which Node VMs are deployed>
# datastore: <default datastore to be used for provisioning volumes>
# working_dir: <vCenter VM folder path in which node VMs are located>
## Disabled Management Services Settings
## You can disable the following management services: ["service-catalog", "metering", "monitoring", "istio", "vulnerability-advisor", "custom-metrics-adapter"]
disabled_management_services: ["istio", "vulnerability-advisor", "custom-metrics-adapter"]
## Docker Settings
# docker_env: []
# docker_extra_args: []
## The maximum size of the log before it is rolled
# docker_log_max_size: 50m
## The maximum number of log files that can be present
# docker_log_max_file: 10
## Install/upgrade docker version
# docker_version: 17.12.1
## ICP install docker automatically
# install_docker: true
## Ingress Controller Settings
## You can add your ingress controller configuration, and the allowed configuration can refer to
## https://github.com/kubernetes/ingress-nginx/blob/nginx-0.9.0/docs/user-guide/configmap.md#configuration-options
# ingress_controller:
# disable-access-log: 'true'
## Clean metrics indices in Elasticsearch older than this number of days
# metrics_max_age: 1
## Clean application log indices in Elasticsearch older than this number of days
# logs_maxage: 1
## Uncomment the line below to install Kibana as a managed service.
# kibana_install: true
# STARTING_CLOUDANT
# cloudant:
# namespace: kube-system
# pullPolicy: IfNotPresent
# pvPath: /opt/ibm/cfc/cloudant
# database:
# password: fdrreedfddfreeedffde
# federatorCommand: hostname
# federationIdentifier: "-0"
# readinessProbePeriodSeconds: 2
# readinessProbeInitialDelaySeconds: 90
# END_CLOUDANT
My goal is to set up ICP on my local machine (single node) and I'm very thankful for any help regarding this issue.
So I resolved this error by uncommenting calico_ipip_enabled and setting it to false.
After that, though, I got another error because of my loopback IP:
fatal: [127.0.0.1] => A loopback IP is used in your DNS server configuration. For more details, see https://ibm.biz/dns-fails.
But there is a fix/workaround: setting loopback_dns: true, as mentioned in the link.
I can't close this question here but this is how I resolved it.
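For reference, a minimal sketch of the two config.yaml changes described above (everything else left as posted):
## Calico Network Settings
calico_ipip_enabled: false
## Allow loopback dns server in cluster nodes
loopback_dns: true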
The supported OS for IBM Cloud Private 2.1.0.3 is Ubuntu 16.04. Please check the URL below:
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0.3/supported_system_config/supported_os.html
Please check the system requirements.
I am also trying to install the setup; I didn't face this issue.
Normally, you cannot specify 127.0.0.1 as an ICP node in the ICP hosts file. Thanks.
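As a sketch of what that means in practice, here is a single-node hosts file using the machine's LAN address instead of the loopback; 192.168.240.14 is taken from the eth0 comment in the posted config.yaml, so substitute your own IP and credentials:
[master]
192.168.240.14 ansible_user="user" ansible_ssh_pass="<password>" ansible_become=true ansible_become_pass="<password>" ansible_port="22"
[worker]
192.168.240.14 ansible_user="user" ansible_ssh_pass="<password>" ansible_become=true ansible_become_pass="<password>" ansible_port="22"
[proxy]
192.168.240.14 ansible_user="user" ansible_ssh_pass="<password>" ansible_become=true ansible_become_pass="<password>" ansible_port="22"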

Filebeat not pushing logs to Elasticsearch

I am new to Docker and all this logging stuff, so maybe I'm making a stupid mistake; thanks for helping in advance. I have ELK running in a docker container (6.2.2) via the Dockerfile line:
FROM sebp/elk:latest
In a separate container I am installing and running Filebeat via the following Dockerfile lines:
RUN curl -L -O -k https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-amd64.deb
RUN dpkg -i filebeat-6.2.2-amd64.deb
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
My Filebeat configuration is:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /jetty/jetty-distribution-9.3.8.v20160314/logs/*.log
output.logstash:
  enabled: false
  hosts: ["elk-stack:9002"]
  #index: 'audit'
output.elasticsearch:
  enabled: true
  hosts: ["elk-stack:9200"]
  #index: "audit-%{+yyyy.MM.dd}"
path.config: "/etc/filebeat"
#setup.template.name: "audit"
#setup.template.pattern: "audit-*"
#setup.template.fields: "${path.config}/fields.yml"
As you can see I was trying to do a custom index into elasticsearch, but now I'm just trying to get the default working first. The jetty logs all have global read permissions.
The docker container logs show no errors and after running I make sure the config and output are OK:
# filebeat test config
Config OK
# filebeat test output
elasticsearch: http://elk-stack:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 172.17.0.3
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 6.2.2
/var/log/filebeat/filebeat shows:
2018-03-15T13:23:38.859Z INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-03-15T13:23:38.860Z INFO instance/beat.go:475 Beat UUID: ed5cecaf-cbf5-438d-bbb9-30bab80c4cb9
2018-03-15T13:23:38.860Z INFO elasticsearch/client.go:145 Elasticsearch url: http://elk-stack:9200
2018-03-15T13:23:38.891Z INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.2
However, when I hit localhost:9200/_cat/indices?v it doesn't return any indices:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
How do I get this working? I am out of ideas. Thanks again for any help.
To answer my own question: you can't start Filebeat with:
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
and have it keep running once the container starts. You need to start it manually, or have it start in its own container with an ENTRYPOINT tag.
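A minimal sketch of the ENTRYPOINT approach, reusing the install and COPY lines from above (the base image is a placeholder for whatever the original Dockerfile builds on; -e logs to stderr and -c points at the copied config, so Filebeat runs in the foreground as the container's main process):
FROM <your-base-image>
RUN curl -L -O -k https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-amd64.deb
RUN dpkg -i filebeat-6.2.2-amd64.deb
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
# run Filebeat in the foreground as PID 1 instead of RUN ... &
ENTRYPOINT ["/usr/share/filebeat/bin/filebeat", "-e", "-c", "/etc/filebeat/filebeat.yml"]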

Openldap 2.4 within Docker container

I'm setting up an OpenLDAP server (slapd) within a docker container. I just took the latest centos image (tags: latest, centos7, 7; image here)
and then installed the following packages:
openldap-servers-2.4.44-5.el7.x86_64
openldap-2.4.44-5.el7.x86_64
openldap-clients-2.4.44-5.el7.x86_64
openldap-devel-2.4.44-5.el7.x86_64
and then just started the service with /usr/sbin/slapd -F /etc/openldap/slapd.d/
Then I'm trying to add the domain and root user to the LDAP configuration using this ldif file (db.ldif):
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=mydomain,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=myadminuser,dc=mydomain,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootPW
olcRootPW: {SSHA}blablablabla
Then when I run ldapmodify -Y EXTERNAL -H ldapi:/// -f db.ldif
it throws me this: ldap_sasl_interactive_bind_s: Can't contact LDAP server (-1)
I see the port is open because telnet can connect to it, and an ldapsearch from another machine actually works: ldapsearch -h $myserver -p $myport -x
And it responds with this:
# extended LDIF
#
# LDAPv3
# base <> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 32 No such object
# numResponses: 1
I really don't know what I'm missing.
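One likely cause (an assumption, not confirmed by the post): ldapmodify -H ldapi:/// talks to slapd over the local Unix socket, but slapd only listens on the URLs given with -h, and with no -h it defaults to ldap:/// only. A sketch of starting slapd with both listeners so the -Y EXTERNAL bind over ldapi:/// can reach it:
/usr/sbin/slapd -h "ldapi:/// ldap:///" -F /etc/openldap/slapd.d/
ldapmodify -Y EXTERNAL -H ldapi:/// -f db.ldif
The ldapsearch from the other machine works because it goes over TCP (ldap://), which is the listener that is actually up.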
