How do I set up a simple filebeat to ES cluster? - docker

I'm trying for the first time to set up a cluster where Filebeat sends logs to Elasticsearch, which I can view in Kibana. All I'm trying to do is see the logs I write to the file /tmp/aaa.log in Kibana. I'm getting a little lost in all the configuration. Can someone tell me what I'm doing wrong based on the configuration files below?
Here's my docker-compose.yml:
---
version: '3.6'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${TAG}
    container_name: elasticsearch
    ports: ['9200:9200']
    networks: ['stack']
    environment:
      - xpack.security.enabled=false
    volumes:
      - 'es_data:/usr/share/elasticsearch/data'
  kibana:
    image: docker.elastic.co/kibana/kibana:${TAG}
    container_name: kibana
    ports: ['5601:5601']
    networks: ['stack']
    depends_on: ['elasticsearch']
    environment:
      - xpack.security.enabled=false
  logstash:
    image: docker.elastic.co/logstash/logstash:${TAG}
    container_name: logstash
    networks: ['stack']
    depends_on: ['elasticsearch']
    environment:
      - xpack.security.enabled=false
  filebeat:
    image: docker.elastic.co/beats/filebeat:${TAG}
    container_name: filebeat
    volumes:
      - /tmp/filebeat.yml:/usr/share/filebeat/filebeat.yml
    networks: ['stack']
    depends_on: ['elasticsearch', 'kibana']
networks: {stack: {}}
And here's filebeat.yml:
filebeat.prospectors:
- input_type: log
  paths:
    - /tmp/aaa.log

output.elasticsearch:
  hosts: ['elasticsearch:9200']
And I run this with TAG=5.6.13 docker-compose up (I have to use ES version 5).
Here are the logs:
2018/11/27 16:20:57.165350 beat.go:297: INFO Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2018/11/27 16:20:57.165389 beat.go:192: INFO Setup Beat: filebeat; Version: 5.6.13
2018/11/27 16:20:57.165502 output.go:263: INFO Loading template enabled. Reading template file: /usr/share/filebeat/filebeat.template.json
2018/11/27 16:20:57.166247 output.go:274: INFO Loading template enabled for Elasticsearch 2.x. Reading template file: /usr/share/filebeat/filebeat.template-es2x.json
2018/11/27 16:20:57.167063 output.go:286: INFO Loading template enabled for Elasticsearch 6.x. Reading template file: /usr/share/filebeat/filebeat.template-es6x.json
2018/11/27 16:20:57.167554 metrics.go:23: INFO Metrics logging every 30s
2018/11/27 16:20:57.167888 client.go:128: INFO Elasticsearch url: http://elasticsearch:9200
2018/11/27 16:20:57.167909 outputs.go:108: INFO Activated elasticsearch as output plugin.
2018/11/27 16:20:57.168015 publish.go:300: INFO Publisher name: 34df7198d027
2018/11/27 16:20:57.168185 async.go:63: INFO Flush Interval set to: 1s
2018/11/27 16:20:57.168194 async.go:64: INFO Max Bulk Size set to: 50
2018/11/27 16:20:57.168512 beat.go:233: INFO filebeat start running.
2018/11/27 16:20:57.168546 registrar.go:68: INFO No registry file found under: /usr/share/filebeat/data/registry. Creating a new registry file.
2018/11/27 16:20:57.174446 registrar.go:106: INFO Loading registrar data from /usr/share/filebeat/data/registry
2018/11/27 16:20:57.174491 registrar.go:123: INFO States Loaded from registrar: 0
2018/11/27 16:20:57.174515 crawler.go:38: INFO Loading Prospectors: 1
2018/11/27 16:20:57.174633 prospector_log.go:65: INFO Prospector with previous states loaded: 0
2018/11/27 16:20:57.174715 prospector.go:124: INFO Starting prospector of type: log; id: 16715230261889747
2018/11/27 16:20:57.174726 crawler.go:58: INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2018/11/27 16:20:57.174735 registrar.go:236: INFO Starting Registrar
2018/11/27 16:20:57.174754 sync.go:41: INFO Start sending events to output
2018/11/27 16:20:57.174788 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2018/11/27 16:21:27.168018 metrics.go:39: INFO Non-zero metrics in the last 30s: registrar.writes=1
2018/11/27 16:21:57.167828 metrics.go:34: INFO No non-zero metrics in the last 30s
2018/11/27 16:22:27.167772 metrics.go:34: INFO No non-zero metrics in the last 30s
2018/11/27 16:22:57.167974 metrics.go:34: INFO No non-zero metrics in the last 30s
2018/11/27 16:23:27.167752 metrics.go:34: INFO No non-zero metrics in the last 30s
2018/11/27 16:23:57.167944 metrics.go:34: INFO No non-zero metrics in the last 30s
2018/11/27 16:24:27.167943 metrics.go:34: INFO No non-zero metrics in the last 30s
2018/11/27 16:24:32.039122 filebeat.go:267: INFO Stopping filebeat
2018/11/27 16:24:32.039158 crawler.go:90: INFO Stopping Crawler
2018/11/27 16:24:32.039166 crawler.go:100: INFO Stopping 1 prospectors
2018/11/27 16:24:32.039187 prospector.go:180: INFO Prospector ticker stopped
2018/11/27 16:24:32.039187 prospector.go:137: INFO Prospector channel stopped because beat is stopping.
2018/11/27 16:24:32.039198 prospector.go:232: INFO Stopping Prospector: 16715230261889747
2018/11/27 16:24:32.039215 crawler.go:112: INFO Crawler stopped
2018/11/27 16:24:32.039223 spooler.go:101: INFO Stopping spooler
2018/11/27 16:24:32.039249 registrar.go:291: INFO Stopping Registrar
2018/11/27 16:24:32.039264 registrar.go:248: INFO Ending Registrar
2018/11/27 16:24:32.041518 metrics.go:51: INFO Total non-zero values: registrar.writes=2
2018/11/27 16:24:32.041533 metrics.go:52: INFO Uptime: 3m34.878904973s
2018/11/27 16:24:32.041538 beat.go:237: INFO filebeat stopped.
2018/11/28 08:43:17.481376 beat.go:297: INFO Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2018/11/28 08:43:17.481411 beat.go:192: INFO Setup Beat: filebeat; Version: 5.6.13
2018/11/28 08:43:17.481500 output.go:263: INFO Loading template enabled. Reading template file: /usr/share/filebeat/filebeat.template.json
2018/11/28 08:43:17.482638 output.go:274: INFO Loading template enabled for Elasticsearch 2.x. Reading template file: /usr/share/filebeat/filebeat.template-es2x.json
2018/11/28 08:43:17.483675 metrics.go:23: INFO Metrics logging every 30s
2018/11/28 08:43:17.483780 output.go:286: INFO Loading template enabled for Elasticsearch 6.x. Reading template file: /usr/share/filebeat/filebeat.template-es6x.json
2018/11/28 08:43:17.484701 client.go:128: INFO Elasticsearch url: http://elasticsearch:9200
2018/11/28 08:43:17.484745 outputs.go:108: INFO Activated elasticsearch as output plugin.
2018/11/28 08:43:17.484844 publish.go:300: INFO Publisher name: 34df7198d027
2018/11/28 08:43:17.484975 async.go:63: INFO Flush Interval set to: 1s
2018/11/28 08:43:17.484982 async.go:64: INFO Max Bulk Size set to: 50
2018/11/28 08:43:17.485563 beat.go:233: INFO filebeat start running.
2018/11/28 08:43:17.485607 registrar.go:85: INFO Registry file set to: /usr/share/filebeat/data/registry
2018/11/28 08:43:17.485630 registrar.go:106: INFO Loading registrar data from /usr/share/filebeat/data/registry
2018/11/28 08:43:17.485656 registrar.go:123: INFO States Loaded from registrar: 0
2018/11/28 08:43:17.485688 crawler.go:38: INFO Loading Prospectors: 1
2018/11/28 08:43:17.485758 prospector_log.go:65: INFO Prospector with previous states loaded: 0
2018/11/28 08:43:17.485840 prospector.go:124: INFO Starting prospector of type: log; id: 16715230261889747
2018/11/28 08:43:17.485848 crawler.go:58: INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2018/11/28 08:43:17.485881 sync.go:41: INFO Start sending events to output
2018/11/28 08:43:17.485898 registrar.go:236: INFO Starting Registrar
2018/11/28 08:43:17.485945 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2018/11/28 08:43:47.483962 metrics.go:34: INFO No non-zero metrics in the last 30s
2018/11/28 08:44:17.484051 metrics.go:34: INFO No non-zero metrics in the last 30s

My mistake. I stupidly forgot to mount the log file into the Filebeat container in docker-compose.yml.
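For anyone hitting the same symptom (Filebeat starts but harvests nothing), here is a minimal sketch of the missing mount, assuming the host file is /tmp/aaa.log as in the question:

filebeat:
  image: docker.elastic.co/beats/filebeat:${TAG}
  container_name: filebeat
  volumes:
    - /tmp/filebeat.yml:/usr/share/filebeat/filebeat.yml
    # without this mount the container never sees the host log file,
    # so the prospector has nothing to harvest
    - /tmp/aaa.log:/tmp/aaa.log
  networks: ['stack']
  depends_on: ['elasticsearch', 'kibana']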

Related

Filebeat is not sending logs to logstash on kubernetes

I'm trying to ship Kubernetes logs with Filebeat and Logstash. I have some deployments in the same namespace.
I tried the filebeat.yml configuration suggested by Elastic in this link: https://raw.githubusercontent.com/elastic/beats/7.x/deploy/kubernetes/filebeat-kubernetes.yaml
So, this is my overall configuration:
filebeat.yml
filebeat.inputs:
- type: container
  paths:
    - '/var/lib/docker/containers/*.log'
  processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"

# To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
#filebeat.autodiscover:
#  providers:
#    - type: kubernetes
#      node: ${NODE_NAME}
#      hints.enabled: true
#      hints.default_config:
#        type: container
#        paths:
#          - /var/log/containers/*${data.kubernetes.container.id}.log

output.logstash:
  hosts: ['logstash.default.svc.cluster.local:5044']
Logstash Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.15.0
          ports:
            - containerPort: 5044
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/config
            - name: logstash-pipeline-volume
              mountPath: /usr/share/logstash/pipeline
      volumes:
        - name: config-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.yml
                path: logstash.yml
        - name: logstash-pipeline-volume
          configMap:
            name: logstash-configmap
            items:
              - key: logstash.conf
                path: logstash.conf
Logstash Configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: default
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      tcp {
        mode => "client"
        host => "10.184.0.4"
        port => 5001
        codec => "json_lines"
      }
      stdout {
        codec => rubydebug
      }
    }
Logstash Service
kind: Service
apiVersion: v1
metadata:
  name: logstash
  namespace: default
spec:
  selector:
    app: logstash
  ports:
    - protocol: TCP
      port: 5044
      targetPort: 5044
The Filebeat DaemonSet is running, and so is the Logstash Deployment. kubectl logs shows the following for each of them.
The Filebeat DaemonSet shows:
2021-10-13T04:10:14.201Z INFO instance/beat.go:665 Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2021-10-13T04:10:14.219Z INFO instance/beat.go:673 Beat ID: b90d1561-e989-4ed1-88f9-9b88045cee29
2021-10-13T04:10:14.220Z INFO [seccomp] seccomp/seccomp.go:124 Syscall filter successfully installed
2021-10-13T04:10:14.220Z INFO [beat] instance/beat.go:1014 Beat info {"system_info": {"beat": {"path": {"config": "/usr/share/filebeat", "data": "/usr/share/filebeat/data", "home": "/usr/share/filebeat", "logs": "/usr/share/filebeat/logs"}, "type": "filebeat", "uuid": "b90d1561-e989-4ed1-88f9-9b88045cee29"}}}
2021-10-13T04:10:14.220Z INFO [beat] instance/beat.go:1023 Build info {"system_info": {"build": {"commit": "9023152025ec6251bc6b6c38009b309157f10f17", "libbeat": "7.15.0", "time": "2021-09-16T03:16:09.000Z", "version": "7.15.0"}}}
2021-10-13T04:10:14.220Z INFO [beat] instance/beat.go:1026 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.16.6"}}}
2021-10-13T04:10:14.221Z INFO [beat] instance/beat.go:1030 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2021-10-06T19:41:55Z","containerized":true,"name":"filebeat-hvqx4","ip":["127.0.0.1/8","10.116.6.42/24"],"kernel_version":"5.4.120+","mac":["ae:ab:28:37:27:2a"],"os":{"type":"linux","family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":9,"patch":2009,"codename":"Core"},"timezone":"UTC","timezone_offset_sec":0,"id":"38c2fd0d69ba05ae64d8a4d4fc156791"}}}
2021-10-13T04:10:14.221Z INFO [beat] instance/beat.go:1059 Process info {"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 8, "ppid": 1, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2021-10-13T04:10:12.819Z"}}}
2021-10-13T04:10:14.221Z INFO instance/beat.go:309 Setup Beat: filebeat; Version: 7.15.0
2021-10-13T04:10:14.222Z INFO [publisher] pipeline/module.go:113 Beat name: filebeat-hvqx4
2021-10-13T04:10:14.224Z WARN beater/filebeat.go:178 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2021-10-13T04:10:14.225Z INFO [monitoring] log/log.go:142 Starting metrics logging every 30s
2021-10-13T04:10:14.225Z INFO instance/beat.go:473 filebeat start running.
2021-10-13T04:10:14.227Z INFO memlog/store.go:119 Loading data file of '/usr/share/filebeat/data/registry/filebeat' succeeded. Active transaction id=0
2021-10-13T04:10:14.227Z INFO memlog/store.go:124 Finished loading transaction log file for '/usr/share/filebeat/data/registry/filebeat'. Active transaction id=0
2021-10-13T04:10:14.227Z WARN beater/filebeat.go:381 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2021-10-13T04:10:14.228Z INFO [registrar] registrar/registrar.go:109 States Loaded from registrar: 0
2021-10-13T04:10:14.228Z INFO [crawler] beater/crawler.go:71 Loading Inputs: 1
2021-10-13T04:10:14.228Z INFO beater/crawler.go:148 Stopping Crawler
2021-10-13T04:10:14.228Z INFO beater/crawler.go:158 Stopping 0 inputs
2021-10-13T04:10:14.228Z INFO beater/crawler.go:178 Crawler stopped
2021-10-13T04:10:14.228Z INFO [registrar] registrar/registrar.go:132 Stopping Registrar
2021-10-13T04:10:14.228Z INFO [registrar] registrar/registrar.go:166 Ending Registrar
2021-10-13T04:10:14.228Z INFO [registrar] registrar/registrar.go:137 Registrar stopped
2021-10-13T04:10:44.229Z INFO [monitoring] log/log.go:184 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"cfs":{"period":{"us":100000}},"id":"/"},"cpuacct":{"id":"/","total":{"ns":307409530}},"memory":{"id":"/","mem":{"limit":{"bytes":209715200},"usage":{"bytes":52973568}}}},"cpu":{"system":{"ticks":80,"time":{"ms":85}},"total":{"ticks":270,"time":{"ms":283},"value":270},"user":{"ticks":190,"time":{"ms":198}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"f5abb082-a094-4f99-a046-bc183d415455","uptime":{"ms":30208},"version":"7.15.0"},"memstats":{"gc_next":19502448,"memory_alloc":10052000,"memory_sys":75056136,"memory_total":55390312,"rss":112922624},"runtime":{"goroutines":12}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0},"type":"logstash"},"pipeline":{"clients":0,"events":{"active":0},"queue":{"max_events":4096}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":2},"load":{"1":0.14,"15":0.28,"5":0.31,"norm":{"1":0.07,"15":0.14,"5":0.155}}}}}}
The Logstash Deployment logs show:
Using bundled JDK: /usr/share/logstash/jdk
warning: no jvm.options file found
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2021-10-13 08:46:58.674 [main] runner - Starting Logstash {"logstash.version"=>"7.15.0", "jruby.version"=>"jruby 9.2.19.0 (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.11+9 on 11.0.11+9 +jit [linux-x86_64]"}
[INFO ] 2021-10-13 08:46:58.698 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2021-10-13 08:46:58.700 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2021-10-13 08:46:59.077 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2021-10-13 08:46:59.097 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"7a0e5b89-70a1-4004-b38e-c31fadcd7251", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2021-10-13 08:47:00.950 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2021-10-13 08:47:01.468 [Converge PipelineAction::Create<main>] Reflections - Reflections took 203 ms to scan 1 urls, producing 120 keys and 417 values
[WARN ] 2021-10-13 08:47:02.496 [Converge PipelineAction::Create<main>] plain - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-10-13 08:47:02.526 [Converge PipelineAction::Create<main>] beats - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-10-13 08:47:02.664 [Converge PipelineAction::Create<main>] jsonlines - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2021-10-13 08:47:02.947 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x3b822f13#/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}
[INFO ] 2021-10-13 08:47:05.467 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>2.52}
[INFO ] 2021-10-13 08:47:05.473 [[main]-pipeline-manager] beats - Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2021-10-13 08:47:05.555 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2021-10-13 08:47:05.588 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2021-10-13 08:47:05.907 [[main]<beats] Server - Starting server on port: 5044
So, my questions are:
Why is Filebeat not ingesting the logs from Kubernetes?
Are there different ways to specify the Logstash hosts in filebeat.yml? Some examples use a full DNS name, like my config, while others just use the service name.
How can I trigger/test logs to make sure my configuration is working well?
My mistake: in the Filebeat environment I had missed setting the node name variable. So, to the configuration above I just added
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
and Filebeat is running well now.
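For context, here is a minimal sketch of where that entry sits in the Filebeat DaemonSet container spec; the image tag and surrounding fields follow Elastic's reference manifest and are assumptions, so adjust them to your own manifest:

containers:
  - name: filebeat
    image: docker.elastic.co/beats/filebeat:7.15.0
    env:
      # the add_kubernetes_metadata processor references ${NODE_NAME},
      # so the variable must be injected from the pod spec
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName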

How do I start my jar application from a flink docker image inside Kubernetes?

I am trying to use my felipeogutierrez/explore-flink:1.11.1-scala_2.12 image (available here) in a Kubernetes cluster configuration, as described here. I compile my project https://github.com/felipegutierrez/explore-flink with Maven and extend the default Flink image flink:1.11.1-scala_2.12 with this Dockerfile:
FROM maven:3.6-jdk-8-slim AS builder
# get explore-flink job and compile it
COPY ./java/explore-flink /opt/explore-flink
WORKDIR /opt/explore-flink
RUN mvn clean install
FROM flink:1.11.1-scala_2.12
WORKDIR /opt/flink/usrlib
COPY --from=builder /opt/explore-flink/target/explore-flink.jar /opt/flink/usrlib/explore-flink.jar
ADD /opt/flink/usrlib/explore-flink.jar /opt/flink/usrlib/explore-flink.jar
#USER flink
Then the tutorial says to create the common cluster components:
kubectl create -f k8s/flink-configuration-configmap.yaml
kubectl create -f k8s/jobmanager-service.yaml
kubectl proxy
kubectl create -f k8s/jobmanager-rest-service.yaml
kubectl get svc flink-jobmanager-rest
and then create the jobmanager-job.yaml:
kubectl create -f k8s/jobmanager-job.yaml
I am getting a CrashLoopBackOff status on the flink-jobmanager pod, and the log says that it cannot find the class org.sense.flink.examples.stream.tpch.TPCHQuery03 in the flink-dist_2.12-1.11.1.jar:1.11.1 jar file. However, I want Flink to also look in the /opt/flink/usrlib/explore-flink.jar jar file. I am copying and adding this jar file in the Dockerfile of my image, but it does not seem to work. What am I missing here? Below is my jobmanager-job.yaml file:
apiVersion: batch/v1
kind: Job
metadata:
  name: flink-jobmanager
spec:
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      restartPolicy: OnFailure
      containers:
        - name: jobmanager
          image: felipeogutierrez/explore-flink:1.11.1-scala_2.12
          imagePullPolicy: Always
          env:
          args: ["standalone-job", "--job-classname", "org.sense.flink.examples.stream.tpch.TPCHQuery03"]
          ports:
            - containerPort: 6123
              name: rpc
            - containerPort: 6124
              name: blob-server
            - containerPort: 8081
              name: webui
          livenessProbe:
            tcpSocket:
              port: 6123
            initialDelaySeconds: 30
            periodSeconds: 60
          volumeMounts:
            - name: flink-config-volume
              mountPath: /opt/flink/conf
            - name: job-artifacts-volume
              mountPath: /opt/flink/usrlib
          securityContext:
            runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
      volumes:
        - name: flink-config-volume
          configMap:
            name: flink-config
            items:
              - key: flink-conf.yaml
                path: flink-conf.yaml
              - key: log4j-console.properties
                path: log4j-console.properties
        - name: job-artifacts-volume
          hostPath:
            path: /host/path/to/job/artifacts
and my complete log file:
$ kubectl logs flink-jobmanager-qfkjl
Starting Job Manager
sed: couldn't open temporary file /opt/flink/conf/sedSg30ro: Read-only file system
sed: couldn't open temporary file /opt/flink/conf/sed1YrBco: Read-only file system
/docker-entrypoint.sh: 72: /docker-entrypoint.sh: cannot create /opt/flink/conf/flink-conf.yaml: Permission denied
/docker-entrypoint.sh: 91: /docker-entrypoint.sh: cannot create /opt/flink/conf/flink-conf.yaml.tmp: Read-only file system
Starting standalonejob as a console application on host flink-jobmanager-qfkjl.
2020-09-21 08:08:29,528 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --------------------------------------------------------------------------------
2020-09-21 08:08:29,531 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Preconfiguration:
2020-09-21 08:08:29,532 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] -
JM_RESOURCE_PARAMS extraction logs:
jvm_params: -Xmx1073741824 -Xms1073741824 -XX:MaxMetaspaceSize=268435456
logs: INFO [] - Loading configuration property: jobmanager.rpc.address, flink-jobmanager
INFO [] - Loading configuration property: taskmanager.numberOfTaskSlots, 4
INFO [] - Loading configuration property: blob.server.port, 6124
INFO [] - Loading configuration property: jobmanager.rpc.port, 6123
INFO [] - Loading configuration property: taskmanager.rpc.port, 6122
INFO [] - Loading configuration property: queryable-state.proxy.ports, 6125
INFO [] - Loading configuration property: jobmanager.memory.process.size, 1600m
INFO [] - Loading configuration property: taskmanager.memory.process.size, 1728m
INFO [] - Loading configuration property: parallelism.default, 2
INFO [] - The derived from fraction jvm overhead memory (160.000mb (167772162 bytes)) is less than its min value 192.000mb (201326592 bytes), min value will be used instead
INFO [] - Final Master Memory configuration:
INFO [] - Total Process Memory: 1.563gb (1677721600 bytes)
INFO [] - Total Flink Memory: 1.125gb (1207959552 bytes)
INFO [] - JVM Heap: 1024.000mb (1073741824 bytes)
INFO [] - Off-heap: 128.000mb (134217728 bytes)
INFO [] - JVM Metaspace: 256.000mb (268435456 bytes)
INFO [] - JVM Overhead: 192.000mb (201326592 bytes)
2020-09-21 08:08:29,533 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --------------------------------------------------------------------------------
2020-09-21 08:08:29,533 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Starting StandaloneApplicationClusterEntryPoint (Version: 1.11.1, Scala: 2.12, Rev:7eb514a, Date:2020-07-15T07:02:09+02:00)
2020-09-21 08:08:29,533 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - OS current user: flink
2020-09-21 08:08:29,533 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Current Hadoop/Kerberos user: <no hadoop dependency found>
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JVM: OpenJDK 64-Bit Server VM - Oracle Corporation - 1.8/25.265-b01
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Maximum heap size: 989 MiBytes
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JAVA_HOME: /usr/local/openjdk-8
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - No Hadoop Dependency available
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - JVM Options:
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Xmx1073741824
2020-09-21 08:08:29,534 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Xms1073741824
2020-09-21 08:08:29,535 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -XX:MaxMetaspaceSize=268435456
2020-09-21 08:08:29,535 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog.file=/opt/flink/log/flink--standalonejob-0-flink-jobmanager-qfkjl.log
2020-09-21 08:08:29,535 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog4j.configuration=file:/opt/flink/conf/log4j-console.properties
2020-09-21 08:08:29,535 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlog4j.configurationFile=file:/opt/flink/conf/log4j-console.properties
2020-09-21 08:08:29,535 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - -Dlogback.configurationFile=file:/opt/flink/conf/logback-console.xml
2020-09-21 08:08:29,535 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Program Arguments:
2020-09-21 08:08:29,536 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --configDir
2020-09-21 08:08:29,536 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - /opt/flink/conf
2020-09-21 08:08:29,536 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --job-classname
2020-09-21 08:08:29,536 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - org.sense.flink.examples.stream.tpch.TPCHQuery03
2020-09-21 08:08:29,537 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Classpath: /opt/flink/lib/flink-csv-1.11.1.jar:/opt/flink/lib/flink-json-1.11.1.jar:/opt/flink/lib/flink-shaded-zookeeper-3.4.14.jar:/opt/flink/lib/flink-table-blink_2.12-1.11.1.jar:/opt/flink/lib/flink-table_2.12-1.11.1.jar:/opt/flink/lib/log4j-1.2-api-2.12.1.jar:/opt/flink/lib/log4j-api-2.12.1.jar:/opt/flink/lib/log4j-core-2.12.1.jar:/opt/flink/lib/log4j-slf4j-impl-2.12.1.jar:/opt/flink/lib/flink-dist_2.12-1.11.1.jar:::
2020-09-21 08:08:29,538 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - --------------------------------------------------------------------------------
2020-09-21 08:08:29,540 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Registered UNIX signal handlers for [TERM, HUP, INT]
2020-09-21 08:08:29,577 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Could not create application program.
org.apache.flink.util.FlinkException: Could not find the provided job class (org.sense.flink.examples.stream.tpch.TPCHQuery03) in the user lib directory (/opt/flink/usrlib).
at org.apache.flink.client.deployment.application.ClassPathPackagedProgramRetriever.getJobClassNameOrScanClassPath(ClassPathPackagedProgramRetriever.java:140) ~[flink-dist_2.12-1.11.1.jar:1.11.1]
at org.apache.flink.client.deployment.application.ClassPathPackagedProgramRetriever.getPackagedProgram(ClassPathPackagedProgramRetriever.java:123) ~[flink-dist_2.12-1.11.1.jar:1.11.1]
at org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.getPackagedProgram(StandaloneApplicationClusterEntryPoint.java:110) ~[flink-dist_2.12-1.11.1.jar:1.11.1]
at org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.main(StandaloneApplicationClusterEntryPoint.java:78) [flink-dist_2.12-1.11.1.jar:1.11.1]
I had two problems with my configuration. First, the Dockerfile was not copying explore-flink.jar to the right location. Second, I did not need to mount the job-artifacts-volume volume in the Kubernetes jobmanager-job.yaml file. Here is my corrected Dockerfile:
FROM maven:3.6-jdk-8-slim AS builder
# get explore-flink job and compile it
COPY ./java/explore-flink /opt/explore-flink
WORKDIR /opt/explore-flink
RUN mvn clean install
FROM flink:1.11.1-scala_2.12
WORKDIR /opt/flink/lib
COPY --from=builder --chown=flink:flink /opt/explore-flink/target/explore-flink.jar /opt/flink/lib/explore-flink.jar
and the jobmanager-job.yaml file:
apiVersion: batch/v1
kind: Job
metadata:
  name: flink-jobmanager
spec:
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      restartPolicy: OnFailure
      containers:
        - name: jobmanager
          image: felipeogutierrez/explore-flink:1.11.1-scala_2.12
          imagePullPolicy: Always
          env:
          #command: ["ls"]
          args: ["standalone-job", "--job-classname", "org.sense.flink.App", "-app", "36"] #, <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
          #args: ["standalone-job", "--job-classname", "org.sense.flink.examples.stream.tpch.TPCHQuery03"] #, <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
          ports:
            - containerPort: 6123
              name: rpc
            - containerPort: 6124
              name: blob-server
            - containerPort: 8081
              name: webui
          livenessProbe:
            tcpSocket:
              port: 6123
            initialDelaySeconds: 30
            periodSeconds: 60
          volumeMounts:
            - name: flink-config-volume
              mountPath: /opt/flink/conf
          securityContext:
            runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
      volumes:
        - name: flink-config-volume
          configMap:
            name: flink-config
            items:
              - key: flink-conf.yaml
                path: flink-conf.yaml
              - key: log4j-console.properties
                path: log4j-console.properties

org.openqa.selenium.NoSuchSessionException: Unable to find session with ID error testing with Behat/Mink and Selenium2Driver in docker container

I'm trying to test a Symfony3 web application with Behat/Mink and Selenium2Driver so that I can test JavaScript functionality too.
The application runs in a Docker container, so I added new Docker containers for selenium-hub and chrome as described here:
# docker-compose.yml
version: '3.5' # Docker Engine release 17.12.0+
networks:
  servicesnet:
    driver: bridge
services:
  apache:
    build:
      context: './apache2'
    container_name: apache-service
    ports:
      - "80:80"
      - "443:443"
    tty: true
    networks:
      - servicesnet
    volumes:
      - ${HOST_APACHE_CONFIG}:/etc/apache2
      - ${HOST_PAGES_PATH}:/var/www/localhost/htdocs
  selenium-hub:
    image: selenium/hub:4.0.0-alpha-6-20200730
    container_name: selenium-hub
    ports:
      - "4444:4444"
    networks:
      - servicesnet
  chrome:
    image: selenium/node-chrome:4.0.0-alpha-6-20200730
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
    networks:
      - servicesnet
When I run docker-compose up, it outputs the following for the new containers:
chrome | 2020-08-12 07:36:19,917 INFO Included extra file "/etc/supervisor/conf.d/selenium.conf" during parsing
chrome | 2020-08-12 07:36:19,918 INFO supervisord started with pid 7
selenium-hub | 2020-08-12 07:36:19,297 INFO Included extra file "/etc/supervisor/conf.d/selenium-grid-hub.conf" during parsing
selenium-hub | 2020-08-12 07:36:19,298 INFO supervisord started with pid 7
selenium-hub | 2020-08-12 07:36:20,301 INFO spawned: 'selenium-grid-hub' with pid 10
selenium-hub | Starting Selenium Grid Hub...
selenium-hub | 2020-08-12 07:36:20,311 INFO success: selenium-grid-hub entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
selenium-hub | 07:36:20.588 INFO [LoggingOptions.getTracer] - Using OpenTelemetry for tracing
selenium-hub | 07:36:20.589 INFO [LoggingOptions.createTracer] - Using OpenTelemetry for tracing
selenium-hub | 07:36:20.607 INFO [EventBusOptions.createBus] - Creating event bus: org.openqa.selenium.events.zeromq.ZeroMqEventBus
selenium-hub | 07:36:20.638 INFO [BoundZmqEventBus.<init>] - XPUB binding to [binding to tcp://*:4442, advertising as tcp://172.28.0.3:4442], XSUB binding to [binding to tcp://*:4443, advertising as tcp://172.28.0.3:4443]
selenium-hub | 07:36:20.676 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://172.28.0.3:4442 and tcp://172.28.0.3:4443
selenium-hub | 07:36:20.680 INFO [UnboundZmqEventBus.<init>] - Sockets created
selenium-hub | 07:36:20.681 INFO [UnboundZmqEventBus.lambda$new$2] - Bus started
chrome | 2020-08-12 07:36:21,136 INFO success: xvfb entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
chrome | 2020-08-12 07:36:21,136 INFO success: fluxbox entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
chrome | 2020-08-12 07:36:21,136 INFO success: vnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
chrome | 2020-08-12 07:36:21,137 INFO success: selenium-node entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
selenium-hub | 07:36:21.308 INFO [Hub.execute] - Started Selenium hub 4.0.0-alpha-6 (revision 5f43a29cfc): http://172.28.0.3:4444
chrome | 07:36:21.774 INFO [LoggingOptions.getTracer] - Using OpenTelemetry for tracing
chrome | 07:36:21.775 INFO [LoggingOptions.createTracer] - Using OpenTelemetry for tracing
chrome | 07:36:21.791 INFO [EventBusOptions.createBus] - Creating event bus: org.openqa.selenium.events.zeromq.ZeroMqEventBus
chrome | 07:36:21.829 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://selenium-hub:4442 and tcp://selenium-hub:4443
chrome | 07:36:21.857 INFO [UnboundZmqEventBus.<init>] - Sockets created
chrome | 07:36:21.859 INFO [UnboundZmqEventBus.lambda$new$2] - Bus started
chrome | 07:36:22.121 INFO [NodeServer.execute] - Reporting self as: http://172.28.0.5:5555
chrome | 07:36:22.175 INFO [NodeOptions.report] - Adding Chrome for {"browserName": "chrome"} 8 times
chrome | 07:36:22.298 INFO [NodeServer.execute] - Started Selenium node 4.0.0-alpha-6 (revision 5f43a29cfc): http://172.28.0.5:5555
chrome | 07:36:22.302 INFO [NodeServer.execute] - Starting registration process for node id ff0154a7-ed4b-438a-887c-0a7f3a988cb4
selenium-hub | 07:36:22.355 INFO [LocalDistributor.refresh] - Creating a new remote node for http://172.28.0.5:5555
selenium-hub | 07:36:22.763 INFO [LocalDistributor.add] - Added node ff0154a7-ed4b-438a-887c-0a7f3a988cb4.
selenium-hub | 07:36:22.770 INFO [Host.lambda$new$0] - Changing status of node ff0154a7-ed4b-438a-887c-0a7f3a988cb4 from DOWN to UP. Reason: http://172.28.0.5:5555 is ok
chrome | 07:36:22.774 INFO [NodeServer.lambda$execute$0] - Node has been added
Then I have the following method, used by every test:
<?php

namespace Tests\AppBundle\Controller;

use Behat\Mink\Driver\Selenium2Driver;
use Behat\Mink\Mink;
use Behat\Mink\Session;
use Symfony\Bundle\FrameworkBundle\Client;
use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

abstract class BaseControllerTest extends WebTestCase
{
    /**
     * @var Client
     */
    protected $client;

    /**
     * @var Session
     */
    protected $session;

    public function visitUri($uri)
    {
        $this->client = static::createClient();
        $pass = $this->client->getKernel()->getContainer()->getParameter('http_basic_auth_pass');
        $host = 'localhost'; // I've tried several things here (like 172.28.0.5:5555)

        $driver = new Selenium2Driver('chrome');
        $mink = new Mink(array(
            'chrome' => new Session($driver)
        ));
        $driver->setTimeouts(['page load' => 900000]);
        $mink->setDefaultSessionName('chrome');
        $this->session = $mink->getSession();
        $this->session->visit('http://user:' . $pass . '@' . $host . $uri);
    }
}
And I call this method from a specific test:
public function testClickOnSearch()
{
    $this->visitUri('/mi-custom-uri');
    $page = $this->session->getPage();
    $this->session->wait(
        200000,
        "typeof jQuery !== 'undefined'"
    );
    $page->findButton('Buton text')->click();
    $this->assertContains('my-custom-uri-2', $this->session->getCurrentUrl());
}
but I never get the session started. If I go to http://localhost:4444/wd/hub/session/url I see this error message:
"org.openqa.selenium.NoSuchSessionException: Unable to find session with ID: url\nBuild info: version: '4.0.0-alpha-6', revision: '5f43a29cfc'\nSystem info: host: 'fca78c7f81e6', ip: '172.28.0.3', os.name: 'Linux', os.arch: 'amd64', os.version: '5.4.0-42-generic', java.version: '1.8.0_252'\nDriver info: driver.version: unknown"
And executing the test, after 200 seconds this error is thrown:
PHP Fatal error: Call to a member function click() on null
I'm sure something is missing but don't know what. Any idea?
This error message...
org.openqa.selenium.NoSuchSessionException: Unable to find session with ID: url\n
Build info: version: '4.0.0-alpha-6', revision: '5f43a29cfc'\n
System info: host: 'fca78c7f81e6', ip: '172.28.0.3', os.name: 'Linux', os.arch: 'amd64', os.version: '5.4.0-42-generic', java.version: '1.8.0_252'\n
Driver info: driver.version: unknown
...implies that ChromeDriver was unable to initiate/spawn a new browsing context, i.e. a Chrome browser session, which gets reflected in the logs as:
Driver info: driver.version: unknown
Hence, further down the line, you see the error:
PHP Fatal error: Call to a member function click() on null
and the most probable cause is an incompatibility between the versions of the binaries you are using.
Solution
Ensure that:
ChromeDriver is updated to current ChromeDriver v84.0 level.
Chrome is updated to current Chrome Version 84.0 level. (as per ChromeDriver v84.0 release notes)
If your base Web Client version is too old, then uninstall it and install a recent GA and released version of Web Client.
Take a System Reboot.
Always invoke driver.quit() within tearDown(){} method to close & destroy the WebDriver and Web Client instances gracefully.
References
You can find a couple of relevant detailed discussions in:
org.openqa.selenium.NoSuchSessionException: no such session error in Selenium automation tests using ChromeDriver Chrome with Java
Since you are running Selenium inside a Docker container, you can also try setting the environment variable SE_NODE_SESSION_TIMEOUT=999999.
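A minimal sketch of where that variable could go, reusing the chrome service from the question's compose file; whether the alpha node image honors this variable is an assumption, so verify against your Selenium Grid version:

chrome:
  image: selenium/node-chrome:4.0.0-alpha-6-20200730
  volumes:
    - /dev/shm:/dev/shm
  depends_on:
    - selenium-hub
  environment:
    - HUB_HOST=selenium-hub
    # lengthen the node session timeout so slow page loads and long waits
    # do not cause the session to be reaped mid-test
    - SE_NODE_SESSION_TIMEOUT=999999
  networks:
    - servicesnet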

Sonarqube + Postgresql containers unable to determine database dialect

I defined 2 services to run SonarQube in a Docker swarm:
version: "3"
services:
sonar-a:
image: library/sonarqube:6.7.5
ports:
- "9000:9000"
environment:
- SONARQUBE_JDBC_USERNAME=sonar
- SONARQUBE_JDBC_PASSWORD=sonar
- SONARQUBE_JDBC_URL="jdbc:postgresql://10.11.12.13:5432/sonar"
deploy:
placement:
constraints:
- node.hostname == some-node
sonar-a-db:
image: library/postgres:10.5
ports:
- "5432:5432"
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
- POSTGRES_DB=sonar
deploy:
placement:
constraints:
- node.hostname == some-node
(I removed the volumes to ease the tests)
But I always get this error telling me Sonar can't "determine database dialect":
...
2018.09.28 15:22:42 INFO web[][o.s.p.ProcessEntryPoint] Starting web
2018.09.28 15:22:42 INFO web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2018.09.28 15:22:42 INFO web[][o.e.p.PluginsService] no modules loaded
2018.09.28 15:22:42 INFO web[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.join.ParentJoinPlugin]
2018.09.28 15:22:42 INFO web[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
2018.09.28 15:22:42 INFO web[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.09.28 15:22:43 INFO web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2018.09.28 15:22:43 INFO web[][o.s.s.p.LogServerVersion] SonarQube Server / 7.1.0.11001 / 9f47ce9daecebb16fc777249a418252625ae774a
2018.09.28 15:22:43 ERROR web[][o.s.s.p.Platform] Web server startup failed: Unable to determine database dialect to use within sonar with
dialect null jdbc url "jdbc:postgresql://10.4.140.56:5432/sonar"
2018.09.28 15:22:48 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
...
I tried different versions, I tried MySQL, and I tried passing a dialect parameter in the SONARQUBE_JDBC_URL, but nothing changes.
Any idea?
This string should be defined without quotes: in the list form of environment, the quotes become part of the value, so right now your JDBC URL is literally "jdbc:postgresql://10.11.12.13:5432/sonar" (quotes included) instead of jdbc:postgresql://10.11.12.13:5432/sonar.
You have to change this:
- SONARQUBE_JDBC_URL="jdbc:postgresql://10.11.12.13:5432/sonar"
to:
- SONARQUBE_JDBC_URL=jdbc:postgresql://10.11.12.13:5432/sonar
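If you do want quotes for readability, a YAML-safe alternative is to quote the whole list entry rather than just the value, for example:

environment:
  # quoting the entire entry keeps the quotes out of the variable's value;
  # quoting only the right-hand side makes the quotes part of the JDBC URL
  - 'SONARQUBE_JDBC_URL=jdbc:postgresql://10.11.12.13:5432/sonar'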

PushCacheFilter with HAPROXY in DOCKER not pushing anything

I'm trying to use Jetty's server push feature with HAProxy. I've set up Jetty 9.4.7 with PushCacheFilter and HAProxy in two Docker containers.
I think Jetty tries to push something, but no PUSH_PROMISE frames are delivered to the client (I've checked Chrome's net-internals tab).
I'm not sure whether this is an issue with Jetty (maybe with h2c)!
Here's my haproxy config (taken from Jetty's documentation):
global
  tune.ssl.default-dh-param 1024

defaults
  timeout connect 10000ms
  timeout client 60000ms
  timeout server 60000ms

frontend fe_http
  mode http
  bind *:80
  # Redirect to https
  redirect scheme https code 301

frontend fe_https
  mode tcp
  bind *:443 ssl no-sslv3 crt /usr/local/etc/domain.pem ciphers TLSv1.2 alpn h2,http/1.1
  default_backend be_http

backend be_http
  mode tcp
  server domain basexhttp:8984
And here's how Jetty starts:
[main] INFO org.eclipse.jetty.util.log - Logging initialized #377ms to org.eclipse.jetty.util.log.Slf4jLog
BaseX 9.0 beta 5cc42ae [HTTP Server]
[main] INFO org.eclipse.jetty.server.Server - jetty-9.4.7.v20170914
[main] INFO org.eclipse.jetty.webapp.StandardDescriptorProcessor - NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
[main] INFO org.eclipse.jetty.server.session - DefaultSessionIdManager workerName=node0
[main] INFO org.eclipse.jetty.server.session - No SessionScavenger set, using defaults
[main] INFO org.eclipse.jetty.server.session - Scavenging every 600000ms
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.w.WebAppContext#7dc222ae{/,file:///opt/basex/webapp/,AVAILABLE}{/opt/basex/webapp}
[main] INFO org.eclipse.jetty.server.AbstractConnector - Started ServerConnector#3439f68d{h2c,[h2c, http/1.1]}{0.0.0.0:8984}
[main] INFO org.eclipse.jetty.server.Server - Started #784ms
HTTP Server was started (port: 8984).
HTTP Stop Server was started (port: 8985).
And here's the simple docker-compose.yml
version: '3.3'
services:
  basexhttp:
    container_name: pushcachefilter-basexhttp
    build: pushcachefilter-basexhttp/
    image: "pushcachefilter/basexhttp"
    volumes:
      - "${HOME}/data:/opt/basex/data"
      - "${HOME}/base/app-web/webapp:/opt/basex/webapp"
    networks:
      - web
  haproxy:
    container_name: haproxy_container
    build: ha-proxy/
    image: "my_haproxy"
    depends_on:
      - basexhttp
    ports:
      - 80:80
      - 443:443
    networks:
      - web
networks:
  web:
    driver: overlay
Please note that I have to configure Jetty using my own jetty.xml; the way shown in the documentation is not working for me.
Thx in advance
Bodo
